
p-search: a local search engine in Emacs

Zac Romero - zacromero@posteo.com

Format: 23-min talk ; Q&A: BigBlueButton conference room
Status: TO_CAPTION_QA

Talk

00:00.000 Search in daily workflows
01:24.200 Problems with editor search tools
03:58.233 Information retrieval
04:34.296 Search engine in Emacs: the index
06:21.757 Search engine in Emacs: Ranking
06:43.553 tf-idf: term-frequency x inverse-document-frequency
07:41.160 BM25
08:41.200 Searching with p-search
10:41.457 Flight AF 447
16:06.771 Modifying priors
20:40.405 Importance
21:38.560 Complement or inverse

Duration: 22:42 minutes

Q&A

00:22.970 Q: Do you think a reduced version of this functionality could be integrated into isearch?
02:45.360 Q: Any idea how this would work with personal information like Zettelkastens?
04:22.041 Q: How good does the search work for synonyms, especially if you use different languages?
05:15.092 Plurals
05:33.883 Different languages
06:40.200 Q: When searching by author I know authors may set up a new machine and not put the exact same information. Is this doing anything to combine those into one author?
08:50.720 Q: Have you thought about integrating results from using cosine similarity with a deep-learning based vector embedding?
10:01.940 Q: Is it possible to save/bookmark searches or search templates so they can be used again and again?
12:02.800 Q: You mentioned candidate generators. Could you explain what the score is assigned to?
16:32.302 Q: Easy filtering with orderless - did this or something like this help or influence the design of p-search?
17:56.960 Q: Notmuch with the p-search UI
19:13.600 Info
22:14.880 project.el integration
22:56.477 Q: How happy are you with the interface?
25:50.280 gptel
28:01.480 Saving a search
28:41.800 Workflows
31:27.857 Transient and configuration
32:25.391 Problem space
33:39.856 consult-omni
33:52.800 orderless
35:46.268 User interface
40:04.120 Q: Do you think the Emacs being kinda slow will get in the way of being able to run a lot of scoring algorithms?
43:08.640 Boundary conditions


Description

Search is an essential part of any digital work. Despite this importance, most tools don't go beyond simple string/regex matching. Oftentimes, a user knows more about what they're looking for: who authored the file, how often it's modified, as well as search terms that the user is only slightly confident exist.

p-search is a search engine designed to combine these various pieces of prior knowledge about the search target and present them to the user in a systematic way. In this talk, I will present this package as well as go over the fundamentals of information retrieval.

Details:

In this talk, I will go over p-search. p-search is a search engine to assist users in finding things, with a focus on flexibility and customizability.

The talk will begin by going over concepts from the field of information retrieval such as indexing, querying, ranking, and evaluating. This will provide the necessary background to describe the workings of p-search.

Next, an overview of the p-search package and its features will be given. p-search utilizes a probabilistic framework to rank documents according to prior beliefs as to what the target file is. So for example, a user might know for sure that the file contains a particular string, might have a strong feeling that it should contain another word, and might only suspect that it contains some other words. The user may know the file extension and the subdirectory, and may know that a particular person works on this file a lot. p-search allows the user to express all of these predicates at once and ranks documents accordingly.

The talk will then progress to discuss assorted topics concerning the project, such as design considerations and future directions.

The aim of the talk is to expand the listeners' understanding of search as well as inspire creativity concerning the possibilities of search tools.

Code: https://github.com/zkry/p-search

Discussion

Questions and answers

  • Q: Do you think a reduced version of this functionality could be integrated into isearch? Right now you can turn on various flags when using isearch with M-s <key>, like M-s SPC to match spaces literally. Is it possible to add a flag to "search the buffer semantically"? (Ditto with M-x occur, which is more similar to your buffer-oriented results interface)
    • A: It's essentially a framework, so you would create a candidate generator for this; but such a generator does not exist yet.
  • Q: Any idea how this would work with personal information like Zettelkastens?
    • A: Usable as is, because all the files are in a directory, so you only have to point the search at that directory. You can then add criteria to ignore some files (like daily notes). Documentation is coming.
  • Q: How good does the search work for synonyms, especially if you use different languages?
    • A: There is an entire subfield of search about normalizing the input terms (like plural -> singular transformations). Currently p-search does not address this.
    • A: For different languages it gets complicated (vector search is possible, but might be too slow in Elisp).
  • Q: When searching by author I know authors may set up a new machine and not put the exact same information. Is this doing anything to combine those into one author?
    • A: Currently it just uses the git command. So if you know the emails the author has used, you can add a prior for each of them.
  • Q: Have you seen "rak", a cool, more powerful grep written in Raku? It may have some good ideas for increasing the value of searches, for example using Raku code while searching.
  • Q: Have you thought about integrating results from using cosine similarity with a deep-learning based vector embedding?  This will let us search for "fruit" and get back results that have "apple" or "grapes" in them -- that kind of thing.  It will probably also handle the case of terms that could be abbreviated/formatted differently like in your initial example.
    • A: Goes back to semantic search. Probably can be implemented, but also probably too slow. And it is hard to get the embeddings and the system running on the machine.
  • Q: I missed the start of the talk, so apologies if this has been covered - is it possible to save/bookmark searches or search templates so they can be used again and again?
    • A: Exactly. I just recently added bookmarking capabilities, so you can bookmark a search session and rerun it from where you left off. I also tried to create a one-to-one mapping from a search session to a data representation - there is a command to get this plist - so you can resume the search where you left off, and use it to create a command that triggers a saved search.
  • Q: You mentioned candidate generators. Could you explain what the score is assigned to? Is it a line, or whatever the candidate generator produces? How does it work with rg in your demo?

    FOLLOW-UP: How does the git scoring thingy hook into this?

    • A: A candidate generator produces documents. Documents have properties (like an id and a path), and from those you get subproperties like the content of the document. Each candidate generator knows how best to search its own candidates (emails, buffers, files, urls, ...). At scoring time there is only the notion of score + document.
    • Then another method is used to extract the matching lines from the top documents (to show precisely the lines that match).
  • Q: Hearing about this makes me think about how nice the emergent workflow with Denote is, using easy filtering with orderless. It is really easy to search for file tags, titles, etc. and do things with them. Did this or something like this help or influence the design of p-search?

    • A: You can search for whatever you want; nothing is hard-coded (files, directories, tags, titles...). That use case was definitely on my mind.
  • Q: [comments from IRC] <NullNix> git covers the "multiple names" thing itself: see .mailmap (a minimal example appears after this question list)

    • <NullNix> this is a git feature, p-search shouldn't need to implement it
    • <NullNix> To me this seems to have similarities to notmuch -- honestly I want notmuch with the p-search UI :) (of course, notmuch uses a Xapian index, because repeatedly grepping all traffic on huge mailing lists would be insane.)
    • <NullNix> (notmuch also has bookmark-like things as a core feature, but no real weighting like p-search does.)
      • A: I have not used notmuch, but many extensions are possible. mu4e uses a full index for its search; that could be adapted here too, with an SQL database as a source.
  • Q: You can search a buffer using ripgrep by feeding it in as stdin to the ripgrep process, can't you?

    • A: Yes, you can. But the aim is to be able to search many different kinds of things from Elisp, so there is a mechanism in p-search to represent anything, including buffers. This works pretty well.
  • Q: Thanks for making this lovely thing, I'm looking forward to trying it out. It seems modular and well thought out. Questions about integration and about the interface:

    • A: project.el is used as a default: the default p-search command searches only the local files of the current project.
  • Q: how happy are you with the interface?

    • A: The current code scans the entire file and finds the section of text that scores best against the query, so the preview shows the best-matching window. Many features could still be added, e.g., to improve debuggability (is this ranked highly due to a bug? due to a high weight? due to many matching documents?).
    • A: hopefully will be on ELPA at some point with proper documentation.
  • Q: Remembering searches is not available everywhere (rg.el? AI packages like gptel already have it). It is also useful for coming back to the results in the future.

    • A: Retrieval-augmented generation: p-search could be used for the retrieval step, combining it with an LLM to fine-tune the search in a question-answering workflow. There is currently no programmatic API for this, though.
    • (gptel author here: I'm looking forward to seeing if I can use gptel with p-search)
    • A: Since the results are surprisingly good, why isn't this approach used anywhere else? There is a lot of setup to get it right: you need something like Emacs with a lot of configuration (transient helps with that) without scaring the users.
    • Everyone uses Emacs differently, so it is unclear how people will really use it. (PlasmaStrike) For example, consult-omni (and elfeed-tube, ...) can search multiple websites at the same time, with orderless. No website offers this option; somehow those tools stay Emacs-only. (Corwin Brust) This is the strength of Emacs: people invest a lot of time today to improve their workflow for tomorrow. [see the editor learning-curve comic comparing Emacs, vim, and nano]
    • https://github.com/armindarvish/consult-omni
    • https://github.com/karthink/elfeed-tube
    • https://www.reddit.com/r/ProgrammerHumor/comments/9d6f19/text_editor_learning_curves_fixed/
    • A: Emacs is not the most beginner-friendly, but it widens the solution space considerably.
    • (Corwin Brust) Emacs supports all approaches and is extensible. (PlasmaStrike) YouTube is much larger, but somehow does not have this kind of sane interface.
  • Q: Do you think the Emacs being kinda slow will get in the way of being able to run a lot of scoring algorithms?

    • A: The code is currently dumb in a lot of places (like iterating over all files to calculate a score), but surprisingly that is not that slow: Elisp enumerating all the files in the Emacs repo and multiplying numbers isn't really slow. But if you have to search inside the files, it will be slow without relying on ripgrep or a similarly fast tool. Take for example the search over Info files / Elisp Info files: the Elisp part of the search is almost instant. For human-sized document sets it is probably fast enough -- and if not, there is room for optimization. For company-sized document sets (like huge repos), it could be too slow.
  • Q: When do you have to make something more complicated to scale better?

    • A: I do not really know yet. I try to automate tasks as much as possible, as in the Emacs configuration meme: instead of doing the work, I end up doing the configuration. Usually I do not add web-based things into Emacs.
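
The .mailmap mentioned in the IRC comments above is plain git configuration: a file named .mailmap at the repository root that folds multiple commit identities into one canonical author before any tool (p-search included) asks git for authors. A minimal example, with placeholder names and addresses:

    Jane Doe <jane@example.org> <jane@old-laptop.example>
    Jane Doe <jane@example.org> Jane D <jdoe@example.com>

The first line maps an old commit email to the canonical identity; the second additionally matches on the old commit name. git shortlog and mailmap-aware git log formats then report only the canonical "Jane Doe <jane@example.org>".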

Notes

  • I like the dedicated-buffer interface (I'm assuming using magit-section and transient).
  • Very interesting ideas. I was very happy when I was able to do simple filters with orderless, but this is great [11:46]
  • I dunno about you, but I want to start using p-search yesterday. (possibly integrating lsp-based tokens somehow...)
  • Awesome job Ryota, thank you for sharing! 
  • YouTube comment: that's novel and interesting. The shipwreck analogy was perfect too
  • YouTube comment: Wow... thank you

Transcript

[00:00:00.000] Search in daily workflows
Hello, my name is Zachary Romero, and today I'll be going over p-search, a local search engine in Emacs. Search these days is everywhere in software, from text editors, to IDEs, to most online websites. These tools tend to fall into one of two categories. One are tools that run locally, and work by matching string to text. The most common example of this is grep. In Emacs, there are a lot of extensions which provide functionality on top of these tools, such as projectile-grep, deadgrep, consult-ripgrep. Most editors have some sort of search current project feature. Most of the time, some of these tools have features like regular expressions, or you can specify file extension, or a directory you want to search in, but features are pretty limited. The other kind of search we use are usually hosted online, and they usually search a vast corpus of data. These are usually proprietary online services such as Google, GitHub, SourceGraph for code.
[00:01:24.200] Problems with editor search tools
The kind of search feature that editors usually have have a lot of downsides to them. For one, a lot of times you don't know the exact search string you're searching for. Some complicated term like this high volume demand partner, you know, do you know if... Are some words abbreviated, is it capitalized, is it in kebab case, camel case, snake case? You often have to search all these variations. Another downside is that the search results returned contain a lot of noise. For example, you may get a lot of test files. If the tool hits your vendor directory, it may get a bunch of results from libraries you're using, which most are not helpful. Another downside is that the order given is, well, there's no meaning to the order. It's usually just the search order that the tool happens to look in first. Another thing is, so when you're searching, you oftentimes have to keep the state of the searches in your head. For example, you try one search, you see the results, find the results you think are relevant, keep them in your head, run search number two, look through the results, kind of combine these different search results in your head until you get an idea of which ones might be relevant. Another thing is that the search primitives are fairly limited. So yeah, you can search regular expressions, but you can't really define complex things like, I want to search files in this directory, and this directory, and this directory, except these subdirectories, and accept test files, and I only want files with this file extension. Criteria like that are really hard to... I'm sure they're possible in tools like grep, but they're pretty hard to construct. And lastly, there's no notion of any relevance. All the results you get back, I mean, you don't know, is the search more relevant? Is it twice as relevant? Is it 100 times more relevant? These tools usually don't provide such information.
[00:03:58.233] Information retrieval
There's a field called information retrieval, and this deals with this exact problem. You have lots of data you're searching for. How do you construct a search query? How do you get results back fast? How do you rank which ones are most relevant? How do you evaluate your search system to see if it's getting better or worse? There's a lot of work, a lot of books written on the topic of information retrieval. If one wants to improve searching in Emacs, then drawing inspiration from this field is necessary.
[00:04:34.296] Search engine in Emacs: the index
The first aspect of information retrieval is the index. The reverse index is what search engines use to find results really fast. Essentially, it's a map of search term to locations where that term is located. You'll have all the terms or maybe even parts of the terms, and then you'll have all the locations where they're located. Any query could easily look up where things are located, join results together, and that's how they get the results to be really fast. For this project, I decided to forgo creating an index altogether. An index is pretty complicated to maintain because it always has to be in sync. Any time you open a file and save it, you would have to re-index, you would have to make sure that file is re-indexed properly. Then you have the whole issue of, well, if you're searching in Emacs, you have all these projects, this directory, that directory, how do you know which? Do you always have to keep them in sync? It's quite a hard task to handle that. Then on the other end, tools like ripgrep can search very fast. Even though they can't search maybe on the order of tens of thousands of repositories, for a local setting, they should be plenty fast enough. I benchmarked. Ripgrep, for example, is on the order of gigabytes per second. Definitely, it can search a few pretty big size repositories.
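To make the inverted-index idea above concrete, here is a tiny generic sketch in Emacs Lisp. It is purely illustrative of the concept; as described above, p-search deliberately forgoes building and maintaining such an index.

    ;; Generic sketch of an inverted index: a hash table mapping each
    ;; term to the list of (FILE . POSITION) pairs where it occurs.
    (defvar my/inverted-index (make-hash-table :test #'equal)
      "Map from term (string) to list of (FILE . POSITION) locations.")

    (defun my/index-add (term file pos)
      "Record that TERM occurs in FILE at position POS."
      (push (cons file pos) (gethash term my/inverted-index)))

    (defun my/index-lookup (term)
      "Return every recorded (FILE . POSITION) location for TERM."
      (gethash term my/inverted-index))

A query engine would intersect or join the location lists of its query terms, which is what makes lookups fast, and also what makes keeping the index in sync with edited files the hard part.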
[00:06:21.757] Search engine in Emacs: Ranking
Next main task. We decided not to use an index. Next task is how do we rank search results? So there's two main algorithms that are used these days. The first one is tf-idf, which stands for term frequency, inverse document frequency. Then there's BM25, which is sort of a modified tf-idf algorithm.
[00:06:43.553] tf-idf: term-frequency x inverse-document-frequency
tf-idf, without going into too much detail, essentially multiplies two terms. One is the term frequency, and then you multiply it by the inverse document frequency. The term frequency is a measure of how often that search term occurs. The inverse document frequency is a measure of how much information that term provides. If the term occurs a lot, then it gets a higher score in the term frequency section. But if it's a common word that exists in a lot of documents, then its inverse document frequency goes down. It kind of scores it less. You'll find that words like the, in, is, these really common words, since they occur everywhere, their inverse document frequency is essentially zero. They don't really count towards a score. But when you have rare words that only occur in a few documents, they're weighted a lot more. So the more those rare words occur, they boost the score higher. BM25 is a modification of this. It's essentially TF, it's essentially the previous one, except it dampens out terms that occur more often. Imagine you have a bunch of documents. One has a term 10 times, one has a term, that same term a hundred times, another has a thousand times. You'll see the score dampens off as the number of occurrences increases. That prevents any one term from overpowering the score. This is the algorithm I ended up choosing for my implementation. So with a plan of using a command line tool like ripgrep to get term occurrences, and then using a scoring algorithm like BM25 to rank the terms, we can combine this together and create a simple search mechanism.
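For reference, the scoring idea described above can be written down as a small function. This is the standard textbook BM25 formula for a single term, not p-search's actual implementation; k1 and b are the usual tuning constants.

    ;; Generic BM25 sketch: the contribution of one term to one document.
    ;; TF      - occurrences of the term in the document
    ;; DF      - number of documents containing the term
    ;; N       - total number of documents
    ;; DLEN    - this document's length, AVGDLEN - average document length
    ;; K1, B   - tuning constants, commonly around 1.2 and 0.75
    (defun my/bm25-term-score (tf df n dlen avgdlen &optional k1 b)
      (let* ((k1 (or k1 1.2))
             (b (or b 0.75))
             ;; Inverse document frequency: rare terms carry more weight.
             (idf (log (+ 1.0 (/ (+ (- n df) 0.5) (+ df 0.5)))))
             ;; Saturating term frequency: repeats dampen off instead of
             ;; letting one term overpower the score.
             (tf-part (/ (* tf (+ k1 1.0))
                         (+ tf (* k1 (+ (- 1.0 b)
                                        (* b (/ dlen (float avgdlen)))))))))
        (* idf tf-part)))

A document's score for a query is the sum of this value over the query's terms, which is why a rare term that matches a few times can outweigh a common term that matches many times.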
[00:08:41.200] Searching with p-search
Here we're in the directory for the Emacs source code. Let's say we want to search for the display code. We run the p-search command, starting the search engine. It opens up. We notice it has three sections, the candidate generators, the priors, and the search results. The candidate generators generates the search space we're looking on. These are all composable and you can add as many as you want. So with this, it specifies that here we're searching on the file system and we're searching in this directory. We're using the ripgrep tool to search with, and we want to make sure that we're searching only on files committed to Git. Here we see the search results. Notice here is their final probability. Here, notice that they're all the same, and they're the same because we don't have any search criteria specified here. Suppose we want to search for display-related code. We add a query: display. So then it spins off the processes, gets the search term counts and calculates the new scores. Notice here that the results that come on top are just at first glance appear to be relevant to display. Remember, if we compare that to just running a ripgrep raw, notice here we're getting 53,000 results and it's pretty hard to go through these results and make sense of it. So that's p-search in a nutshell.
[00:10:41.457] Flight AF 447
Next, I wanted to talk about the story of Flight 447. Flight 447 going from Rio de Janeiro to Paris crashed somewhere in the Atlantic Ocean on June 1st, 2009, killing everyone on board. Four search attempts were made to find the wreckage. None of them were successful, except the finding of some debris and a dead body. It was decided that they really wanted to find the wreckage to retrieve data as to why the crash occurred. This occurred two years after the initial crash. With this next search attempt, they wanted to create a probability distribution of where the crash could be. The only piece of concrete data they had was a GPS signal from the plane at 2:10 giving the location of the plane as 2.98 degrees north, 30.59 degrees west. That was the only data they had to go off of. So they drew a circle around that point with a radius of 40 nautical miles. They assumed that anything outside the circle would have been impossible for the plane to reach. This was the starting point for creating the probability distribution of where the wreckage occurred. Anything outside the circle, they assumed it was impossible to reach. The only other pieces of data were the four failed search attempts and then some of the debris found. One thing they did decide was to look at similar crashes where control was lost to analyze where the crashes landed, compared to where the loss of control started. From this, a circular normal probability distribution was decided upon. Here you can see that the center has a lot higher chance of finding the wreckage. As you go away from the center, the probability of finding the wreckage decreases a lot. The next thing they looked at was, well, they noticed they had retrieved some dead bodies from the wreckage. So they thought that they could calculate the backward drift on that particular day to find where the crash might've occurred. If they found bodies at a particular location, they can kind of work backwards from that in order to find where the initial crash occurred. So here you can see the probability distribution based off of the backward drift model. Here you see the darker colors have a higher probability of finding the location. So with all these pieces of data, so with that circular 40 nautical mile uniform distribution, with that circular normal distribution of comparing similar crashes, as well as with the backward drift, they were able to combine all three of these pieces in order to come up with a final prior distribution of where the wreckage occurred. So this is the final model they came upon. Here you can see it has that 40 nautical mile radius circle. It has that darker center, which indicates a higher probability because of the crash similarity. Then here you also see that along this line there is a slightly higher probability due to the backward drift distribution. So the next thing is, since they had performed searches, they decided to incorporate the data from those searches into their new distribution. Here you can see places where they searched initially. If you think about it, you can assume that, well, if you search for something, there's a good chance you'll find it, but not necessarily. Anywhere where they searched, the probability of finding it there is greatly reduced. It's not zero because obviously you can look for something and miss it, but it kind of reduces the probability that we would expect to find it in those already searched locations. This is the posterior distribution, or the distribution after accounting for the observations made.
Here we can see kind of these cutouts of where the previous searches occurred. This is the final distribution they went off of to perform the subsequent search. In the end, the wreckage was found at a point close to the center here, thus validating this methodology.
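The step of reducing the probability of already-searched areas is a plain Bayesian update: each searched cell's prior is multiplied by the probability that a search there would have missed the target, and the whole distribution is renormalized. A generic sketch of that update (not part of p-search):

    ;; PRIOR is a list of (CELL . PROBABILITY); SEARCHED-CELLS are the
    ;; cells already searched; DETECTION is the probability that a search
    ;; of a cell containing the target would have found it.
    (defun my/bayes-search-update (prior searched-cells detection)
      (let* ((unnormalized
              (mapcar (lambda (cell-prob)
                        (cons (car cell-prob)
                              (if (member (car cell-prob) searched-cells)
                                  (* (cdr cell-prob) (- 1.0 detection))
                                (cdr cell-prob))))
                      prior))
             (total (apply #'+ (mapcar #'cdr unnormalized))))
        (mapcar (lambda (cell-prob)
                  (cons (car cell-prob) (/ (cdr cell-prob) total)))
                unnormalized)))

    ;; Example: two equally likely cells, cell A searched with an 80%
    ;; chance of detection. The probability mass shifts toward cell B:
    ;; (my/bayes-search-update '((a . 0.5) (b . 0.5)) '(a) 0.8)
    ;;   => ((a . 0.1666...) (b . 0.8333...))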
[00:16:06.771] Modifying priors
We can see the power of this Bayesian search methodology in the way that we could take information from all the sources we had. We could draw analogies to similar situations. We can quantify these, combine them into a model, and then also update our model according to each observation we make. I think there's a lot of similarities to be drawn with searching on a computer in the sense that when we search for something, there's oftentimes a story we kind of have as to what search terms exist, where we expect to find the file. For example, if you're implementing a new feature, you'll often have some search terms in mind that you think will be relevant. Some search terms, you might think they have a possibility of being relevant, but maybe you're not sure. There's some directories where you know that they're not relevant. There's other criteria like, well, you know that maybe somebody in particular worked on this code. What if you could incorporate that information? Like, I know this author, he's always working on this feature. What if I just give the files that this person works on a higher probability than ones he doesn't work on? Or maybe you think that this is a file that's committed too often. You think that maybe the amount of times of commits it receives should change your probability of this file being relevant. That's where p-search comes in. Its aim is to be a framework in order to incorporate all these sorts of different prior information into your searching process. You're able to say things like, I want files authored by this user to be given higher probability. I want this author to be given a lower priority. I know this author never works on this code. If he has a commit, then lower its probability, or you can specify specific paths, or you can specify multiple search terms, weighing different ones according to how you think those terms should be relevant. So with p-search, we're able to incorporate information from multiple sources. Here, for example, we have a prior of type git author, and we're looking for all of the files that are committed to by Lars. So the more commits he has, the higher probability is given to that file. Suppose there's a feature I know he worked on, but I don't know the file or necessarily even key terms of it. Well, with this, I can incorporate that information. So let's search again. Let's add display. Let's see what responses we get back here. We can add as many of these criteria as we want. We can even specify that the title of the file name should be a certain type. Let's say we're only concerned about C files. We add the file name should contain .c in it. With this, now we notice that all of the C files containing display authored by Lars should be given higher probability. We can continue to add these priors as we feel fit. The workflow that I found helps when searching is that you'll add criteria, you'll see some good results come up and some bad results come up. So you'll often find a pattern in those bad results, like, oh, I don't want test files, or this directory isn't relevant, or something like that. Then you can update your prior distribution, adding its criteria, and then rerun it, and then it will get different probabilities for the files. So in the end, you'll have a list of results that's tailor-made to the thing you're searching for.
[00:20:40.405] Importance
There's a couple of other features I want to go through. One thing is that each of these priors, you can specify the importance. In other words, how important is this particular piece of information to your search? So here, everything is of importance medium. But let's say I really care about something having the word display in it. I'm going to change its importance. Instead of medium, I'll change its importance to high. What that does essentially is things that don't have display in it are given a much bigger penalty and things with the word display in it are rated much higher. With this, we're able to fine-tune the results that we get.
[00:21:38.560] Complement or inverse
Another thing you can do is that you can add the complement or the inverse of certain queries. Let's say you want to search for display, but you don't want it to contain the word frame. With the complement option on, when we create this search prior, now it's going to be searching for frame, but instead of increasing the search score, it's going to decrease it if it contains the word frame. So here, things related to frame are kind of deprioritized. We can also say that we really don't want the search to contain the word frame by increasing its importance. So with all these composable pieces, we can create kind of a search that's tailor-made to our needs. That concludes this talk. There's a lot more I could talk about with regards to research, so definitely follow the project if you're interested. Thanks for watching, and I hope you enjoy the rest of the conference.

Captioner: sachac

Q&A transcript (unedited)

...starting the recording here in the chat, and I see some questions already coming in. So thank you so much for your talk, Zac, and I'll step out of your way and let you field some of these questions. Sounds good. All right, so let's see. I'm going off of the question list.
[00:00:22.970] Q: Do you think a reduced version of this functionality could be integrated into isearch?
So the first one is about having a reduced version of the functionality integrated into isearch. So yeah, with the way things are set up, it is essentially a framework. So you can create a candidate. So just a review from the talk. So you have these candidate generators which generate search candidates. So you can have a file system candidate which generates these file documents, which have text content in them. In theory, you could have like a website candidate generator, and it could be like a web crawler. I mean, so there's a lot of different options. So one option, it's on my mind, and I hope to get to this soon, is create a defun, like a defun candidate generator. So basically it takes a file, splits it up into like defuns, kind of like just like what isearch would do, and then use each of those, the body of those, as the content for the search session. So, I mean, essentially you could just, you could start up a session, and there's like programmatic ways to start these up too. So you could, if such a candidate generator was created, you could easily, and just like, you know, one command. Get the defuns, create a search session with it, and then just go straight to your query. So, definitely, something just like this is in the works. And I guess another thing is interface. The whole dedicated buffer is helpful for searching, but with this isearch case, there's currently not a way to have a reduced UI, where it's just like, OK, I have these function defuns for the current file. I just want them to pop up at the bottom so I can quickly go through it. So currently, I don't have that. But such a UI is definitely, yeah, thinking about how that could be done.
[00:02:45.360] Q: Any idea how this would work with personal information like Zettelkastens?
Alright, so yeah. So next question. Any idea how this will work with personal information like Zettelkasten? So this is, this is like, I mean, it's essentially usable as is with Zettelkasten method. So, I mean, that I mean basically what like for example org-roam, and I think other ones like Denote, they put all these files in the directory, and so with the already existing file system candidate generator all you'd have to do is set that to be the directory of your Zettelkasten system and then it would just pick up all the files in there and then add those as search candidates. So you could easily just search whatever system you have. Based off of the ways it's set up, if you had maybe your dailies you didn't want to search, it's just as easy to add a criteria saying, I don't want dailies to be searched. Like give, like just eliminate the date, like the things from the daily from the sub directory. And then there you go. you have your Zettelkasten search engine, and you could just copy the, you know, there's, I mean, I need, I'm working on documentation for this to kind of set this up easily, but, you know, you could just create your simple command, just like, your simple command, just like, just take in a text query, run it through the system, and then just get your search results right there. So yeah, definitely that is a use case that's on top of my mind.
[00:04:22.041] Q: How good does the search work for synonyms especially if you use different languages?
So next one, how good does the search work for synonyms, especially if you use different languages? Okay, this is a good question because with the way that BM25 works, it's essentially just like trying to find where terms occur and just counts them up. I mean, this is something I couldn't get into. There's just too much on the topic of information retrieval to kind of go into this, but there is a whole kind of field of just like, how do you, given a search term, how do you know what you should search for? So like popular kind of industrial search engines, like they have kind of this feature where you can like define synonyms, define term replacement. So whenever you see this term, it should be this. And it even gets even further. If someone searches for a plural string, how do you get the singular from that and search for that? So this is a huge topic that currently p-search doesn't address, but it's on the top of my mind as to how. So that's one part.
[00:05:33.883] Different languages
The next part is for different languages, one thing that kind of seems like it's promising is vector search, which, I mean, with the way p-search is set up, you could easily just create a vector search prior, plug it into the system, and start using it. The only problem is that kind of the vector search functions, like you have to do like cosine similarity, like if you have like 10,000 documents, If you're writing Elisp to calculate the cosine similarity between the vectors, that's going to be very slow. And so now the whole can of worms of indexing comes up. And how do you do that? And is that going to be native elisp? And so that's a whole other can of worms. So yeah, vector search seems promising. And then hopefully maybe other traditional synonyms, stemming, that kind of stuff for alternate terms, that could also be incorporated.
[00:06:40.200] Q: When searching by author I know authors may set up a new machine and not put the exact same information. Is this doing anything to combine those into one author?
Okay, next one. When searching by author, I know authors may set up a new machine and not put the exact same information. Is this doing anything to combine these two in one author? Okay, so for this one, it's not. So it's like the way the git prior is currently set up is that it just does like a git command to get all the git authors. You select one and then it just uses that. But the thing is, is if you knew the two emails that user might have used, the two usernames, you could just set up the two priors. One for the old user's email, and then just add another prior for the new user's email. And then that would be a way to just get both of those set up. So that's kind of a running theme throughout p-search is that it's made to be very flexible and very kind of like Lego-block-ish, kind of like you can just, you know, if you need, you know, if something doesn't meet your needs, you know, it's easy to put pieces in, create new components of the search engine. Let's see, a cool, more powerful grep "rak" that maybe has some good ideas, using Raku code while searching. Okay. So. Okay, that's interesting. I'll have to look into this tool. I haven't seen that. I do kind of keep my eyes out for these kind of things. One thing I have seen that was kind of that, I mean, looked interesting was kind of like AST, like the treesitter, the treesitter grep tools. But like, you can grep for a string in the language itself. So that's something I think would be cool to implement either, because I mean, there's treesitter in Emacs, so it's possible to do in Elisp. If not, there are those kind of like treesitter tools. So that's, that's something that I think would be cool to incorporate.
[00:08:50.720] Q: Have you thought about integrating results from using cosine similarity with a deep-learning based vector embedding?
Let's see. Have you thought about integrating results from using cosine similarity with a deep learning based vector embedding? Yeah, exactly. So yeah, this kind of goes back to the topic before it. Definitely the whole semantic search with vector embeddings, that's something that, I mean, it would be actually kind of trivial to implement that in p-search. But like I said, computing the cosine similarity in elisp, it's probably too slow. And then also there's a whole question of how do you get the embeddings? Like, how do you get the system running locally on your machine if you want to run it that or, I mean, so that's actually another kind of aspect that I need to look into. Okay, so let's see.
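For context, cosine similarity itself is only a few lines of arithmetic; the concern above is running it in pure Elisp against every document's embedding vector on every query. A generic sketch:

    ;; Cosine similarity between two embedding vectors represented as
    ;; Elisp vectors of numbers: dot product divided by the product of
    ;; the vector norms.
    (defun my/cosine-similarity (v1 v2)
      (let ((dot 0.0) (n1 0.0) (n2 0.0))
        (dotimes (i (length v1))
          (let ((a (aref v1 i)) (b (aref v2 i)))
            (setq dot (+ dot (* a b))
                  n1  (+ n1 (* a a))
                  n2  (+ n2 (* b b)))))
        (/ dot (* (sqrt n1) (sqrt n2)))))

    ;; (my/cosine-similarity [1 0 1] [1 1 0]) => 0.5

With thousands of documents and embeddings of several hundred dimensions, this loop means millions of multiplications per query, which is where an index or native code would come in.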
[00:10:01.940] Q: Is it possible to save/bookmark searches or search templates so they can be used again and again?
Okay, next question. Let's see. I'm sorry if this has been covered. Is it possible to save/bookmark searches or search templates so they can be used again and again? Exactly. So just recently I added bookmarking capabilities. So you can essentially just bookmark whatever search session you have. And yeah, and it's just, it was just a bookmark. You can just open and just like reopen that, rerun that search from where you left off. So there's that. And then also, I tried to set this up so that there is a one-to-one mapping of a Lisp object to the search session. So from every search session you make, you should be able to get a, there's a command to do this, to get a data representation of the search. So it would just be like some plist. All you have to do is just take that plist, call this function p-search-setup-buffer with that data. And then that function should set up the session as you left off. So then like, you know, you could make your commands easy. You can make custom search commands super easy. You just get the data representation of that search, find what pieces you want the user to be able to, you know, the search term, make that a parameter in the command, in the interactive code. So you'd have like print on top and then there you go. You have, you have a command to do the search just like just right there. So, so there's a lot of those things and there's a lot more that could be done. Like maybe having, you know, there's kind of in the works and like thinking about having groups of groups of these things, like maybe you can set up like, Oh, I always add these three criteria together. So I, you know, maybe I can make a preset out of these and make them easy, easily addable. So yeah. A lot of things like that are, you know, I'm thinking about a lot of things about that, so.
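Based on this description, a personal command could wrap a saved search by handing the exported data representation back to p-search-setup-buffer. Everything about the plist below is a placeholder: the real representation is whatever the export command produces for your session, and the :query key is purely hypothetical.

    ;; Hypothetical sketch: paste in the plist exported from a real
    ;; session and parameterize the parts you care about. The single
    ;; :query key shown here is a stand-in, not p-search's documented
    ;; data format.
    (defun my/p-search-display-code (query)
      "Re-run a saved p-search session with QUERY substituted in."
      (interactive "sSearch query: ")
      (p-search-setup-buffer `(:query ,query)))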
[00:12:02.800] Q: You mentioned candidate generators. Could you explain what the score is assigned to?
Okay, so next question. You mentioned about candidate generators. Could you explain about what the score is assigned to? Is this to a line or whatever the candidate generates? How does it work with rg in your demo? Okay, yeah, so this is a, this is, so actually I had to implement, I had to rewrite p-search just to get this part right. So the candidate generator generates documents. Documents have properties. So the most notable property is the content property. So essentially what happens is that when you create a file system candidate generator and give it a directory, the code goes into the directory, kind of recursively goes through all the directories, and generates a candidate, which is just like a simple list form. It's saying, this is a file, the file path is this. So that's the document ID. So this is saying, this is a file, it's a file, and its file path is this. And so from that, you get all of the different properties, the sub properties. If you're given that, you know how to get the content. If you're given that, you know how to... So all these properties come out. And then also the candidate generator is the thing that knows how best to search for the terms. So for example, there is a buffer candidate generator. What that does is it just puts all your buffers as search candidates. So obviously you can't, you can't run ripgrep on buffers like you can't you can't do that, you can't run ripgrep on just like yeah just just like buffers that don't have files attached or, for example, maybe there's like an internet search candidate generator, like a web crawler thing. You just imagine it goes to a website, kind of crawls all the links and all that, and then just gets your web pages for the candidates. Obviously, you can't use ripgrep for that either. So, every candidate generator knows how best to search for the terms of what candidate it's generating. So, the file system candidate generator will say, okay, I have a base directory. So, if you ask me, the file system candidate generator, how to get the terms, it knows it's set up to use ripgrep. And so, it runs ripgrep, and so then it goes through, it runs the command, gets the counts, and then stores those counts. So, the lines have nothing. At this point, the lines have nothing. There's no notion of lines at all. It's just document, document ID with the amount of times it matched. And that's all you need to run this BM25 algorithm. But then when you get the top results, you obviously want to see the lines that matched. And so there's another thing, another method to kind of get the exact thing, to kind of match out the particular lines. And so that's a separate mechanism. And that can be done in Elisp, because if you're not displaying, that's kind of a design decision of p-search, is that it only displays like maybe 10 or 20. It doesn't display all the results. So you can have Elisp just go crazy with just like highlighting things, picking the best kind of pieces to show. So yeah, that's how that's set up. So, here's perhaps a good moment for me to just jump in and comment that in a minute or so we will break away with the live stream to give people an hour of less content to make sure everybody goes and takes their lunch and break a little bit. But if you would like to keep going in here, Love to love to take as many questions. And, of course, we will include that all when we publish the Q and A. Sounds good.
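As a rough illustration of the document model described above (not p-search's exact internal representation): the document ID is a small list form saying what the thing is, and properties such as its content are derived from the ID on demand. Scores attach to documents like these, never to individual lines.

    ;; Illustrative only; the ID form and property access below are
    ;; simplified stand-ins for what the transcript describes.
    (defvar my/example-doc '(file "/path/to/repo/src/xdisp.c")
      "A document ID: \"this is a file, and its path is this\".")

    (defun my/doc-content (doc)
      "Return the text content of DOC (file documents only, in this sketch)."
      (pcase doc
        (`(file ,path)
         (with-temp-buffer
           (insert-file-contents path)
           (buffer-string)))))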
Yeah, I'll go and stick around on the stream as we cut away, as we've got a little video surprise we've all prepared to play, just some comments from an Emacs user dated in 2020 or something like this. I forget the detail. Thank you again so much, Zac, for your fascinating talk. Yeah, so, okay.
[00:16:32.302] Q: Easy filtering with orderless - did this or something like this help or influence the design of p-search?
This makes me really think about the emergent workflows with Denote and easy filtering with orderless. Did this or something like this help influence the design of p-search? Yeah, exactly. So, I mean, yeah, I mean, there's just so many different searches. Like, it's just kind of mind-boggling. Like, you could search for whatever you want on your computer. Like, there's just so much, like, you can't, yeah, you can't just like, you can't just like hard code any of these things. It's all malleable. Like maybe somebody wants to search these directories. And so, yeah, like exactly like that use case of having a directory of files where they contain your personal knowledge management system. Yeah, that use case definitely was at the top of my mind. Let's see. Let's see, so Git covers the multiple names thing itself.
[00:17:56.960] Q: Notmuch with the p-search UI
Okay, yeah, so something about notmuch with p-search UI. Actually, interestingly, I think notmuch is, I haven't used it myself, but that's the, email something about yeah so I mean this is like these things are just like these these kind of extensions could kind of go go forever but one thing I thought about is like I use mu4e for email and that uses a full-fledged index. And so having some method to kind of reach into these different systems and kind of be kind of like a front end for this. Another thing is maybe SQL database. You can create a candidate generator from a SQLite query and then... yeah... I've had tons of ideas of different things you could incorporate into the system. Slowly, they're being implemented. Just recently, I implemented an info file candidate generator. So it lists out all the info files, and then it creates a candidate for each of the info nodes. So it turns out, yeah, I mean, it works pretty, I mean, just as well as Google. So I'm up for my own testing. Let's see, you can search a buffer using ripgrep feeding in as standard in to the ripgrep process, can't you? Yep, yeah, you can definitely search a buffer that way. So, yeah, I mean, based off of I mean, if this, yeah, so one thing that came up is that the system wants, I mean, I wanted the system to be able to search a lot of different things. And so it came up that I had, you know, implementing, doing these search things, having an Elisp implementation, despite it being slow, would be necessary. So like anything that isn't represented as a file, Elisp, there's a mechanism in p-search to search for it. So, yeah, so having that redundancy kind of lets you get into the, you know, using kind of ripgrep for the big scale things. But then when you get to the individual file, you know, just going back to Elisp to kind of get the finer details seems to, you know, seems to end up working pretty well. Thank you all for listening. Yeah, sounds like we're about out of questions. Hi, Zac. I have a question or still a question. I just want to thank everybody one more time for their participation, especially you for speaking, Zac. I look forward to playing with p-search myself. Thank you. Yeah, there might be one last question. Is there someone? Yes, there is. I don't know if you can understand me, but thank you for making this lovely thing I feel inspired to try it out and I'm thinking about how to integrate it because it sounds modular and nicely thought out. One small question. Have you thought about project.el integration? And then I have a little bigger question about the interface.
[00:22:14.880] project.el integration
Yeah, project.el integration, it's used in a couple of ways. It's kind of used to kind of as like kind of like a default. This is the directory I want to search for the default p-search command. It does, yeah, it kind of goes off of project.el. If there is a project, it kind of says, okay, this, I want to search this project. And so it kind of, it used that as a default. So there's that. Because I use the project-grep or git-grep search a lot and maybe this is a better solution to the search and the interface you have right now for the search results.
[00:22:56.477] Q: How happy are you with the interface?
How happy are you with it and have you thought about improving or have you ideas for improvements? Yeah, well actually what you see in the demo in the video isn't... There's actually, there is an improvement in the current code. Basically, what it does is it scans there's the current default as it scans the entire file for all of the searches. It finds the window that that has the highest score. So it kind of goes through entire file and just says... And it kind of finds like the piece of the section of text that has the most matches with the terms that score the best. So it's, I mean, that section is pretty good. I mean, that, so yeah, that, that ends up working pretty well. So I mean, in terms of other UI stuff, there's, there's tons, there's tons more that could be done, like, especially like debug ability or like introspection. Like, so this, this result, like, for example, this result ranks really high. Maybe you don't know why though. It's like, because of this, this text query arrow, was it because of this criteria? I think there's some UI elements that could kind of help the user understand why results are scoring high or low. So that's definitely... And that makes a lot of sense to me. You know, a lot of it is demystifying, like understanding what you're learning better and not just finding the right thing. A lot of it is, you know, kind of exploring your data. I love that. Thanks. Okay. I'm not trying to hurry us through either by any stretch. I would be happy to see this be a conversation. I also want to be considerate of your time. And I also wanted to make a quick shout out to everybody who's been updating and helping us capture the questions and the comments and the etherpad. That's just a big help to the extent that people are jumping in there and you know, revising and extending and just doing the best job we can to capture all the thoughtful remarks. Yeah, thank you, Zac. I'm not too sure what to ask anymore, but yes, would love to try it out now. Yeah, I mean, definitely feel free to... any feedback, here's my mail, or issues... I mean I'm happy to get any any feedback. It's still in the early stages, so still kind of a lot of documentation that needs to be writing. There's a lot. There's a lot on the roadmap, but yeah, I mean, hopefully, I could even publish this to ELPA and have a nice manual so yeah hopefully yeah those come soon. Epic. That sounds great, yes. The ability to save your searches kind of reminds me of like the gptel package for the AI, where you can save searches, which makes it feel a lot more different. And yeah, we don't have something for that with search, but yeah, that's a whole different dynamic where it's like, okay, yeah, and makes it a unique tool that is, I guess would be unique to Emacs where you don't see that with like this AI package where the gptel is kind of unique because it's not just throw away. It's how did I get this? How did I search for it? And be an organic search, kind of like the orderless and vertico and... Yeah, that's a good, I mean, that brings me to another thing in that, so, I mean, you could easily... you could create bridges from p-search to these different other packages, like, for example, kind of a RAG search, like there's this RAG, there's this thing called a RAG workflow, which is kind of popular these days. It's like retrieval augmented generation. So, you do a search and then based off the search results you get, then you pass those into LLM. 
So, the cool thing is that like you could use p-search for the retrieval. And so you could even like, I mean, you could even ask an LM to come up with the search terms and then have it search. There's no programmatical interface now to do this exact workflow. But I mean, there's another kind of direction I'm starting to think about. So like you could have maybe a question answer kind of workflow where it does like an initial search for the terms and then you get the top results and then you can put that through maybe gptel or all these other different systems. So that's, and that seems like a promising thing. And then another thing is like,
[00:28:01.480] Saving a search
well, you mentioned the ability to save a search. One thing I've noticed kind of like with the DevOps workflows is, I'll write a CLI command that I do, or like a calculator command. Then I end up in the org mode document, write what I wrote, had the results in there, and then I'll go back to that. It's like, oh, this is why, this is that calculation I did and this is why I did it. I'll have run the same tool three different times to get three different answers, if it was like a calculator, for example.
[00:28:41.800] Workflows
But yeah, that's a very unique feature that isn't seen and will make me look at it and see about integrating it into my workflow. Yeah, I think you get on some interesting, you know, kind of what makes Emacs really unique there and how to... interesting kind of ways to exploit Emacs to learn in the problem. I'm seeing a number of ways you're getting at that. For example, if I think about like an automation workflow, and there's just a million we'll say, assumptions that are baked into a search product, so to speak, like represented by a Google search or Bing or what have you. And then as I unpack that and repack it from an Emacs workflow standpoint, thinking about, well, first of all, what is the yak I'm shaving? And then also, what does doing it right mean? How would I reuse this? How would I make the code accessible to others for their own purposes in a free software world kind of way? and all of the different sort of say like orthogonal headspacey kind of things, right? Emacs brings a lot to the table from a search standpoint because I'm going to want to think about. I'm going to want to think about where does the UI come in? Where might the user want to get involved interactively? Where might the user want to get involved declaratively with their configuration, perhaps based on the particular environment where this Emacs is running? And there's just a lot of what Emacs users think about that really applies. I'll use the word again, orthogonally across all my many workflows as an Emacs user. You know, the search is just such a big word. Yeah, that's actually, this exact point I was thinking about with this. It's like, I mean, it seems kind of obvious, like just like using grep or something, just like to get search counts, like, okay, you can just run the command, get the term counts and you could just run it through a relatively simple algorithm. to get your search score. So if it's this easy, though, why don't we see this in other... And the results are actually surprisingly good. So why don't we see this anywhere, really? And it occurred to me that just the amount of configuration... The amount of setup you have to do to get it right. It's above this threshold that you need something like Emacs to kind of get pushed through that configuration.
[00:31:27.857] Transient and configuration
So for example, that's why I rely heavily on transient to set up the system. 'Cause like, if you want to get good search results, you're going to have to configure a lot of stuff. I want this directory. I want this, I don't want this directory. I want these search terms, you know, there's a lot to set up. And in most programs, I mean, they don't have an easy way to, I mean, they'll often try and try to hide all this complexity. Like they say, okay, our users too, you know, we don't want to, you know, we don't wanna, you know, make our users, we don't wanna scare our users with like, complicated search engine configuration. So we're just going to do it all in the background and we're just not going to let the user even know that it's happening. I mean, that's the third time you've made me laugh out loud. Sorry for interrupting you, but yeah, you're just spot on there. You're some people's users. Am I right? like, you know, and also some people's workflows.
[00:32:25.391] Problem space
And, you know, another case where just like, if you're thinking about Emacs, you either have to pick a tunnel to dive into and be like, no, this is going to be right for my work, or your problem space is never ending in terms of discovering the ways other people are using Emacs and how that breaks your feature. and how that breaks your conceptualization of the problem space, right? Or you just have to get so narrowed down that can actually be hard to find people that are quite understand you, right? You get into the particular, well, it solves these three problems for me. Well, what are these three problems again? And this is a month to unpack. You have Emacs and I don't know, it's like you got a lot of, they all agree is like we're going to use elisp to set variables every emacs package is going to do that we're going to use elisp and have a search in place to put our documentation and like it does also eliminate a lot of confusion and gives a lot of expectations of what they want. One thing that I'm surprised I haven't seen elsewhere is you have the
[00:33:39.856] consult-omni
consult-omni package, which lets you query multiple web search engines simultaneously and put the results in one place, and then you use orderless on top of that.
[00:33:52.800] orderless
Why would you use orderless? Because that's what you configured, you know exactly what you want to use, and you get the same font and the same minibuffer; you reuse all that existing configuration because, well, you're an Emacs user, or you're a command-line user. You know how you want these applications to behave. You don't want them to reinvent the wheel 1,600 times in 1,600 different ways; you want them to use your minibuffer, your font, your et cetera, et cetera. But I haven't seen a site where I can search multiple websites at the same time in something like Emacs before. And it's like, yeah, with my sorting algorithm. Yeah, exactly. I mean, just setting the bar for configuration and setup: yeah, you have to have a list. Obviously it's not the most beginner-friendly, but it definitely widens the solution space you can bring to such problems. Oh my gosh, you used the word solution space. I love it. But on the flip side, it's like, why does Emacs get this consult-omni package? Or, let's see, you have elfeed-tube, which will put a flowing transcript on a YouTube video, or you've got your package. Why does Emacs get all these applications? I don't see applications like this as much outside of Emacs. So there's a way that it just makes things easier.
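Much of that reuse comes down to Emacs's shared completion machinery: once orderless is part of completion-styles, every completing-read-based command, whether built in or from a package like consult-omni, matches candidates the same way, in the same minibuffer, with the same faces. A typical setup, along the lines of what the orderless documentation suggests:

```elisp
;; Use orderless for completion everywhere, with a sensible fallback,
;; and keep file-name completion working with partial paths.
(use-package orderless
  :ensure t
  :custom
  (completion-styles '(orderless basic))
  (completion-category-overrides '((file (styles basic partial-completion)))))
```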
[00:35:46.268] User interface
It's because user interface is, you know, the "it's the economy, stupid" of technology, right? If you grab people by the UX, you can sell a million of any product that solves a problem that I didn't think technology could solve, or that I didn't think I had the patience to use technology to solve, which is a lot of times what it comes down to. And here exactly is the Emacs conundrum, right? How much time should I spend today updating my Emacs so that tomorrow I can just work more? And, you know, I love that little graph of the Emacs learning curve, where it becomes this concentric spiral, right? The Vim learning curve is like a ladder, and the nano learning curve is just a flat plane, or a ladder, a vertical ladder or a horizontal ladder. There we go. And the Emacs learning curve is this kind of straight-up line until it curves back on itself and eventually spirals. And the more you learn, the harder it is to learn the next thing. And are you really moving forward at all? Like, it just works for me. What a great analogy. And that's my answer, I think.

You know, it's because... The spiral is great, sorry. There are each of these weird little packages where, for some of us, it solves that one problem and lets us get back to work. And for others, it makes us go, gosh, now that makes me rethink a whole bunch of things. Like, I don't even know what you're talking about with some of your conceptualizations of UI. Maybe it comes from Visual Studio, and I've not used that or something. So for you, it's a perfectly normal UX paradigm that you lean on; for others, it's something occupying some screen space, and I don't know what the gadgets do when I open them up. They imply their own abstractions, let's say, logically against a programming language. Tree-sitter would be an example, right? If I'm not used to thinking in terms of an abstract syntax tree, some of the concepts just aren't as natural for me. If I'm used to Emacs at a more fundamental level, or the old modes, right, we're used to thinking in terms of progressing forward through some text, managing a stack of markers into the text. It's a different paradigm. The world changes. Emacs kind of supports it all. That's why all the apps are built there.

That's why, when you're talking about that spiral, what that hints at is that this is really just a different algorithm you're trading off, one that makes some things a lot easier and some things a lot harder. That's why I was bringing in those three packages: in some way they make these search terms, with reusable... let's see... saveable or interactive buffers, in a way that is bigger than I think it should be, especially in comparison to how many people use YouTube. I don't see very many YouTube apps that will show a rolling subtitle list you can click on to move up and down the video, even though YouTube's been around for years. Why does Emacs have a very good implementation that was duct-taped together? So before I let you respond to that, Zac, let me just say we're coming up on eating up a whole half hour of your lunchtime, and thank you for giving us that extra time.
But let me just say, if I could ask you to take up to another five minutes, then I'll try to kick us off here and make sure everybody does remember to eat. Yeah, so it looks like there's one other question. So
[00:40:04.120] Q: Do you think the Emacs being kinda slow will get in the way of being able to run a lot of scoring algorithms?
Yeah, do you think Emacs being kind of slow will get in the way of being able to run a lot of scoring algorithms? So this is actually a thought I had. Yeah, the code currently is kind of dumb in a lot of places; a lot of the time it does just go through all the files and compute some score for them. But I'm surprised that that part actually isn't that slow. It turns out, if you take, for example, the Emacs directory or the Emacs Git repository, or maybe another big Git repository, you can have an Elisp function enumerate those files and multiply some numbers, maybe multiply ten numbers together per file. And that isn't that slow. And that's the bulk of what Elisp has to do: just multiply these numbers. Obviously, if you have to resort to Elisp to search all the files, and you have 10,000 or 100,000 files, then yeah, Emacs will be slow to search manually, if you're not using ripgrep or any faster tool; and if you have millions of files, yeah, it will be slow.

But what I noticed, though, is, for example, let's say you want to search the Info files for Emacs: the Emacs Info file and the Elisp Info file. Those are two decently sized books, kind of like reference material on Emacs. Relying on Elisp to search both of those together is actually almost instant. I mean, it's really not slow. So I think that's another thing: scale. On kind of individual, human-level scales, I think Elisp can be good enough. If you're going on the scale of an enterprise, like all the Git repositories of an enterprise, then yeah, that scale might be too much. But on the scale of what most individuals have to deal with on a daily basis, I think it hopefully should be enough. And if not, there's always room for optimizations.
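To put a rough number on the "enumerate the files and multiply ten numbers each" claim, here is a small timing sketch. The repository path is a placeholder and the per-file scores are dummy values; the point is only that the pure-Elisp arithmetic over tens of thousands of files is cheap.

```elisp
;; Rough sketch: walk a large checkout and do ten multiplications per file,
;; timing just that loop with `benchmark-run'.
(require 'benchmark)

(let* ((repo "~/src/emacs")  ; placeholder path to any large repository
       (files (directory-files-recursively repo ".*")))
  (benchmark-run 1
    (dolist (file files)
      (let ((score 1.0))
        (dotimes (_ 10)
          (setq score (* score 0.9)))
        ;; a real ranker would record SCORE against FILE here
        (ignore file score)))))
;; => (ELAPSED-SECONDS GC-RUNS GC-SECONDS), typically a small fraction of a
;;    second for the multiplication loop itself.
```

The slow part in practice is producing the term counts in the first place, which is where faster external tools like ripgrep, mentioned above, come in.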
[00:43:08.640] Boundary conditions
Yeah, so I'll redirect you a little bit based on a couple of things I got into, or, if you want to be done, give me the high sign by all means and we can shut up shop. But I'm curious, what are your boundary conditions? What tends to cause you to write something more complicated, and what causes you to work around it with a more complex workflow, in Emacs terms? Like, where do you break out the big guns? Just thinking about search, which we talked about. Maybe that's too abstract a question, but just general usage. Search is an example where almost all of us have probably written something to go find something, right?

Yeah, I mean, this is a good question. I'm actually of the idea, at my work, for example, that I try to get rid of everything else. I mean, this is probably a typical Emacs user thing, but I think that having Emacs expand into whatever it can get into and whatever it can automate, any task, the more of that you can get coded up, the better. It is kind of like a meme: yeah, I have to configure my Emacs until the task is fun, and then I'll do it. But I actually think that for a normal software developer, if you have some spare time after you've done all your tasks and you invest that time in going through all the workflows and getting all of that into Emacs, then that acts as kind of a productivity multiplier. So I found that I don't really have those boundaries. Obviously there are things you can't do, like web-based things. That's a hard boundary, but that's more because, yeah, there's really not much to do about that: nobody's written a front-end engine, and too much of the forebrain is occupied with things that should happen on the "end user's infrastructure", so to speak.

So with like 40 seconds left, I was going to say a minute, but I guess, any final thoughts? Yeah, I mean, just thank you for listening, and thank you for putting this on. It's a really nice conference to have, and I'm glad things like this exist. So thank you. Yeah, it's you and the other folks on this call. Thank you so much, PlasmaStrike, and all the rest of you, for hopping on the BBB and having such an interesting discussion. It keeps it really fun for us as organizers. And thanks, everybody, for being here.

Questions or comments? Please e-mail zacromero@posteo.com
