00:00.000 Search in daily workflows
01:24.200 Problems with editor search tools
03:58.233 Information retrieval
04:34.296 Search engine in Emacs: the index
06:21.757 Search engine in Emacs: Ranking
06:43.553 tf-idf: term-frequency x inverse-document-frequency
07:41.160 BM25
08:41.200 Searching with p-search
10:41.457 Flight AF 447
16:06.771 Modifying priors
20:40.405 Importance
21:38.560 Complement or inverse
00:22.970 Q: Do you think a reduced version of this functionality could be integrated into isearch?
02:45.360 Q: Any idea how this would work with personal information like Zettelkastens?
04:22.041 Q: How good does the search work for synonyms especially if you use different languages?
05:15.092 Plurals
05:33.883 Different languages
06:40.200 Q: When searching by author I know authors may setup a new machine and not put the exact same information. Is this doing anything to combine those into one author?
08:50.720 Q: Have you thought about integrating results from using cosine similarity with a deep-learning based vector embedding?
10:01.940 Q: Is it possible to save/bookmark searches or search templates so they can be used again and again?
12:02.800 Q: You mentioned candidate generators. Could you explain what the score is assigned to?
16:32.302 Q: easy filtering with orderless - did this or something like this help or influence the design of p-search?
17:56.960 Q: Notmuch with the p-search UI
19:13.600 Info
22:14.880 project.el integration
22:56.477 Q: How happy are you with the interface?
25:50.280 gptel
28:01.480 Saving a search
28:41.800 Workflows
31:27.857 Transient and configuration
32:25.391 Problem space
33:39.856 consult-omni
33:52.800 orderless
35:46.268 User interface
40:04.120 Q: Do you think the Emacs being kinda slow will get in the way of being able to run a lot of scoring algorithms?
43:08.640 Boundary conditions
Search is an essential part of any digital work. Despite this
importance, most tools don't go beyond simple string/regex matching.
Oftentimes, a user knows more about what they're looking for: who
authored the file, how often it's modified, as well as search terms that
the user is only slightly confident exist.
p-search is a search engine designed to combine various kinds of prior
knowledge about the search target, presenting it to the user in a
systematic way. In this talk, I will present this package as well as go
over the fundamentals of information retrieval.
Details:
In this talk, I will go over p-search. p-search is a search engine
to assist users in finding things, with a focus on flexibility and
customizability.
The talk will begin by going over concepts from the field of information
retrieval such as indexing, querying, ranking, and evaluating. This
will provide the necessary background to describe the workings of
p-search.
Next, an overview of the p-search package and its features will be
given. p-search uses a probabilistic framework to rank documents
according to prior beliefs about the file being sought. For example, a
user might know for sure that the file contains a particular string,
might have a strong feeling that it should contain another word, and
might suspect that some other words may appear. The user may know the
file extension and the subdirectory, and may know that a particular
person works on this file a lot. p-search allows the user to express
all of these predicates at once, and ranks documents accordingly.
The talk will then progress to discuss assorted topics concerning the
project, such as design considerations and future directions.
The aim of the talk is to expand the listeners' understanding of search
as well as inspire creativity concerning the possibilities of search
tools.
Q: Do you think a reduced version of this functionality could be
integrated into isearch? Right now you can turn on various flags
when using isearch with M-s \<key>, like M-s SPC to match spaces
literally. Is it possible to add a flag to "search the buffer
semantically"? (Ditto with M-x occur, which is more similar to your
buffer-oriented results interface)
A: It's essentially a framework, so you would create a generator;
but it does not exist yet.
Q: Any idea how this would work with personal information like
Zettelkastens?
A: Usable as is, because all the files are in a directory. You
only have to set the directory to search in. You can then add
information to ignore some files (like daily notes).
Documentation is coming.
Q: How good does the search work for synonyms especially if you use
different languages?
A: There is an entire field of search concerned with normalizing
the input term (like plural -> singular transformation).
Currently p-search does not address this.
A: For different languages it gets complicated (vector search is
possible, but might be too slow in Elisp).
Q: When searching by author I know authors may setup a new machine
and not put the exact same information. Is this doing anything to
combine those into one author?
A: Currently using the git command. So if you know the emails
the author has used, you can add different priors.
Q: "rak" is a cool, more powerful grep written in Raku, and it may
have some good ideas for increasing the value of searches, for
example using Raku code while searching. Have you seen it?
A: I have to look into that. Tree-sitter AST would also be cool
to include to have a better search.
Q: Have you thought about integrating results from using cosine
similarity with a deep-learning based vector embedding? This will
let us search for "fruit" and get back results that have "apple"
or "grapes" in them -- that kind of thing. It will probably also
handle the case of terms that could be abbreviated/formatted
differently like in your initial example.
A: Goes back to semantic search. Probably can be implemented,
but also probably too slow. And it is hard to get the embeddings
and the system running on the machine.
Q: I missed the start of the talk, so apologies if this has been
covered - is it possible to save/bookmark searches or search
templates so they can be used again and again?
A: Exactly. I just recently added bookmarking capabilities, so
we can bookmark and rerun our searches from where we left off.
I tried to create a one-to-one mapping between the search
session and its data representation - there is a command to get
a custom plist representing the search - so the search can be
resumed where we left off, which can be used to create a
command to trigger a prior search.
Q: You mentioned candidate generators. Could you explain what the
score is assigned to? Is it to a line or whatever the candidate
generates? How does it work with rg in your demo?
FOLLOW-UP: How does the git scoring thingy hook into this?
A: A candidate generator produces documents. Documents have
properties (like an id and a path). From that you get
subproperties like the content of the document. Each candidate
generator knows how to search its kind of candidate (emails,
buffers, files, urls, ...). There is only the notion of score +
document.
Then another method is used to extract the lines that match in
the document (to show precisely the matching lines).
Q: Hearing about this makes me think about how nice the emergent
workflow with denote is, using easy filtering with orderless. It
is really easy to search for file tags, titles, etc. and do things
with them. Did this or something like this help or influence the
design of p-search?
A: You can search for whatever you want. Nothing is hardcoded
(files, directories, tags, titles...).
Q: [comments from IRC] \<NullNix> git covers the "multiple
names" thing itself: see .mailmap
\<NullNix> this is a git feature, p-search shouldn't need to
implement it
\<NullNix> To me this seems to have similarities to notmuch --
honestly I want notmuch with the p-search UI (of course,
notmuch uses a Xapian index, because repeatedly grepping all
traffic on huge mailing lists would be insane.)
\<NullNix> (notmuch also has bookmark-like things as a core
feature, but no real weighting like p-search does.)
A: I have not used notmuch, but many extensions are
possible. mu4e uses a full index for the search. This could
be adapted here too, with the SQL database as a source.
Q: You can search a buffer using ripgrep by feeding it in as stdin
to the ripgrep process, can't you?
A: Yes you can. But the aim is to search many different things
in Elisp. So there is a mechanism in p-search anyway to be able
to represent anything, including buffers. This is working pretty
well.
Q: Thanks for making this lovely thing, I'm looking forward to
trying it out. Seems modular and well thought out. Questions about
integration and about the interface.
A: project.el is used to search only in the local files of the
project (as done by default)
Q: how happy are you with the interface?
A: p-search goes over all the files trying to find the best
ones. Many features can be added, e.g., to improve debuggability
(is this highly ranked due to a bug? due to a high weight? many
matching documents?)
A: hopefully will be on ELPA at some point with proper
documentation.
Q: Remembering searches is not available everywhere (rg.el? But AI
packages like gptel already have it). Also useful for using the
document in the future.
A: Retrieval-augmented generation: p-search could be used for
the search, combining it with an AI to fine-tune the search with
a Q&A workflow. Although currently there is no API.
(gptel author here: I'm looking forward to seeing if I can use
gptel with p-search)
A: As the results are surprisingly good, why is that not used
anywhere else? But there is a lot of setup to get it right. You
need something like Emacs with a lot of configuration (transient
helps with that) without scaring the users.
Everyone uses Emacs differently, so it is unclear how people
will really use it. (PlasmaStrike) For example consult-omni
(elfeed-tube, ...) searches multiple webpages at the same
time, with orderless. However, no webpage offers this option.
Somehow those tools stay in Emacs only. (Corwin Brust) This is
the strength of Emacs: people invest a lot of time to improve
their workflow for tomorrow. [see xkcd on emacs learning curve
vs nano vs vim]
A: Emacs is not the most beginner friendly, but the solution
space is very large.
(Corwin Brust) Emacs supports all approaches and is extensible.
(PlasmaStrike) YouTube is much larger, but somehow does not have
this nice, sane interface.
Q: Do you think the Emacs being kinda slow will get in the way of
being able to run a lot of scoring algorithms?
A: The code currently is dumb in a lot of places (like going
over all files to calculate a score), but that is surprisingly
not that slow. Elisp enumerating all files and multiplying
numbers in the emacs repo isn't really slow. But if you have to
search in files, this will be slow without relying on ripgrep or
a faster tool. Take for example the search in info files / elisp
info files: the search in Elisp is almost instant. For
human-size documents, probably fast enough -- and if not, there
is room for optimizations. For company-size documents (like
repos), it could be too slow.
Q: When do you have to make something more complicated to scale
better?
A: I do not know yet, really. I try to automate tasks as much as
possible, like in the Emacs configuration meme: instead of doing
the work, I do the configuration. Usually I do not add web-based
things into Emacs.
Notes
I like the dedicated-buffer interface (I'm assuming using
magit-section and transient).
Very interesting ideas. I was very happy when I was able
to do simple filters with orderless, but this is great
I dunno about you, but I want to start using p-search
yesterday. (possibly integrating lsp-based tokens
somehow...)
Awesome job Ryota, thank you for sharing!
YouTube comment: that's novel and interesting. The shipwreck analogy was perfect too
Hello, my name is Zachary Romero, and today I'll be going over p-search, a local search engine in Emacs. Search these days is everywhere in software, from text editors, to IDEs, to most online websites. These tools tend to fall into one of two categories. One are tools that run locally, and work by matching string to text. The most common example of this is grep. In Emacs, there are a lot of extensions which provide functionality on top of these tools, such as projectile-grep, deadgrep, consult-ripgrep. Most editors have some sort of search current project feature. Most of the time, some of these tools have features like regular expressions, or you can specify file extension, or a directory you want to search in, but features are pretty limited. The other kind of search we use are usually hosted online, and they usually search a vast corpus of data. These are usually proprietary online services such as Google, GitHub, SourceGraph for code.
The kind of search feature that editors usually have have a lot of downsides to them. For one, a lot of times you don't know the exact search string you're searching for. Some complicated term like this high volume demand partner, you know, do you know if... Are some words abbreviated, is it capitalized, is it in kebab case, camel case, snake case? You often have to search all these variations. Another downside is that the search results returned contain a lot of noise. For example, you may get a lot of test files. If the tool hits your vendor directory, it may get a bunch of results from libraries you're using, which most are not helpful. Another downside is that the order given is, well, there's no meaning to the order. It's usually just the search order that the tool happens to look in first. Another thing is, so when you're searching, you oftentimes have to keep the state of the searches in your head. For example, you try one search, you see the results, find the results you think are relevant, keep them in your head, run search number two, look through the results, kind of combine these different search results in your head until you get an idea of which ones might be relevant. Another thing is that the search primitives are fairly limited. So yeah, you can search regular expressions, but you can't really define complex things like, I want to search files in this directory, and this directory, and this directory, except these subdirectories, and except test files, and I only want files with this file extension. Criteria like that are really hard to... I'm sure they're possible in tools like grep, but they're pretty hard to construct. And lastly, there's no notion of any relevance. All the results you get back, I mean, you don't know, is the search more relevant? Is it twice as relevant? Is it 100 times more relevant? These tools usually don't provide such information.
There's a field called information retrieval, and this deals with this exact problem. You have lots of data you're searching for. How do you construct a search query? How do you get results back fast? How do you rank which ones are most relevant? How do you evaluate your search system to see if it's getting better or worse? There's a lot of work, a lot of books written on the topic of information retrieval. If one wants to improve searching in Emacs, then drawing inspiration from this field is necessary.
The first aspect of information retrieval is the index. The reverse index is what search engines use to find results really fast. Essentially, it's a map of search term to locations where that term is located. You'll have all the terms or maybe even parts of the terms, and then you'll have all the locations where they're located. Any query could easily look up where things are located, join results together, and that's how they get the results to be really fast. For this project, I decided to forgo creating an index altogether. An index is pretty complicated to maintain because it always has to be in sync. Any time you open a file and save it, you would have to make sure that file is re-indexed properly. Then you have the whole issue of, well, if you're searching in Emacs, you have all these projects, this directory, that directory. Do you always have to keep them in sync? It's quite a hard task to handle that. Then on the other end, tools like ripgrep can search very fast. Even though they can't search maybe on the order of tens of thousands of repositories, for a local setting, they should be plenty fast enough. I benchmarked. Ripgrep, for example, is on the order of gigabytes per second. Definitely, it can search a few pretty big size repositories.
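As a rough illustration of the reverse index described above: it is just a map from term to the locations containing that term. p-search itself is written in Emacs Lisp and, as noted, skips the index entirely in favor of ripgrep; this Python sketch only makes the idea concrete (the documents and ids are invented).

```python
from collections import defaultdict

def build_inverted_index(documents):
    """Map each whitespace-delimited term to the set of document ids
    containing it. A toy sketch; real indexes also store positions,
    handle tokenization, stemming, etc."""
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

docs = {"a.c": "display code for frames",
        "b.c": "frame redisplay logic",
        "c.txt": "notes about display"}
index = build_inverted_index(docs)
# A query is now a fast lookup plus set operations, no file scanning:
print(sorted(index["display"]))  # ['a.c', 'c.txt']
```

Joining the posting sets of several terms (intersection, union) is what lets engines answer multi-term queries without touching the documents again.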
Next main task. We decided not to use an index. Next task is how do we rank search results? So there's two main algorithms that are used these days. The first one is tf-idf, which stands for term frequency, inverse document frequency. Then there's BM25, which is sort of a modified tf-idf algorithm.
[00:06:43.553]tf-idf: term-frequency x inverse-document-frequency
tf-idf, without going into too much detail, essentially multiplies two terms. One is the term frequency, and then you multiply it by the inverse document frequency. The term frequency is a measure of how often that search term occurs. The inverse document frequency is a measure of how much information that term provides. If the term occurs a lot, then it gets a higher score in the term frequency section. But if it's a common word that exists in a lot of documents, then its inverse document frequency goes down. It kind of scores it less. You'll find that words like the, in, is, these really common words, since they occur everywhere, their inverse document frequency is essentially zero. They don't really count towards a score. But when you have rare words that only occur in a few documents, they're weighted a lot more. So the more those rare words occur, they boost the score higher.
BM25 is a modification of this. It's essentially the previous one, except it dampens out terms that occur more often. Imagine you have a bunch of documents. One has a term 10 times, one has that same term a hundred times, another a thousand times. You'll see the score dampens off as the number of occurrences increases. That prevents any one term from overpowering the score. This is the algorithm I ended up choosing for my implementation. So with a plan of using a command line tool like ripgrep to get term occurrences, and then using a scoring algorithm like BM25 to rank the terms, we can combine this together and create a simple search mechanism.
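The dampening described above can be seen in the standard textbook form of the BM25 term score (this is a minimal Python sketch with the usual k1 and b parameters, not p-search's actual Elisp implementation; the document sizes below are made up):

```python
import math

def bm25_score(term_freq, doc_len, avg_doc_len, n_docs, n_docs_with_term,
               k1=1.2, b=0.75):
    """BM25 contribution of a single term to one document's score.
    term_freq: occurrences of the term in the document.
    n_docs_with_term: how many documents contain the term (drives IDF)."""
    idf = math.log((n_docs - n_docs_with_term + 0.5)
                   / (n_docs_with_term + 0.5) + 1)
    # Saturating term-frequency component: grows, but flattens out.
    tf = (term_freq * (k1 + 1)) / (
        term_freq + k1 * (1 - b + b * doc_len / avg_doc_len))
    return idf * tf

# Diminishing returns: 10 vs 100 vs 1000 occurrences in same-length docs.
for f in (10, 100, 1000):
    print(round(bm25_score(f, 500, 500, 10000, 50), 3))
```

Running this shows the score rising quickly from 10 to 100 occurrences but barely moving from 100 to 1000, which is exactly the "no single term overpowers the score" property the talk mentions.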
Here we're in the directory for the Emacs source code. Let's say we want to search for the display code. We run the p-search command, starting the search engine. It opens up. We notice it has three sections: the candidate generators, the priors, and the search results. The candidate generators generate the search space we're looking on. These are all composable and you can add as many as you want. So with this, it specifies that here we're searching on the file system and we're searching in this directory. We're using the ripgrep tool to search with, and we want to make sure that we're searching only on files committed to Git. Here we see the search results. Notice here is their final probability. Here, notice that they're all the same, and they're the same because we don't have any search criteria specified here. Suppose we want to search for display-related code. We add a query: display. So then it spins off the processes, gets the search term counts and calculates the new scores. Notice here that the results that come on top at first glance appear to be relevant to display. Remember, if we compare that to just running a ripgrep raw, notice here we're getting 53,000 results and it's pretty hard to go through these results and make sense of it. So that's p-search in a nutshell.
Next, I wanted to talk about the story of Flight 447. Flight 447, going from Rio de Janeiro to Paris, crashed somewhere in the Atlantic Ocean on June 1st, 2009, killing everyone on board. Four search attempts were made to find the wreckage. None of them were successful, except the finding of some debris and a dead body. It was decided that they really wanted to find the wreckage to retrieve data as to why the crash occurred. This occurred two years after the initial crash. With this next search attempt, they wanted to create a probability distribution of where the crash could be. The only piece of concrete data they had was a GPS signal from the ship at 210 containing the GPS location of the plane at 2.98 degrees north, 30.59 degrees west. That was the only data they had to go off of. So they drew a circle around that point with a radius of 40 nautical miles. They assumed that anything outside the circle would have been impossible for the ship to reach. This was the starting point for creating the probability distribution of where the wreckage occurred. Anything outside the circle, they assumed it was impossible to reach. The only other pieces of data were the four failed search attempts and then some of the debris found. One thing they did decide was to look at similar crashes where control was lost to analyze where the crashes landed, compared to where the loss of control started. This probability distribution, the circular normal distribution, was decided upon. Here you can see that the center has a lot higher chance of finding the wreckage. As you go away from the center, the probability of finding the wreckage decreases a lot. The next thing they looked at was, well, they noticed they had retrieved some dead bodies from the wreckage. So they thought that they could calculate the backward drift on that particular day to find where the crash might've occurred.
If they found bodies at a particular location, they can kind of work backwards from that in order to find where the initial crash occurred. So here you can see the probability distribution based off of the backward drift model. Here you see the darker colors have a higher probability of finding the location. So with all these pieces of data, so with that circular 40 nautical mile uniform distribution, with that circular normal distribution of comparing similar crashes, as well as with the backward drift, they were able to combine all three of these pieces in order to come up with a final prior distribution of where the wreckage occurred. So this is the final model they came upon. Here you can see it has that 40 nautical mile radius circle. It has that darker center, which indicates a higher probability because of the crash similarity. Then here you also see along this line has a slightly higher probability due to the backward drift distribution. So the next thing is, since they had performed searches, they decided to incorporate the data from those searches into their new distribution. Here you can see places where they searched initially. If you think about it, you can assume that, well, if you search for something, there's a good chance you'll find it, but not necessarily. Anywhere where they searched, the probability of finding it there is greatly reduced. It's not zero because obviously you can look for something and miss it, but it kind of reduces the probability that we would expect to find it in those already searched locations. This is the posterior distribution, or distribution after counting observations made. Here we can see kind of these cutouts of where the previous searches occurred. This is the final distribution they went off of to perform the subsequent search. In the end, the wreckage was found at a point close to the center here, thus validating this methodology.
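The update described here, where a searched area keeps a nonzero but greatly reduced probability, is Bayes' rule with a detection probability. A toy sketch (the three cells and the 0.9 detection chance are invented for illustration, not the actual search-planning model):

```python
def update_after_failed_search(prior, searched, p_detect=0.9):
    """Posterior over cells after an unsuccessful search.
    p_detect: chance of spotting the wreck if it is in a searched cell."""
    posterior = {}
    for cell, p in prior.items():
        # Likelihood of observing "not found": 1 if the cell wasn't
        # searched, (1 - p_detect) if it was searched and missed.
        likelihood = (1 - p_detect) if cell in searched else 1.0
        posterior[cell] = p * likelihood
    total = sum(posterior.values())
    return {cell: p / total for cell, p in posterior.items()}

prior = {"A": 0.5, "B": 0.3, "C": 0.2}
post = update_after_failed_search(prior, searched={"A"})
print(post)
```

After the failed search of cell A, probability mass shifts to the unsearched cells B and C, but A never drops to zero, matching the "you can look for something and miss it" point above.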
We can see the power of this Bayesian search methodology in the way that we could take information from all the sources we had. We could draw analogies to similar situations. We can quantify these, combine them into a model, and then also update our model according to each observation we make. I think there's a lot of similarities to be drawn with searching on a computer in the sense that when we search for something, there's oftentimes a story we kind of have as to what search terms exist, where we expect to find the file. For example, if you're implementing a new feature, you'll often have some search terms in mind that you think will be relevant. Some search terms, you might think they have a possibility of being relevant, but maybe you're not sure. There's some directories where you know that they're not relevant. There's other criteria like, well, you know that maybe somebody in particular worked on this code. What if you could incorporate that information? Like, I know this author, he's always working on this feature. What if I just give the files that this person works on a higher probability than ones he doesn't work on? Or maybe you think that this is a file that's committed to often. You think that maybe the number of commits it receives should change your probability of this file being relevant. That's where p-search comes in. Its aim is to be a framework in order to incorporate all these sorts of different prior information into your searching process. You're able to say things like, I want files authored by this user to be given higher probability. I want this author to be given a lower priority. I know this author never works on this code. If he has a commit, then lower its probability. Or you can specify specific paths, or you can specify multiple search terms, weighing different ones according to how you think those terms should be relevant. So with p-search, we're able to incorporate information from multiple sources.
Here, for example, we have a prior of type git author, and we're looking for all of the files that are committed to by Lars. So the more commits he has, the higher probability is given to that file. Suppose there's a feature I know he worked on, but I don't know the file or necessarily even key terms of it. Well, with this, I can incorporate that information. So let's search again. Let's add display. Let's see what responses we get back here. We can add as many of these criteria as we want. We can even specify that the title of the file name should be a certain type. Let's say we're only concerned about C files. We add that the file name should contain .c in it. With this, now we notice that all of the C files containing display authored by Lars should be given higher probability. We can continue to add these priors as we see fit. The workflow that I found helps when searching is that you'll add criteria, you'll see some good results come up and some bad results come up. So you'll often find a pattern in those bad results, like, oh, I don't want test files, or this directory isn't relevant, or something like that. Then you can update your prior distribution, adding its criteria, and then rerun it, and then it will get different probabilities for the files. So in the end, you'll have a list of results that's tailor-made to the thing you're searching for.
There's a couple of other features I want to go through. One thing is that for each of these priors, you can specify the importance. In other words, how important is this particular piece of information to your search? So here, everything is of importance medium. But let's say I really care about something having the word display in it. I'm going to change its importance. Instead of medium, I'll change its importance to high. What that does essentially is things that don't have display in it are given a much bigger penalty and things with the word display in it are rated much higher. With this, we're able to fine-tune the results that we get.
Another thing you can do is add the complement or the inverse of certain queries. Let's say you want to search for display, but you don't want it to contain the word frame. With the complement option on, when we create this search prior, now it's going to be searching for frame, but instead of increasing the search score, it's going to decrease it if it contains the word frame. So here, things related to frame are kind of deprioritized. We can also say that we really don't want the search to contain the word frame by increasing its importance. So with all these composable pieces, we can create kind of a search that's tailor-made to our needs. That concludes this talk. There's a lot more I could talk about with regards to research, so definitely follow the project if you're interested. Thanks for watching, and I hope you enjoy the rest of the conference.
Captioner: sachac
Q&A transcript (unedited)
...starting the recording here in the chat, and I see some questions already coming in. So thank you so much for your talk, Zac, and I'll step out of your way and let you field some of these questions. Sounds good. All right, so let's see. I'm going off of the question list.
[00:00:22.970]Q: Do you think a reduced version of this functionality could be integrated into isearch?
So the first one is about having a reduced version of the functionality integrated into isearch. So yeah, with the way things are set up, it is essentially a framework. So you can create a candidate. So just a review from the talk. So you have these candidate generators which generate search candidates. So you can have a file system candidate which generates these file documents, which have text content in them. In theory, you could have like a website candidate generator, and it could be like a web crawler. I mean, so there's a lot of different options. So one option, it's on my mind, and I hope to get to this soon, is to create a defun candidate generator. So basically it takes a file, splits it up into defuns, kind of like what isearch would do, and then uses the body of each of those as the content for the search session. So, I mean, essentially you could just start up a session, and there's programmatic ways to start these up too. So if such a candidate generator was created, you could easily, in just one command, get the defuns, create a search session with it, and then just go straight to your query. So, definitely, something just like this is in the works. And I guess another thing is the interface. The whole dedicated buffer is helpful for searching, but with this isearch case, there's currently not a way to have a reduced UI, where it's just like, OK, I have these function defuns for the current file. I just want them to pop up at the bottom so I can quickly go through them. So currently, I don't have that. But such a UI is definitely, yeah, thinking about how that could be done.
[00:02:45.360]Q: Any idea how this would work with personal information like Zettelkastens?
Alright, so yeah. So next question. Any idea how this will work with personal information like Zettelkasten? So this is, I mean, it's essentially usable as is with the Zettelkasten method. Basically, for example org-roam, and I think other ones like Denote, they put all these files in the directory, and so with the already existing file system candidate generator all you'd have to do is set that to be the directory of your Zettelkasten system and then it would just pick up all the files in there and then add those as search candidates. So you could easily just search whatever system you have. Based off of the way it's set up, if you had maybe your dailies you didn't want to search, it's just as easy to add a criterion saying, I don't want dailies to be searched. Like, just eliminate the things from the dailies subdirectory. And then there you go: you have your Zettelkasten search engine. And, I mean, I'm working on documentation to kind of set this up easily, but, you know, you could just create your simple command, just take in a text query, run it through the system, and then just get your search results right there. So yeah, definitely that is a use case that's on top of my mind.
[00:04:22.041]Q: How good does the search work for synonyms especially if you use different languages?
So the next one: how good does the search work for synonyms, especially if you use different languages? Okay, this is a good question, because with the way that BM25 works, it essentially just finds where terms occur and counts them up. This is something I couldn't get into; there's just too much on the topic of information retrieval to go into it here. But there's a whole field around this: given a search term, how do you know what you should actually search for? Popular industrial search engines have features where you can define synonyms and term replacements, so whenever the engine sees this term, it should treat it as that one. And it goes even further.
If someone searches for a plural string, how do you get the singular from that and search for it too? This is a huge topic that p-search currently doesn't address, but it's at the top of my mind. So that's one part.
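As a rough illustration of that kind of term expansion, here is a plain-Python sketch. This is not anything p-search does today; the singularization rules are deliberately naive (real engines use proper stemmers or lemmatizers), and the synonym table and function names are made up.

```python
def naive_singularize(term):
    """Very naive English plural-to-singular reduction, for illustration only."""
    if term.endswith("ies") and len(term) > 4:
        return term[:-3] + "y"      # "queries" -> "query"
    if term.endswith("es") and term[-3] in "sxz":
        return term[:-2]            # "boxes" -> "box"
    if term.endswith("s") and not term.endswith("ss"):
        return term[:-1]            # "terms" -> "term"
    return term

def expand_query(terms):
    """Expand each query term with its singular form and any configured synonyms."""
    synonyms = {"error": ["failure", "fault"]}   # hypothetical synonym table
    expanded = set()
    for t in terms:
        expanded.add(t)
        expanded.add(naive_singularize(t))
        expanded.update(synonyms.get(t, []))
    return expanded
```

The search engine would then count occurrences of every expanded term, not just the literal query string.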
The next part is about different languages. One thing that seems promising is vector search, and with the way p-search is set up, you could easily create a vector search prior, plug it into the system, and start using it. The only problem is the vector search functions: you have to compute something like cosine similarity, and if you have, say, 10,000 documents and you're writing Elisp to calculate the cosine similarity between the vectors, that's going to be very slow. And then the whole can of worms of indexing comes up: how do you do that, and is that going to be native Elisp? That's a whole other can of worms. So yeah, vector search seems promising, and then hopefully the more traditional synonym and stemming approaches for alternate terms could also be incorporated.
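For intuition, this is the per-document work a vector-search prior would have to do, sketched in plain Python. The embeddings themselves, and any indexing, are out of scope here, and nothing in this sketch is p-search code; it just shows why a linear scan over many documents gets expensive.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

def rank_by_similarity(query_vec, doc_vecs):
    """Rank document ids by similarity to the query embedding.

    This is O(documents x dimensions) per query, which is the cost that
    makes indexing (or native code) attractive at scale.
    """
    scored = [(doc_id, cosine_similarity(query_vec, vec))
              for doc_id, vec in doc_vecs.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

With 10,000 documents and a few hundred dimensions per embedding, that inner loop runs millions of multiplications per query, which is where a pure-Elisp version would struggle.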
[00:06:40.200]Q: When searching by author I know authors may setup a new machine and not put the exact same information. Is this doing anything to combine those into one author?
Okay, next one. When searching by author, I know authors may set up a new machine and not put the exact same information. Is this doing anything to combine those into one author? So for this one, it's not. The way the git author prior is currently set up, it just runs a git command to get all the git authors; you select one, and it uses that. But if you knew the two emails that a user might have used, the two usernames, you could just set up two priors: one for the old email, and another for the new email. That would be a way to cover both. And that's kind of a running theme throughout p-search: it's made to be very flexible and very Lego-block-ish. If something doesn't meet your needs, it's easy to put pieces in and create new components of the search engine. Let's see... a cool, powerful grep, "rak", that might have some good ideas; it records code while searching. Okay, that's interesting; I'll have to look into that tool. I haven't seen it, though I do keep my eyes out for these kinds of things. One thing I have seen that looked interesting was the AST-based, tree-sitter grep tools, where you can grep for a string in terms of the language itself. That's something I think would be cool to implement, because there's tree-sitter in Emacs, so it may be possible to do in Elisp; if not, there are those tree-sitter tools. So that's something I think would be cool to incorporate.
[00:08:50.720]Q: Have you thought about integrating results from using cosine similarity with a deep-learning based vector embedding?
Let's see. Have you thought about integrating results from using cosine similarity with a deep-learning-based vector embedding? Yeah, exactly; this goes back to the earlier topic. The whole semantic search with vector embeddings is something that would actually be fairly trivial to implement in p-search, but like I said, computing the cosine similarity in Elisp is probably too slow. And then there's the whole question of how you get the embeddings: how do you get the system running locally on your machine, if that's how you want to run it? That's another aspect I need to look into. Okay, so let's see.
[00:10:01.940]Q: Is it possible to save/bookmark searches or search templates so they can be used again and again?
Okay, next question. I'm sorry if this has been covered: is it possible to save/bookmark searches or search templates so they can be used again and again? Exactly. Just recently I added bookmarking capabilities, so you can bookmark whatever search session you have, and it's just a bookmark: you can reopen it and rerun that search from where you left off. So there's that. And then also, I tried to set this up so that there is a one-to-one mapping between a Lisp object and the search session. From every search session you make, there's a command to get a data representation of the search, which is just a plist. All you have to do is take that plist and call the function p-search-setup-buffer with that data, and that function should set up the session as you left it. So you can make custom search commands super easily: get the data representation of a search, find which pieces you want the user to provide (say, the search term), make that a parameter in the command's interactive code, and there you go, you have a command that runs that search right there. And there's a lot more that could be done. I'm also thinking about having groups of these things: maybe I always add these three criteria together, so I could make a preset out of them and make them easily addable. So yeah, I'm thinking about a lot of things like that.
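The Elisp reality is a plist handed to p-search-setup-buffer; here is a language-neutral sketch of the same pattern. All names in this sketch are hypothetical, not p-search's API; it only shows the shape of the idea: serialize a session to plain data, then rebuild it with one user-supplied parameter spliced in.

```python
def session_to_data(session):
    """Serialize a live search session into plain data (like p-search's plist)."""
    return {"candidate_generators": list(session["candidate_generators"]),
            "priors": list(session["priors"])}

def setup_session(data, query=None):
    """Rebuild a session from saved data.

    A custom command would read QUERY interactively and splice it into the
    saved template, leaving the template itself untouched.
    """
    session = {"candidate_generators": list(data["candidate_generators"]),
               "priors": list(data["priors"])}
    if query is not None:
        session["priors"].append(("text-query", query))
    return session
```

A saved template plus a fresh query string is then enough to recreate the whole search, which is what makes one-command custom searches cheap to define.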
[00:12:02.800]Q: You mentioned candidate generators. Could you explain what the score is assigned to?
Okay, so next question. You mentioned candidate generators. Could you explain what the score is assigned to? Is it a line, or whatever the candidate generates? How does that work in the demo? Okay, so actually, I had to rewrite p-search just to get this part right. The candidate generator generates documents, and documents have properties, the most notable being the content property. Essentially, when you create a file system candidate generator and give it a directory, the code goes into the directory, recursively goes through all the subdirectories, and generates a candidate for each file, which is just a simple list form saying: this is a file, and its file path is this. That's the document ID. And from that, you get all the different sub-properties: given that ID, you know how to get the content, and so on. The candidate generator is also the thing that knows how best to search its candidates for terms. For example, there's a buffer candidate generator, which adds all your buffers as search candidates. Obviously, you can't run ripgrep on buffers that don't have files attached. Or imagine, say, an internet search candidate generator, a web-crawler thing: it goes to a website, crawls all the links, and gets you web pages as candidates. You can't use ripgrep for that either. So every candidate generator knows how best to search for terms in the candidates it generates. The file system candidate generator will say: okay, I have a base directory.
So, if you ask the file system candidate generator how to get the terms, it knows it's set up to use ripgrep. It runs the ripgrep command, gets the counts, and stores those counts. At this point, the lines have nothing to do with it; there's no notion of lines at all. It's just a document ID with the number of times it matched, and that's all you need to run the BM25 algorithm. But when you get the top results, you obviously want to see the lines that matched, so there's another, separate mechanism to pull out the particular lines. And that can be done in Elisp, because of a design decision of p-search: it only displays maybe 10 or 20 results, not all of them. So Elisp can go all out highlighting things and picking the best pieces to show. So yeah, that's how that's set up.

Here's perhaps a good moment for me to jump in and comment that in a minute or so we will break away from the live stream, to make sure everybody goes and takes their lunch break. But if you would like to keep going in here, we'd love to take as many questions as you like, and of course we will include it all when we publish the Q&A. Sounds good. Yeah, I'll stick around on the stream as we cut away; we've got a little video surprise we've prepared to play, just some comments from an Emacs user dated 2020 or something like this, I forget the detail. Thank you again so much, Zac, for your fascinating talk.

Yeah, so, okay.
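A rough sketch of the mechanism described here, with illustrative names rather than p-search's actual API: the generator yields document IDs and per-document term counts (standing in for what ripgrep would report), and those counts alone are enough to drive Okapi BM25 scoring. Lines only matter later, at display time.

```python
import math

class FileSystemGenerator:
    """Illustrative candidate generator: yields document IDs and term counts.

    Stands in for p-search's file system generator, which walks a directory
    and shells out to ripgrep for the counts; a buffer or web-crawler
    generator would count terms its own way instead.
    """
    def __init__(self, files):
        self.files = files  # path -> content, standing in for a directory walk

    def documents(self):
        return [("file", path) for path in self.files]  # doc id: ("file", PATH)

    def term_counts(self, term):
        """How many times TERM occurs in each document."""
        return {("file", p): text.lower().count(term.lower())
                for p, text in self.files.items()}

def bm25(query_terms, generator, k1=1.2, b=0.75):
    """Score every document with Okapi BM25, using only term counts."""
    docs = generator.documents()
    n = len(docs)
    lengths = {d: len(generator.files[d[1]].split()) for d in docs}
    avg_len = sum(lengths.values()) / n
    scores = {d: 0.0 for d in docs}
    for term in query_terms:
        counts = generator.term_counts(term)
        df = sum(1 for c in counts.values() if c > 0)     # document frequency
        idf = math.log((n - df + 0.5) / (df + 0.5) + 1)   # rarer terms weigh more
        for d, tf in counts.items():
            if tf:
                # Saturating term frequency, normalized by document length.
                scores[d] += idf * (tf * (k1 + 1)) / (
                    tf + k1 * (1 - b + b * lengths[d] / avg_len))
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

Notice that nothing in the scoring path ever touches individual lines; extracting the best-matching lines of the top 10 or 20 results is a separate, cheaper step.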
[00:16:32.302]Q: easy filtering with orderless - did this or something like this help or influence the design of p-search?
This makes me really think about the emergent workflows with Denote and easy filtering with orderless. Did this or something like this help influence the design of p-search? Yeah, exactly. There are just so many different searches; it's kind of mind-boggling. You could search for whatever you want on your computer. There's so much that you can't hard-code any of these things; it's all malleable. Maybe somebody wants to search these particular directories. So yeah, exactly that use case, having a directory of files containing your personal knowledge management system, was definitely at the top of my mind. Let's see... so, Git covers the multiple-names thing itself.
Okay, yeah, so something about Notmuch with the p-search UI. Interestingly, I haven't used Notmuch myself, but it's an email program. These kinds of extensions could go on forever, but one thing I thought about is that I use mu4e for email, and that uses a full-fledged index. So there could be some method to reach into these different systems and be kind of a front end for them. Another thing is maybe a SQL database: you could create a candidate generator from a SQLite query, and then... yeah. I've had tons of ideas for different things you could incorporate into the system. Slowly, they're being implemented. Just recently, I implemented
an info file candidate generator. It lists out all the info files and creates a candidate for each of the info nodes. And it turns out it works pretty well; in my own testing, just as well as Google. Let's see: you can search a buffer using ripgrep by feeding it in as standard input to the ripgrep process, can't you? Yep, you can definitely search a buffer that way. So, one thing that came up is that I wanted the system to be able to search a lot of different things, and so it became clear that having an Elisp implementation of these search operations, despite it being slow, would be necessary. For anything that isn't represented as a file, there's an Elisp mechanism in p-search to search it. Having that redundancy lets you use ripgrep for the big-scale things, but then when you get down to an individual file, going back to Elisp to get the finer details ends up working pretty well.

Thank you all for listening. Yeah, sounds like we're about out of questions. I just want to thank everybody one more time for their participation, especially you for speaking, Zac. I look forward to playing with p-search myself. Thank you. Yeah, there might be one last question. Is there someone? Yes, there is. I don't know if you can understand me, but thank you for making this lovely thing. I feel inspired to try it out, and I'm thinking about how to integrate it, because it sounds modular and nicely thought out. One small question: have you thought about project.el integration? And then I have a slightly bigger question about the interface.
Yeah, project.el integration is used in a couple of ways. It's mostly used as a default: this is the directory to search for the default p-search command. It goes off of project.el; if there is a project, it says, okay, I want to search this project, and uses that as the default. So there's that. I ask because I use project-grep or git-grep a lot, and maybe this is a better solution for searching. Which brings me to the interface you have right now for the search results.
How happy are you with it, and have you thought about improving it, or do you have ideas for improvements? Yeah, well, what you see in the demo in the video isn't current; there's an improvement in the current code. Basically, by default it now scans the entire file for all of the searches and finds the window that has the highest score: it goes through the entire file and finds the section of text that has the most matches with the terms, the one that scores best. And that ends up working pretty well. In terms of other UI stuff, there's tons more that could be done, especially around debuggability and introspection. For example, this result ranks really high, but maybe you don't know why. Was it because of this text query, or because of this criterion? I think there are some UI elements that could help the user understand why results score high or low. So that's definitely... And that makes a lot of sense to me. A lot of it is demystifying, understanding what you're learning better, not just finding the right thing. A lot of it is exploring your data. I love that. Thanks. Okay, I'm not trying to hurry us through either, by any stretch; I'd be happy to see this be a conversation, but I also want to be considerate of your time. And I also wanted to make a quick shout-out to everybody who's been helping us capture the questions and comments in the Etherpad. That's a big help, to the extent that people are jumping in there, revising and extending, doing the best job we can to capture all the thoughtful remarks. Yeah, thank you, Zac. I'm not too sure what to ask anymore, but yes, I would love to try it out now.
Yeah, definitely feel free to send any feedback: here's my email, or open issues. I'm happy to get any feedback. It's still in the early stages, so there's a lot of documentation that needs to be written, and a lot on the roadmap, but hopefully I can publish this to ELPA and have a nice manual, so hopefully those come soon. Epic. That sounds great, yes.
The ability to save your searches kind of reminds me of the gptel package for AI, where you can save your sessions, which makes it feel a lot different. We don't really have something like that for search, and it makes it a unique tool, I guess unique to Emacs. gptel is unique because the result isn't just thrown away; it's how did I get this, how did I search for it, an organic search, kind of like orderless and vertico and... Yeah, that brings me to another thing: you could easily create bridges from p-search to these other packages. For example, there's this thing called a RAG workflow, retrieval-augmented generation, which is popular these days: you do a search, and based on the search results you get, you pass those into an LLM. The cool thing is that you could use p-search for the retrieval. You could even ask an LLM to come up with the search terms and then have it search. There's no programmatic interface right now to do this exact workflow, but it's another direction I'm starting to think about. You could maybe have a question-answering workflow, where it does an initial search for the terms, you get the top results, and then you put those through gptel or these other systems. That seems like a promising thing. And then another thing is...
well, you mentioned the ability to save a search. One thing I've noticed with DevOps workflows is that I'll run a CLI command, or something like a calculator command, then end up in an Org mode document, write down what I ran, put the results in there, and later go back to it: oh, this is the calculation I did, and this is why I did it. Otherwise I'll have run the same tool three different times and gotten three different answers, if it was a calculator, for example.
But yeah, that's a very unique feature that isn't seen elsewhere, and it will make me look at it and see about integrating it into my workflow. Yeah, I think you're getting at some of what makes Emacs really unique there, and at interesting ways to exploit Emacs to learn about the problem. I'm seeing a number of ways you're getting at that. For example, if I think about an automation workflow, there are a million, we'll say, assumptions baked into a search product, so to speak, like a Google search or Bing or what have you. As I unpack that and repack it from an Emacs workflow standpoint, I'm thinking about: first of all, what is the yak I'm shaving? And then also, what does doing it right mean? How would I reuse this? How would I make the code accessible to others for their own purposes, in a free-software-world kind of way? And all the different, let's say, orthogonal headspacey kinds of things. Emacs brings a lot to the table from a search standpoint, because I'm going to want to think about where the UI comes in. Where might the user want to get involved interactively? Where might the user want to get involved declaratively, with their configuration, perhaps based on the particular environment where this Emacs is running? There's just a lot that Emacs users think about that really applies, I'll use the word again, orthogonally across all my many workflows as an Emacs user. Search is just such a big word. Yeah, this exact point is something I was thinking about. It seems kind of obvious: just use grep or something to get search counts. You can just run the command, get the term counts, and run them through a relatively simple algorithm to get your search score. And the results are actually surprisingly good.
So why don't we see this anywhere, really? It occurred to me that the amount of configuration, the amount of setup you have to do to get it right, is above the threshold where you need something like Emacs to push you through that configuration.
So, for example, that's why I rely heavily on transient to set up the system. Because if you want to get good search results, you're going to have to configure a lot of stuff: I want this directory, I don't want this directory, I want these search terms. There's a lot to set up, and most programs don't have an easy way to do that. They'll often try to hide all this complexity: okay, we don't want to scare our users with complicated search engine configuration, so we're just going to do it all in the background and not even let the user know it's happening. I mean, that's the third time you've made me laugh out loud. Sorry for interrupting you, but you're just spot on there. You're some people's users, am I right? And also some people's workflows.
And, you know, it's another case where, if you're thinking about Emacs, you either have to pick a tunnel to dive into and say, no, this is going to be right for my work, or your problem space is never-ending in terms of discovering the ways other people are using Emacs, how that breaks your feature, and how that breaks your conceptualization of the problem space, right? Or you have to get so narrowed down that it can actually be hard to find people who quite understand you. You get into the particular: well, it solves these three problems for me. Well, what are these three problems again? And that's a month to unpack. With Emacs, at least, everybody agrees on some things: we're going to use Elisp to set variables, every Emacs package is going to do that; we're going to use Elisp and have a standard place to put our documentation. That eliminates a lot of confusion and sets a lot of expectations. One thing I'm surprised I haven't seen elsewhere: you have the
consult-omni package, which allows you to query multiple web search engines simultaneously and put the results in one place. And then you use orderless.
Why would you use orderless? Because that's what you configured, and you know exactly what you want to use, and you use the same font and the same minibuffer and all that existing configuration, because, well, you're an Emacs user, or a command-line user. You know how you want these applications to behave. You don't want the wheel reinvented 1,600 times in 1,600 different ways; you want it to use your minibuffer, your font, et cetera, et cetera. But I haven't seen a website where I can search multiple websites at the same time, in something like Emacs, before. And with my own sorting algorithm, too. Yeah, exactly. Setting the bar for configuration and setup at "you have to write some Lisp"... obviously it's not the most beginner-friendly, but it definitely widens the solution space you can apply to such problems. Oh my gosh, you used the words "solution space". I love it. But on the flip side: why does Emacs get this consult-omni package? Or, let's see, you have elfeed-tube, which puts a flowing transcript on a YouTube video, or you've got your package. Why does it get all these applications? I don't see applications like this as much outside of Emacs. So there's a way that it just makes it easier.
It's because user interface is, you know, "it's the economy, stupid" of technology, right? If you grab people by the UX, you can sell a million of any product that solves a problem I didn't think technology could solve, or that I didn't think I had the patience to use technology to solve, which is a lot of times what it comes down to. And here exactly is the Emacs conundrum, right? How much time should I spend today updating my Emacs so that tomorrow I can just work more? I love that little graph of the Emacs learning curve, where it becomes this concentric spiral. The Vim learning curve is like a ladder, and the nano learning curve is just a flat plane, a horizontal ladder. The Emacs learning curve is this kind of straight-up line until it curves back on itself and eventually spirals, and the more you learn, the harder it is to learn the next thing. And are you really moving forward at all? It just works for me. What a great analogy, and that's my answer, I think. The spiral is great; sorry. Each of these weird little packages, for some of us, solves that one problem and lets us get back to work. And for others, it makes us go, gosh, now that makes me rethink a whole bunch of things. Like, I don't even know what you're talking about with some of your conceptualizations of UI. Maybe it comes from Visual Studio, and I've not used that, or something. So for you it's a perfectly normal UX paradigm that you lean on; for others, it's occupying some screen space, and I don't know what the gadgets do, and when I open them up... They imply their own abstractions, let's say, logically against a programming language. This would be tree-sitter, right?
If I'm not used to thinking in terms of an abstract syntax tree, some of the concepts just aren't as natural for me. If I'm used to Emacs at a more fundamental level, the old modes, right, we're used to thinking in terms of progressing forward through some text, managing a stack of markers into the text. It's a different paradigm. The world changes, and Emacs kind of supports it all. That's why all the apps are built there. That's why, when you're talking about that spiral, what it hints at is that this is really a different algorithm you're bringing in, one that makes some things a lot easier and some things a lot harder. That's why I was bringing in those three packages, because in some way they're making these search terms reusable, saveable, interactive buffers, in a way that's bigger than I'd expect, especially in comparison to how many people use YouTube. I don't see very many YouTube apps that will show a rolling subtitle list you can click on to move up and down the video, even though YouTube's been around for years. Why does Emacs have a very good implementation that was duct-taped together? So before I let you respond to that, Zac, let me just say we're coming up on eating up a whole half hour of your lunchtime, and thank you for giving us that extra time. But let me say: if I could ask you to take up to another five minutes, then I'll try to kick us off here and make sure everybody does remember to eat. Yeah, so it looks like there's one other question.
[00:40:04.120]Q: Do you think the Emacs being kinda slow will get in the way of being able to run a lot of scoring algorithms?
Yeah: do you think Emacs being kind of slow will get in the way of being able to run a lot of scoring algorithms? So this is actually a thought I had. The code currently is kind of dumb in a lot of places; a lot of the time it just goes through all the files and computes a score for each of them. But I'm surprised that that part actually isn't that slow. It turns out that if you take, for example, the Emacs Git repository, or another big Git repository, an Elisp function can enumerate those files and multiply some numbers, maybe ten numbers, together for each one, and that isn't that slow. And that's the bulk of what Elisp has to do: just multiply these numbers. Obviously, if you have to resort to Elisp to search all the files, and you have 10,000 or 100,000 files, then yes, Emacs will be slow to search manually, if you're not using ripgrep or a faster tool; with millions of files, it will be slow. But what I noticed is that if you want to search, say, the Emacs info file and the Elisp info file, those are two decently sized books, reference material on Emacs, and relying on Elisp to search both of them together is actually almost instant; it's not noticeably slow. So I think another factor is scale. On individual, human-level scales, I think Elisp can be good enough. If you're going to the scale of an enterprise, all the Git repositories of an enterprise, then yeah, that scale might be too much.
But on the scale of what most individuals have to deal with on a daily basis, I think it hopefully should be enough. And if not, there's always room for optimizations. Yeah, so I'll redirect you a little bit, based on a couple of things I got into. Or, if you want to be done, give me the high sign by all means, and we can shut up shop. But I'm curious: what
are your boundary conditions? What tends to cause you to write something more complicated, and what causes you to work around it with a more complex workflow in Emacs terms? Where do you break out the big guns? Maybe that's too abstract a question, but just general usage. Search is an example where almost all of us have probably written something to go find something, right? Yeah, this is a good question. At my work, for example, I mean, this is probably a typical Emacs user thing, but I think that having Emacs expand into whatever it can get into, whatever it can automate, any task, the more of that you can get coded, the better. It is kind of a meme: yeah, I have to configure my Emacs until it's fun, and then I'll do it. But I actually think that for a normal software developer, if you have some spare time after you've done all your tasks and you invest that time in getting all your workflows into Emacs, that acts as kind of a productivity multiplier. So I've found I don't really have those boundaries. Obviously there are things you can't do, like web-based things; that's a hard boundary, but there's really not much to do about that. Nobody's written a front-end engine, and too much of the front end is occupied with things that should happen on the "end user's infrastructure", so to speak. So, with about 40 seconds left, I was going to say a minute, but, any final thoughts? Yeah, just thank you for listening, and thank you for putting this on.
It's a really nice conference to have, and I'm glad things like this exist. So thank you. Yeah, it's you and the other folks on this call. Thank you so much, PlasmaStrike, and all the rest of you, for hopping on the BBB and having such an interesting discussion. It keeps it really fun for us as organizers. And thanks, everybody, for being here.