Swanky Python: Interactive development for Python

Scott Zimmermann (he/him) - sczi@disroot.org

Format: 22-min talk; Q&A: ask questions via Etherpad/IRC; we'll e-mail the speaker and post answers on this wiki page after the conference
Etherpad: https://pad.emacsconf.org/2025-swanky
Status: TO_REVIEW_QA

Duration: 21:03 minutes

Description

Project repository: https://codeberg.org/sczi/swanky-python/

I'm working on a development environment for Python based on Emacs' SLIME mode for Common Lisp. In this talk I'll demonstrate some of its features, like an object inspector, interactive backtrace buffer, thread and async task viewer, and function tracer. I'll also discuss its implementation and limitations, along with future directions for the project.

This project aims to bring a Lisp- and Smalltalk-inspired style of development to Python. You get a faster feedback loop by developing inside a running Python process: you never need to restart your program and lose state after a change, and you can immediately inspect the results of the code you write. We can also provide more advanced tooling based on runtime introspection, since more information is available at runtime than to traditional tools based on static analysis of source code: chiefly, we have the actual values of variables rather than just their types.

About the speaker:

Python is eating the world. Emacs is eating my computing environment. I'm attempting to get them working together.

Transcript

Hello everyone, I'm Scott and I'll be talking about Swanky Python, which is a development environment for Python based on Emacs' Slime package. So what is that and why might you find it interesting? SLIME is the Superior Lisp Interaction Mode for Emacs. It's an Emacs package for developing Common Lisp, and it's a bit different from the way we develop most languages in that you're always connected to a running instance of your application, and you kind of build up your application, piece by piece, modifying one expression at a time without ever having to restart your application. So why might you want to develop this way? One advantage is that you can get a faster feedback loop. For some kinds of software, it doesn't make a big difference. Like, if you're developing a web backend where all state is stored externally in a database, then you can have a file watcher that just restarts the whole Python process whenever you make any edit, and you're not really losing anything, because all the state is stored outside the Python process in a database. So it works great. But for other kinds of software, like let's say you're developing an Emacs package or a video game, then it can be a real pain to restart the application and recreate the state it was in before just to test the effect of each edit you want to make. Another advantage is the runtime introspection you have available. So since you're always connected to a running instance of your application, you can inspect the values of variables, you can trace functions, and all sorts of other information to help you understand your application better. And lastly, it's just a lot of fun to develop this way, or at least I find it fun developing with SLIME, so I wrote a SLIME backend for Python so I could have more fun when I'm coding in Python. As for the name swanky-python, within SLIME, swank is the name of the Common Lisp backend that runs within your Common Lisp application and connects to Emacs. So I'm not too creative. 
swanky-python is just a swank implementation in Python. So let's see it in action. So we started up with M-x slime. And what that does is it starts a Python process, starts swanky-python within it, and connects to it from Emacs. And you can configure how exactly it runs Python. Or you can start swanky-python manually within a Python application running on a remote server, forward the port locally, and connect to it from Emacs remotely. Within the README, there's more documentation on other ways to start it. But just M-x slime is the basic way that works most of the time. So within the REPL, the first thing you'll notice is that REPL outputs are clickable buttons, what SLIME calls presentations. So you can do things like inspect them. And for each presentation, in the Python backend, it holds on to the reference to the object. So for an int, it's not too interesting, but let's do a more complex object like a file. Then we can inspect the file. We can describe it, which will bring up documentation on that class. We can use it in further expressions: like, if we copy it, it will use the actual Python object in this expression. We can assign it to a variable. SLIME uses presentations everywhere that a Python object would be displayed. So instead of just their string representation, when you have a backtrace on an exception, or within the inspector, or anywhere else really, anywhere that the string representation of an object would be displayed, it displays a presentation that you can go on to inspect, reuse, or send to the REPL, and so on. One useful utility function is pp, for print presentation. We haven't imported it yet. So when we get a NameError exception and SLIME sees that that name is available for import somewhere, it'll give us the option of importing it. Since it's available for import from multiple modules, it'll prompt us for which one we want to import it from. We want to import it from swanky-python, not from the standard library.
Then it will print a presentation of that object. Within the REPL, this is not really useful because all REPL outputs are already presentations. But I now use this whenever I would use print debugging: whenever I would insert print statements in my program to see what's going on, I have it print a presentation instead, because that way I can go back and inspect it later, copy it to the REPL and further manipulate it, and so on.
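The mechanism described above, where the backend holds a live reference for every REPL output, can be sketched in a few lines. This is an illustrative sketch, not swanky-python's actual code; the function names are made up.

```python
import itertools

# Hypothetical sketch of a presentation registry: each REPL result is
# stored under a fresh id, so Emacs can later ask the backend to
# inspect or reuse the live object, not just its printed string.
_ids = itertools.count(1)
_presentations = {}  # presentation id -> live Python object

def register_presentation(obj):
    """Store obj and return the id embedded in the clickable button."""
    pid = next(_ids)
    _presentations[pid] = obj
    return pid

def lookup_presentation(pid):
    """Resolve a presentation id back to the original object."""
    return _presentations[pid]
```

A pp-style helper would then just print the value and register it in one step, which is why the output stays inspectable long after it scrolled by.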
[00:05:16.600] Inspector
Next up, let's look at the inspector more. If we go back and inspect the file object, you can write custom inspector views for different kinds of objects. So far, I just have a couple: one for sequences, one for mappings, and one for every other kind of object. Like, if we inspect a mapping: there's a shortcut, inspect last result, which is what I normally use to open the inspector. Then we see the values, and each value in the inspector is a presentation that we can go on to inspect, and so on. Let's go back to inspecting the file object. Again, we can inspect each of the values, we can copy them back to the REPL and so on. It just displays all the attributes for the class and their values. We can configure what attributes we want to show. There's a transient menu where we can toggle if we want to show private attributes, dunder attributes, docstrings, and so on, or everything, which is a bit much to show by default. So we'll reset it to the default. In the future, I want to add graphical inspector views for different kinds of objects, and also support showing plots in both the inspector and the REPL, but that's future work I haven't started on yet.
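The attribute listing and the private/dunder toggles come down to plain runtime introspection. Here's a minimal sketch of the idea, not the project's actual inspector; `inspector_slots` is a made-up name.

```python
# Illustrative sketch (not swanky-python's actual inspector) of listing
# an object's attributes via runtime introspection, with toggles like
# the transient menu's: hide private and dunder names by default.
def inspector_slots(obj, show_private=False, show_dunder=False):
    rows = []
    for name in dir(obj):
        is_dunder = name.startswith("__") and name.endswith("__")
        if is_dunder and not show_dunder:
            continue
        if not is_dunder and name.startswith("_") and not show_private:
            continue
        try:
            value = getattr(obj, name)
        except Exception as err:  # some properties raise on access
            value = err
        rows.append((name, value))
    return rows
```

A real inspector would additionally wrap each value in a presentation so it can be inspected or copied back to the REPL.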
[00:06:47.720] Evaluating Python
Let's look at the different options for evaluating Python. So we can evaluate a whole file. We can evaluate just a class. We can evaluate just the method we're working on. We can evaluate a Python statement, and it will show the result in an overlay next to the cursor. We can select some code and just evaluate the highlighted region. We can sync the REPL to the active file. So now everything we evaluate in the REPL will be in the context of the eval_demo module. We can also set the module that the REPL is in. We can go back to main. But let's go back to the eval_demo module for now.
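"Syncing the REPL to a module" boils down to evaluating code in that module's namespace. A minimal sketch, using a throwaway module as a stand-in for the eval_demo module from the talk:

```python
import types

# Minimal sketch of module-scoped evaluation: compile and exec code in
# a module's namespace, so definitions land exactly where the file's
# own top-level code would put them. This eval_demo is a throwaway
# stand-in for the real module in the demo.
eval_demo = types.ModuleType("eval_demo")

def eval_in_module(source, module):
    code = compile(source, f"<repl:{module.__name__}>", "exec")
    exec(code, module.__dict__)

eval_in_module("x = 40 + 2", eval_demo)
```

Setting the REPL's module is then just a matter of choosing which namespace dictionary subsequent inputs are executed in.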
[00:07:43.680] Updating
One useful thing is when you update a class or a function, it updates old instances of that class or function. So right now, f.bar is foobar. But if we edit that class, it will actually edit the code for the old instance of that class. And that's provided by code I copied from IPython's autoreload extension. It helps when you're trying to develop in Python without having to restart the Python process whenever you make a change. Autoreload in Python is a big topic that I don't really have time to go into here, but right now it is more limited than what is done in Common Lisp. For example, if you have a dataclass in Python and you add a new field to it, it won't automatically update old instances of the dataclass with the new field. So there's more that needs to be done there, but I am, perhaps naively, optimistic: Python's runtime is quite dynamic and flexible, and I think autoreload can be fully implemented in Python. There's still work to be done, though, and it's a big topic to go into. Next up, let's look at the backtrace buffer. But as it is right now, autoreload is actually useful. I mostly develop in Python without having to restart the process and without running into issues from old state that hasn't been updated properly.
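The core trick behind this kind of hot updating, and part of what IPython's autoreload does, is that rebinding a function's __code__ changes the behavior of every existing reference, because all callers share the one function object. A toy demonstration:

```python
# Toy demonstration of autoreload-style updating: swap in a new code
# object so old references pick up the new behavior. IPython's
# autoreload extension does a much more thorough version of this for
# classes and methods.
def greet():
    return "old"

alias = greet  # an "old reference" held somewhere in the program

def _new_greet():
    return "new"

# Rebind the code object; alias and greet are the same object, so the
# old reference now runs the new code too.
greet.__code__ = _new_greet.__code__
```

The dataclass limitation mentioned above is exactly what this trick doesn't cover: swapping code objects updates behavior, but it doesn't migrate the instance data of objects created under the old definition.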
[00:09:22.900] Backtraces
So if we go on to look at the backtrace buffer, whenever we get an exception in Python... Let's go back to it. Whenever we get an exception, it will... let's change the code so that it actually gets an exception... we will get an interactive backtrace buffer where we can browse the source code for the different stack frames and the local variables within the stack frames, which are all presentations that we can inspect and so on. We can also open a REPL in the context of any stack frame. Or, when we go to the source for a given stack frame, we can select some Python code and evaluate it within the context of that stack frame. One major limitation compared to SLIME for Common Lisp is that in Common Lisp, you have the option to restart or resume execution from a given stack frame after an exception happens, whereas in Python, what we have right now is pretty much equivalent to the postmortem debugger. You can view the state the call stack was in at the time of the exception, but you can't actually resume execution, which you often might want to do, because when you're coding in a dynamic language, you're going to get runtime errors. So if you're writing a script that does some sort of long-running computation, or processes a ton of files and gets an exception parsing one file halfway through, normally you'd have to fix the script, then rerun it and have it process all the same files all over again, losing a bunch of time for every bug you run into and fix you have to make. So right now we've got a kind of mediocre workaround: you can add the restart decorator to a function. In the case of a script processing a bunch of files, you would add the restart decorator to the function that processes a single file: you'd add it to the function that represents the smallest unit of work that might fail with an exception. Then, when you get an exception, you can actually edit the function.
Like, if we edit it so it doesn't throw an error, then we can resume execution, and it will return from foo using the new version of baz, without having to run the script from the beginning again. So in the example of a script that processes a bunch of files, that would let you, as you run into files that cause an exception, fix your code to deal with them and resume execution without having to restart the script from the beginning. But this is obviously a pretty terrible hack, having to add the restart decorator to the function. I would like to be able to restart from any function without needing the decorator, as you can in Common Lisp, but I think that will require patching CPython and I really have no idea how to do that. So if you do know anything about CPython internals and are interested in helping, please reach out.
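As a standalone illustration of the idea (not swanky-python's implementation, which hands control to Emacs at this point), a restart decorator can be imagined as a retry loop around the smallest unit of work, with a recovery hook that runs before each retry. All the names here are made up.

```python
import functools

# Hypothetical re-imagining of a restart decorator: if the wrapped
# function raises, run a recovery hook (standing in for "edit the code
# in Emacs"), then retry, so a long batch job need not start over.
def restartable(on_error):
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            while True:
                try:
                    return fn(*args, **kwargs)
                except Exception as exc:
                    on_error(exc)  # fix things, then the loop retries
        return wrapper
    return decorate
```

In the file-processing example, the decorator would go on the per-file function, so only the failing file is retried while the rest of the run's progress is kept.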
[00:13:03.721] pydumpling
Another feature we have with the backtrace buffer is there's this library called PyDumpling which can serialize a traceback and store it to a file. So you can use PyDumpling with your applications running in production to serialize a traceback whenever they have an exception and save it to a file. Then you can transfer the file locally and load it into your local Emacs with slime-py-load-pydumpling. This will load the same backtrace buffer, and you see all the same local variables at the time of the exception. You can inspect them and get a REPL in the context of the stack frame. Well, this will only work for variables that can be serialized with pickle. Or actually, the library uses dill, which can serialize a bit more than pickle can. But yeah so this can help you inspect and debug errors for applications running in production remotely that you don't want to have SLIME connected to 24-7.
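The idea behind this can be sketched with the standard library alone: walk the traceback, keep each frame's serializable locals, and write them to a file to reload later. pyDumpling itself uses dill and stores much more; this sketch uses pickle, and the function names are made up.

```python
import pickle
import sys

# Stdlib sketch of the idea behind pyDumpling: on an exception, walk
# the traceback and save each frame's picklable locals to a file, so
# they can be reloaded and inspected on another machine later.
def dump_traceback(path):
    _, _, tb = sys.exc_info()
    frames = []
    while tb is not None:
        frame = tb.tb_frame
        safe_locals = {}
        for name, value in frame.f_locals.items():
            try:
                pickle.dumps(value)
            except Exception:
                continue  # skip values pickle can't serialize
            safe_locals[name] = value
        frames.append((frame.f_code.co_name, tb.tb_lineno, safe_locals))
        tb = tb.tb_next
    with open(path, "wb") as f:
        pickle.dump(frames, f)

def load_traceback(path):
    with open(path, "rb") as f:
        return pickle.load(f)
```

The unserializable-locals caveat from the talk falls out naturally here: anything that fails to pickle is simply absent when the dump is inspected later.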
[00:14:20.060] Documentation browser
Next up, let's look at the documentation browser. We can bring up documentation for any module, and all this information is generated from runtime introspection, from the doc strings for the module and the classes and so on. So you won't see documentation for libraries that you don't have actually loaded into your running Python process. Then you can go browse to classes. It'll show all the attributes, their methods, and so on. By each method to the right, it will show the base class where the method was originally inherited from. You can also bring up a screen with all the Python packages that are installed, and browse that with imenu, and bring up information on any package and so on.
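Annotating each method with the base class it originally comes from is a small MRO walk. A sketch of the idea; `defining_class` is a made-up name.

```python
# Sketch of how a documentation browser can find which class a method
# was originally defined in: walk the MRO and return the first class
# whose own __dict__ contains the name.
def defining_class(cls, name):
    for base in cls.__mro__:
        if name in vars(base):
            return base
    return None
```

For example, defining_class(bool, "bit_length") is int, since bool inherits bit_length from int; that's the kind of annotation shown next to each method in the browser.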
[00:15:20.360] Thread view
Next up, let's take a look at the thread view. So let's run this and then bring up the thread view and this will show information on all running threads. You can configure it to refresh after a given interval, like every second, but I don't have that set up right now, so I have to manually refresh it. Probably the most useful thing is that you can bring up a backtrace for any thread which won't pause the thread or anything, but will just give you the call stack at the time you requested the backtrace. You can again view the stack frames, local variables, open a REPL in the context of the thread, and so on. There's also a viewer for async tasks, but I'm not going to demo that right now, because for that to work, you have to start swanky-python after the async event loop has started, from within the same thread. If you go to the project readme, there's a demo of how to use the async task viewer with a fastapi project.
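The non-pausing per-thread backtrace is possible because CPython exposes a snapshot of every thread's current frame via sys._current_frames(). A standalone sketch of the idea, not the project's code:

```python
import sys
import threading
import traceback

# Standalone sketch of a non-pausing thread backtrace, built on
# CPython's sys._current_frames(): it snapshots every thread's current
# frame without stopping any thread.
def thread_backtraces():
    names = {t.ident: t.name for t in threading.enumerate()}
    out = {}
    for ident, frame in sys._current_frames().items():
        name = names.get(ident, str(ident))
        out[name] = "".join(traceback.format_stack(frame))
    return out
```

Calling this from the main thread also captures the main thread's own stack, so a real thread view would typically filter itself out of the listing.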
[00:16:27.440] Tracing functions
Next up, let's look at tracing functions. So here we got some random error, because this is still very much a work in progress. But it looks like it executed correctly this time. So now let's mark the fibonacci function for tracing and execute it. We can see, every time the function is called, all its arguments and return values. Again, these are presentations that we can inspect and so on. But let's inspect a more complex object, like a file object. If we trace the count_lines function and run that code, then we can inspect the file it was passed, or the file object. One pitfall is that in Python, objects are mutable. So in the trace buffer, the string representation that's printed is the string representation at the time it was passed to the function. But when we go to inspect it, we're inspecting the object as it is right now, which can be different from what it was at the time the function saw it. So for this file object, for example, it's closed now, when it was open at the time the function used it.
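A minimal tracer makes the mutability pitfall from the demo concrete: the log keeps live references, so inspecting an entry later shows the object as it is now, not as it was at call time. Illustrative only; the names are made up.

```python
import functools

# Illustrative tracer (not swanky-python's): record each call's name,
# arguments, and return value in a log. Because the log holds live
# references, a mutable argument inspected later shows its *current*
# state, not its state at call time.
trace_log = []

def traced(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        trace_log.append((fn.__name__, args, kwargs, result))
        return result
    return wrapper

@traced
def count_lines(lines):
    return len(lines)
```

This is why the trace buffer prints the string representation captured at call time alongside the presentation: the printed text is a historical record, while the presentation is the live object.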
[00:17:47.800] AI integrations
Next up, let's look at AI integrations. So if you're used to SLIME with Common Lisp, Emacs actually has a built-in AI that can help with the transition. So it's just a joke, I actually really like Python. And for more serious AI integrations, I have some ideas for the future but I haven't implemented anything yet. I think right now, people are mostly passing source code to LLMs but since we're embedded in the Python process at runtime, we have a lot of more information available, like maybe we can trace all calls to functions, and when we have a bug, we can feed the trace to the LLM, and the LLM can point out maybe when this function was called with these arguments, its return value doesn't make sense, so maybe that's the root cause of your bug. If you have any ideas of potential LLM or AI integrations, let me know. I'm happy to discuss.
[00:19:06.000] LSP-type features
Next up, let's look at standard LSP-type features. So we've got completions. It's fuzzy completions right now, so it's showing everything with a PR in the name. We can bring up documentation for each one. When we start calling a method in the minibuffer at the bottom it'll show the signature. There's some refactoring available. We can extract a function or variable, or rename something, like, let's rename fib to fib2, and it will rename all the uses of it. All these features are based on Jedi, which is the Python library used by IPython. But as it is right now, if you want the most complete Python development experience in Emacs, I'd probably recommend using LSP for everything LSP can do, and then just using swanky-python for the object inspector and backtrace buffer, and the interactive features it has that an LSP can't provide.
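swanky-python gets its signature help from Jedi, but the same information can also be recovered at runtime with the standard library. A stdlib-only sketch; `signature_hint` and this `fib` are made up for the example.

```python
import inspect

# Stdlib-only sketch of eldoc-style signature help: the one-line
# signature shown in the minibuffer can be computed at runtime with
# inspect.signature.
def signature_hint(fn):
    """Return a one-line signature string for fn."""
    return f"{fn.__name__}{inspect.signature(fn)}"

def fib(n: int, memo=None) -> int:
    ...
```

Runtime introspection like this only works for objects already loaded into the process, which is the same trade-off noted for the documentation browser.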
[00:20:18.032] Wrapping up
And that's it really. Shortly we'll have questions and answers as part of EmacsConf, and later on, if you have any questions, ideas, or issues, feel free to reach out over email or create an issue on the repository. I should probably warn you, if you want to try out the project: so far I'm probably the only user of it and I've only tested it on my own Emacs setup, so it's quite likely you'll run into issues trying to get it installed and working. But if you do run into problems, please reach out and let me know. I'm happy to help and try to fix them. So that's it. Thanks for listening.

Captioner: sachac

Questions or comments? Please e-mail sczi@disroot.org