The Livebook Blog


announcements
3 months ago

Announcing Bumblebee: GPT2, Stable Diffusion, and more in Elixir

We are glad to announce that a variety of Neural Network models are now available to the Elixir community via the Bumblebee project.

We have implemented several models, from GPT2 to Stable Diffusion, in pure Elixir, and you can download the trained parameters for those models directly from Hugging Face.

To run your first Machine Learning model in Elixir, all you need is three clicks, thanks to our integration between Livebook and Bumblebee. You can also easily embed and run those models within existing Elixir projects.

Watch the video by José Valim covering all of these topics and features:

Running Machine Learning models with Livebook

Here are some examples of what it looks like to use Livebook to run Machine Learning models:

Text to image

Image classification

Text classification

Text generation

Incorporating models into any Elixir project

Thanks to the new Nx.Serving functionality, you should be able to incorporate those models into any existing project with little effort and run them at scale.

You should be able to embed and serve these models as part of your existing Phoenix web applications, integrate them into data processing pipelines with Broadway, and deploy them alongside Nerves embedded systems - without needing 3rd-party dependencies.

Here’s an example of using Bumblebee to run an image classification model inside a Phoenix app:
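The exact wiring depends on your application, but here is a minimal sketch of the idea: load a model, wrap it in an Nx.Serving, put the serving under your supervision tree, and call it from a controller or LiveView. (The checkpoint, process name, and batch timeout below are illustrative assumptions, not the exact code from the original example.)

# In your application's start/2 - a sketch assuming the "microsoft/resnet-50" checkpoint
{:ok, resnet} = Bumblebee.load_model({:hf, "microsoft/resnet-50"})
{:ok, featurizer} = Bumblebee.load_featurizer({:hf, "microsoft/resnet-50"})

serving = Bumblebee.Vision.image_classification(resnet, featurizer)

children = [
  # Illustrative name; the serving batches concurrent requests for efficiency
  {Nx.Serving, serving: serving, name: MyApp.ImageClassifier, batch_timeout: 100}
]

# Later, anywhere in the app (e.g. a controller or LiveView),
# run the model on a decoded image tensor:
# Nx.Serving.batched_run(MyApp.ImageClassifier, image_tensor)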

Integration with Hugging Face

Bumblebee allows you to download and use trained models directly from Hugging Face. Let’s see an example of how we can generate text continuation using the popular GPT-2 model:

{:ok, gpt2} = Bumblebee.load_model({:hf, "gpt2"})
{:ok, tokenizer} = Bumblebee.load_tokenizer({:hf, "gpt2"})

serving = Bumblebee.Text.generation(gpt2, tokenizer, max_new_tokens: 10)

text = "Yesterday, I was reading a book and"
Nx.Serving.run(serving, text)

#=> %{
#=>    results: [
#=>        %{
#=>            text: "Yesterday I was reading a book and I was thinking, 'What's going on?'"
#=>        }
#=>    ]
#=> }

As you can see, we just load the model data and use a high-level function designed for the text generation task. That’s it!

Compiling to CPU/GPU

All of our Neural Networks can be compiled to run on the CPU or GPU, thanks to projects such as EXLA (based on Google XLA) and Torchx (based on LibTorch).
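For example, in a notebook you can pull in EXLA and make it the default backend, so tensor operations (and therefore Bumblebee models) run through the XLA compiler, on the GPU when one is available. A minimal sketch (the package versions are illustrative):

Mix.install([
  {:bumblebee, "~> 0.1"},
  {:exla, "~> 0.4"}
])

# Route all Nx computations through EXLA (CPU by default, GPU if configured)
Nx.global_default_backend(EXLA.Backend)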

A massive milestone in our Numerical Elixir effort

This release is a huge milestone in our Numerical Elixir effort, which started almost two years ago. These new features, in particular, are possible thanks to the enormous efforts of José Valim, Jonatan Kłosko, Sean Moriarity, and Paulo Valente. We are also thankful to Hugging Face for enabling collaborative Machine Learning across communities and tools, which played an essential role in bringing the Elixir ecosystem up to speed.

Next, we plan to focus on training and transfer learning of Neural Networks in Elixir, allowing developers to augment and specialize pre-trained models according to the needs of their businesses and applications. We also hope to publish more on our progress in developing traditional Machine Learning algorithms.

Your turn

If you want to give Bumblebee a try, you can:

  • Download Livebook v0.8 and automatically generate “Neural Networks tasks” from the “+ Smart” cell menu inside your notebooks.
  • We have also written single-file Phoenix applications as examples of Bumblebee models inside Phoenix (+ LiveView) apps. Those should provide the necessary building blocks to integrate Bumblebee into your production app.
  • For a more hands-on approach, read some of our notebooks.

If you want to help us build the Machine Learning ecosystem for Elixir, check out the projects above, and give them a try. There are many exciting areas, from compiler development to model building. For instance, pull requests that bring more models and architectures to Bumblebee are welcome. The future is concurrent, distributed, and fun!

releases
5 months ago

What's new in Livebook 0.7

Livebook v0.7 is out! This is a major release coming with significant features in the following areas:

  • secret management
  • visual representations of the running system (supervision trees, inter-process messaging, and more)
  • an interactive user interface to visualize and edit Elixir pipelines

Let’s take a look at each of those.

We also created a video showing each one of those features:

Secret management

We know that putting sensitive data in your code, like passwords and API keys, is a security risk. With this new v0.7 release, Livebook has an integrated way to help you deal with that. We’re calling it Secrets. Let’s see an example.

Let’s say you’re writing a notebook that consumes data from an API that requires authentication:

api_username = "postman"
api_password = "password"

Req.get!("https://postman-echo.com/basic-auth", auth: {api_username, api_password})

This piece of code hardcodes the API username and password, which is exactly what you want to avoid. You can fix that by creating two Livebook secrets, one for the API username and one for the API password:

Now, you can refactor your code to get the username and password values from those secrets by using System.fetch_env!/1:

api_username = System.fetch_env!("LB_API_USERNAME")
api_password = System.fetch_env!("LB_API_PASSWORD")

Req.get!("https://postman-echo.com/basic-auth", auth: {api_username, api_password})

Notice that Livebook adds an LB_ namespace to the environment variable name.

Let’s say you share that notebook, which uses Livebook Secrets, with a co-worker. If that person doesn’t have those secrets configured in their Livebook instance yet, when they run the notebook, Livebook will automatically ask them to create the required secrets! No more hidden secrets (pun intended 🤭). Here’s what it looks like:

The new Secrets feature is also already integrated with Database Connection Smart cells. When you’re creating a connection to PostgreSQL or Amazon Athena, Livebook will give you the option to use a Secret for the database password:


With Livebook Enterprise, you will be able to share secrets within your team and company. This allows notebooks to be safely versioned and distributed, even if they contain credentials or other restricted information.

Visual representations of the running system

One interesting aspect of coding is that it can feel like we’re “building castles in the air.” We can build something with our thoughts materialized by code, which is amazing!

But, because of the intangible nature of code, sometimes it can be hard to reason about it. When reading a piece of code, you’re simultaneously building a representation of how it works inside your mind. What if you could get some help with visualizing that?

That’s when the idea of “a picture is worth a thousand words” comes in handy.

Imagine someone who’s learning Elixir. They are studying message passing between processes and analyzing the following example:

parent = self()

child =
  spawn(fn ->
    receive do
      :ping -> send(parent, :pong)
    end
  end)

send(child, :ping)

receive do
  :pong -> :ponged!
end

Instead of making the person create a representation of how that code works only through text, we could show them a visual representation of it.

Since v0.5, Livebook has had a way to show visual widgets to the user through its Kino package. And with this v0.7 release, there is a new Kino widget that shows a visual representation of message passing between Elixir processes.

All you need to do is wrap your code with Kino.Process.render_seq_trace/2:
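For the ping/pong example above, a minimal sketch looks like this (using the default trace target):

Kino.Process.render_seq_trace(fn ->
  parent = self()

  child =
    spawn(fn ->
      receive do
        :ping -> send(parent, :pong)
      end
    end)

  send(child, :ping)

  receive do
    :pong -> :ponged!
  end
end)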

Seeing a visual representation of code is not only applicable when you’re just getting started with Elixir. Imagine, for example, you need to code something that performs a job concurrently, and you discover Elixir’s Task.async_stream/3. You read its documentation and understand its API, but you also want to learn more about how it orchestrates multiple processes. You could use Livebook to visualize that:
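Here is a small sketch of that idea; the work done by each task is just a placeholder:

Kino.Process.render_seq_trace(fn ->
  1..4
  |> Task.async_stream(fn i -> i * i end, max_concurrency: 2)
  |> Enum.to_list()
end)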

Besides visualizing message passing, you can now use Livebook to visualize a supervision tree. You can do that by calling the Kino.Process.render_sup_tree/2 function with the supervisor’s PID:
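For instance, a throwaway supervisor with a couple of children can be rendered like this (the child processes are placeholders just to give the tree some shape):

{:ok, supervisor} =
  Supervisor.start_link(
    [
      {Agent, fn -> %{} end},
      {Task, fn -> Process.sleep(:infinity) end}
    ],
    strategy: :one_for_one
  )

Kino.Process.render_sup_tree(supervisor)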

Livebook will also automatically show you a supervision tree if your cell returns the PID of a supervisor:

This feature has been contributed by Alex Koutmos, and it is a great example of how modern notebooks can benefit from an open-source community.

Interactive user interface to visualize and edit Elixir pipelines

Elixir 1.14 came with a fantastic new feature for debugging called dbg. It can do many things, one of which is to help inspect a pipeline. So we thought, “how would dbg work inside Livebook?”

Maybe we could do some visualization of the pipeline, showing the result of each pipeline step, as the regular dbg does. But, since we’re already in a more visual environment, we went one step further. We built not only a visualization of an Elixir pipeline but also the ability to edit and interact with it!

Let’s say you run the following pipeline inside Livebook:

"Elixir is cool!"
|> String.trim_trailing("!")
|> String.split()
|> List.first()
|> dbg()

When you do that, Livebook will show a widget that you can use to:

  • see the result of the pipeline
  • enable/disable a pipeline step
  • drag and drop a pipeline step to re-order the pipeline

Here’s what it looks like:

This can be very helpful if you’re trying to understand what each step of a pipeline is doing. Ryo Wakabayashi created a cool example showing how this can be applied to a pipeline that processes an image with Evision:

image_path
|> OpenCV.imread!()
|> OpenCV.blur!([9, 9])
|> OpenCV.warpAffine!(move, [512, 512])
|> OpenCV.warpAffine!(rotation, [512, 512])
|> OpenCV.rectangle!([50, 10], [125, 60], [255, 0, 0])
|> OpenCV.ellipse!([300, 300], [100, 200], 30, 0, 360, [255, 255, 0], thickness: 3)
|> Helper.show_image()
|> dbg()


Other notable features and improvements

Besides those three big features, this new version contains many other improvements. You can check the changelogs in our GitHub repos to see everything that comes with this new version.

  • Livebook’s changelog
  • Kino’s changelog
  • KinoDB’s changelog

Organize your cell output with a tab or grid layout

You can now use Kino.Layout.tabs/1 to show the output of your cell in tabs. Here’s an example of how to do it:

data = [
  %{id: 1, name: "Elixir", website: "https://elixir-lang.org"},
  %{id: 2, name: "Erlang", website: "https://www.erlang.org"}
]

Kino.Layout.tabs(
  Table: Kino.DataTable.new(data),
  Raw: data
)

Here’s what it looks like:

You can also use Kino.Layout.grid/2 to show the output of your cell in a grid. Here’s an example of how to do it:

urls = [
  "https://images.unsplash.com/photo-1603203040743-24aced6793b4?ixlib=rb-1.2.1&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=580&h=580&q=80",
  "https://images.unsplash.com/photo-1578339850459-76b0ac239aa2?ixlib=rb-1.2.1&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=580&h=580&q=80",
  "https://images.unsplash.com/photo-1633479397973-4e69efa75df2?ixlib=rb-1.2.1&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=580&h=580&q=80",
  "https://images.unsplash.com/photo-1597838816882-4435b1977fbe?ixlib=rb-1.2.1&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=580&h=580&q=80",
  "https://images.unsplash.com/photo-1629778712393-4f316eee143e?ixlib=rb-1.2.1&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=580&h=580&q=80",
  "https://images.unsplash.com/photo-1638667168629-58c2516fbd22?ixlib=rb-1.2.1&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=580&h=580&q=80"
]

images =
  for {url, i} <- Enum.with_index(urls, 1) do
    image = Kino.Markdown.new("![](#{url})")
    label = Kino.Markdown.new("**Image #{i}**")
    Kino.Layout.grid([image, label], boxed: true)
  end

Kino.Layout.grid(images, columns: 3)

Here’s what it looks like:

Universal desktop build for Mac and automated nightly builds

We made some improvements in the build process of Livebook Desktop.

First, we unified the Mac Intel and Mac Silicon download files into a single Mac Universal file. No need to ask yourself if you’re supposed to download the Intel or Silicon distribution. Pretty neat!

Second, we automated the build process for Livebook Desktop. We’re using that to do nightly builds of Livebook Desktop based on the main branch of our GitHub repo.

That means now you can use our nightly builds to download Livebook Desktop and play with the new features we’re still working on before we launch a new release. 😉

Try it!

To play with the new features, all you need to do is:

  1. Install the new Livebook version
  2. Import this notebook containing a demo of the new features

Run in Livebook

Our team put a lot of effort into this new release, and we’re very excited about it! We hope you like it too.

Happy hacking!

tutorials
5 months ago

How to query and visualize data from Amazon Athena using Livebook

Livebook has built-in integrations with many data sources, including Amazon Athena.

In this blog post, you'll learn how to use Livebook to connect to Amazon Athena, execute a SQL query against it, and visualize the data.

You can also run this tutorial inside your Livebook instance by clicking the button below:

Run in Livebook

Connecting to Amazon Athena using the Database connection Smart cell

To connect to Amazon Athena, you'll need the following info from your AWS account:

  • AWS access key ID
  • AWS secret access key
  • Athena database name
  • S3 bucket to write query results to

Now, let's create an Amazon Athena connection using a Database connection Smart cell. Click the options "Smart > Database connection > Amazon Athena":

Once you've done that, you'll see a Smart cell with input fields to configure your Amazon Athena connection:

Fill in the following fields to configure your connection:

  • Add your AWS access key ID
  • Add your AWS secret access key
  • Add your Athena database name
  • Add your S3 bucket in the "Output Location" field

Now, click the "Evaluate" icon to run that Smart cell. Once you've done that, the Smart cell will configure the connection and assign it to a variable called conn.

Querying Amazon Athena using the SQL Query Smart cell

Before querying Athena, we need to have an Athena table to query from. So, let's create one.

We'll create an Athena table based on a public dataset published on AWS Open Data. We'll use the GHCN-Daily dataset, which contains climate records measured by thousands of climate stations worldwide. Let's create an Athena table called stations with metadata about those climate stations.

To do that, add a new SQL Query Smart cell by clicking the options "Smart > SQL Query":

Copy and paste the SQL code below to the SQL Query cell:

CREATE EXTERNAL TABLE IF NOT EXISTS default.stations (
  station_id string, 
  latitude double, 
  longitude double, 
  elevation double, 
  name string)
ROW FORMAT SERDE 
  'org.apache.hadoop.hive.serde2.RegexSerDe' 
WITH SERDEPROPERTIES ( 
  'input.regex'='([^ ]*) *([^ ]*) *([^ ]*) *([^ ]*) *(.+)$') 
STORED AS INPUTFORMAT 
  'org.apache.hadoop.mapred.TextInputFormat' 
OUTPUTFORMAT 
  'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION
  's3://livebook-blog/amazon-athena-integration'
TBLPROPERTIES (
  'typeOfData'='file')

Click the "Evaluate" icon to run that Smart cell. Now we have an Athena table to query from.

Add a new SQL Query Smart cell and copy and paste the following SQL query to it:

select * from default.stations order by station_id

Evaluate that cell. It will query your Athena table and assign the results to a result2 variable. You'll see the result of that query in a tabular format like this:

Visualizing geographic coordinates data using the Map Smart cell

Notice that the table we created has each climate station's latitude and longitude. We can plot that data on a map using the Map Smart cell. Let's do that.

Add a Map Smart cell by clicking the options "Smart > Map":


Your Map Smart cell will look something like this:

Use your Map Smart cell to configure:

  • the layer name
  • the data source
  • the coordinates format
  • the longitude field
  • the latitude field

Once you've configured the Smart cell, you can evaluate it, and it will build a map for you. It will look something like this:

That's it! Using Livebook Smart cells, you can connect to an Amazon Athena database, execute a SQL query against it, and visualize the result.


6 months ago

Livebook + SpawnFest = ❤️ & 💻

Good news, SpawnFest will have a specific category for Livebook-based projects this year!

SpawnFest is an annual 48-hour online software development contest in which teams from around the world get exactly one weekend to create the best BEAM-based applications they can.

Over the years, we've seen many fantastic projects coming out of SpawnFest. We're excited to partner with them this year to bring that energy to the Livebook community. We're sponsoring a category for Livebook-based projects. The idea is to incentivize the community to develop more ideas on how to use Livebook while also supporting a community event.

There are two kinds of projects you'll be able to build:

  • a Livebook notebook
  • a Livebook Smart Cell

You could build all sorts of cool stuff using a Livebook notebook. Here are a few ideas:

  • integrating and playing with APIs
  • data exploration and visualization
  • machine learning models
  • interactive data apps

With Smart Cells, you can build a custom notebook cell that can be reused in your own notebooks or by other community members.

If you want to see some examples of custom Smart Cells, here's a Smart Cell that allows querying the GitHub GraphQL API. And here's another one that allows connecting to a remote node in an Elixir cluster and optionally sending an :erpc call.

And if you'd like a few ideas of what to build using a Smart Cell, here are a few:

  • a Smart Cell that sends a message to Slack
  • a Smart Cell where you could fill in some input fields, and then it sends an HTTP request to some API
  • a Smart Cell that builds a connection to a database, service, or server

How to get started with Livebook

If you haven't used Livebook yet, it's easier than ever to get started because of our new desktop app. You can download and install it in a couple of minutes.

Once you have installed it, you can use the built-in notebooks to learn how to use Livebook. You can do that by opening Livebook and going to the Explore section:

Livebook Explore section

In the Explore section, you'll find many notebooks that will help you learn how to use Livebook. We currently have built-in notebooks about:

  • learning Elixir
  • how to use Livebook itself
  • how to create charts (using VegaLite)
  • how to plot maps using geospatial and tabular data (using MapLibre)
  • how to build interactive notebooks (using Kino)
  • how to build a custom Smart Cell

This year SpawnFest will be on October 15th and 16th. Visit the event's website to learn more about it and how to register.

We're looking forward to seeing what you'll build using Livebook! =D

tutorials
7 months ago

How to query and visualize data from Google BigQuery using Livebook

Querying and visualizing data from a database is a common and recurring task. That's the kind of thing you don't want to keep repeating, writing the same code over and over. That's where Livebook Smart cells come in: they help you automate any workflow you want.

Livebook has built-in Smart cells that help you query and visualize data from multiple databases, like PostgreSQL, MySQL, Google BigQuery, AWS Athena, and SQLite.

This article explains how to use Livebook Smart cells to query and visualize data from a Google BigQuery dataset.

If you like video format, here's a video version of this article:

You can also run this tutorial inside your Livebook instance:

Run in Livebook

Connecting to Google BigQuery using the Database connection Smart cell

Before connecting to a Google BigQuery dataset, you need to create a Google Cloud service account and a service account key. Follow the steps in this guide and download the generated service account key JSON file. That file contains the info needed to configure a connection to Google BigQuery.

Now, create a new notebook. Then, let's create a Google BigQuery connection using a Database connection smart cell. Click the options "Smart > Database connection > Google BigQuery":

Once you've done that, you'll see a Smart cell with input fields to configure your Google BigQuery connection:

Configure your Google BigQuery connection by following the instructions inside the Smart cell:

  • Add your Google Cloud Project ID
  • Upload your JSON credentials file

Pro tip: if you have an authenticated gcloud CLI on the machine running your Livebook instance, Livebook will automatically pick up your Google Cloud credentials and use them for access. No need to upload the JSON file. 😉

Now, click the "Evaluate" icon to run that Smart cell. Once you do that, the Smart cell will configure the connection and assign it to a variable called conn.

Querying Google BigQuery using the SQL Query smart cell

Now, let's use your Google BigQuery connection to query a Google BigQuery public dataset.

Add a new SQL Query smart cell by clicking the options "Smart > SQL Query":

Once done, you can write and execute a SQL query inside that cell. Let's query a public Google BigQuery dataset.

Copy the following query to the cell:

select t.year, t.country_name, t.midyear_population
from `bigquery-public-data.census_bureau_international.midyear_population` as t
where year < 2022
order by year

And execute the cell. The Smart cell will execute the query and assign its result to a variable called result. It will also show you the result of the query in a table format, like this:

Visualizing data from Google BigQuery using the Chart smart cell

Now we can visualize the result from that query using a Chart smart cell.

Add a new Chart smart cell by clicking the option "Smart > Chart":

Your newly created Chart smart cell will look something like this:

Now, let's use that cell to visualize the results from our query.

The query we wrote returned the population per year per country from the International Census Data, which is published as a Google BigQuery public dataset. We can use that data to visualize how the world population has been growing over the years.

Use your Chart smart cell to configure:

  • the chart's title
  • the chart's width
  • the chart's type
  • the x-axis and its type
  • the y-axis, its type, and aggregate

Once you've configured the cell, you can evaluate it, and it will build a chart for you. It will look something like this:


That's it! Using Livebook Smart cells, you can connect to a Google BigQuery dataset, execute a SQL query, and visualize the results with a chart. And since this is quite a common task, Smart cells enable you to do that without writing a single line of code!

And if you need more customization in any part of the process, easy peasy. You can easily convert a Smart cell to a Code cell and edit the code generated for you. To do that, click on a Smart cell, and then click the "Convert to Code cell" icon:

When you click the "Convert to Code cell" icon, it will transform your Smart cell into a Code cell. You'll be able to see the code that was running behind the Smart cell and edit it as you like:
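As a rough idea of what you'll see, the generated code for a chart like the one above would look something like the sketch below, built with the vega_lite package. The exact options depend on how you configured the Smart cell, so treat this as an illustration rather than the literal generated code:

VegaLite.new(width: 600, title: "World population")
|> VegaLite.data_from_values(result)
|> VegaLite.mark(:line)
|> VegaLite.encode_field(:x, "year", type: :quantitative)
|> VegaLite.encode_field(:y, "midyear_population", type: :quantitative, aggregate: :sum)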


features
7 months ago

Introducing the Livebook Desktop app 🎉

We want Livebook to be accessible to as many people as possible. Before this release, installing Livebook on your machine could be considered easy, especially if you already had Elixir installed.

But imagine someone who's not an Elixir developer. They had to install either Docker or Elixir before getting started with Livebook. And if they're taking their first steps as a developer, even using a terminal could be demanding.

That's why we built Livebook desktop. It's the simplest way to install Livebook on your machine.

Livebook desktop app

Livebook desktop doesn't require Elixir to be installed on your machine beforehand. And it works on both Mac and Windows.

We hope that the desktop app enables way more people to try Livebook. 

For example, let's say you want to help a friend start learning Elixir. You can tell that person to download Livebook desktop and follow the built-in "Distributed portals with Elixir" notebook, a fast-paced introduction to Elixir.

Or take the data scientist who heard about Livebook and wants to try it. That person can download Livebook desktop and learn how to do data visualization in Livebook with the built-in "Plotting with VegaLite" and "Maps with MapLibre" notebooks.

If you want to try the Livebook desktop app, it's just one click away.

We hope you enjoy it! 😊

releases
10 months ago

v0.6: Automate and learn with smart cells

Livebook v0.6 is out with a number of exciting features! Join us as we go from a database connection to charting the data in a few simple steps:

Below is a quick overview of the biggest features in this new version.

Smart cells

Livebook v0.6 introduces a new type of cell to help you automate and master entire coding workflows! Smart cells are UI components for performing high-level tasks, such as charting your data:

We already have a couple of Smart cells - including connections to PostgreSQL and MySQL - but the best part is that anyone can create new Smart cells and share them with the community. We can't wait to automate workflows from HTTP requests to Machine Learning models!

Package search

The new version also brings much more integrated dependency management, including the new package search and the setup cell:

Error highlighting

You will also find better error reporting! Spot typos and syntax errors right away with the new squiggly lines:

Streamlined diagrams

In the previous release we introduced support for Mermaid.js diagrams in Markdown cells. It's now even easier with the new Diagram button:

There's more

There are many other notable features, including "code zen" and persistent configuration. Check out the technical release notes for the complete list of changes.

releases
a year ago

v0.5: Flowcharts, custom widgets, intellisense, and UI improvements

Livebook v0.5 is out with a number of goodies! We have recorded a video showing how to use those features to build chat apps, multiplayer games, and more:

In case you can't watch it, here is a rundown of the biggest features.

Flowcharts with Mermaid.js

Many teams are using Livebook for documentation. v0.5 improves on this use case by allowing users to embed Mermaid.js diagrams and visualizations:

Define your Mermaid.js definitions inside ```mermaid blocks and you are good to go!
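For example, putting something like this in a Markdown cell renders a small flowchart (the diagram itself is just an illustration):

```mermaid
graph TD;
  Client-->LoadBalancer;
  LoadBalancer-->Server1;
  LoadBalancer-->Server2;
```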

Custom widgets

You can now add your own widgets to Livebook, known as Kinos in Livebook terminology. Build chat apps, multiplayer games, interactive maps, and more! It doesn't matter what your domain area is, you can now make Livebook as powerful as you want.
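To give a taste of what a custom Kino looks like, here is a minimal sketch built on the Kino.JS building block, mirroring the example from the Kino.JS documentation: an Elixir module paired with a JavaScript asset that renders whatever HTML you pass it (module and function names are illustrative).

defmodule MyNotebook.HTML do
  use Kino.JS

  # Creates a new kino that renders the given HTML string
  def new(html) do
    Kino.JS.new(__MODULE__, html)
  end

  asset "main.js" do
    """
    export function init(ctx, html) {
      ctx.root.innerHTML = html;
    }
    """
  end
end

MyNotebook.HTML.new("<h3>Hello from a custom widget!</h3>")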

We have revamped our Explore guides to include a complete course on Kino with several examples:

Other improvements

  • You can now increase the editor font-size and pick a high-contrast theme. We also improved the contrast and accessibility in several places.

  • The code editor now supports auto-completion of struct fields and shows additional metadata about functions, such as the version they were added in and deprecation notices.

Check out the technical release notes for the complete list of changes.

releases
a year ago

Announcing Livebook

We are glad to announce Livebook, an open source web application for writing interactive and collaborative code notebooks in Elixir and implemented with Phoenix LiveView. Livebook is an important step in our journey to enable the Erlang VM and its ecosystem to be suitable for numerical and scientific computing.

José Valim has recorded a screencast that highlights some Livebook features, which you can watch below. It also showcases the Axon library, for building Neural Networks in Elixir, as well as some improvements coming in Elixir v1.12:


Features

If you can’t yet watch the video, here is a summary of Livebook features:

  • A deployable web app built with Phoenix LiveView where users can create, fork, and run multiple notebooks.

  • Each notebook is made of multiple sections: each section is made of Markdown and Elixir cells. Code in Elixir cells can be evaluated on demand. Mathematical formulas are also supported via KaTeX.

  • Persistence: notebooks can be persisted to disk through the .livemd format, which is a subset of Markdown. This means your notebooks can be saved for later, easily shared, and they also play well with version control.

  • Sequential evaluation: code cells run in a specific order, guaranteeing future users of the same Livebook see the same output. If you re-execute a previous cell, following cells are marked as stale to make it clear they depend on outdated notebook state.

  • Custom runtimes: when executing Elixir code, you can either start a fresh Elixir process, connect to an existing node, or run it inside an existing Elixir project, with access to all of its modules and dependencies. This means Livebook can be a great tool to provide live documentation for existing projects.

  • Explicit dependencies: if your notebook has dependencies, they are explicitly listed and installed with the help of the Mix.install/2 command in Elixir v1.12+ (see the sketch after this list).

  • Collaborative features allow multiple users to work on the same notebook at once. Collaboration works either in single-node or multi-node deployments - without a need for additional tooling.
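As an example of the explicit dependencies mentioned above, the first cell of a notebook can declare and install everything it needs. A minimal sketch (the package chosen here is just an illustration):

Mix.install([
  {:jason, "~> 1.2"}
])

Jason.encode!(%{livebook: "rocks"})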