Pushing SQLite to its limits with Go while having fun

You are probably aware that SQLite is the most widely used database in the world. Every smartphone, iOS or Android, has it. Every app on those phones uses it. Every browser has it, on your desktop or any other device.

But SQLite is actually amazing for the server side, too. I’ve been thinking about this for years.

Web development is becoming more and more complex. Web stacks have tons of build tools and frameworks, and then you have microservices.

I’ve always wondered: why do we go this way? Can we go the other way? Can we simplify? Use a smaller number of languages, frameworks and tools?

Servers are becoming more and more powerful and network connections have more and more bandwidth. When Google started, they needed a whole fleet of servers. Today they still do, but now you can have a single powerful machine with gigabytes or even terabytes of RAM and hundreds of cores.

What if I could put a web app on such a machine? Only on that machine, with only SQLite as a database. No separate database server, no separate process. Instead I want to use a programming language with amazing tools (Go), which can handle concurrency very easily (Go), and maybe run something the size of Twitter. Although I would be happy with an app hundreds of times smaller.

Anyway, this is what I want, but where do I start? What fun can I have today?

So I decided to run some experiments on my MacBook Air with an M1 (8 cores) and 8GB of RAM. Yeah, a beast, I know. Not a powerful production machine, but enough to do a fun experiment.

Let’s use Twitter/X as our benchmark. On average users post 6K tweets per second.

Can we do better than 6K requests per second?

On this simple laptop?

The Setup

The starting point for all of my tests is the following server.

func StartServer(router *http.ServeMux) {
	srv := &http.Server{
		Addr:         ":8000",
		ReadTimeout:  5 * time.Second,
		WriteTimeout: 10 * time.Second,
		IdleTimeout:  120 * time.Second,
		Handler:      router,
		TLSConfig: &tls.Config{ // note: only applied when serving with ListenAndServeTLS
			MinVersion:               tls.VersionTLS12,
			PreferServerCipherSuites: true,
			CurvePreferences: []tls.CurveID{
				tls.CurveP256,
				tls.X25519,
			},
			CipherSuites: []uint16{
				tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
				tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
				tls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,
				tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
				tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
				tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
			},
		},
	}

	go func() {
		// Shutdown makes ListenAndServe return ErrServerClosed,
		// which should not be treated as a fatal error.
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Fatal(err)
		}
	}()

	fmt.Println("Server started...")

	// Interrupt on SIGINT (Ctrl+C)
	c := make(chan os.Signal, 1)
	signal.Notify(c, os.Interrupt)
	<-c

	// Shutdown gracefully but wait at most 15s
	ctx, cancel := context.WithTimeout(context.Background(), time.Second*15)
	defer cancel()

	srv.Shutdown(ctx)
}

This function starts a Go HTTP server with production-grade timeouts and TLS settings, and shuts it down gracefully on Ctrl+C.

I’ve also prepopulated an SQLite database with millions of users and their data. The end result is a file of multiple GBs.
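
The seeding code isn’t shown in the article, so here is a minimal sketch of how such a database could be generated. The `userinfo` schema is inferred from the queries used later; `fakeUser`, the column shapes and the row count are my own assumptions:

```go
package main

import (
	"database/sql"
	"fmt"
)

// fakeUser returns deterministic placeholder data for row i.
// The username/department shapes are made up for this sketch.
func fakeUser(i int) (username, departname string) {
	return fmt.Sprintf("user%07d", i), fmt.Sprintf("dept%02d", i%50)
}

// seed creates the assumed userinfo schema and bulk-inserts n rows
// inside a single transaction, which is far faster than committing
// per row. The caller opens db with an SQLite driver.
func seed(db *sql.DB, n int) error {
	if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS userinfo (
		uid INTEGER PRIMARY KEY,
		username TEXT,
		departname TEXT
	)`); err != nil {
		return err
	}

	tx, err := db.Begin()
	if err != nil {
		return err
	}
	stmt, err := tx.Prepare("INSERT INTO userinfo(uid, username, departname) VALUES (?, ?, ?)")
	if err != nil {
		tx.Rollback()
		return err
	}
	defer stmt.Close()

	for i := 1; i <= n; i++ {
		u, d := fakeUser(i)
		if _, err := stmt.Exec(i, u, d); err != nil {
			tx.Rollback()
			return err
		}
	}
	return tx.Commit()
}

func main() {
	// A real run would open the file with an SQLite driver, e.g.:
	//   db, err := sql.Open("sqlite3", "./users.sqlite?_journal_mode=WAL")
	//   err = seed(db, 5_000_000)
	fmt.Println(fakeUser(1))
}
```

Wrapping the inserts in a single transaction is the main trick; committing per row would make seeding millions of users painfully slow.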

The Simple Server

My initial experiment doesn’t include SQLite or anything else. It will show us the maximum number of requests per second that the machine can sustain.

func SimpleServer() {
	router := http.NewServeMux()
	router.HandleFunc("/test/", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
		w.Write([]byte("Hello, World!"))
	})

	StartServer(router)
}

The results are

110K requests per second, 8Mbit/s traffic, 144MB RAM used

Amazing! I know that the server doesn’t do much, but still 110K requests per second on this single small machine!

The Read Server

In the next test we are going to read some user data and render a small template with it

func ReadServer() {
	db, err := sql.Open("sqlite3", "./users.sqlite?cache=shared&_journal_mode=WAL")
	CheckError(err)

	preparedGetUser, err := db.Prepare("SELECT * FROM userinfo WHERE uid = ?")
	CheckError(err)

	rand.Seed(time.Now().UnixNano())

	router := http.NewServeMux()
	router.HandleFunc("/test/", func(w http.ResponseWriter, r *http.Request) {
		ctx := ReadViewContext{
			User: GetRandomUserFromDB(db, preparedGetUser),
		}

		ReadView(w, ctx)
	})

	StartServer(router)
}
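
`GetRandomUserFromDB` and the `User` inside `ReadViewContext` are referenced but never shown. Here is a sketch of what the helper might look like; the `User` fields, the column order and the one-million-row count are assumptions:

```go
package main

import (
	"database/sql"
	"log"
	"math/rand"
)

// User mirrors a userinfo row; the exact fields are an assumption.
type User struct {
	UID        int
	Username   string
	Departname string
}

// randomUID picks a uid uniformly in [1, maxUID].
func randomUID(maxUID int) int {
	return rand.Intn(maxUID) + 1
}

// GetRandomUserFromDB is a guess at the helper used in the article:
// fetch one random row through the prepared SELECT statement.
func GetRandomUserFromDB(db *sql.DB, getUser *sql.Stmt) User {
	const maxUID = 1_000_000 // assumed number of seeded users

	var u User
	if err := getUser.QueryRow(randomUID(maxUID)).Scan(&u.UID, &u.Username, &u.Departname); err != nil {
		log.Fatal(err)
	}
	return u
}

func main() {}
```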

The results are

65K requests per second, 192Mbit/s traffic, 246MB RAM used

Let that sink in ;-) Reading from the SQLite database and rendering a simple page results in 65K requests per second, while using very little memory. What?!

Let’s have a quick look at the connection string

./users.sqlite?cache=shared&_journal_mode=WAL

We are optimising the way we work with SQLite by turning on Shared-Cache Mode and Write-Ahead Logging (WAL). This allows fast concurrent access, not only for reading but also for writing.
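
For readers unfamiliar with the `mattn/go-sqlite3` DSN syntax: `cache=shared` enables SQLite’s shared-cache mode, and `_journal_mode=WAL` issues `PRAGMA journal_mode=WAL` when the connection is opened. A tiny helper makes the construction explicit:

```go
package main

import "fmt"

// dsn appends the article's SQLite options to a database path:
// shared-cache mode plus write-ahead logging.
func dsn(path string) string {
	return path + "?cache=shared&_journal_mode=WAL"
}

func main() {
	fmt.Println(dsn("./users.sqlite"))
}
```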

That is 10 times more requests than posts on Twitter/X. Could this sustain a read-only site ten times bigger than Twitter/X? Maybe.

The Large Read Server

Let’s get serious. An average page for a large web app is around 100KB, not counting the statically served JS, CSS, etc. I’ve created a template of this size; the rest of the server is otherwise the same.

func ReadLargeServer() {
	db, err := sql.Open("sqlite3", "./users.sqlite?cache=shared&_journal_mode=WAL")
	CheckError(err)

	preparedGetUser, err := db.Prepare("SELECT * FROM userinfo WHERE uid = ?")
	CheckError(err)

	rand.Seed(time.Now().UnixNano())

	router := http.NewServeMux()
	router.HandleFunc("/test/", func(w http.ResponseWriter, r *http.Request) {
		ctx := ReadViewContext{
			User: GetRandomUserFromDB(db, preparedGetUser),
		}

		ReadLargeView(w, ctx)
	})

	StartServer(router)
}

The results are

11K requests per second, 8Gbit/s traffic, 245MB RAM used

11K requests per second is still nearly twice the Twitter/X tweet rate. The RAM usage is very low again. On the other hand, the traffic has gone up quite a bit, but today you can easily get a server with a 25Gbit/s connection for a thousand bucks a month, and maybe even less. We are far from reaching any limits.
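
A quick sanity check on the traffic figure: 11K responses of roughly 100KB each work out to about 8.8Gbit/s, consistent with the reported 8Gbit/s:

```go
package main

import "fmt"

// gbps estimates outbound traffic in Gbit/s from a request rate
// and an approximate response size in bytes.
func gbps(requestsPerSec, respBytes int) float64 {
	return float64(requestsPerSec*respBytes*8) / 1e9
}

func main() {
	fmt.Printf("%.1f Gbit/s\n", gbps(11_000, 100_000))
}
```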

The Write Server

So far, all the servers have been read-only. We know that writing is slower than reading, so let’s test it. This time we are back to the smaller template, so that we can focus on write performance.

func WriteServer() {
	db, err := sql.Open("sqlite3", "./users.sqlite?cache=shared&_journal_mode=WAL")
	CheckError(err)

	preparedUpdate, err := db.Prepare("UPDATE userinfo SET departname = ? WHERE uid = ?")
	CheckError(err)

	preparedGetUser, err := db.Prepare("SELECT * FROM userinfo WHERE uid = ?")
	CheckError(err)

	rand.Seed(time.Now().UnixNano())

	router := http.NewServeMux()
	router.HandleFunc("/test/", func(w http.ResponseWriter, r *http.Request) {
		ctx := ReadViewContext{
			User: UpdateRandomUserFromDB(db, preparedUpdate, preparedGetUser),
		}

		ReadView(w, ctx)
	})

	StartServer(router)
}
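
`UpdateRandomUserFromDB` is also not shown; a plausible sketch follows. The department values and the read-back-after-write pattern are my assumptions:

```go
package main

import (
	"database/sql"
	"log"
	"math/rand"
)

// User mirrors a userinfo row; the exact fields are an assumption.
type User struct {
	UID        int
	Username   string
	Departname string
}

// departments holds hypothetical values for the UPDATE; the article
// doesn't say what data is written.
var departments = []string{"sales", "engineering", "support", "hr"}

// pickDept returns a random department name.
func pickDept() string {
	return departments[rand.Intn(len(departments))]
}

// UpdateRandomUserFromDB is a guess at the helper used in the article:
// write a new department for a random user, then read the row back.
func UpdateRandomUserFromDB(db *sql.DB, update, getUser *sql.Stmt) User {
	const maxUID = 1_000_000 // assumed number of seeded users
	uid := rand.Intn(maxUID) + 1

	if _, err := update.Exec(pickDept(), uid); err != nil {
		log.Fatal(err)
	}

	var u User
	if err := getUser.QueryRow(uid).Scan(&u.UID, &u.Username, &u.Departname); err != nil {
		log.Fatal(err)
	}
	return u
}

func main() {}
```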

The results are

24K requests per second, 72Mbit/s traffic, 271MB RAM used

24K rps! 24K users reading and writing at the same time, in the same second, on a single SQLite database.

Do your own websites have this kind of traffic? How many websites out there today do? Not many.

Yahoo had 3.6B visits last month according to SimilarWeb. That works out to about 1,388 requests per second, and Yahoo is the 8th most visited website in the world. Let that sink in, again.
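
The arithmetic behind that figure, assuming a 30-day month and one request per visit:

```go
package main

import "fmt"

// avgRPS converts monthly visits into an average requests-per-second
// figure, assuming a 30-day month and one request per visit.
func avgRPS(monthlyVisits int) int {
	return monthlyVisits / (30 * 24 * 3600)
}

func main() {
	fmt.Println(avgRPS(3_600_000_000)) // Yahoo's ~3.6B monthly visits
}
```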

Conclusion

I know that my test is simplified: a simple machine, simple requests. In the real world you have complex queries needing complex data. You also have to consider spikes. Yahoo doesn’t get 1,388 requests every second; sometimes it gets 100 rps, sometimes 20,000 rps.

What if we build a real app with Go, SQLite and a single powerful machine? I don’t know… yet. But I am going to find out and let you know.

What about getting a powerful machine? Isn’t it super expensive?

I would suggest OVH, which is cheaper than most other providers. For example, you can get a bare-metal machine with 96 cores / 192 threads, 2TB of NVMe SSD, 1TB of RAM and a 10Gbit/s unmetered connection for around $1800. Not bad.

This is why I think SQLite is the best database in the world. For your next app, consider using SQLite first. You might outgrow it, but you will be able to go much further with it than you expect.

You might not be able to build Google, Facebook or Amazon purely on SQLite, but almost anything else doesn’t seem out of reach.

If you are tired of complex architectures with thousands of servers, millions of microservices, just give SQLite a try.


We are Stefan Fidanov & Vasil Lyutskanov. We share actionable advice about software development, freelancing and anything else that might be helpful.

It is everything that we have learned from years of experience working with customers from all over the world on projects of all sizes.

© 2024 Terlici Ltd · Terms · Privacy