> PostgreSQL's query planner/optimizer is decidedly state-of-the-art
Postgres's cost-based planner is good, but it's a decidedly 1980s design, predating the famous but also outdated Volcano/Cascades systems (used by Microsoft SQL Server and CockroachDB and others).
So much has happened in the field of query optimization in the last 30 years, very little of which has ended up in Postgres, I think. Postgres has gotten parallel workers and a JIT, but the fundamental design is largely unchanged. It's also quite conservative about adding improvements; other databases have had some variation of index skip scans for ages (Oracle has probably had it for 20 years now, and you can get it through the Timescale extension), but Postgres is still working on supporting it natively.
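(For readers unfamiliar with the feature, here's a rough sketch of the kind of query a skip scan helps with; the table and index names are made up for illustration.)

```
-- Composite index whose leading column is not in the query's WHERE clause.
CREATE INDEX orders_tenant_created_idx ON orders (tenant_id, created_at);

-- An ordinary index scan can't use this index efficiently here; a skip scan
-- effectively runs one small range scan per distinct tenant_id instead of
-- falling back to a full table scan.
SELECT * FROM orders WHERE created_at >= DATE '2024-01-01';
```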
The state of the art is arguably Umbra [1], a research project by Thomas Neumann's group at the Technical University of Munich, the successor to HyPer, which is now being commercialized as CedarDB. Their analysis of the Postgres query planner is an interesting read [2].
[1] https://umbra-db.com/
[2] https://www.vldb.org/pvldb/vol9/p204-leis.pdf
Also there are now databases purpose-built for specific domains. E.g., if you are building a financial ledger, I would much rather be interfacing with TigerBeetle than scaffolding an application-driven ledger around Postgres. https://tigerbeetle.com/.
If I am scraping giant amounts of data, I would run far away from Postgres toward other databases like Amazon Redshift.
1st1 62 days ago [-]
Perhaps. The trouble usually arrives when you need functionality that your specialized database doesn’t have. Then you end up maintaining two databases and spreading your data, business logic, and resources between them. To each their own, though... there are certainly valid use cases for this.
sgarland 62 days ago [-]
Tbf, at least for TigerBeetle, they make no bones about the fact that they are hyper-specialized, and they recommend using a general OLTP DB alongside it for the rest of your business logic. It’s purely a ledger DB.
gshulegaard 62 days ago [-]
These days, if you want a PostgreSQL-based data warehouse, both Citus and Timescale are extensions / PostgreSQL-based databases I would consider before Redshift.
But even in the 9.4 days (~a decade ago) I was pushing Terabytes worth of analytics data daily through a manually managed Postgres cluster with a team of <=5 (so not that difficult). Since then there have been numerous improvements which make scaling beyond this level even easier (parallel query execution, better predicate push down by the query planner, and declarative partitioning to name a few). Throw something like Citus (extension) into the mix for easy access to clustering and (nearly) transparent table sharding and you can go quite far without reaching for specialized data storage solutions.
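(As a concrete illustration of the declarative partitioning mentioned above, with hypothetical table names, this is roughly all it takes since Postgres 10:)

```
-- Parent table is partitioned by range on the timestamp column.
CREATE TABLE metrics (
    recorded_at timestamptz NOT NULL,
    device_id   bigint      NOT NULL,
    value       double precision
) PARTITION BY RANGE (recorded_at);

-- One child table per month; the planner prunes partitions automatically.
CREATE TABLE metrics_2024_01 PARTITION OF metrics
    FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');
```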
pgmeanspostgres 62 days ago [-]
Postgres is a decent, acceptable _anything_ database.
Bisecant fractal databases like HyperKlingonDB are excellent at storing bisecant fractal data, but terrible at anything else (often, just plain terrible overall due to being immature).
TigerBeetle is impressive, but correct me if I'm wrong: the schema can't be changed. To me, that's like a single-use DB, when often people don't know what they need.
RedCrowbar 62 days ago [-]
The paper seems to mostly focus on the quality of cardinality estimation (mostly driven by statistics) which is admittedly one of the frequent sore points in Postgres. There's been some progress in that area though (CREATE STATISTICS being a highlight).
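(For example, with made-up table and column names, extended statistics let the planner learn that two columns are correlated instead of multiplying their selectivities as if they were independent:)

```
-- Declare that city and zip are functionally dependent, then re-collect stats.
CREATE STATISTICS addr_city_zip_stats (dependencies, ndistinct) ON city, zip FROM addresses;
ANALYZE addresses;
```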
systay 62 days ago [-]
Which is arguably the most important part of a planner. If you don't have good cardinality information, it doesn't matter if you have fancy planner strategies. They'll be employed in the wrong situation, and won't produce the good plans we all want.
ahoka 62 days ago [-]
And JIT is something you would disable, because often it makes queries slower or just unpredictable (and results are not cached AFAIK).
paulryanrogers 62 days ago [-]
How does MySQL compare? I get the sense that innovations land there sooner because of all the mega corps that use it.
sgarland 62 days ago [-]
MySQL hasn’t had nearly as many innovations. That said, for a long time, it was often faster for simpler queries, and especially range queries, IFF your schema was designed to exploit the fact that it’s a clustering index, so rows are physically located next to the PK. If you’re using UUIDv4 PKs, you’re throwing away its entire advantage, and Postgres will almost certainly be faster.
One notable exception for MySQL is INSTANT DDL in 8.x, thanks to contributions from Tencent, which is an extremely nice QoL upgrade that Postgres doesn’t have. The other is logical replication, something Postgres does support now (and has for several years), but didn’t for a long time.
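(A hypothetical MySQL/InnoDB sketch of both points: a monotonically increasing integer PK keeps freshly inserted rows physically clustered together, while a random UUIDv4 PK scatters inserts across the clustered index; and the 8.x INSTANT DDL path adds a column as a metadata-only change.)

```
CREATE TABLE orders (
    id          BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    customer_id BIGINT UNSIGNED NOT NULL,
    created_at  DATETIME        NOT NULL,
    KEY idx_customer_created (customer_id, created_at)
) ENGINE=InnoDB;

-- MySQL 8.x: metadata-only column addition, no table rebuild or long lock.
ALTER TABLE orders ADD COLUMN note VARCHAR(255), ALGORITHM=INSTANT;
```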
re-thc 62 days ago [-]
A lot of the work is now part of MySQL Heatwave, the paid Oracle / cloud edition.
celeryd 62 days ago [-]
> So much has happened in the field of query optimization in the last 30 years, very little of which has ended up in Postgres, I think. Postgres has gotten parallel workers and a JIT, but the fundamental design is largely unchanged.
Guess I'll keep using Postgres.
NetOpWibby 62 days ago [-]
> To oversimplify, Gel to Postgres is what TypeScript is to JavaScript.
I've been using EdgeDB for years (from RethinkDB and MongoDB before that) and it's my favorite database. I don't need to memorize SQL commands and I get a pretty UI to look at my data if my queries introduce issues.
1st1 62 days ago [-]
<3
alangou 62 days ago [-]
I tried many ORMs to get them to work with SQL, but EdgeDB's was the one that worked extremely straightforwardly, literally without any issues that weren't due to not following the instructions.
No bugs, no configuration errors, no nothing. It all just worked. So I think you guys deserve more recognition and credit for what is clearly a very well-engineered product that I intend to use for some of my personal projects.
purplerabbit 62 days ago [-]
Have you tried drizzle? If so, what's your beef? (The only one I've had is lack of down-migrations)
1st1 62 days ago [-]
Addressing your core question: Drizzle is a great ORM with a tastefully designed API—it's clearly a product of love. But it’s still an ORM, and it’s confined by certain design boundaries that come from being a library. For example, what if you want to use TypeScript, Go, and Python on your backend? Do you run three ORMs, each with different APIs? With Gel, you have one data model and a unified querying layer—the true source of truth.
We have a blog post about that and more [1].
---
By the way, if you visit Drizzle’s website, you’ll see that Gel is one of their biggest sponsors. We worked closely with Drizzle to ship a first-class integration with Gel. You can use Gel’s schema and migrations, and Drizzle will just work. You can even use the Drizzle query builder and EdgeQL side by side if you want.
[1] https://www.geldata.com/blog/a-solution-to-the-sql-vs-orm-di...
> Do you run three ORMs, each with different APIs?
Yes, and you absolutely have to. That's not a disadvantage, it's just how it is. Because SQL is the absolute bare minimum. The lowest common standard. And not a great one. Null handling and the type system, for example, are way inferior to those of good programming languages. So why would I leave those productivity gains on the table?
Using EdgeQL simply means I have another programming language at hand.
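(Two classic SQL examples of the NULL handling being complained about, with hypothetical table names:)

```
-- Three-valued logic: this returns zero rows, because NULL = NULL
-- evaluates to NULL rather than TRUE.
SELECT 1 WHERE NULL = NULL;

-- NOT IN silently matches nothing if the subquery produces any NULL,
-- since "x NOT IN (..., NULL)" can never evaluate to TRUE.
SELECT * FROM users WHERE id NOT IN (SELECT banned_id FROM bans);
```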
> EdgeDB has a robust type system that's most [sic] comprehensive that most ORMs
Well yeah. And it is inferior to the programming language I use. Hence, this comes at a disadvantage to me.
> We've also built a query builder for TypeScript
Aha, so then... why not build a query builder for every language, just like with ORMs?
Sorry, not convinced. We would be better off by improving on SQL itself.
lelanthran 62 days ago [-]
> And not a great one. Null handling and typesystem for example are way inferior to those of good programming languages
There is no mainstream programming language that I know of that offers what are table stakes for an RDBMS.
For example, even trivial SQL things like a constraint saying 'at least one of these two fields must be empty, but both can't be empty' is missing in the "advanced" type systems in mainstream languages.
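(For reference, roughly how that constraint looks in SQL; the table and columns are made up, loosely matching the tbl_test used later in the thread. The point debated below is that this is enforced when data arrives, not by a type checker.)

```
CREATE TABLE tbl_test (
    id    bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    c_one integer,
    c_two integer,
    -- "exactly one of the two columns is empty": an XOR over IS NULL
    CONSTRAINT one_of_two_empty CHECK ((c_one IS NULL) <> (c_two IS NULL))
);
```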
valenterry 61 days ago [-]
First of all, your example constraint cannot be statically expressed in SQL anyway. In the type system, that is.
Sure, you can create a constraint for `'at least one of these two fields must be empty, but both can't be empty'`, but that will only be applied at runtime. It's not like any DBMS (to my knowledge) will reject the query at parse-time. Or else, please show me an example of how you would do that. (And don't use constraints or indexes, because those only work at runtime.)
Second, I deliberately said "good programming languages" and you changed that to "mainstream programming language". You know what? If all your mainstream programming languages (however you count or define those) don't support that stuff, maybe it's time to move on and choose a better language.
In Scala at least, it's trivially possible to define such a constraint with types. That ensures that you will never by accident violate that constraint. And that is guaranteed at compile time, so no one is gonna wake you up in the night because your query failed due to that condition you specified.
If you don't believe me, I'm happy to show you the code to encode that logic.
lelanthran 61 days ago [-]
> First of all, your example constraint is not possible to be statically expressed in SQL anyways. In the typesystem, that is.
So? The DB will prevent you from violating the constraints, because it cannot tell in advance (i.e. before getting the query with the data in it) whether the data violates the constraint.
> It's not like any dmbs (to my knowledge) will reject the query at parse-time.
What parse-time?
echo "INSERT INTO tbl_test (c_one, c_two) VALUES (3, 4);
Where's the parse-time in that?
TBH, I am still failing to see your point - this looks like an artificial restriction (the query must be rejected before you present it to the DB).
Whether you call it runtime or compile-time or parse-time, the DB will not let you violate the type safety by accident.
The point is that the DB will enforce the constraints.
> Second, I deliberately said "good programming languages" and you changed that to "mainstream programming language".
Well, if your bar for "good programming languages" rules out all the mainstream languages, what's the point of even discussing your point? The point you are making then becomes irrelevant.
> If all your mainstream programming languages (however you count or define those) don't support that stuff, maybe it's time to move on and choose a better language.
For better or worse, the world has rejected those better languages and relegated them to niche uses. Shrieking shrilly about your favourite languages isn't gonna make them more popular.
You know what's more realistic? Teaching the users of the "poorer" languages that they can get all those benefits of type enforcement in their DB without needing to switch languages.
> In Scala at least, it's trivially possible to define such a constraint with types.
In more than a few languages it's possible to do that. I'm thinking more of Prolog and specific SAT solvers than Scala, though. There are benefits in doing so.
However, the minute you plug an RDBMS into your system, many benefits can be gained without switching languages at all. Like real constraints for XOR or composite uniqueness, referential integrity, NULL-prevention, default values, etc.
valenterry 60 days ago [-]
> What parse-time? (...) TBH, I am still failing to see your point
What I tried to say was: in some programming languages, this insert will not even compile. So you don't have to write a test, you don't have to spin up a test database or anything, it just doesn't compile. And I prefer that over having to wait until a query is actually sent before I get an error.
This is relevant for me because 1.) it makes me more productive since I get the error much quicker and I don't have to write a test and 2.) it prevents getting calls in the night because something broke.
I hope that makes it clear.
> However, the minute you plug an RDBMS into your system, many benefits can be gained without switching languages at all.
Yeah, but that doesn't invalidate my original point, does it?
lelanthran 60 days ago [-]
I don't really have time to discuss this in depth, but note that:
1) I still disagree somewhat on the finer points,
and
2) I've upvoted your post anyway because other than the finer points on which I disagree, you make good points anyway.
Cheers :-)
1st1 62 days ago [-]
> Yes and you absolutely have to. That's not a disadvantage, it's just how it is. Because SQL is the absolute bare minimum. The lowest common standard. And not a great one. Null handling and typesystem for example are way inferior to those of good programming languages. So why would I leave those productivity gains on the table?
I think... we're in agreement? :)
EdgeQL doesn't have a NULL (it's a set-based language and NULL is an empty set; this tiny adjustment makes it easier to reason about missing data). And because it also has a more robust type system and allows for limitless composition and easy refactoring, it has far greater DX than SQL => you're more productive.
There's a footnote here: EdgeQL doesn't support some of the SQL capabilities just yet, namely window functions and recursive CTEs. But aside from that it is absolutely a beast.
> Well yeah. And it is inferior to the programming language I use. Hence, this comes at a disadvantage to me.
Maybe, but I'm curious how you arrived at that conclusion. I assume you mean that using the power of a high-level programming language you can force an ORM into complete submission, and that's just not true. Most of the time you'll either have grossly inefficient multi-roundtrip query code (hidden from you) or let the ORM go and use SQL. Obviously that's an extreme scenario, but it's surprisingly common in complex code bases and logic.
> Aha, so then... why not build a query builder for every language, just like with ORMs?
We are. We started with improving the network protocol (it does fewer round-trips than Postgres's and is stateless) and crafting client libraries for MANY languages. All client libraries support automatic network & transaction error recovery and automatic connection pooling, and have a generally great and polished API. Not to mention they are fast.
We do have a query builder for TypeScript. But we also have codegen for every language we support: place an .edgeql file in your project and you get fully typed code out of that. That said, we will eventually have query builders for every language we support. There's only so much you can do in 3 years since we announced 1.0.
> Sorry, not convinced. We would be better off by improving on SQL itself.
Significantly improving SQL without starting from scratch isn't possible. Adding sugar surely is possible, but we are 100% sure that the productivity boost we deliver with EdgeQL is worth going all in (and our users agree with us).
In any case, with Gel 6 we have full SQL support (except DDL), so it's possible to use Gel along with ORMs if that's needed. We are not SQL haters at all.
We have this blog post that was on HN front page a few times, it's a good read and explains our position: https://www.geldata.com/blog/we-can-do-better-than-sql
> Most of the time you'll either have grossly inefficient multi-roundtrip query code (hidden from you)
Or you spend 5 minutes actually putting some effort into configuring your ORM. Maybe spend 30 minutes learning if it's your first time. People will spend weeks handwriting SQL to save hours of ORM tuning.
dvtkrlbs 62 days ago [-]
In my experience it is the exact reverse. You spend weeks configuring or tuning your ORM calls when you could just spend a few hours optimizing your SQL. There is a reason most ORMs have a raw SQL escape hatch.
The sweet spot is typesafe sql libraries. You write your raw sql and library deduces the return type from the database and gives you typesafety (sqlx is really good for this, sqlc for go is similar). It gives you almost all the benefits of ORMs with almost no downsides.
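(For the unfamiliar, this is roughly what a query file for a tool like sqlc looks like; names are invented. You write plain SQL plus an annotation comment, and the tool generates a typed function from it; sqlx takes a different route and checks raw SQL strings against the database at compile time.)

```
-- name: GetActiveUsers :many
SELECT id, name, email
FROM users
WHERE active = true
ORDER BY created_at DESC
LIMIT $1;
```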
alangou 62 days ago [-]
For me, personally, I do a ton of fullstack work in JavaScript-land, but also have Python services for ML-heavy needs, and it’s nice to define one schema that at its root is still SQL while generating query clients for multiple languages.
re-thc 62 days ago [-]
Drizzle still lacks a lot of features / stability (bug fixes required).
Hopefully it improves over time.
1st1 62 days ago [-]
Having worked with the core team I can only say that they are amazing. I'm sure they'll figure it out, but database tech is gnarly. Takes time.
hiccuphippo 62 days ago [-]
Ok, but is it gel as in gif or gel as in gif?
RedCrowbar 62 days ago [-]
Gel as in gif obviously! :-)
sieabahlpark 61 days ago [-]
[dead]
jakubmazanec 62 days ago [-]
EdgeDB is simply great. Schema, migrations, TypeScript query builder, auth are all awesome features - I love that for my small Remix apps I don't have to create separate API layer, I just use the DB directly.
I like the new name (mostly because typing "edgedb" when using the CLI was annoying). Hopefully the new documentation will be better, because the old one wasn't very usable and a little bit sparse.
jackfischer 62 days ago [-]
Tight integration with the typescript tool chain has been great for us with edgeql and is about an order of magnitude less error prone than ORMs I've interacted with. Gel is a winning formula especially in the typescript world.
1st1 62 days ago [-]
Thank you Jack!
kelseydh 62 days ago [-]
Postgres is great, but the level of work required to scale it on large workloads is really quite overwhelming. You really need expert-level knowledge to scale it. Just ask anybody who has had to shard their database.
zeeg 62 days ago [-]
Define large workload.
Sentry runs large workloads and Postgres isn't a bottleneck. We have also never employed a DBA. Most users never need to shard their database, and at most can just partition datasets by tables for _extremely_ high volume workloads.
You just have to consider architecture and optimize things that are slow, just like any other software. Nothing is free at the end of the day, and nothing else gives you the flexibility that Postgres does, particularly these days with its growing high value extensions.
genewitch 62 days ago [-]
I ran a 4000qps Postgres database set to use no more than 640KB of RAM (it's in the configs).
The DBA was having fun with my silly idea until a slow query took 10 seconds and we flipped back over to the production systems, which allowed Postgres much more memory.
My purpose was to show that Linux itself is pretty fast at caching and paging, even if Postgres was hamstrung. The db actually ran fine except for the slow queries, and we probably could have cleaned them up, too. But I proved my point.
DB was ~300GB, metal had 512GB RAM. This was in 2012.
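(The comment doesn't say which settings were used; as a guess at the knobs involved, something like the following, set absurdly low, would have that effect. 128kB and 64kB are the documented minimums for these parameters.)

```
-- Written to postgresql.auto.conf; shared_buffers needs a restart to apply.
ALTER SYSTEM SET shared_buffers = '128kB';
ALTER SYSTEM SET work_mem = '64kB';
```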
dano 62 days ago [-]
Yep. I converted a business to Postgres in 2004 and never looked back. Stable, reliable, no surprises. Postgres is the answer until proven otherwise.
sgarland 62 days ago [-]
No offense, but 4000 QPS is not large. I ran a 100K+ QPS MySQL instance with 384 GiB of RAM.
This is all subjective, of course, but I wouldn’t call 100K QPS large, either. It’s getting on the high end of what you might reasonably want to handle without horizontal scaling, but it’s still not “large.”
genewitch 61 days ago [-]
postgres served 4000qps while restricted to 640 kilobytes of memory.
sgarland 59 days ago [-]
I am so sorry; I conflated your comment with the sibling from zeeg.
genewitch 57 days ago [-]
you did say no offense, and i took none. Be well.
the_duke 62 days ago [-]
Lesson: if you start a new database company, start with SQL.
Almost every DB that starts without SQL support eventually ends up adding it back in later.
1st1 62 days ago [-]
True. In our case it's not that simple. By resisting adding SQL early on we were able to advance our data model & schema without SQL holding us back. And now we've added SQL in such a way that it takes full advantage of our stack.
lmm 62 days ago [-]
That's like saying you should start with microservices/kubernetes/etc. because every company ends up needing those things. IMO you should delay adding SQL as long as possible so that it doesn't compromise your design.
ndriscoll 61 days ago [-]
Not using SQL makes it basically a complete non-starter for any business unless it has some extremely compelling niche feature that you absolutely need and can't find elsewhere. SQL has a massive ecosystem around it. All the examples I see are either some bespoke TypeScript library, or treat queries as strings. Obviously it can't hook into any of the zillion query builder libraries out there, and you'd have to go implement one or write a decent amount of code for an existing one to be able to use it. I also don't see any docs for JVM usage at all, which is like half the industry.
If you want to experiment with syntax for database queries, macros that translate to SQL at compile time seem like a much better way to get traction, but pretty much everything I've seen ends up being a more verbose cosmetic variation of SQL. The one big flaw to fix is that SELECT should have been at the end.
seer 62 days ago [-]
What is the self-hosting story I wonder? If we have our own Postgres db (placed in a specific region for compliance), and we put gel onto our k8s cluster as they state in their docs, does it work well? I assume this type of deployment is free right? What features are we missing from their cloud offering?
GCP has very cool high availability, backup and monitoring features that I’d hate to lose if we move to their cloud offering. Can you configure which region your data is in? Can you put it behind / inside a VPC?
Couldn’t really find that info in the docs / pricing pages.
1st1 62 days ago [-]
> What is the self-hosting story I wonder?
Fully supported, we have a guide for every popular platform [1].
> If we have our own Postgres db (placed in a specific region for compliance), and we put gel onto our k8s cluster as they state in their docs, does it work well?
If you're OK with managing Postgres and your own k8s you won't have a problem.
> I assume this type of deployment is free right?
Yes, and everything you need for that is Apache 2.
> What features are we missing from their cloud offering?
A couple:
1. Slow query log: it requires a custom C extension for Postgres (we wish it did not). It's part of Gel and is open source, but you're not allowed to deploy custom Postgres extensions on, say, AWS Aurora.
2. Integrations with Vercel [2] and GH. With our Cloud you get Vercel Previews set up out of the box.
3. Convenience: we manage Gel Cloud and it's our headache to keep it running for you. :)
> Can you configure which region your data is in?
We're currently on AWS, but yes, you can configure a region, if you choose to use our cloud.
> Couldn’t really find that info in the docs / pricing pages.
Good point, I'll use this question as a template to add a Q&A there :)
Was the rebrand really just for simplicity and clarity? It seems like a lot of work to change the name when it was already a pretty established toolchain, not just for your team, but, and this is more important, for your community.
I actually think EdgeDB was a far better name. It actually meant something; yes, it wasn't a pure graph database, but it did work with the general concept of graphs and edges. Gel means nothing. Even the domain name: geldata.com makes no sense. You're not selling or creating data, it's a database system/layer.
I've been a true evangelist for EdgeDB over the last two years, but this has really irked me - irrationally so, I'm aware(!), but I just can't see the benefits outweighing the drawbacks here. It feels like it has to have been a result of some legal action or something.
And just as LLMs were starting to catch up with their knowledge cutoff to include EdgeDB too! Now it'll be another two years until they know what you're talking about again. I know, first world problems.
Is the query language name changing, by the way? Or is that still EdgeQL? Please let sanity prevail over at least one thing and don't call it "jel-q-ell"!
1st1 62 days ago [-]
EdgeQL stays EdgeQL :) I like that name.
I also liked EdgeDB (I'm biased, I coined that name myself). But at every conference we had the same conversation with developers:
"EdgeDB? Huh, must be an edge-computing database. Are you running SQLite?"
There are other minor reasons for the rename, but this annoyance persisted for a number of years and we decided to pull the plug on it.
> I've been a true evangelist for EdgeDB over the last two years
Thank you. This means a lot.
divan 61 days ago [-]
As someone who has used EdgeDB religiously for years now and still has projects with EdgeQL code generators (that use old syntax), one of my earlier requests was, "Please don't break things". Don't change the syntax of EdgeQL in a backward incompatible way, don't rename commands in the command line, etc. It's really not a big issue to remember some not-so-perfect term. Requiring thousands of people to unlearn/relearn new terms, commands, and names, and rewrite scripts and documents - is an issue. Didn't expect to see the whole name EdgeDB being replaced with another, ungoogleable name.
The argument of "some developers don't look past name and think that it's an edge-computing database" is superficial at best. I can get how it can be annoying, personally, but unless it was backed up with data showing that it hurts adoption, I wouldn't take it as a serious problem worth forcing thousands of users to switch to a new name/commands.
Following this logic, these developers could also ask "MySQL? Huh, you must be running it only on your own servers" or "Gel? Huh, is it some toy database for kids?".
I wish more people realized the enormous cognitive costs and debt created by such renamings. :/
1st1 60 days ago [-]
Sorry, divan, for causing pain. This wasn't an easy decision. FWIW we're still strict about backwards compat and all your stuff should continue working.
re-thc 62 days ago [-]
> But at every conference we had the same conversation with developers
I wonder if this will really go away or just be replaced with something else. Is it really the name "specifically" or how a lot of people work?
(i.e. going by first impressions, connotations, memes, etc)
> "EdgeDB? Huh, must be an edge-computing database. Are you running SQLite?"
"Gel? Huh, must be for cosmetics. Is this for retail?"
p.s. had no issues with EdgeDB.
scotttrinh 62 days ago [-]
Scott from Gel here. I suspect for everyone we talked to in the "So, this is a database for Edge computing?" camp there were a dozen people who thought that and just kept moving, even if they were our target audience. It's hard to quantify just what the cost of having a name with such a strong connotation (which has arguably gone up and down in the hype cycle over the lifetime of EdgeDB) has been.
Gel doesn't have any really strong existing computing connotations, so maybe people might make a stretch that it's about some non-computing-related domain and maybe we miss reaching them. But that just seems far less likely than trying to continue to push against the overwhelming feeling that "Edge" just has too much baggage and was working against us.
Personally, I think it's fine if people find the name confounding or just personally dislike it. I think products like Supabase, Neon, Drizzle, CockroachDB, Django, Flask, Express, etc etc kind of prove that if your name is general enough, you'll overcome that reaction eventually via recognition for the value of the product itself.
pcthrowaway 62 days ago [-]
For every person who passed on EdgeDB on the suspicion it was related to edge computing, someone may have clicked it for the same reason entirely and found something else compelling.
Personally I think EdgeDB was a much better and descriptive name than Gel. I haven't used EdgeDB/Gel yet, but have been looking at it with excitement for years now waiting for the opportunity to use it.
I am worried that the rename will work against you since you've built such an excellent brand around EdgeDB with so many glowing testimonials.
halfmatthalfcat 62 days ago [-]
I don't want to pile on but it seems like a lot of "going on feels" versus any quantifiable justification.
scotttrinh 61 days ago [-]
That's fair, but I think going on _just_ the quantifiable stuff was still enough of a justification for us. My feels are that the cost of our confusing former name is higher than we know, but the known cost was still great enough.
999900000999 62 days ago [-]
3 questions.
How does this compare to solutions like Supabase? What stands out with Supabase is the SDK support. My current project is probably going to stay on Supabase (it's a hobbyist project regardless), but hypothetically, if I were an enterprise prospect, how would you pitch your solution?
Do you support functions that I could call from the client for more advanced logic that can't be done with queries?
Why not brand as GelDB? It would probably be easier to Google. Plus it tells me instantly what you're selling.
PS: Can you offer something like a hobbyist tier for $10 a month? I don't want to deal with my project randomly shutting off, but I have extremely low requirements in terms of storage during development.
1st1 62 days ago [-]
> How does this compare to solutions like Supabase.
Supabase is great and works well if you want vanilla (more or less) Postgres with some integrations ready to go.
With Gel you get a Postgres with a data layer on top. If you want to have a data model with abstract types & mixins that's easy to work with and scale in complexity, advanced access control, built-in migrations schema engine, hierarchical graph query language on top of SQL, more robust client libraries -- that's Gel.
Gel is opinionated and vertically integrated, and that's its core strength. All of its components were developed in tandem -- from the network protocol to client APIs, from the EdgeQL query language to the data model to the migrations engine and so forth. It provides a more cohesive experience and can give you non-trivial performance and DX gains if you commit to it.
> but hypothetically if I was an enterprise prospect how would you pitch your solution.
Total TypeSafety enforced at all levels, built-in migrations engine, best in class access control (we'll be blogging about our access control vs RLS soon.)
> Do you support functions, that I could call from the client for more advanced logic that can't be done with queries ?
With Gel 6 we'll be announcing the new `net` module tomorrow (spoiler!). You'll be able to schedule HTTP calls from triggers/queries/functions.
> Why not brand as GelDB. Would probably be easier to Google. Plus it tells me instantly what your selling.
Well, rebranding from Gel to GelDB would be just a text change to the homepage, so hypothetically the door is open for that. But I hope we can make Gel work, just the same way it works for Render/Neon/Fly.
> PS: Can you offer something like a hobbyist tier for 10$ a month.
Stay tuned, we'll announce some news on that in a couple of days! :)
999900000999 62 days ago [-]
Thanks!
I don't know about migrating my current project, but I'll definitely try Gel for a future project.
>Total TypeSafety enforced at all levels, built-in migrations engine, best in class access control (we'll be blogging about our access control vs RLS soon.)
Very very cool.
If I just dump my Supabase project's Postgres DB, can I load the SQL into Gel and have everything work? Or would I need to set up the schema?
Ultimately I'm just looking for an open source alternative to firebase-> while amazing, I can't exactly claim to have an open source project that requires a closed source service.
Last question: Supabase heavily pushes the use of captchas if you use any anonymous authentication. Does Gel also suggest this? Notably Firebase doesn't care, which makes it much easier on end users.
Imagine having to solve a captcha to browse Amazon and add products to your cart, you'd probably just use something else.
Ok, just one more question! How is your SDK support? JavaScript is a given, but Godot support would be nice.
1st1 61 days ago [-]
> If I just dump my Supabase project's Postgres DB can I load the SQL into Gel and have everything work. Or would I need to setup the schema.
Currently you have to define your schema in Gel [1] and then write scripts to port your Postgres data into Gel. It's cumbersome, we know, we'll be working on improving the migration flow.
> Last question, Supabase heavily pushes the use of Captchas if you use any anonymous authentication. Does Gel also suggest this. Notably Firebase doesn't care, which makes it much easier on end users.
We don't have captchas implemented, but when we do, it will be an opt-in configuration option.
> Ok, just one more question! How is your SDK support. JavaScript is a given, but Godot support would be nice.
Thanks for responding! I'm about 60% done with my current project so I don't think I'll be up to migrating (again, originally I started with Firebase), but I'll still definitely consider Gel for future projects.
Or if I ever interview with your team (hiring?) I'll migrate my existing project and document the process.
Yeah, I knew that, and precisely because of that I assumed it's a typo :)
We have a native Python client. We can take a look if it works from Godot. Do you know if this is a popular use case?
999900000999 59 days ago [-]
I don't think Python code really works with Godot. The syntax is similar, but it's a completely different language.
I think Godot is moderately popular, but I can't imagine you'll see a giant uptick in paying clients from adding it.
rajrahul 62 days ago [-]
EdgeDB has declarative schemas with baked in migrations and is a clear differentiator.
Supports namespaces within a database.
EdgeQL improves nested query performance, as nested queries are compiled into a single Postgres query.
1st1 61 days ago [-]
++
d0100 62 days ago [-]
How do you deal with eventual slow queries? I currently have a system with complex queries with a lot of joins, where I had to create a custom materialized table.
These queries also deal with permissions & realtime scheduling so endpoint caching doesn't solve it
Have gel users hit any performance issues, and how have they dealt with them?
1st1 62 days ago [-]
We're shipping a slow query log UI in this version to continuously monitor queries in your system.
Our users do get performance issues, usually because they start really using our data model to its full potential, creating tens or hundreds of complex access policies, 100-line-long EdgeQL queries, etc. We have an EXPLAIN command to deal with that, and we also support our customers directly, helping them understand and fix their systems. All of the findings trickle down to the core product so that the rest can benefit too.
armincerf 62 days ago [-]
It says gel is to Postgres what typescript is to JavaScript, so can I add gel to an existing Postgres instance and get the benefits of the nicer query language or does it rely on special tables? If I use some other extension like timescale is that compatible with gel?
And is there a story for replication/subscribing to queries for real time updates?
Postgres is so powerful partly because of its ecosystem, so I want to know how much of that ecosystem is still useable if I’m using gel on top of Postgres
RedCrowbar 62 days ago [-]
(article author here)
> If I use some other extension like timescale is that compatible with gel [...] Postgres is so powerful partly because of its ecosystem, so I want to know how much of that ecosystem is still useable if I’m using gel on top of Postgres
Playing nice with the ecosystem is the goal. We started off with more of a walled garden, but with 6.0 a lot of those walls came down with direct SQL support and support for standalone extensions [1]. There is a blog post coming about this specifically tomorrow (I think).
> And is there a story for replication
Replication/failover works out of the box.
> subscribing to queries for real time updates?
Working on it.
> so can I add gel to an existing Postgres instance and get the benefits of the nicer query language or does it rely on special tables?
Gel is built around its schema, so you will need to import your SQL schema. After that you can query things with EdgeQL (or SQL/ORM as before).
In the just-released new version we've added SQL support, so now you can use SQL while taking full advantage of our data model (access policies, mixins, etc.) and the network protocol (automatic recovery on network & transaction errors, automatic connection pooling on client/server).
We'll continue bridging the gap to make it easier for companies to adopt Gel for an existing database. We'll either invest in creating a migration tool, or pursue some more exciting options we're currently pondering.
xanth 62 days ago [-]
Do you have any plans/aspirations in adding Temporal "immutable DB" functionality?
Congrats on the rebrand and launch! Biggest reasons to use Gel over Supabase?
1st1 62 days ago [-]
Good question. We’re indeed similar products.
Gel is somewhat similar to Supabase on the surface—both run on Postgres, both have Auth, and both offer AI features, a CLI, and a UI, among other similarities.
However, there’s a big difference beneath the surface: Gel comes with a high-level data model (abstract types, mixins, access policies) that replaces tables and joins. It’s still relational (and we’re about to publish a paper on that), but it’s more high level, strict, and yet more flexible.
On top of that, we have a built-in migration system (where schema is a first-class citizen), a performance-tuned network protocol, and a query language called EdgeQL, which is like a child of SQL and GraphQL. These are just a few of our “deep” features.
All in all, Gel is a fresh take on day-to-day database development. We cut no corners in trying to push the core database developer experience forward.
kelthuzad 62 days ago [-]
Just a user here, but I'd say there are many reasons, one of those reasons is in your name: auth.
GelDB's auth is versatile and doesn't cost a dime, while Supabase auth is only free up until 50,000 monthly active users, then it costs $0.00325 per MAU (> 100k MAU).
I personally just love its query language and the typescript query builder.
Yeah, we'll be repairing this shortly. FWIW we're completely revamping our documentation, it's almost a full rewrite, focused on bringing clarity and logic to the navigation, improving search, etc. We should fully wrap it up in a week or two.
Scramblejams 62 days ago [-]
I'm working on some server-side Swift, and it's feeling very promising. Any plans for a Swift client library?
1st1 62 days ago [-]
Not in the immediate future, but we have a member in our community who's building something. We'll see if they get near the finishing line.
plagiarist 62 days ago [-]
I was hoping to learn more but many of the docs.geldata.com links on the GitHub page are 404 right now, mentioning just in case nobody has reported that yet.
1st1 62 days ago [-]
We'll be fixing them shortly. We're rolling out a new documentation system. Apologies for the inconvenience.
krashidov 62 days ago [-]
If I have an existing postgres db how hard is it to migrate?
Can I write regular joins if I need to?
Do you have plans or do you already support db branching ?
1st1 62 days ago [-]
> If I have an existing postgres db how hard is it to migrate?
The main hurdle would be to migrate the schema. You'll have to define your schema in Gel (take a look at the reference here [1]) and write a script to copy your data.
We are discussing internally how we can simplify this process, this is becoming a popular question.
> Can I write regular joins if I need to?
You can use EdgeQL and SQL side by side now, through one connection, in the same function. Gel's schema is still relational (even though it's more powerful, with features like multiple inheritance).
This page describes the details [2] of how SQL works with our schema (spoiler: it's very straightforward and no different from a hardcoded SQL schema).
> Do you have plans or do you already support db branching ?
We call Postgres databases "branches" in Gel. And we have tooling around them [3] to provide a git-like experience. Conceptually you can map (manually) your Gel branches to your Git branches if you wish.
> We call Postgres databases "branches" in Gel. And we have tooling around them [3] to have git-like experience with them.
Sounds like it's a yes - thanks!
adsharma 62 days ago [-]
How do you reconcile DB schema with strong types vs RPC schema?
Have you looked into interop with typespec?
1st1 62 days ago [-]
For TypeScript, tRPC mostly just works out of the gate, if I understand the question fully.
greg 61 days ago [-]
I used EdgeDB 5.0 for a side project, and I loved it. It reminded me of the fun of developing an app with Django or Rails, but serverless with Typescript+React.
aitchnyu 62 days ago [-]
When will Python get a typesafe query builder?
Now there is
```
client.query('''
    INSERT User {
        name := <str>$name,
        dob := <cal::local_date>$dob
    }
''', name='Bob', dob=datetime.date(1984, 3, 1))
```
I'm interested in this, and in JetBrains PyCharm, VS Code, and CI catching errors in these.
Probably Python won't get a typesafe query builder until its typing catches up with TypeScript's.
Right now Python's generics are quite rudimentary; there's no equivalent to TypeScript's keyof or mapped types. We're using all of TS's advanced type-system features to make our query builder work.
Python will get a query builder (it's a priority for us now) soon, but it will not be a type safe one. BUT: Python, and all other languages that we support, can use codegen.
Just put your query in a `<name>.edgeql` file, and run a special command. Gel will generate a fully typed function that runs that query.
aitchnyu 61 days ago [-]
Can we add filters and subqueries to a query, like we do with an ORM?
Was linked from one of your articles, returns 404 for me.
patatero 62 days ago [-]
I've always thought the name EdgeDB is odd since it's not an actual database software and more of a Prisma competitor.
mrbluecoat 62 days ago [-]
An odd naming decision from an SEO perspective.
flessner 62 days ago [-]
A great naming decision if you want to make it look like a new product category: sql, redis, s3, gel...
Fits right in.
sea-gold 62 days ago [-]
Agreed. EdgeDB may not have been representative of the product (or the best name), but at least it was unique.
DeathArrow 62 days ago [-]
>PostgreSQL seems to be quietly eating the database world. It's not just topping the charts, its adoption momentum is accelerating.
Is there something like Vitess for Postgres?
gulcin 61 days ago [-]
There is something we built for the schema management aspect of Vitess for PostgreSQL.
It is called pgroll, open-source, aiming to minimize potential downtime risks associated with DDL changes, offering multi-schema view, instant rollbacks, and more high-level options like backfilling in batches.
Neon seems to be doing something similar (their hosted solution was not that great last time I checked); most of their stack is open source.
clarkbw 61 days ago [-]
(neon employee)
there isn't something like Vitess for Postgres yet but there needs to be. Migrations are painful in general and they become very painful at scale. i haven't used gel yet, i know it manages migrations but i don't know to what extent. most of my experience is with prisma, drizzle, and atlas.
neon is working on some plans to solve migrations at scale. we think our custom storage layer will allow us to optimize certain paths, like setting a default value in a new column for a table with millions of rows. this alter command can take a lock on a table for hours. but ultimately we need better tooling.
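(for context, a rough sketch of the manual workaround people reach for today on older postgres versions or with non-constant defaults; table and column names are made up:)

```
-- add the column without a default first: a metadata-only change, no rewrite
ALTER TABLE big_table ADD COLUMN flag boolean;

-- backfill in small batches so no single statement holds a lock for long
UPDATE big_table SET flag = false WHERE id BETWEEN 1      AND 100000;
UPDATE big_table SET flag = false WHERE id BETWEEN 100001 AND 200000;
-- ...and so on over the rest of the key space...

-- only then attach the default (and, if needed, validate NOT NULL separately)
ALTER TABLE big_table ALTER COLUMN flag SET DEFAULT false;
```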
ideally there is a client and a hosted service such that you can use the client to run migrations on your own from the CLI and integrate it into your dev workflow. the hosted service version allows you to push up your schema change from the client to an API. from there you can manage the migration rollout from an operational dashboard that helps you tune resourcing.
when i was at github we used vitess to roll out a migration that took 3 weeks to complete. a long time to wait but that's a better tradeoff compared to a migration that takes down production for 6 hours.
gulcin 61 days ago [-]
(xata employee)
I totally agree with schema migrations being painful, have you seen the open-source tool we developed to tackle this problem? It is called pgroll: https://github.com/xataio/pgroll
Any feedback is appreciated!
dvtkrlbs 58 days ago [-]
Postgres definitely needs GitHub-like tooling that does always-online migrations. IIRC GitHub's migration tool was open source, right?
And someone can correct me if I am wrong, but really similar migrations are possible with Postgres; I remember reading an article about some company doing a similar migration strategy using Postgres publications. We just need better tooling.
sebnun 62 days ago [-]
The main reason I'm using Supabase is due to their support of the pgroonga extension out of the box.
Does Gel support multilingual full text search?
1st1 61 days ago [-]
Yeah, we probably can. We'll research!
patatero 61 days ago [-]
Your home page consumes 35% of my GPU, that's crazy. Maybe tone down the background animation a bit?
5Qn8mNbc2FNCiVV 62 days ago [-]
Just wish the local development story wasn't so bound to an installation or the concept of instances. I just want a compose file in my repositories that starts Postgres and "Gel" separately, with the option to execute commands within the container.
I mean, I don't need it anymore because I did it myself now, but it definitely annoys me that it's not first class
RedCrowbar 62 days ago [-]
You can use Docker (and Docker Compose) with Gel for local development [1], but of course you'd miss out on most management features of the CLI, because it's not built to supplant the docker/docker-compose CLI. Are there any particular issues you have currently with the Docker image approach?
is the pricing per database or per cluster/account?
RedCrowbar 62 days ago [-]
Strictly speaking we charge for compute and storage. You can create variously-sized Gel instances and within those instances an arbitrary number of branches.
pier25 62 days ago [-]
so I can have multiple free dev dbs using 1/4 compute unit?
1st1 62 days ago [-]
We give you 1GB of space for free. I think you can fit three or four branches in that. But we'll announce a cheaper tier this week (spoiler!) with the same amount of disk space.
sgbeal 62 days ago [-]
> Arguably, this makes PostgreSQL the only mainsteam database system that is truly open-source. It can't be bought or license-rug-pulled and this creates a kind of trust that can't be emulated in any other way.
SQLite, with far more installations than Postgres, isn't a "mainstream database"?
rswail 62 days ago [-]
>> ... mainstream database system ...
> SQLite, with far more installations than Postgres, isn't a "mainstream database"?
It's not a system, it's a monolithic library, which is why it is used wherever you want a local database with local storage.
Postgres implementations are client/server systems even if the connection is on localhost.
It's also MVCC and has RLS and other requirements that do not necessarily apply to a local database like SQLite.
It's not about the number of installations, it's the type of database.
Postgres's cost-based planner is good, but it's a decidedly 1980s design, predating the famous but also outdated Volcano/Cascades systems (used by Microsoft SQL Server and CockroachDB and others).
So much has happened in the field of query optimization in the last 30 years, very little of which has ended up in Postgres, I think. Postgres has gotten parallel workers and a JIT, but the fundamental design is largely unchanged. It's also quite conservative about adding improvements; other databases has had some variation of index skip scans for ages (Oracle has probably had it for 20 years now, and you can get it through the TimeScale extension), but Postgres is still working on supporting it natively.
The state of the art is arguably Umbra [1], a research project by Thomas Neumann's group at the university of Munich, the successor to HyPer, which is now being commercialized as CedarDB. Their analysis of the Postgres query planner is an interesting read [2].
[1] https://umbra-db.com/
[2] https://www.vldb.org/pvldb/vol9/p204-leis.pdf
If I am scraping giant amounts of data I would run far away from Postgres for other databases like Amazon Redshift.
But even in the 9.4 days (~a decade ago) I was pushing Terabytes worth of analytics data daily through a manually managed Postgres cluster with a team of <=5 (so not that difficult). Since then there have been numerous improvements which make scaling beyond this level even easier (parallel query execution, better predicate push down by the query planner, and declarative partitioning to name a few). Throw something like Citus (extension) into the mix for easy access to clustering and (nearly) transparent table sharding and you can go quite far without reaching for specialized data storage solutions.
Bisecant fractal databases like HyperKlingonDB are excellent at storing bisecant fractal data, but terrible at anything else (often, just plain terrible overall due to being immature).
One notable exception for MySQL is INSTANT DDL in 8.x, thanks to contributions from Tencent, which is an extremely nice QoL upgrade that Postgres doesn’t have. The other is logical replication, something Postgres does support now (and has for several years), but didn’t for a long time.
Guess I'll keep using Postgres.
I've been using EdgeDB for years (from RethinkDB and MongoDB before that) and it's my favorite database. I don't need to memorize SQL commands and I get a pretty UI to look at my data if my queries introduce issues.
No bugs, no configuration errors, no nothing. It all just worked. So I think you guys deserve more recognition and credit for what is clearly a very well-engineered product that I intend to use for some of my personal projects.
We have a blog post about that and more [1].
---
By the way, if you visit Drizzle’s website, you’ll see that Gel is one of their biggest sponsors. We worked closely with Drizzle to ship a first-class integration with Gel. You can use Gel’s schema and migrations, and Drizzle will just work. You can even use the Drizzle query builder and EdgeQL side by side if you want.
[1] https://www.geldata.com/blog/a-solution-to-the-sql-vs-orm-di...
Yes and you absolutely have to. That's not a disadvantage, it's just how it is. Because SQL is the absolute bare minimum. The lowest common standard. And not a great one. Null handling and typesystem for example are way inferior to those of good programming languages. So why would I leave those productivity gains on the table?
Using EdgeQL simple means I have another programming language at hand.
> EdgeDB has a robust type system that's most [sic] comprehensive that most ORMs
Well yeah. And it is inferior to the programming language I use. Hence, this comes at a disadvantage to me.
> We've also built a query builder for TypeScript
Aha, so then... why not build a query builder for every language, just like with ORMs?
Sorry, not convinced. We would be better off by improving on SQL itself.
There is no mainstream programming language that I know off that offers what are table stakes for an RDBMS.
For example, even trivial SQL things like a constraint saying 'at least one of these two fields must be empty, but both can't be empty' is missing in the "advanced" type systems in mainstream languages.
Sure, you can create a constraint for `'at least one of these two fields must be empty, but both can't be empty'` but that will only be applied at runtime. It's not like any dmbs (to my knowledge) will reject the query at parse-time. Or else, please show me an example of how you would do that. (and don't use constraints or indexes, because those only work at runtime)
Second, I deliberately said "good programming languages" and you changed that to "mainstream programming language". You know what? If all your mainstream programming languages (however you count or define those) don't support that stuff, maybe it's time to move on and choose a better language.
In Scala at least, it's trivially possible to define such a constraint with types. That ensures that you will never by accident violate that constraint. And that is guaranteed at compile time, so no one is gonna wake you up in the night because your query failed due to that condition you specified.
If you don't believe me, I'm happy to show you the code to encode that logic.
So? The DB will prevent you from violating the constraints, because it cannot tell in advance (i.e. before getting the query with the data in it) whether the data violates the constraint.
> It's not like any dmbs (to my knowledge) will reject the query at parse-time.
What parse-time?
Where's the parse-time in that?TBH, I am still failing to see your point - this looks like an artificial restriction (the query must be rejected before you present it to the DB).
Whether you call it runtime or compile-time or parse-time, the DB will not let you violate the type safety by accident.
The point is that the DB will enforce the constraints.
> Second, I deliberately said "good programming languages" and you changed that to "mainstream programming language".
Well, if your bar for "good programming languages" rules out all the mainstream languages, what's the point of even discussing your point? The point you are making then becomes irrelevant.
> If all your mainstream programming languages (however you count or define those) don't support that stuff, maybe it's time to move on and choose a better language.
For better or worse, the world has rejected those better languages and relegated them to niche uses. Shrieking shrilly about your favourite languages aren't gonna make them more popular.
You know what's more realistic? Teaching the users of the "poorer" languages that they can get all those benefits of type enforcement in their DB without needing to switch languages.
> In Scala at least, it's trivially possible to define such a constraint with types.
In more than a few languages it's possible to do that. I'm thinking more Prolog, and specific SAT solvers than Scala, though. There's benefits in doing so.
However, the minute you plug an RDBMS into your system, many benefits can be gained without switching languages at all. Like real constraints for XOR or composite uniqueness, referential integrity, NULL-prevention, default values, etc.
What I tried to say was: in some programming languages, this insert will not even compile. So you don't have to write a test, you don't have to spin up a test database or anything, it just doesn't compile. And I prefer that over having to wait until a query is actually sent before I get an error.
This is relevant for me because 1.) it makes me more productive since I get the error much quicker and I don't have to write a test and 2.) it prevents getting calls in the night because something broke.
I hope that makes it clear.
> However, the minute you plug an RDBMS into your system, many benefits can be gained without switching languages at all.
Yeah, but that doesn't invalidate my original point, does it?
1) I still disagree somewhat on the finer points,
and
2) I've upvoted your post anyway because other than the finer points on which I disagree, you make good points anyway.
Cheers :-)
I think... we're in a agreement? :)
EdgeQL doesn't have a NULL (it's a set-based language and NULL is an empty set, this tiny adjustment makes it easier to reason about missing data). And because it also has a more robust type system, allows for limitless composition and easy refactoring, it has far greater DX than SQL => you're more productive.
There's a footnote here: EdgeQL doesn't support some of the SQL capabilities just yet, namely window functions and recursive CTEs. But aside from that it is absolutely a beast.
> Well yeah. And it is inferior to the programming language I use. Hence, this comes at a disadvantage to me.
Maybe, but I'm curious how you arrived to that conclusion. I assume you mean that using the power of a high-level programming language you can force ORM to complete submission and that's just not true. Most of the time you'll either have grossly inefficient multi-roundtrip query code (hidden from you) or let ORM go and use SQL. Obviously that's an extreme scenario, but it's surprisingly common in complex code bases and logic.
> Aha, so then... why not build a query builder for every language, just like with ORMs?
We are. We started by improving the network protocol (it does fewer round-trips than Postgres's and is stateless) and crafting client libraries for MANY languages. All client libraries support automatic network & transaction error recovery, automatic connection pooling, and have a generally great and polished API. Not to mention they are fast.
We do have a query builder for TypeScript. But we also have codegen for every language we support: place an .edgeql file in your project and you get fully typed code out of that. That said, we will eventually have query builders for all of them. There's only so much you can do in the 3 years since we announced 1.0.
> Sorry, not convinced. We would be better off by improving on SQL itself.
Significantly improving SQL without starting from scratch isn't possible. Adding sugar surely is, but we are 100% sure that the productivity boost we deliver with EdgeQL is worth going all in (and our users agree with us).
In any case, with Gel 6 we have full SQL support (except DDL), so it's possible to use Gel along with ORMs if that's needed. We are not SQL haters at all.
We have this blog post that was on HN front page a few times, it's a good read and explains our position: https://www.geldata.com/blog/we-can-do-better-than-sql
Or you spend 5 minutes actually putting some effort into configuring your ORM. Maybe spend 30 minutes learning it if it's your first time. People will spend weeks handwriting SQL to save hours of ORM tuning.
The sweet spot is type-safe SQL libraries. You write your raw SQL and the library deduces the return type from the database and gives you type safety (sqlx is really good for this; sqlc for Go is similar). It gives you almost all the benefits of ORMs with almost none of the downsides.
Hopefully it improves over time.
I like the new name (mostly because typing "edgedb" when using the CLI was annoying). Hopefully the new documentation will be better, because the old one wasn't very usable and was a little bit sparse.
Sentry runs large workloads and Postgres isn't a bottleneck. We have also never employed a DBA. Most users never need to shard their database, and at most can just partition datasets by tables for _extremely_ high volume workloads.
You just have to consider architecture and optimize things that are slow, just like any other software. Nothing is free at the end of the day, and nothing else gives you the flexibility that Postgres does, particularly these days with its growing high value extensions.
The DBA was having fun with my silly idea until a slow query took 10 seconds, and we flipped back over to the production systems, which had much more memory allowed for Postgres.
My purpose was to show that Linux itself is pretty fast at caching and paging, even if Postgres was hamstrung. The db actually ran fine except for the slow queries, and we probably could have cleaned them up, too. But I proved my point.
DB was ~300GB, metal had 512GB RAM. This was in 2012.
This is all subjective, of course, but I wouldn’t call 100K QPS large, either. It’s getting on the high end of what you might reasonably want to handle without horizontal scaling, but it’s still not “large.”
Almost every DB that starts without SQL support eventually ends up adding it back in later.
If you want to experiment with syntax for database queries, macros that translate to SQL at compile time seem like a much better way to get traction, but pretty much everything I've seen ends up being a more verbose cosmetic variation of SQL. The one big flaw to fix is that SELECT should have been at the end.
GCP has very cool high availability, backup and monitoring features that I'd hate to lose if we moved to their cloud offering. Can you configure which region your data is in? Can you put it behind / inside a VPC?
Couldn’t really find that info in the docs / pricing pages.
Fully supported, we have a guide for every popular platform [1].
> If we have our own Postgres db (placed in a specific region for compliance), and we put gel onto our k8s cluster as they state in their docs, does it work well?
If you're OK with managing Postgres and your own k8s you won't have a problem.
> I assume this type of deployment is free right?
Yes, and everything you need for that is Apache 2.
> What features are we missing from their cloud offering?
A couple:
1. Slow query log: it requires a custom C extension for Postgres (we wish it did not). It's part of Gel and is open source, but you aren't allowed to deploy custom Postgres extensions on, say, AWS Aurora.
2. Integrations with Vercel [2] and GitHub. With our Cloud you get Vercel Previews set up out of the box.
3. Convenience: we manage Gel Cloud and it's our headache to keep it running for you. :)
> Can you configure which region your data is in?
We're currently on AWS, but yes, you can configure a region, if you choose to use our cloud.
> Couldn’t really find that info in the docs / pricing pages.
Good point, I'll use this question as a template to add a Q&A there :)
[1] https://docs.geldata.com/resources/guides/deployment [2] https://www.geldata.com/blog/seamless-dx-with-vercel
I actually think EdgeDB was a far better name. It meant something; yes, it wasn't a pure graph database, but it did work with the general concept of graphs and edges. Gel means nothing. Even the domain name geldata.com makes no sense: you're not selling or creating data, it's a database system/layer.
I've been a true evangelist for EdgeDB over the last two years, but this has really irked me - irrationally so, I'm aware(!), but I just can't see the benefits outweighing the drawbacks here. It feels like it has to have been a result of some legal action or something.
And just as LLMs were starting to catch up with their knowledge cutoff to include EdgeDB too! Now it'll be another two years until they know what you're talking about again. I know, first world problems.
Is the query language name changing, by the way? Or is that still EdgeQL? Please let sanity prevail over at least one thing and don't call it "jel-q-ell"!
I also liked EdgeDB (I'm biased, I coined that name myself). But at every conference we had the same conversation with developers:
"EdgeDB? Huh, must be an edge-computing database. Are you running SQLite?"
There are other minor reasons for the rename, but this annoyance persisted for a number of years and we decided to pull the plug on it.
> I've been a true evangelist for EdgeDB over the last two years
Thank you. This means a lot.
The argument of "some developers don't look past name and think that it's an edge-computing database" is superficial at best. I can get how it can be annoying, personally, but unless it was backed up with data showing that it hurts adoption, I wouldn't take it as a serious problem worth forcing thousands of users to switch to a new name/commands.
Following this logic, these developers could also ask "MySQL? Huh, you must be running it only on your own servers" or "Gel? Huh, is it some toy database for kids?".
I wish more people realized the enormous cognitive costs and debt created by such renamings. :/
I wonder if this will really go away or just be replaced with something else. Is it really the name "specifically" or how a lot of people work?
(i.e. going by first impressions, connotations, memes, etc)
> "EdgeDB? Huh, must be an edge-computing database. Are you running SQLite?"
"Gel? Huh, must be for cosmetics. Is this for retail?"
p.s. had no issues with EdgeDB.
Gel doesn't have any really strong existing computing connotations, so maybe people will assume it's about some non-computing-related domain and we'll miss reaching them. But that just seems far less likely than continuing to push against the overwhelming feeling that "Edge" just has too much baggage and was working against us.
Personally, I think it's fine if people find the name confounding or just personally dislike it. I think products like Supabase, Neon, Drizzle, CockroachDB, Django, Flask, Express, etc etc kind of prove that if your name is general enough, you'll overcome that reaction eventually via recognition for the value of the product itself.
Personally I think EdgeDB was a much better and descriptive name than Gel. I haven't used EdgeDB/Gel yet, but have been looking at it with excitement for years now waiting for the opportunity to use it.
I am worried that the rename will work against you since you've built such an excellent brand around EdgeDB with so many glowing testimonials.
How does this compare to solutions like Supabase? What stands out with Supabase is the SDK support. My current project is probably going to stay on Supabase (it's a hobbyist project regardless), but hypothetically, if I were an enterprise prospect, how would you pitch your solution?
Do you support functions that I could call from the client for more advanced logic that can't be done with queries?
Why not brand as GelDB? It would probably be easier to Google. Plus it tells me instantly what you're selling.
PS: Can you offer something like a hobbyist tier for $10 a month? I don't want to deal with my project randomly shutting off, but I have extremely low storage requirements during development.
Supabase is great and works well if you want vanilla (more or less) Postgres with some integrations ready to go.
With Gel you get Postgres with a data layer on top. If you want a data model with abstract types & mixins that's easy to work with and scales in complexity, advanced access control, a built-in schema migrations engine, a hierarchical graph query language on top of SQL, and more robust client libraries -- that's Gel.
Gel is opinionated and vertically integrated, and that's its core strength. All of its components were developed in tandem -- from the network protocol to the client APIs, from the EdgeQL query language to the data model to the migrations engine and so forth. It provides a more cohesive experience and can give you non-trivial performance and DX gains if you commit to it.
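To make the abstract types & mixins point a bit more concrete, here's a rough sketch of what a query against an abstract type looks like from the Python client (every type and property name below is hypothetical, just to illustrate the shape of it):

```
import gel

client = gel.create_client()

# Hypothetical schema: Post and Comment both extend an abstract type Authored
# that declares `author` and `created_at`. One query fans out over both.
feed = client.query('''
    select Authored {
        created_at,
        author: { name },
        [is Post].title,
        [is Comment].body,
    }
    order by .created_at desc
    limit 10
''')

for item in feed:
    print(item.created_at, item.author.name)
```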
> but hypothetically if I was an enterprise prospect how would you pitch your solution.
Total type safety enforced at all levels, a built-in migrations engine, and best-in-class access control (we'll be blogging about our access control vs RLS soon).
> Do you support functions, that I could call from the client for more advanced logic that can't be done with queries ?
With Gel 6 we'll be announcing the new `net` module tomorrow (spoiler!). You'll be able to schedule HTTP calls from triggers/queries/functions.
> Why not brand as GelDB. Would probably be easier to Google. Plus it tells me instantly what your selling.
Well, rebranding from Gel to GelDB would be just a text change to the homepage, so hypothetically the door is open for that. But I hope we can make Gel work, just the same way it works for Render/Neon/Fly.
> PS: Can you offer something like a hobbyist tier for 10$ a month.
Stay tuned, we'll announce some news on that in a couple of days! :)
I don't know about migrating my current project, but I'll definitely try Gel for a future project.
> Total type safety enforced at all levels, a built-in migrations engine, and best-in-class access control (we'll be blogging about our access control vs RLS soon).
Very very cool.
If I just dump my Supabase project's Postgres DB, can I load the SQL into Gel and have everything work? Or would I need to set up the schema?
Ultimately I'm just looking for an open source alternative to Firebase; while it's amazing, I can't exactly claim to have an open source project that requires a closed source service.
Last question: Supabase heavily pushes the use of captchas if you use any anonymous authentication. Does Gel also suggest this? Notably, Firebase doesn't care, which makes it much easier on end users.
Imagine having to solve a captcha to browse Amazon and add products to your cart, you'd probably just use something else.
Ok, just one more question! How is your SDK support? JavaScript is a given, but Godot support would be nice.
Currently you have to define your schema in Gel [1] and then write scripts to port your Postgres data into Gel. It's cumbersome, we know, we'll be working on improving the migration flow.
> Last question: Supabase heavily pushes the use of captchas if you use any anonymous authentication. Does Gel also suggest this? Notably, Firebase doesn't care, which makes it much easier on end users.
We don't have captchas implemented, but when we do, it will be an opt-in configuration option.
> Ok, just one more question! How is your SDK support? JavaScript is a given, but Godot support would be nice.
Golang? We have a great Go client [2].
[1] https://docs.geldata.com/reference/datamodel
[2] https://github.com/geldata/gel-go
Supabase has unofficial support.
https://github.com/supabase-community/godot-engine.supabase/...
Thanks for responding! I'm about 60% done with my current project so I don't think I'll be up to migrate( again, originally I started with Firebase), but I still definitely consider Gel for future projects.
Or if I ever interview with your team (hiring?) I'll migrate my existing project and document the process.
Yeah, I knew that, and precisely because of that I assumed it's a typo :)
We have a native Python client. We can take a look at whether it works from Godot. Do you know if this is a popular use case?
I think Godot is moderately popular, but I can't imagine you'll see a giant uptick in paying clients from adding it.
These queries also deal with permissions & realtime scheduling, so endpoint caching doesn't solve it.
Have gel users hit any performance issues, and how have they dealt with them?
Our users do hit performance issues, usually because they start really using our data model to its full potential: creating tens or hundreds of complex access policies, 100-line-long EdgeQL queries, etc. We have an EXPLAIN command to deal with that, and we also support our customers directly, helping them understand and fix their systems. All of the findings trickle down to the core product so that everyone else can benefit too.
And is there a story for replication/subscribing to queries for real time updates?
Postgres is so powerful partly because of its ecosystem, so I want to know how much of that ecosystem is still useable if I’m using gel on top of Postgres
> If I use some other extension like timescale is that compatible with gel [...] Postgres is so powerful partly because of its ecosystem, so I want to know how much of that ecosystem is still useable if I’m using gel on top of Postgres
Playing nice with the ecosystem is the goal. We started off with more of a walled garden, but with 6.0 a lot of those walls came down with direct SQL support and support for standalone extensions [1]. There is a blog post coming about this specifically tomorrow (I think).
> And is there a story for replication
Replication/failover works out of the box.
> subscribing to queries for real time updates?
Working on it.
> so can I add gel to an existing Postgres instance and get the benefits of the nicer query language or does it rely on special tables?
Gel is built around its schema, so you will need to import your SQL schema. After that you can query things with EdgeQL (or SQL/ORM as before).
[1] https://github.com/geldata/gel-postgis
We'll continue bridging the gap to make it easier for companies to adopt Gel for an existing database. We'll either invest in creating a migration tool, or pursue some more exciting options we're currently pondering.
Gel is somewhat similar to Supabase on the surface—both run on Postgres, both have Auth, and both offer AI features, a CLI, and a UI, among other similarities.
However, there’s a big difference beneath the surface: Gel comes with a high-level data model (abstract types, mixins, access policies) that replaces tables and joins. It’s still relational (and we’re about to publish a paper on that), but it’s higher level, stricter, and yet more flexible.
On top of that, we have a built-in migration system (where schema is a first-class citizen), a performance-tuned network protocol, and a query language called EdgeQL, which is like a child of SQL and GraphQL. These are just a few of our “deep” features.
All in all, Gel is a fresh take on day-to-day database development. We cut no corners in trying to push the core database developer experience forward.
GelDB's auth is versatile and doesn't cost a dime, while Supabase auth is free only up to 50,000 monthly active users; after that it costs $0.00325 per MAU (> 100k MAU).
I personally just love its query language and the typescript query builder.
Can I write regular joins if I need to?
Do you have plans for, or do you already support, DB branching?
The main hurdle would be to migrate the schema. You'll have to define your schema in Gel (take a look at the reference here [1]) and write a script to copy your data.
We are discussing internally how we can simplify this process; this is becoming a popular question.
> Can I write regular joins if I need to?
You can now use EdgeQL and SQL side by side through one connection, in the same function. Gel's schema is still relational (even though it's more powerful with features like multiple inheritance).
This page describes the details [2] of how SQL works with our schema (spoiler: it's very straightforward and no different from a hardcoded SQL schema).
> Do you have plans or do you already support db branching ?
We call Postgres databases "branches" in Gel, and we have tooling around them [3] to give a git-like experience. Conceptually you can map (manually) your Gel branches to your Git branches if you wish.
[1] https://docs.geldata.com/reference/datamodel
[2] https://docs.geldata.com/reference/reference/sql_adapter#que...
[3] https://docs.geldata.com/reference/cli/gel_branch
This is great info.
> The main hurdle would be to migrate the schema
I'm sure you already know this, but Drizzle lets you generate schemas from an existing DB. Not sure how applicable that is to Gel.
https://orm.drizzle.team/docs/drizzle-kit-pull
> We call Postgres databases "branches" in Gel. And we have tooling around them [3] to have git-like experience with them.
Sounds like it's a yes - thanks!
Have you looked into interop with TypeSpec?
Now there is:
```
client.query('''
    INSERT User {
        name := <str>$name,
        dob := <cal::local_date>$dob
    }
''', name='Bob', dob=datetime.date(1984, 3, 1))
```
I'm interested in this, and in JetBrains PyCharm / VS Code / CI support to catch errors in these.
```
insert(User(
    name='Bob',
    dob=datetime.date(1984, 3, 1),
    children=[User(name='C', dob=datetime.date(2000, 3, 1))],
))
```
Right now in Python the generics are quite rudimentary; there's no equivalent to TypeScript's keyof or mapped types. We're using all of TS's advanced type system features to make our query builder work.
Python will get a query builder (it's a priority for us now) soon, but it will not be a type safe one. BUT: Python, and all other languages that we support, can use codegen.
Just put your query in a `<name>.edgeql` file, and run a special command. Gel will generate a fully typed function that runs that query.
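Roughly, the flow looks like this (the file, module, and function names below are illustrative, and the exact shape of the generated code may differ):

```
# queries/get_user.edgeql (illustrative):
#
#     select User { name, dob }
#     filter .name = <str>$name
#
# After running the code generator, application code imports a typed function
# generated from that file, so editors and CI can check calls against it.
import gel
from queries.get_user_edgeql import get_user  # generated module (name illustrative)

client = gel.create_client()
users = get_user(client, name="Bob")  # parameter and result types come from the query
for user in users:
    print(user.name, user.dob)
```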
Was linked from one of your articles, returns 404 for me.
Fits right in.
Is there something like Vitess for Postgres?
It is called pgroll: an open-source tool aiming to minimize the downtime risks associated with DDL changes, offering a multi-schema view, instant rollbacks, and higher-level options like backfilling in batches.
You can check the repo here: https://github.com/xataio/pgroll
There isn't something like Vitess for Postgres yet, but there needs to be. Migrations are painful in general and they become very painful at scale. I haven't used Gel yet; I know it manages migrations but I don't know to what extent. Most of my experience is with Prisma, Drizzle, and Atlas.
Neon is working on some plans to solve migrations at scale. We think our custom storage layer will allow us to optimize certain paths, like setting a default value in a new column for a table with millions of rows; that alter command can take a lock on a table for hours. But ultimately we need better tooling.
Ideally there is a client and a hosted service, such that you can use the client to run migrations on your own from the CLI and integrate it into your dev workflow. The hosted-service version allows you to push your schema change from the client to an API; from there you can manage the migration rollout from an operational dashboard that helps you tune resourcing.
When I was at GitHub we used Vitess to roll out a migration that took 3 weeks to complete. A long time to wait, but that's a better tradeoff compared to a migration that takes down production for 6 hours.
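For context, the usual hand-rolled workaround (and the kind of thing this tooling should automate) is to add the column with no default, backfill it in small batches, and only attach the default afterwards. A rough sketch with psycopg2, with a made-up connection string and table/column names:

```
import time
import psycopg2

# Connection string, table, and column names are illustrative.
conn = psycopg2.connect("dbname=app")
conn.autocommit = True
cur = conn.cursor()

# Step 1: adding the column with no default is a cheap catalog-only change.
cur.execute("ALTER TABLE events ADD COLUMN region text")

# Step 2: backfill in small batches so no single statement holds locks for long.
while True:
    cur.execute(
        """
        UPDATE events
           SET region = 'unknown'
         WHERE id IN (SELECT id FROM events WHERE region IS NULL LIMIT 10000)
        """
    )
    if cur.rowcount == 0:
        break
    time.sleep(0.1)  # give the primary and its replicas room to breathe

# Step 3: only now attach the default (and NOT NULL, once fully backfilled).
cur.execute("ALTER TABLE events ALTER COLUMN region SET DEFAULT 'unknown'")
```

Real tools layer lock timeouts, progress tracking, and retries on top of this, which is most of the hard part.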
I totally agree that schema migrations are painful. Have you seen the open-source tool we developed to tackle this problem? It's called pgroll: https://github.com/xataio/pgroll
Any feedback is appreciated!
Edit: yes it was https://github.com/github/gh-ost
Does Gel support multilingual full text search?
I mean, I don't need it anymore because I did it myself now, but it definitely annoys me that it's not first class
SQLite, with far more installations than Postgres, isn't a "mainstream database"?
> SQLite, with far more installations than Postgres, isn't a "mainstream database"?
It's not a system, it's a monolithic in-process library, which is why it is used wherever you want a local database with local storage.
Postgres deployments are client/server systems, even if the connection is over localhost.
Postgres also has MVCC, RLS, and other features whose requirements do not necessarily apply to a local database like SQLite.
It's not about the number of installations, it's the type of database.