InputLayer is a modern open-source database built on three key concepts: facts, rules, and queries.
```
// Facts: who manages whom
+manages("alice", "bob")
+manages("bob", "charlie")
+manages("bob", "diana")

// Rule: transitive authority (recursive)
+authority(X, Y) <- manages(X, Y)
+authority(X, Z) <- manages(X, Y), authority(Y, Z)

// Query: who does Alice have authority over?
?authority("alice", Person)
```
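To see what the recursive rule does, here is a minimal Python sketch that derives the same `authority` facts by iterating the rule to a fixpoint. This is illustrative only; it is not how InputLayer's engine is implemented.

```python
# Naive fixpoint evaluation of the transitive-authority rule.
# Sketch for intuition only; InputLayer uses incremental evaluation,
# not this naive loop.
manages = {("alice", "bob"), ("bob", "charlie"), ("bob", "diana")}

# authority(X, Y) <- manages(X, Y)
authority = set(manages)

# authority(X, Z) <- manages(X, Y), authority(Y, Z)
# Re-apply the rule until no new facts are derived.
changed = True
while changed:
    changed = False
    for (x, y) in manages:
        for (y2, z) in list(authority):
            if y == y2 and (x, z) not in authority:
                authority.add((x, z))
                changed = True

# ?authority("alice", Person)
print(sorted(p for (boss, p) in authority if boss == "alice"))
# → ['bob', 'charlie', 'diana']
```

Note that `("alice", "charlie")` was never stored; it exists only because the rule derived it.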
## The problem
The standard RAG pipeline retrieves documents by similarity. That fails when the answer is linked to the question through a chain of facts rather than surface-level similarity.
Patient asks “Can I eat shrimp tonight?” System finds recipes. Misses the allergy record three hops away: patient takes Drug X → interacts with iodine → shrimp is high in iodine.
The chain drug → interaction → ingredient → food has zero vector similarity to “shrimp dinner.”
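The chain is trivial to follow once the facts are typed edges rather than text. A sketch with hypothetical facts (predicate names are illustrative, not a real schema):

```python
# Why similarity misses the answer: it requires traversing typed
# edges, not matching text. Facts below are hypothetical.
facts = [
    ("patient", "takes", "drug_x"),
    ("drug_x", "interacts_with", "iodine"),
    ("shrimp", "high_in", "iodine"),
]

def risky_foods(patient, facts):
    """Follow takes -> interacts_with -> high_in: three hops."""
    drugs = {o for (s, p, o) in facts if s == patient and p == "takes"}
    substances = {o for (s, p, o) in facts
                  if s in drugs and p == "interacts_with"}
    return {s for (s, p, o) in facts
            if p == "high_in" and o in substances}

print(risky_foods("patient", facts))  # → {'shrimp'}
```

No embedding of “shrimp dinner” ever comes near “Drug X,” yet the traversal finds the connection in three hops.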
Employee asks for Q3 revenue reports. Vector DB returns 40 matching documents. Cannot check whether this employee, in this role, in this department, has permission to see any of them.
Access control is a logical question, not a similarity question.
Compliance asks “Is this transaction suspicious?” System finds similar transactions. Misses: Entity A paid Entity B, B is a subsidiary of C, C is on a sanctions list.
Graph traversal + rule evaluation - not pattern matching.
## The solution
InputLayer's query language combines graph traversal, logical rules, and vector search in a single query.
### Recursive rule evaluation
Define rules like transitive authority. The engine recursively derives all conclusions - including things you never explicitly stored.
### Policy-filtered search
Logical access control and vector similarity in one pass. Permission-checked and semantically ranked results without glue code.
### One query replaces three systems
A single query replaces a vector DB call, a graph traversal, and a policy engine.
## Example
```
// Policy-filtered semantic search - one query
?authority("alice", Author),
 document(DocId, Author, Embedding),
 Similarity = cosine(Embedding, QueryVec),
 Similarity > 0.7
```
This query resolves the authority rule recursively, runs vector search in the same pass, and returns only documents that Alice has permission to see and that are semantically relevant to her question.
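Conceptually, the query fuses a policy filter with similarity ranking. A Python sketch of that pipeline, with illustrative data and an assumed cosine threshold of 0.7 (none of this is InputLayer's internals):

```python
import math

def cosine(a, b):
    """Cosine similarity of two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# authority("alice", Author) is resolved recursively by the engine;
# here we just assume the derived set of authorized authors.
authorized_authors = {"alice", "bob", "charlie", "diana"}

documents = [
    ("doc1", "bob",   [0.9, 0.1]),
    ("doc2", "eve",   [0.9, 0.1]),   # relevant, but not permitted
    ("doc3", "diana", [0.1, 0.9]),   # permitted, but not relevant
]
query_vec = [1.0, 0.0]

results = [
    (doc_id, cosine(emb, query_vec))
    for (doc_id, author, emb) in documents
    if author in authorized_authors       # policy filter
    and cosine(emb, query_vec) > 0.7      # similarity threshold
]
print([doc_id for (doc_id, sim) in results])  # → ['doc1']
```

The point of doing both in one pass is that no unauthorized document is ever ranked, so there is no post-filtering step that can leak.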
## Under the hood
InputLayer is built on an incremental computation engine. This gives it three properties that matter for production use.
When a fact changes, only the affected derivations recompute. Insert one new edge into a 2,000-node graph and re-query transitive closure: 6.83ms instead of 11.3 seconds.
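The intuition behind incremental recomputation, for the transitive-closure case specifically: an inserted edge only creates new authority facts between nodes above its source and nodes below its target, so only that delta needs deriving. A sketch (not InputLayer's actual algorithm):

```python
# Delta propagation for one inserted manages(x, y) edge: derive only
# the new closure facts it enables, instead of recomputing everything.
# Sketch for the transitive-closure rule only.

def insert_edge(manages, authority, new_edge):
    """Propagate an inserted edge through the authority closure."""
    x, y = new_edge
    manages.add(new_edge)
    # Everyone with authority over x gains authority over y and
    # over everyone y already has authority over.
    above = {x} | {a for (a, b) in authority if b == x}
    below = {y} | {b for (a, b) in authority if a == y}
    delta = {(a, b) for a in above for b in below} - authority
    authority |= delta
    return delta  # only the newly derived facts

manages = {("alice", "bob")}
authority = {("alice", "bob")}
delta = insert_edge(manages, authority, ("bob", "charlie"))
print(sorted(delta))  # → [('alice', 'charlie'), ('bob', 'charlie')]
```

The work is proportional to the size of the delta, not the size of the graph, which is where the large speedups on big graphs come from.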
Every derived fact traces back to the rules and base facts that produced it. Not “the vector was close” - a full derivation chain you can audit and explain.
Delete a fact and every conclusion derived through it disappears automatically - even through chains of recursive rules. No phantom permissions, no manual cache invalidation.
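The correctness requirement here is that retracting a base fact also retracts everything derived through it. The simplest strategy that gets this right is recomputing the derived set from the remaining facts; real engines do it incrementally, but the semantics are the same. A sketch:

```python
# Correct retraction: deleting a base fact must also retract every
# conclusion derived through it. Recomputing the closure from the
# surviving base facts is the simplest correct strategy.

def closure(edges):
    """Transitive closure of a set of (from, to) edges."""
    auth = set(edges)
    changed = True
    while changed:
        changed = False
        for (x, y) in edges:
            for (a, b) in list(auth):
                if a == y and (x, b) not in auth:
                    auth.add((x, b))
                    changed = True
    return auth

edges = {("alice", "bob"), ("bob", "charlie")}
print(("alice", "charlie") in closure(edges))   # → True
edges.discard(("bob", "charlie"))
print(("alice", "charlie") in closure(edges))   # → False: no phantom
```

A cache keyed on the derived fact alone cannot do this, because it does not know which base facts the entry depends on; that dependency tracking is exactly what the derivation chain provides.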
## Comparison
Replace the duct-tape architecture where you run a vector DB, a graph DB, a rules engine, and a batch pipeline - and try to keep them in sync.
| Capability | Vector DBs | Graph DBs | SQL | InputLayer |
|---|---|---|---|---|
| Vector similarity | ✓ | ✗ | ✗ | ✓ |
| Graph traversal | ✗ | ✓ | ✗ | ✓ |
| Rule-based inference | ✗ | ✗ | ✗ | ✓ |
| Recursive reasoning | ✗ | Limited | Limited (recursive CTEs) | ✓ |
| Incremental updates | ✗ | ✗ | ✗ | ✓ |
| Correct retraction | ✗ | ✗ | ✗ | ✓ |
| Explainable retrieval | ✗ | ✗ | ✗ | ✓ |
## Features
Pull the Docker image and start querying in seconds. The query language is intuitive - if you know SQL, the basics take about 10 minutes.
```sh
docker run -p 8080:8080 ghcr.io/inputlayer/inputlayer
```