Let’s go on unveiling Datomisca a bit more.
After covering queries compiled by Scala macros in the previous article, I’m going to describe how Datomisca lets you create Datomic fact operations programmatically and send them to the Datomic transactor through an asynchronous, non-blocking API based on Scala 2.10’s Future/ExecutionContext.
First, let’s recall a few facts about Datomic:
Datomic stores very small units of data called facts
Yes, there are no tables, documents or even columns in Datomic. Everything stored in it is a very small fact.
Fact is the atomic unit of data
Facts are represented by the following tuple, called a Datom: `[entity attribute value tx]`, where:
- entity is an ID; several facts can share the same ID, making them facts of the same entity. As you can see, an entity is a very loose concept in Datomic.
- attribute is just a namespaced keyword, such as `:person/name`, which is generally constrained by a typed schema attribute. The namespace can be used to logically identify an entity, like “person”, by grouping several attributes under the same namespace.
- value is the value of this attribute for this entity at this instant
- tx uniquely identifies the transaction in which this fact was inserted. Naturally a transaction is associated with a time.
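As a concrete illustration, here is a single datom stating that an entity was named “toto” in a given transaction (all IDs here are made up):

```clojure
;; [entity attribute    value  tx  ]
   [17     :person/name "toto" 1234]
```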
Facts are immutable & temporal
It means that:
- You can’t change the past
Facts are immutable, i.e. you can’t mutate a fact as other databases generally do: Datomic always creates a new version of the fact with the new value.
- Datomic always grows
If you add more facts, nothing is deleted, so the DB grows. Naturally, you can truncate a DB, export it and rebuild a new, smaller one.
- You can foresee a possible future
From your present, you can temporarily add facts to Datomic without committing them to the central storage, thus simulating a possible future.
Reads/writes are distributed across different components
- One storage service physically storing the data (DynamoDB / Infinispan / Postgres / Riak / …)
- Multiple peers (generally local to your app instances) behaving like high-speed synchronized caches, hiding all the local data storage and synchronization mechanics and providing Datalog queries.
- One (or several) transactor(s) centralizing the write mechanism, providing ACID transactions and notifying peers about those changes.
For more info about architecture, go to this page
Immutability means known DB state is always consistent
Since Datomic is distributed, you might not be up-to-date with the central data storage; you can even lose the connection to it. But the data you know is always consistent, because nothing can be mutated.
This immutability concept is one of the most important to understand in Datomic.
Schema constrains entity attributes
Datomic allows you to declare that a given attribute must:
- be of a given type (string, long, instant, …)
- have a cardinality (one or many)
- be unique or not
- be full-text searchable or not
- be documented
It means that if you try to insert a fact with an attribute and a value of the wrong type, Datomic will refuse it.
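For illustration, a classic Datomic schema attribute definition covering those constraints looks like this in Clojure (the attribute name and doc string are made up):

```clojure
{:db/id #db/id[:db.part/db]
 :db/ident :person/name
 :db/valueType :db.type/string        ;; typed
 :db/cardinality :db.cardinality/one  ;; cardinality
 :db/fulltext true                    ;; full-text searchable
 :db/doc "A person's name"            ;; documented
 :db.install/_attribute :db.part/db}
```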
A Datomic entity can also reference other entities, providing relations (even though Datomic is not an RDBMS). One interesting thing to know: all relations in Datomic are bidirectional.
I hope you immediately see the link between these typed schema attributes and potential Scala type-safe features…
Author’s note : Datomic is more about evolution than mutation
I’ll let you meditate this sentence linked to theory of evolution ;)
When you want to create a new fact in Datomic, you send a write operation request to the transactor.
There are 2 basic operations:
Add a Fact
Adding a fact for the same entity will NOT update the existing fact but will create a new fact with the same entity-id and a new tx.
Retract a Fact
Retracting a fact doesn’t erase anything; it just states: “for this entity-id, from now on, this attribute no longer has this value”.
You might wonder why you must provide the value when you want to remove a fact. This is because an attribute can have MANY cardinality, in which case you want to remove just one value from the set of values.
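In Clojure transaction data, the two operations look like this (entity-id and values are illustrative):

```clojure
;; add a fact: entity 17 has the name "toto"
[:db/add 17 :person/name "toto"]

;; retract a fact: entity 17 no longer has the name "toto"
[:db/retract 17 :person/name "toto"]
```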
In Datomic, you often manipulate groups of facts identifying an entity. An entity has no physical existence in Datomic; it is just a group of facts sharing the same entity-id. Generally, the attributes constituting an entity are logically grouped under the same namespace (`:person/name`, `:person/age`, …), but this is not mandatory at all.
Datomic provides 2 operations to manipulate entities directly: adding an entity and retracting an entity. Adding an entity with, for example, 2 attributes is actually equivalent to 2 Add-Fact operations sharing the same entity-id.
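Sketched in Clojure transaction data (attribute names are illustrative), adding an entity is written as a map and expands to one Add-Fact per attribute:

```clojure
;; add an entity with 2 attributes
{:db/id #db/id[:db.part/user -1]
 :person/name "toto"
 :person/age  30}

;; ...which is equivalent to 2 Add-Fact operations on the same temporary id
[:db/add #db/id[:db.part/user -1] :person/name "toto"]
[:db/add #db/id[:db.part/user -1] :person/age  30]

;; retract a whole entity
[:db.fn/retractEntity 17]
```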
In Datomic, there are special entities built using the special attribute `:db/ident` of type Keyword, which are said to be identified by the given keyword. They are created as follows:
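For example, the two identified entities discussed below can be created with the following transaction data:

```clojure
[{:db/id #db/id[:db.part/user]
  :db/ident :person.characters/violent}
 {:db/id #db/id[:db.part/user]
  :db/ident :person.characters/dumb}]
```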
If you use `:person.characters/dumb`, it directly references one of those 2 entities without using their ID.
You can also see those identified entities as enumerated values.
Now that you know how it works in Datomic, let’s go to Datomisca!
Datomisca’s preferred way to build fact/entity operations is programmatic, because it gives Scala developers more flexibility. Here is the translation of the previous operations into Scala:
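Here is a sketch of what those operations can look like in Datomisca (attribute names and values are illustrative, and the exact API may vary slightly across Datomisca versions):

```scala
import datomisca._
import Datomic._

val person = Namespace("person")
val characters = Namespace("person.characters")

// identified entities, referenced later through `.ref`
val violent = AddIdent(characters / "violent")
val dumb    = AddIdent(characters / "dumb")

// a temporary ID in the USER partition
val totoId = DId(Partition.USER)

val ops = Seq(
  violent,
  dumb,
  // an Entity-Add operation grouping several Fact-Add operations
  Entity.add(totoId)(
    person / "name"      -> "toto",
    person / "age"       -> 30L,
    person / "character" -> Set(violent.ref, dumb.ref)
  )
)
```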
- `person / "name"` creates the keyword `:person/name` in namespace `person`.
- `DId(Partition.USER)` generates a temporary Datomic ID in partition `USER`. Please note that you can create your own partitions too.
- `violent.ref` is used to access the keyword reference of the identified entity.
- `ops = Seq(…)` represents the collection of operations to be sent to the transactor.
Remember the way Datomisca dealt with query by parsing/validating Datalog/Clojure queries at compile-time using Scala macros?
You can do the same in Datomisca with operations:
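A sketch of what such a compiled operation block can look like (this assumes a `Datomic.ops` macro as the entry point; names and values are illustrative and the exact API may differ between Datomisca versions):

```scala
import datomisca._
import Datomic._

val weak = AddIdent(Namespace("person.characters") / "weak")

// parsed and validated at compile-time; $weak is injected from Scala
val ops = Datomic.ops("""[
  [:db/add #db/id[:db.part/user] :person/name "tata"]
  [:db/add #db/id[:db.part/user] :person/character $weak]
]""")
```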
It compiles what’s between `"""…"""` at compile-time, reports any errors, and then builds the corresponding Scala operations.
Ok, it’s cool, but if you look closer, you’ll see there is some sugar in this Clojure code: you can use Scala variables and inject them into Clojure operations at compile-time, just as you do with Scala string interpolation.
For Datomic queries, the compiled way is really natural, but for operations we tend to prefer the programmatic way, which feels much more “Scala-like” after experiencing both methods.
There is one last way to create operations: parsing a String at runtime, throwing an exception if the syntax is not valid.
It’s very useful if you have existing Datomic Clojure files (containing schema or bootstrap data) that you want to load into Datomic.
Last but not least, let’s send those operations to the Datomic transactor.
In its Java API, the Datomic Connection provides a `transact` asynchronous API based on a ListenableFuture. This API can be enhanced in Scala, because Scala 2.10 brings much more evolved asynchronous/non-blocking facilities than Java:
- `Future` allows you to implement your asynchronous call in continuation style, composing results with classic Scala combinators such as map/flatMap.
- `ExecutionContext` is a great tool allowing you to specify in which pool of threads your asynchronous call will be executed, making it non-blocking with respect to your current execution context (or thread).
These features are really important when you work with reactive APIs such as Datomisca or Play, so don’t hesitate to study them further.
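To illustrate these Scala primitives on their own (independently of Datomisca), here is a minimal, self-contained example of composing a Future on an implicit ExecutionContext:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
// the implicit ExecutionContext decides on which thread pool callbacks run
import scala.concurrent.ExecutionContext.Implicits.global

object FutureDemo extends App {
  // the computation runs asynchronously on the global thread pool,
  // and `map` chains a continuation onto it without blocking
  val f: Future[Int] = Future(21).map(_ * 2)

  // blocking here only for the demo; reactive code would keep composing
  println(Await.result(f, 5.seconds)) // prints 42
}
```

In real reactive code you would never `Await`; you would keep mapping/flatMapping over the Future, exactly as Datomisca does with its `transact` result.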
Let’s look at code directly to show how it works in Datomisca:
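A minimal sketch of sending operations with Datomisca (the URI, namespace and values are illustrative, and this assumes a running transactor and an ExecutionContext in scope):

```scala
import datomisca._
import Datomic._
import scala.concurrent.ExecutionContext.Implicits.global

val uri = "datomic:mem://mydb"  // illustrative in-memory DB URI
implicit val conn = Datomic.connect(uri)

val person = Namespace("person")
val ops = Seq(
  Entity.add(DId(Partition.USER))(person / "name" -> "toto")
)

// transact returns a Future[TxReport]; map on it in continuation style
Datomic.transact(ops) map { tx: TxReport =>
  println(s"Transaction report: $tx")
}
```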
Please note the `tx: TxReport`, a structure returned by the Datomic transactor containing information about the last transaction.
In all samples, we create operations based on temporary ID built by Datomic in a given partition.
But once you have inserted a fact or an entity into Datomic, you need to resolve the real final ID in order to use it further, because the temporary ID is no longer meaningful.
The final ID is resolved from the `TxReport` sent back by the Datomic transactor. This `TxReport` contains a mapping between temporary IDs and final IDs. Here is how you can use it in Datomisca:
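A minimal sketch of resolving the final ID from the `TxReport` (this assumes a `resolve` method on `TxReport`, an implicit connection and ExecutionContext in scope, and illustrative attribute names):

```scala
import datomisca._
import Datomic._
import scala.concurrent.ExecutionContext.Implicits.global

val person = Namespace("person")
val tempId = DId(Partition.USER)

Datomic.transact(
  Entity.add(tempId)(person / "name" -> "toto")
) map { tx: TxReport =>
  // resolve the final ID allocated by the transactor for our temporary ID
  val realId: Long = tx.resolve(tempId)
  println(s"final id: $realId")
}
```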