A new web front-end design

JeeLabs - Sat, 13/02/2016 - 01:00

JET is going to need a web interface. In fact, it’s likely that a major part of the total development effort will end up being poured into this “front-end” aspect of the system.

After many, many explorations, a very specific set of tools has been picked for this task, here at JeeLabs. It’ll all be JavaScript-based (ES6 as much as possible), since that’s what web browsers want. But unlike a number of earlier trials and actual builds of HouseMon, this time we’ll go for a fairly plain approach: no CoffeeScript and … no ClojureScript. They’d complicate things too much for casual development, despite their attraction (ClojureScript is pretty amazing!).

We do, however, want a very capable development context, able to create a UI which is deeply responsive (“reactive”, even) and can keep everything on the screen up to date in real time.

Here is the set of tools selected for upcoming front-end development in JET:

  • ReactJS - should be easier to learn than AngularJS (but also totally different!)
  • WebPack - transparently builds and re-builds code during development
  • Hot Reload - an incredible way to literally edit a running app without losing context
  • ImmutableJS - a new trend in ReactJS, coming from Clojure and other FP languages
  • PureCSS (probably) as a simple and clean grid-based CSS styling framework

This front end will be called JET/Web and has been based on the react-example-es2015 project template on GitHub. It has all the main pieces in place for a truly fluid mode of development. A very preliminary setup can be found in the “web/” directory inside the JET repository - but note that the current code still has sample content from the template project.

Front-end development is a lot different from back-end development, i.e. the JetPacks and the hub itself. In development mode, a huge amount of machinery is activated, with NodeJS driving WebPack, “injecting” live-reload hooks into the web pages, and automatic change detection across all the source files. Once ready for “deployment”, the front-end development ends with a “build” step, which generates all the “assets” as static files, and compresses (“uglifies”) all the final JavaScript code into a fairly small single file - optimised for efficient use by web browsers. In JET/Web, the final production code is only about 150 KB of JavaScript (including ReactJS).

If you’re new to any of the tools mentioned above - you may well find that there’s an immense amount of functionality to learn and get familiar with. This is completely unlike programming with a server-side templating approach, such as PHP, Ruby on Rails, or Django. Then again, each and every one of these tools is incredibly powerful - and it’s guaranteed to be fun!

That’s the consequence of today’s breakneck speed and progress w.r.t. web development. But these choices have not been made lightly. Some considerations involved in these choices were:

  • a low-end (even under-powered, perhaps) server, which can’t handle a lot of processing
  • the desire to have everything on a web page updated in real-time, without page refreshes
  • the hub’s web server can’t be restarted, at least not for work on the web client software

In its current state, JET/Web is next to useless. It doesn’t even connect to the MQTT server yet, so there’s no dynamic behaviour other than inside the browser itself (see for yourself, the demo is quite interesting, especially when you start to look into how it’s all set up in the source code).

One final note about these “decisions”: obviously, you have to pick some set of software tools to be able to implement anything meaningful. But with JET, “big” decisions like these are actually quite inconsequential, because many different front ends can easily co-exist: anyone can add another JetPack, and implement a completely different (web or native) front end!

Categories: Community Blog posts

JET development mode setup

JeeLabs - Fri, 12/02/2016 - 01:00

Since JET is intended to remain always-on, at least as far as the hub is concerned, we need to be a little careful how we introduce changes. The way to do this in JET, is to treat the whole system as one, whereby development simply happens to be inactive some of the time:

The hub, and all the interfaces and JetPacks it has been told to activate, will continue to run no matter what (unless they crash or fail in some other way, evidently). This is what keeps the home monitoring and automation going at all times.

For development, we’re going to need a whole slew of tools:

  • a command-line shell, to perform ad-hoc tasks
  • a web browser client, to examine the public interface of the system
  • NodeJS to (re-)generate all the client-side code and assets
  • a programmer’s editor or IDE, obviously
  • an SSH terminal session to connect to the machine running the hub
  • Go, or whatever language environment we are using in our JetPack
  • Git, to manage versions and revisions of all the source code
  • … and probably a variety of other applications and “dev tools”

This is what it looks like on a dual-machine setup, connected via a (local) network:

There are a number of points to make here:

  • a JetPack does not have to be running on the same machine as the hub - although this will depend somewhat on its role: it might need to access a hardware interface, for example
  • the hub is not in charge of what happens on the development machine (shown here as being at the receiving end of the arrows) - it is not necessarily the parent of all JetPacks
  • the hub does not need NodeJS, even when it’s serving web browser requests, but if you would like to use NodeJS functionality, you can install it there as well, of course
  • the above dual-machine split is optional - when making sweeping (or risky) changes, or simply to try out JET first, this can all be set up on a single development machine

The development system at JeeLabs is a Mac OSX laptop, with the Homebrew package manager installed to grab all the pieces of software, and to easily keep them up to date. Currently, this is:

  • ‘brew install go’ - Go version 1.5.3
  • ‘brew install node’ - NodeJS version 5.6.0
  • ‘brew install macvim’ - the Vim editor, as GUI variant
  • ssh is pre-installed, gcc and git are in the standard Xcode command-line toolset

If you develop on a Windows or Linux PC, you’ll need to locate these packages for your system. Version requirements are not very stringent: try to use fairly recent versions of Go and NodeJS.

The one missing piece is cross-compilation for ARM and AVR µCs. Here are some options:

  • for ARM, you can get good ready-made builds from the launchpad.net site
  • or set up your search paths to use the gcc cross-compilers included in the Arduino IDE
  • or you could choose to set up cross-compilation on the hub’s Linux machine - this will be slower but can be automated with some remote commands through ssh

More details about these different options will be the subject of a separate article.

Lastly, as example of a dual-machine configuration, here is a permanent-ish setup for the hub:

Actually, this is going to be used as the basis for a secondary test setup at JeeLabs. This should make it easier to experiment with more radical design ideas involving the hub itself. The “main” production setup is separate, and already running 24/7 on a battery-backed Odroid U3.

The components above are, in clockwise order:

  • an Odroid C1+ with 8 GB eMMC, running a lightweight version of Debian Jessie 8.3
  • also mounted inside the case: the RasPi RF board
  • a JeeLink Classic with RFM12 @ 868 MHz
  • an Odroid-branded WiFi stick
  • underneath the JeeLink, a short USB cable to …
  • a HyTiny-STM32 board, programmed to act as Black Magic Probe, driving …
  • a second HyTiny-STM32 board, with RFM69 and OLED display
  • a 10,000 mAh LiPo-based USB Power Bank - it’ll easily power all this for a day
  • a small experimental node, based on an LPC824 and RFM69
  • all mounted on … foam board! … with … tie wraps!

More options are likely to be added later, e.g. for trying out an ESP8266.

This configuration has multiple radios, so it can also be used to generate test packets and see how the receiving node (and hub) processes the data. And the OLED is a nice debugging aid.

Categories: Community Blog posts

System requirements for JET

JeeLabs - Thu, 11/02/2016 - 01:00

One of the main design goals for JET, is that it must run well on very low-power Linux boards.

The reasoning is that you really don’t need much computing power at all to manage all the data flows involved with home monitoring and automation. It’s a very slow kind of real-time system, with at most a few events per second, and most of the time nothing else to do than collecting the incoming sensor data for temperature / light / door / motion sensors, etc.

Then again, a good responsive user interface which can update a flashy graphical screen and is able to show live graphs with potentially a lot of historical data is considerably more demanding. The insight here, is that all of this processing can take place in the browser, which is always very performant nowadays, on desktops as well as mobile platforms.

In fact, with modern “reactive” and “single-page” applications, we don’t even need a very capable web server - if it can serve 100% static files and handle a WebSocket connection on the side for all the (bi-directional) real time stuff, we’ll be fine. There’s no need for any server-side rendering of web pages, i.e. no templating, no embedded scripting language, nothing.

That means that on the server side, JET must include these functions:

  • a message dispatcher, i.e. MQTT with Mosquitto
  • accept incoming sensor data, via serial, gpio, i2c, LAN, WiFi, etc.
  • keep raw data logs, i.e. the current “logger” module in the hub
  • a scalable data store, in this case a key-value database using BoltDB
  • a web server, serving static web pages, and JS/CSS/image assets
  • websocket handler, tied into MQTT (either in Mosquitto or via the hub)
  • the ability to run arbitrary tasks as custom JetPacks, launched by the hub
  • optional extensions, such as statistics, historical data, and a rule engine

But that’s about it. As long as none of these pull in large applications, we really can keep it all very lightweight. And indeed, it really is a very light load so far - after two weeks of running, the MQTT + hub combination has proven to require extremely few resources:

  • the hub process (in Go) needs less than 5 MB RAM as working set
  • Mosquitto (in C) needs well under 1 MB of RAM to do its thing
  • on an Odroid U3, the hub uses 15 min/day of CPU, and Mosquitto 1.5 min/day
  • that’s with 1,500 incoming messages per hour, about 3 MB of raw logs per day

This is a few percent of a Raspberry Pi - even an “ancient” model B is more than enough:

(model A is a bit inconvenient due to its lack of Ethernet and scarcity of USB ports)

Now let’s look at the software side of things…

The minimal JET setup has virtually no dependencies on other software, since the hub is built as a fully static Go executable, and Mosquitto is a one-line install (“sudo apt-get install mosquitto”) which also pulls in virtually no other packages.

For a truly minimal Raspberry Pi setup, all you need is some small Debian or Raspbian build - version 8.x (“Jessie”) is the latest incarnation these days. For a tiny distro, check out pipaOS.

If you’re using HardKernel’s Odroid C1 board, which is really a RasPi clone, then you could use this 1.0-20160131-C1 image for it. It’s a very nice minimal setup, as described in this forum post. The result will fit on even a small 1 GB (µ)SD card, with enough room for a year’s data collection.

And that’s about it. Since the hub can be cross-compiled from Go on any machine (see the JET releases page for a few builds), and Mosquitto is ready to, ehm, go from the “apt-get” package repository used in Debian, Raspbian, and Ubuntu, there is not even a requirement to install the gcc compiler toolchain on such a machine. The hub’s installation has been described earlier.

Categories: Community Blog posts

Day-to-day JET practicalities

JeeLabs - Wed, 10/02/2016 - 01:00

Here’s a “JET” engine, for your amusement:

This week, I’m going to go into the practical aspects of the JET project: for production, i.e. the always-running hub + MQTT server, and for development, i.e. what is needed to work on this software and take it further.

As you’ll see, these end up at two extremes of the spectrum: a tiny setup with minimal dependencies is enough for production, whereas development requires a slew of tools (albeit standard and well-supported), and a hefty machine if you want a snappy development cycle.

But first, I’ll revisit the redesigned JET data store, and its new - improved! - API:

All these aspects are still evolving and in flux, but hey… ya’ gotta start somewhere!

Latest hub v4.0-66 builds now on GitHub.

Categories: Community Blog posts

The JET data store, take 2

JeeLabs - Wed, 10/02/2016 - 01:00

This article supersedes this one - but it’s probably still useful to read that original article first.

While many of the design choices remain the same, the API has changed a bit. The description below matches what is currently implemented on GitHub and included in the latest release.

Terminology and semantics

What hasn’t changed, is the way the data store is tied into MQTT. The hub listens for messages matching “!/#” and “@/#” topic patterns and interprets these as stores and fetches, respectively.

Here is how to store the text “abc” in an item “c” inside a bucket “b”, which is in turn inside bucket “a” at the top level:

jet pub '!/a/b/c' abc

(the topic has to be quoted, because “!” has special significance in the shell)

The term bucket is from the underlying BoltDB package. As you can see, this looks very much like storing text in a file “c” in the directory “/a/b/”. From now on, we’ll stop calling these nested level buckets, and use the term directories and directory paths instead. But to avoid confusion with real files, let’s also continue to call “c” an item, and not a file. It’s not accessible via the file system after all, it only exists somewhere inside the BoltDB data file.

There is one important limitation in BoltDB: it can only store directories at the top level. Items must be placed inside a directory, i.e. “!/c” is an invalid item reference, since it’s not in a dir.

Another convention added in this redesign, is that directories must always be specified with a trailing slash, whereas items may not end in one. So “!/a/b/” and “@/a/b/” refer to the (nested) sub-directory “b”, while “!/a/b” and “@/a/b” refer to the item “b”. Also, empty names are no longer allowed, i.e. a path cannot contain two slashes next to each other (“…//…” is invalid).
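The path conventions above can be sketched as a small validation routine. This is just an illustration of the rules (trailing slash means directory, no empty segments, no top-level items) written in Python for brevity - it is not the hub’s actual Go implementation:

```python
def classify_path(path):
    """Return 'dir' or 'item' for a store path such as 'a/b/c' or 'a/b/',
    raising ValueError if the path breaks the conventions."""
    is_dir = path.endswith("/")
    core = path[:-1] if is_dir else path
    segments = core.split("/")
    if any(s == "" for s in segments):
        # catches "", "/", and double slashes like "a//b"
        raise ValueError("empty path segment: %r" % path)
    if not is_dir and len(segments) < 2:
        # BoltDB limitation: items must live inside a directory
        raise ValueError("items must be inside a directory: %r" % path)
    return "dir" if is_dir else "item"

print(classify_path("a/b/"))   # dir
print(classify_path("a/b"))    # item
```

With this, “a/b/” and “a/b” classify differently, while “c” and “a//b” are rejected.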

Extracting / fetching data is a two-step process: you send a message with a topic corresponding to the item of interest, and as payload a “reply topic”. Like this, for example:

jet pub @/a/b/c '"abcde"'

Note the extra double quotes, “abcde” is a JSON string which names the topic where the reply will be sent (it should not normally start with “@/…”!). To see what’s happening, we have to subscribe to that topic before sending out the fetch request, i.e. by keeping a separate terminal window open and running this command:

jet sub abcde

No double quotes this time, the topic is always a plain string, not JSON. If we now re-send that “jet pub '@/a/b/c' '"abcde"'” request, we’ll see this output appear in the subscription:

abcde = abc

In summary: to store a value, send it as payload to the proper “!/…” topic. To fetch a value, set up a subscription listener to pick up the reply, then send a message to the proper “@/…” topic and specify our listener topic as payload, formatted as JSON.
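The whole round trip can be mimicked with a toy in-memory model. This is purely illustrative: the store here is a flat dict keyed by path and “publishing” a reply just records it in a dict, whereas the real hub uses BoltDB and an actual MQTT broker:

```python
import json

store = {}    # stand-in for the BoltDB data file
replies = {}  # stand-in for messages published to reply topics

def handle(topic, payload):
    """Dispatch a message the way the hub's store does (simplified)."""
    if topic.startswith("!/"):
        store[topic[2:]] = payload            # store the payload under the path
    elif topic.startswith("@/"):
        reply_topic = json.loads(payload)     # payload is a JSON reply-topic name
        replies[reply_topic] = store.get(topic[2:], "")  # missing item -> empty

handle("!/a/b/c", "abc")       # like: jet pub '!/a/b/c' abc
handle("@/a/b/c", '"abcde"')   # like: jet pub '@/a/b/c' '"abcde"'
print(replies["abcde"])        # -> abc
```

Note that this simplified model silently returns an empty payload for any missing path, while the real hub reports an error when the enclosing directory does not exist.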

This approach turns MQTT into an RPC mechanism for the data store. Any MQTT client can use the data store if it adheres to the above convention, this is not limited to JetPacks. As long as the hub is active, the store will process these requests.

Payload considerations

MQTT topics are always plain text strings, with “/” to segment the key space. Null bytes and control characters should be avoided, but UTF-8 is fine.

MQTT payloads can be anything: plain text, JSON-formatted text, or binary data. The same holds for the data store: it takes a number of bytes, whatever their format might be, and returns them as is. There is no hard limit for the size of a payload.

Note that in JET, many parts of the system do expect JSON-formatted payloads. For numeric values, there is no difference, but strings will need to be double-quoted when this is the case.

Storing data

As already shown above, you store an item by sending it as “!/…” message:

jet pub '!/a/b/c' 123

If the item exists, it will be overwritten. If the directory “/a/b/c/” exists, you’ll get an error - items and directories cannot have the same name.

All intermediate directory levels are automatically created if necessary. Again, this will fail if any of the directory names already exist as item names.

You can also store multiple items in one go, by storing a JSON object to a directory. The above could also have been written as:

jet pub '!/a/b/' '{"c":123}'

Same effect, and to store multiple items, we could have done:

jet pub '!/a/b/' '{"c":123,"d":456}'

This creates (or overwrites) two items in the “/a/b/” directory. This is an atomic operation: all the items are saved as part of a single transaction.

Multi-stores can be convenient to “unpack” an object into separate items, but since the request uses JSON, you can only use it to store JSON-formatted data. To store arbitrary text or binary data, you have to use the single version.

The following request is a no-operation, except that it will create “/a/b/” if it didn’t exist:

jet pub '!/a/b/' '{}'

Note that a multi-store does not affect other items in the same directory. Items are “merged into” the directory, leaving the rest unchanged, it does not delete anything. Speaking of which…
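The merge semantics of a multi-store can be sketched as follows, with a plain dict standing in for a directory (an illustration of the behaviour, not the hub’s code):

```python
import json

# pretend "/a/b/" already holds two items
directory = {"c": "123", "x": "999"}

def multi_store(directory, payload):
    """Merge a JSON object into a directory: each key becomes (or
    overwrites) an item; unrelated items are left untouched."""
    for name, value in json.loads(payload).items():
        directory[name] = json.dumps(value)   # items hold JSON-formatted bytes

multi_store(directory, '{"c": 124, "d": 456}')
print(directory)   # "c" overwritten, "d" created, "x" untouched
```

In the real store all keys of one multi-store are written inside a single BoltDB transaction, which is what makes the operation atomic.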

Deleting data

Deleting an item is done by sending it an empty payload:

jet pub '!/a/b/c' ''

Or, equivalently:

jet delete '!/a/b/c'

Note that this cannot be done via a multi-store:

jet pub '!/a/b/' '{"c":""}'

This will store the empty JSON string (with its double quotes), not a zero-length payload, which may not be what you had in mind.

You can also delete a directory and everything it contains, including any sub-directories, by sending the empty payload to the directory:

jet pub '!/a/b/' ''

As before, the item vs. directory distinction is made through the trailing slash.
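Both delete cases can be illustrated with a small sketch, again using a flat dict keyed by path as a stand-in for the real store (illustrative only):

```python
store = {"a/b/c": "123", "a/b/d": "456", "a/e": "789"}

def handle_store(path, payload):
    """Store a payload, or delete when the payload is empty."""
    if payload != "":
        store[path] = payload
        return
    if path.endswith("/"):
        # empty payload to a directory: remove everything beneath it
        for key in [k for k in store if k.startswith(path)]:
            del store[key]
    else:
        # empty payload to an item: remove just that item, if present
        store.pop(path, None)

handle_store("a/b/c", "")   # like: jet pub '!/a/b/c' ''
handle_store("a/b/", "")    # like: jet pub '!/a/b/' ''
print(store)                # only "a/e" survives
```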

Empty payloads

As you can see, empty payloads play a special role. This is not the same as the empty JSON string (“”) or even JSON’s “null”, which consists of a small number of bytes, even if they represent “nothingness”.

Storing empty payloads deletes stuff from the store. But since fetching a non-existing item also returns the empty payload, you can often ignore this behaviour. The only difference is in directory listings, as described below.

Fetching data

The fetching behaviour of the store has already been described above, but for completeness, here is a quick example anyway:

jet pub @/a/b/c '"abcde"'

This is what will happen when this message is sent:

  • the hub picks up the request
  • it retrieves the content of item “c” in bucket “b” inside bucket “a”
  • it sends the result to MQTT as payload, using “abcde” as topic

If the item did not exist, an empty payload will nevertheless be sent. But in case “/a/b/” doesn’t exist, the hub will report an error on its log instead, and not send anything back.

Listing directories

One request type has not yet been presented. The data store also offers a way to scan its contents, allowing you to enumerate all items in either the top level or any existing directory.

This again, uses the “@/…” notation, with a reply topic as payload. The difference is that now the topic refers to a directory. An example:

jet pub @/a/b/ '"abcde"'

The result, as reported by “jet sub abcde”, might be something like:

abcde = {"a":2,"b":0,"c":4}

Here, “/a/b/” contains items “a” and “c”, with payloads of size 2, and 4, respectively, as well as a subdirectory “b”. Subdirectories always have zero size, which is never the case for normal items.

Names are stored in sorted order (sorted as raw bytes that is, not UTF-8 or anything fancy), but JSON object attributes aren’t always kept in order (they’re usually implemented as hash tables).

More advanced searches - such as ranges and globs - can be implemented later, by passing in more information than just a reply topic string. This could also be used for on-the-fly statistics, i.e. scanning and summarising data on the hub, and reporting only the resulting metrics.
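A listing reply like the one above can be reconstructed with a short sketch (a flat dict stands in for BoltDB here; this is not the hub’s actual code):

```python
store = {"a/b/a": "ab", "a/b/c": "wxyz", "a/b/b/x": "1"}

def list_dir(prefix):
    """List a directory: items map to their payload size in bytes,
    sub-directories map to zero."""
    listing = {}
    for path, payload in store.items():
        if not path.startswith(prefix):
            continue
        rest = path[len(prefix):]
        if "/" in rest:
            listing[rest.split("/")[0]] = 0   # deeper down: a sub-directory
        else:
            listing[rest] = len(payload)      # item size, never zero
    return listing

print(list_dir("a/b/"))   # {'a': 2, 'c': 4, 'b': 0}
```

This reproduces the example reply: items “a” and “c” with their sizes, plus sub-directory “b” at size zero.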

Reply topics

As you can see, all accesses require some reply topic to get the results back to the requesting app. These topics should be unique, to avoid confusion about which reply relates to which request.

The plan is to have a convention for any JetPack to easily come up with such reply topic names, and to add some utility code which will wait for a reply and timeout if nothing comes in quickly. Since each JetPack has a unique name when it connects to MQTT, and since the hub manages these names when it starts them up, we can probably choose topics with the following structure:


This way, each pack can easily track and issue its own sequence numbers. Other (non-JetPack) applications will have to come up with their own unique reply topics.
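The exact topic structure has been left open above. Purely as a hypothetical illustration (every name here is invented), a pack could combine its unique MQTT name with a per-pack sequence number:

```python
import itertools

def reply_topics(pack_name):
    """Yield unique reply topics for one pack (hypothetical scheme)."""
    for seq in itertools.count(1):
        yield "%s/reply/%d" % (pack_name, seq)

gen = reply_topics("demo-pack")
print(next(gen))   # demo-pack/reply/1
print(next(gen))   # demo-pack/reply/2
```

Since the hub hands out unique pack names, two packs can never collide, and within one pack the counter keeps successive requests apart.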

For the time being (early Feb 2016), reply topics are not yet automated.

Categories: Community Blog posts

Distributed node functionality

JeeLabs - Sat, 06/02/2016 - 01:00

Just to avoid any possible confusion: the case made in the previous article for “centralised node management” is not meant to imply that nodes need to operate in a centralised fashion!

This is where design and development are really distinct from the decisions on what remote nodes (wireless as well as wired) actually do. We may well want to design a home automation infrastructure where a light-switch node sends out a signal when pressed, only intended for another node in the same room, controlling a lamp attached to it.

JET uses MQTT as its central message exchange “switchboard” (or “bus”, rather), which is indeed a centralised design. A model of automation where all decisions are made centrally is probably also going to be the main (or at least initial) mode of operation for JET and its hub. But all such decisions can be made on a case-by-case basis: you could for example, decide to use the central system only as a “data collection centre”, with all home-automation decisions taking place in the periphery, by the actual nodes involved in (and affected by) a particular event.

This leads to a design whereby the central node doesn’t end up becoming a potential “single point of failure”, an important concept in reliability engineering. A system without central decision-taking authority is able to limit the effect of any failure, allowing the rest of the system to continue to function, so that a faulty light switch in the garage will not affect the heating control system (assuming that different nodes are involved in these two functions).

Warning: what follows is HIGHLY tentative - it’s just a (wild) thought experiment for now!

The “bigger” plan for JET is to create a range of different node types, each with a specific set of sensors and other hardware, and to manage each of these remote units from a central node attached to the hub. This includes the source code, its cross-compilation, and the upload process to get the resulting firmware “flashed” into the remote units (either over the air, or by wiring them up temporarily). This will allow managing versions and revisions, especially when tied to each remote µC’s built-in unique “hardware ID”.

But the story does not end there. The idea in JET is to separate the functionality (i.e. basic capabilities and hardware drivers) from the wiring (i.e. the way all the available functions are tied together). The basic capabilities will have to be hard-coded into the firmware as C/C++ code, but the wiring is going to be implemented as (soft) data, i.e. as a (possibly quite elaborate) data structure describing all the periodic events, sensor triggers, and actual “readings”, and how these should be routed - both inside each remote node and between these nodes (as messages).

For the existing (currently all-wireless) nodes here at JeeLabs - which all happen to be various generations of JeeNodes - very little will change: they’ll continue to broadcast their sensor readings as is, without any notion of where that data ends up or what is done with it.

For future nodes, the aim is to build a lot more flexibility into them, by adding support for rules and a certain amount of decision-making capability. All driven from “wiring diagrams” and customisable for each individual node. An example of this could be a set of rules to behave as follows: “report motion detection immediately, but also send a trigger to a specific lamp if it’s dark outside”. Then again, maybe this logic should not be the motion sensor’s role, but the lamp’s role, i.e. we could instead tell the lamp node to listen for motion and light level messages, and let it decide for itself whether its lamp should be turned on. We can explore both avenues.
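As a wild sketch of what such a “wiring” rule could look like, here is the lamp-node variant in a few lines of Python. Every name below is invented for illustration, and a real node would encode the condition as data rather than code (so the firmware can validate it):

```python
# a speculative rule: turn the lamp on when motion is seen while it's dark
rule = {
    "inputs": ["motion", "light_level"],
    "condition": lambda state: state["motion"] and state["light_level"] < 100,
    "action": "lamp_on",
}

def evaluate(rule, state):
    """Return the action to perform, or None if the condition doesn't hold."""
    return rule["action"] if rule["condition"](state) else None

print(evaluate(rule, {"motion": True, "light_level": 40}))    # lamp_on
print(evaluate(rule, {"motion": True, "light_level": 400}))   # None
```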

The idea here is not to come up with a design right now, but to illustrate how configurable “smarts” could be encoded as rules, sent out (and managed) by the central node, but leading to autonomous behaviour in the remote nodes. This can really only work when this kind of behaviour is designed for the network as a whole, and not simply by throwing new nodes in and leaving the old ones alone. Clearly, all of this will need to evolve over time - as we gradually find out and evaluate what sort of behaviour we actually want for our home!

With changes limited to sending out new wiring diagrams, we can greatly reduce the risk of failure and serious disruption in the case of (occasional, but inevitable) mistakes. These wiring diagrams are likely to be quite small, compared to a full firmware upgrade, and since we’re not altering the firmware itself, we’ll also benefit in the extremely important area of security: it’ll be impossible to make nodes do things they were definitely not intended to do, if their firmware stays intact (assuming it properly validates all incoming wiring changes!).

A nice side-effect (for everyone who is not a deep-diving C/C++ programmer), is that firmware recompiles will not be required to change a node’s behaviour: that’ll be controlled by the wiring.

Another benefit with this data-driven approach: with the separation of code and data, different node architectures can be tried out, with different µC boards and different RF technologies. As long as all the nodes understand the same basic common data structure conventions, we’ll be able to mix / replace / upgrade as needed, and make them inter-operate as much as we like.

Categories: Community Blog posts

Centralised node management

JeeLabs - Fri, 05/02/2016 - 01:00

There are a number of ways to avoid the long-term mess just described. One is to make the target environments aware of source code. For example by running a Basic / Forth / JavaScript / whatever interpreter on them - then we can work on the target environment as with a regular development platform: the source stays on the target, and whenever we come back to it, we can view that code and edit it, knowing that it is exactly what has been running on the node before.

There are some serious drawbacks with this source-on-the-target approach:

  • it requires a fairly hefty µC, able to not only run our code, but also to act as an editor and mini development environment - but it is quite effective, and probably a reason why BASIC became one of the mainstream languages decades ago, even in “serious” industrial & laboratory settings

  • you end up with the worst of all worlds, by today’s standards: a target µC, struggling to emulate a large system, and a very crude editing context

  • if that target machine breaks, you lose the code - there is no backup

  • no revision control, no easy code sharing with other nodes, no history

Trying to turn a remote node into a “mini big system” may not be such a great idea after all, in the context of one-off development at home with a bunch of remote nodes: it really is too risky, especially for the tinkering and experimentation that comes with physical computing projects.

Some recent developments in this area, such as Espruino and MicroPython, do try to mitigate the drawbacks by offering a front end which keeps the source code local - but then you end up back at square one: with a potential disconnect between what’s currently running on each node and the source code associated with it, and stored on the central/big development setup.

Another option, which takes some discipline, is to become very good at taking snapshots of your development environment setup, and in particular at taking notes of which build ended up where. With proper procedures, everything becomes traceable, recoverable, and repeatable.

The problem with it: discipline? notes? backups? constantly? for hobby projects? … yeah, right!

To re-iterate: the central problem is that development happens in a different context than actual use - embedded µCs can’t come anywhere near the many very convenient capabilities of modern development environments, with their fancy programmer’s editors, elaborate IDEs, revision control systems, cross-compiler toolchains, debuggers, and uploaders.

The issue here is not that our development tools are lacking. The problem is that they tend to be used in a node-by-node “fire and forget” development style, which doesn’t help with the entire (evolving) home collection of nodes and gadgets. Which node was compiled how again?

The best we can probably do is to aim for maximum automation, and to focus all development in a single spot - not just on a node-by-node basis, but for the entire network and collection of devices we’re gradually setting up. And not just for one node type, or even one vendor’s products, but for everything we’re tying together, from home-grown one-off concoctions to commercially obtained ready-to-use devices and gadgets.

If all design and development takes place in one place, and if all results are pushed out to the remote node “periphery” in a semi-automated way, then we may stand a chance of being able to re-use our work and re-generate new revisions in the same way at a (much) later date. Whereby “one place” doesn’t imply always developing on the same machine (that too, is bound to evolve after all) - we just need to have remote access to that “one place”, the fixed point of it all.

In the longer term, i.e. a decade or more, there is no point trying to find a single tool or setup for all this. Technology changes too fast, and besides: we’re much too keen on trying out the latest new fad / trick / language / gadget. We really need to approach this all with a heterogeneous set of technologies in mind. The goal is not one “perfect” choice, but a broad approach to keeping track of everything over longer periods of time. Much longer than our attention span for any specific new node we’re adding to our home-monitoring / -automation mix.

Maybe it’s time to treat our hobby as a “multi-project”: lots of ideas, lots of experimentation, hopefully lots of concrete working parts, but by necessity it’ll also be a bit like herding cats: alternative / unfinished designs, outdated technologies alongside with shiny new ones, and lots of loose ends, some actively worked on, some abandoned, some mature and “in production”.

In terms of keeping things organised to avoid the predictable mess described in the previous article, there really is no other sane option than to at least track the entire home-monitoring and home-automation project in one place. And there’s a fairly simple way to make this practical: simply add a web server on top, which allows browsing through all the files in the project. It can be password-protected if needed, but the key point is that a single area somewhere needs to represent the state of our entire “multi-project”.

How do we get there? Some options come to mind: we could add a web server on the same machine as where our home server is running (JET or whatever), and make sure that all the related code, tools, documentation, and design notes live there. We could turn that entire area into one massive Git repository, and even keep a remote master copy somewhere (on GitHub, why not?). Note that this is not really about sharing, it’s merely a way to keep track of what is inevitably going to be a unique and highly personal setup. And if putting it in public view doesn’t feel right, then of course you shouldn’t be placing your copy on GitHub. Put it in a personal cloud folder instead, or keep it on a server within your own house (you do have a robust backup strategy in place, right?). The main point is: treat your hobby setup as if it were an “official” project, because it’s even more important to create a durable structure for such a unique and evolving configuration, than with public open-source stuff which is going to be replicated all over the place anyway.

As you can see, this isn’t about “the” solution, or “the” technology. There is no single one. In a way, it’s about the greater context of “sustainable tinkering”, thinking about where your projects and hobbies will take you (and your family members) ten years from now. You’re probably not doing all this to become a “sysadmin for your own house”, right?

What we need to do, is design and implement “in the open”, so that we can go back and tweak / fix / improve things later, possibly many years later, when all the neat ideas and builds will be fond memories, but their details long-forgotten. Note that “in the open” does not imply “in public”, it may well be open to an audience of just one person: you. What “open design” actually means here, is: resumable design.

Keep in mind that this is a long-term, small-scale, personal, bursty, hobby-mode context. Life is too short to allow it to turn into a long-term mess - yet that seems to be exactly what happens a lot, well… at least here at JeeLabs. It’s time to face up to it, and to try to avoid these problems.

From this perspective, this hobby may become a whole different ball game. Tools which could come in handy include Hugo, to easily manage notes (ignore all the flashy “themes”), and Gogs, to set up a personal git repository browser. Heck… taking notes, documenting your ideas and progress, and tracking the evolution of your own designs over time could actually be fun!

Categories: Community Blog posts

Creating a long-term mess

JeeLabs - Thu, 04/02/2016 - 01:00

So you’ve set up a Wireless Sensor Network, i.e. 2 or more “nodes”, talking to each other via RF. This being a JeeLabs article, let’s say you’re using a JeeNode, JeeLink, or other ATmega/tiny-based board with an RFM12 or RFM69 wireless radio module - or perhaps some self-made variation, using some neat flashy ARM µC.

Everything is working fine. Great! And as time passes, you add more nodes. Fun! Some nodes are just like the previous one, for example a “Room Node” to monitor temperature, humidity, motion, and light levels in a room, and some nodes are different, such as an OOK Relay, or a LED Node to control LED strips, or … whatever, you get the idea.

In many cases, similar nodes will only differ in the “node ID” used to identify them, and such nodes can all run the same firmware, with the difference stored in EEPROM. Sometimes, you need to set up a node in just a slightly different way, and you start editing the source code to upload a slightly different build. Easy! The Arduino IDE, Eclipse, or Makefile-based build environment can make this sort of tinkering oodles of (sheer endless!) fun.

What could possibly go wrong, then?

The trouble with this way of working with more than one node, is that the tools used to build the code usually operate in “edit, build, upload, fire-and-forget” mode: each build stands on its own, but the build environment in no way helps you manage multiple nodes and all their variations.

What variations, you ask?

Here are a few, most of them will be pretty obvious:

  • one node needs to report on wireless as ID 16, the other as ID 17
  • one node is running on a JeeNode v4, the other on a JeeNode v6
  • one node is a room node with a Pressure Plug added, the other without
  • one node is running with an RFM12, the other with an RFM69
  • one node has an ATmega328P, the other one uses the new ATmega328PB
  • one node runs as an AVR-based door sensor, the other is ARM-based
  • one node talks directly to the central node, the other uses a repeater

The list is endless. This is what happens over time. And this is the way to create a (potentially) huge mess when it comes to keeping track of all such variations.

With the Arduino IDE, you may have to readjust the build settings in the “Tools” menu for each different “sketch”, for example. Or worse: re-adjust a #define hidden somewhere deep in one of the included libraries.

In some scenarios, this variability can be ignored: nodes get set up, you test them, you install them in their remote spot, and that’s it. In fact, here at JeeLabs, over a dozen nodes have been running this way for years on end, with only occasional battery replacements to keep ‘em going.

But the world is not a fixed place, and neither is the home. A lot can change in the course of a few years - more nodes, to the point where things become a bit crowded on the RF band perhaps, or maybe improvements such as tracking RF signal levels and adjusting the TX/RX sides to optimise for lowest packet loss - it’s most unlikely that all your home-grown designs will stay the same over the course of a few years, especially if you’ve got tinkering-for-fun in your genes!

One of the main underlying issues is the disconnect between source code and remote nodes: the source code is cross-compiled - by necessity, since a remote node can’t compile it - and therefore runs, by definition, on a different machine from where it’s intended to be used.

We can use wonderful version control tools such as Git and GitHub all we like, they can’t address the fact that at the end of the day, the generated machine code will be out there on some small embedded µC, with no mechanism in place to track the association to the exact source code, tool versions, and upload style used when it was originally “flashed” into that device. Say bye, bye to tweaking or bug fixing, after a while.

And then there are the technological advances: of course there will be new, flashier, smarter, cheaper, more flexible, better performing options over time. Should we replace our entire setup every year? Of course not: a working room node is still a working room node, even if it isn’t the latest trend, buzz word, or fad. The reality is that over the years, any home environment is bound to become a collection of old and new. And newer still. Sure, it’d be neat to run 28V DC all over the house and base everything on LED lighting with simple central control - but are we really going to rip out existing AC mains wiring, with its hazards and safety requirements? Nah…

This is the situation here at JeeLabs now, and it may actually be in a better shape than some, since many - if not all - of the nodes here have been described and documented as weblog posts over the years, with their code incorporated as examples in the JeeLib repository on GitHub.

Node-by-node development doesn’t scale, neither for hardware, nor for software!

If you’re setting up a home-monitoring / home-automation system, and you’re not assembling it 100% from stable, officially long-term-supported products, then it probably isn’t such a great idea to just keep on adding stuff to what is essentially becoming a personalised environment which no-one else, including your family members, will be able to deal with in the LONG run. And what if… ehm, you’re not always around? Or you simply forgot some of the details as the years go by? (as years tend to do…)

Spouse calls significant other: “the heating isn’t working, what do I do now?” … sound familiar?

Categories: Community Blog posts

From crontab to systemd

JeeLabs - Wed, 03/02/2016 - 01:00

The crontab “@reboot” approach mentioned in the hub’s installation guide has the benefit of being very easy to do, without the risk of messing up anything serious, because it doesn’t involve “sudo”. It should also work on just about any system - cron has been around for a long time.

But if you’re willing to do just a little more work, there’s actually a more flexible mechanism in recent Linux distributions, called systemd: it will take care of starting and stopping a service, all its output logs, and catching any runaway or otherwise failing launches.

Here’s how to set up the hub to run under systemd as a service, but first:

  • type “systemctl” to verify that “systemd” is actually available in your system
  • make sure the “@reboot” entry in your crontab is commented out! (crontab -e)
  • also make sure that the hub is no longer running, as you’ll move some stuff around

Now create a file called “jet.service”, with the following lines in it:

[Unit]
Description=JeeLabs JET Daemon
After=mosquitto.service
After=network.target

[Service]
WorkingDirectory=/home/jcw/jet-v4
ExecStart=/home/jcw/jet-v4/hub-linux-arm
User=jcw
Restart=always

[Install]
WantedBy=multi-user.target

Note that this is set up to wait for both Mosquitto and the network to be ready.

Be sure to check the ExecStart, WorkingDirectory, and User settings, and adjust as needed for your situation. If you prefer to put the hub (and its data store and packs) in a more central directory: there’s an “/opt” area intended for just that purpose. Here’s how you can migrate the hub to it:

sudo mkdir -p /opt
sudo mv ~/jet-v4 /opt/

In which case the jet.service file will need to be adjusted to:

WorkingDirectory=/opt/jet-v4
ExecStart=/opt/jet-v4/hub-linux-arm

And if you’ve set up a “jet” script, you’ll need to adjust the path in there as well.

The last step is to put the service in place:

sudo chown root:root jet.service
sudo mv jet.service /etc/systemd/system/

Now you can start and stop the hub (and its child processes, i.e. active JET packs) at will:

sudo systemctl start jet
sudo systemctl stop jet

One thing to beware of is that you need to enable the service if you want it to also start automatically on power-up or after a reboot:

sudo systemctl enable jet

You only need to do this once, it’ll stay that way until you disable it again.

To see the status and the last few lines of the hub’s output, use … you guessed it:

sudo systemctl status jet

Here is some sample output with a freshly-installed hub:

$ sudo systemctl status jet
● jet.service - JeeLabs JET Daemon
   Loaded: loaded (/etc/systemd/system/jet.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2016-02-02 16:37:45 CET; 7s ago
 Main PID: 11645 (hub-linux-arm)
   CGroup: /system.slice/jet.service
           └─11645 /opt/jet/hub-linux-arm

Feb 02 16:37:45 xudroid systemd[1]: Started JeeLabs JET Daemon.
Feb 02 16:37:45 xudroid systemd[1]: Starting JeeLabs JET Daemon...
Feb 02 16:37:45 xudroid hub-linux-arm[11645]: 2016/02/02 16:37:45 [JET/Hub] ...)
Feb 02 16:37:45 xudroid hub-linux-arm[11645]: 2016/02/02 16:37:45 connected ...3
Feb 02 16:37:45 xudroid hub-linux-arm[11645]: 2016/02/02 16:37:45 opening da...b
Feb 02 16:37:45 xudroid hub-linux-arm[11645]: 2016/02/02 16:37:45 starting H...7
Hint: Some lines were ellipsized, use -l to show in full.

(note: the above still shows duplicate timestamps - this has been fixed in the latest hub revision)

If you want to shorten this last command, add the following line to your “~/.bashrc” script:

alias jets='sudo systemctl status jet'

(as always with a .bashrc change: re-load or re-login to put it in effect)

Now, typing “jets” will give you a quick glimpse of the JET/Hub’s status.

It’s really convenient and the new “standard” way to run services in Linux: letting you start, stop, and check up on the hub at any time. Thanks to Thomas L. for his suggestion and help with this.

Categories: Community Blog posts

Let's talk about the 'N' in WSNs

JeeLabs - Wed, 03/02/2016 - 01:00

The acronym “WSN” stands for Wireless Sensor Network. Ok sure, we all have one or more wireless sensor nodes, JeeNodes or whatever, and they probably work nicely. But how do we manage them? What about code revisions?

Let’s go into this for a moment. Because the usual approach of: “a sketch, an upload, and off you go” doesn’t really scale well. How do you manage all those nodes, which may be different in functionality, in their hardware, or even just be different models or revisions of the same type?

But first, I’ll start off this week with a note about running the hub in the background:

And just to keep a clear model of the hub’s main role in front of you, here’s the diagram from last week’s configuration guide again:

Latest hub v4.0-45 builds now on GitHub.

Categories: Community Blog posts

Using the built-in database

JeeLabs - Mon, 01/02/2016 - 01:00

(This information has been superseded by a newer design with an updated API)

The “database” built into JET/Hub is set up as a general-purpose key-value store - i.e. persistent data storage which can’t store or retrieve anything other than (arbitrary) data by key. Whereby the keys are treated as a hierarchical structure separated by forward slashes (“/”). This database looks very much like an ordinary file system, and also very much like an MQTT topics hierarchy.

This is by design: it makes it easy to treat this persistent data in the hub as a tree of files, where each “file” is usually a JSON object. The size of these objects can range from one byte to multiple megabytes, since the underlying storage implementation can easily accommodate them all.

Speaking of implementation: the hub’s store is now based on BoltDB, which is widely used in Go, is in active development, and appears to be well-designed and robust. And it’s open source.

BoltDB is an “embedded database” package, which means that its implementation is part of the application using it. As an important consequence, no outside access by any other process is possible while the database is open. In the case of the hub, this won’t be very restrictive, as all its functionality is going to be exposed via MQTT requests anyway: the hub is the database server.

Store semantics

Let’s call this a “store” from now on, since it isn’t a full-scale database (no indexes, no multi-user facilities, no user management, just keys and values - albeit arbitrarily many, and nestable).

To store data, we need a (string) key to specify where to store things, and a value (which can be an arbitrary set of bytes).

All keys must start with a slash. Another constraint is that we can’t store any data at the top level: we need to include at least one extra “directory level” in our keys. Empty key segments should be avoided and will at some point probably be rejected. So these are valid keys:

/abc/def       (an "abc" directory, with a "def" entry in it)
/a/b/c/d/e/f   (5 directory levels, with an "f" entry in the last one)

… while these are not:

abc/def   (not starting with a slash)
abc/      (same, and empty entry name)
/abc      (need at least one dir and one entry)
/abc/     (empty entry name)
//def     (empty directory name)
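These rules are easy to check mechanically. Here is a small sketch in shell - the “valid_key” helper is made up purely for illustration (the hub performs this check internally, not via shell):

```shell
# valid_key: hypothetical helper mirroring the store's key rules -
# a key must start with "/", contain at least one directory level plus
# an entry name, and have no empty segments anywhere.
valid_key() {
  case "$1" in
    /*/*) ;;              # leading slash and at least two segments
    *)    return 1 ;;
  esac
  case "$1" in
    *//*|*/) return 1 ;;  # empty segment, or empty entry name at the end
  esac
  return 0
}

valid_key /abc/def && echo ok        # prints "ok"
valid_key //def    || echo rejected  # prints "rejected"
```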

In the case of “/abc/def/ghi”, there are two directory levels, “abc” and “def”, plus the final “file-like” entry, which is where the value gets stored. It really helps to think of these keys as being very much like file-system paths, except that there are no “.” and “..” special entries, since there is no concept of a “current directory”.

The values stored can be any non-empty byte sequence, although other parts of JET might impose some further restrictions, such as being valid JSON.

Empty values cannot be stored - they represent the absence of an entry, as will become clear below. But you can store the empty JSON string, represented as a pair of double quotes (“”).

The hub’s store listens to MQTT topics starting with “!” and “@”, i.e. patterns “!/#” and “@/#”.

Storing a value

To store the value “123” in directory “foo”, entry “bar”, send this message to MQTT:

topic:   !/foo/bar
payload: 123

I.e. when using the “jet” utility:

jet pub '!/foo/bar' 123

(the “!” needs to be quoted, because it has special significance in the shell)

In the case of JSON objects as values, you can use quoting to get things across without the shell messing things up. For example:

jet pub '!/foo/bar' '{"name":"John","age":21}'

In both cases, the top-level “foo” directory will be created automatically if it doesn’t exist. The same applies to all intermediate levels.

Deleting a value

To delete a value, store the empty value in it (i.e. a value zero bytes long). There are two ways to do this with “jet”:

jet pub '!/foo/bar' ''
jet delete '!/foo/bar'

To delete all values under directory “foo”, as well as the directory itself, use either of these:

jet pub '!/foo' ''
jet delete '!/foo'

This will delete “foo” and everything below it, including all sub-directories. Use with caution.

Fetching a value

Now it gets interesting. How do you fetch a value? Or rather, what do you do with that result?

This is implemented in JET as a “request/reply” pair: the request to fetch a value includes a topic to which the reply (i.e. the fetched value) should be sent. So to fetch the value of directory “foo”, entry “bar”, we need to set up a listener on a unique topic, and send this MQTT message:

topic:   @/foo/bar
payload: "<topic>"   (a JSON-formatted string)

For example, if we want to receive the value back on topic “abc”, we can send:

jet pub @/foo/bar '"abc"'

Note the extra quotes again here, to get that JSON string across properly.

When this request is picked up by the hub, it’ll obtain the requested value from the store and send a message to topic “abc” with that value. It’s a bit like a “call” with a “return value”, and indeed this is essentially a Remote Procedure Call, mapped on top of MQTT.

If the requested key does not exist, an empty value will be returned instead.

Note that if any of the directories don’t exist, nothing will be returned at all.
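To get a feel for these fetch semantics, they can be emulated on a plain directory tree - the “root/” directory and “fetch_key” helper below are invented for this sketch; the real mechanism runs over MQTT topics:

```shell
# Sketch only: emulate the store's fetch semantics with plain files.
mkdir -p root/foo
printf '%s' 123 > root/foo/bar

fetch_key() { cat "root$1" 2>/dev/null; }  # missing key -> empty reply

fetch_key /foo/bar    # prints 123
fetch_key /foo/nope   # prints nothing - the "empty value" reply
```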

Listing the entries

One last function is needed to make the above suitable for general use: the ability to find all the entries in a specific directory. This can be done by “fetching” the value of that directory:

jet pub @/foo '"abc1"'

The reply sent to the “abc1” topic might be something like:

{"bar":24,"baz":3,"sub":0,"text":12}
Each entry is a key in the returned JSON object, with as value the number of bytes stored for that entry. In the case of sub-directories, this number will be zero.

So in this example, there are three entries, called “bar”, “baz”, and “text”, as well as one sub-directory, called “sub”. We could then follow up with a new request:

jet pub @/foo/sub '"abc2"'

And a reply would be sent to “abc2”, perhaps this one:

{}
Which indicates that “sub” does exist as directory, and that it doesn’t have any entries.
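The listing semantics can be emulated the same way on an ordinary directory tree - the “demo/” tree and “list_dir” helper are invented for this sketch (the hub does this internally against its BoltDB store, and replies with a JSON object rather than separate lines):

```shell
# Sketch only: one line per entry, with the stored byte count as value.
mkdir -p demo/foo/sub
printf '%s' '{"name":"John","age":21}' > demo/foo/bar   # 24 bytes

list_dir() {
  for f in "$1"/*; do
    n=$(basename "$f")
    if [ -d "$f" ]; then
      printf '"%s":0\n' "$n"    # sub-directories report size 0
    else
      printf '"%s":%s\n' "$n" "$(wc -c < "$f" | tr -d ' ')"
    fi
  done
}

list_dir demo/foo   # prints "bar":24 and "sub":0
```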

Current status

As of this writing (end Jan 2016), the above is work-in-progress. An early implementation with slightly different semantics has been implemented and is being adapted to match the above API.

Note that this data store - like all other elements of the hub - is totally optional and independent of the rest of JET. Each pack can choose to use this MQTT-based key-value store, or store data using its own approach. One benefit of this “store” is that it’s always available, to every JET pack.

Categories: Community Blog posts

Hub configuration guide

JeeLabs - Sun, 31/01/2016 - 01:00

The JET/Hub process has two types of configuration settings:

  • startup configuration, such as how to connect to MQTT and the name of the data store
  • run-time configuration, such as which serial ports to open, and which packs to launch
Command-line options

Since the hub is intended to run virtually “indefinitely” once started, only very few configuration settings are specified via command-line options, and any changes will require a hub restart:

-data filename

the filename of the hub’s persistent data store (default: ./store.db)

-logger dirname

the directory where the logger stores its daily log files (default: ./logger)

-mqtt url

the host and port of the MQTT server (default: tcp://localhost:1883)

-packs dirname

the directory where packs are launched from (default: ./packs)

Serial port configuration

Serial port configuration in the hub uses a more flexible mechanism: the hub continuously listens for topics matching the pattern “serial/+” and treats them as serial port configuration requests, as described earlier. To set up a serial port listener for a device on USB port 0, you simply need to send a message to MQTT with a specific format:

topic   = serial/<name>
payload = {"device":"/dev/<usb-name>","sendto":"<publish-topic>"}

The payload must be a valid JSON object, with device and sendto fields.

The “jet” utility makes it very easy to set this up from the command line, once the hub and MQTT server are up and running. Here is an example:

jet pub serial/jeelink '{"device":"/dev/ttyUSB0","sendto":"logger/jeelink"}'

Note the use of single quotes to simplify passing JSON’s double quotes without further escapes.

If you have more serial ports, just send more messages and use different names:

jet pub serial/arduino '{"device":"/dev/ttyUSB1","sendto":"logger/arduino"}'

And if you don’t know which device is on which port, or if this might change from one power-up cycle to the next, then there’s a trick for that too in Linux:

Each serial device is listed in a little-known directory called /dev/serial/by-id/. By looking up your device and using that (much longer) device name instead of ttyUSB0, you can force the hub to always open a specific device. Here is an example:

long=/dev/serial/by-id/usb-FTDI_FT232R_USB_UART_A40117UK-if00-port0
json='{"device":"XYZ","sendto":"logger/jeelink"}'
jet pub serial/jeelink `echo $json | sed "s#XYZ#$long#"`

As you can see, this may require some nasty massaging to avoid quoting hell and keep all the double quotes in that JSON payload intact. Note that if you replace “jet” by “echo” in that last line, you can see what’s going on without publishing to MQTT.
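An alternative sketch sidesteps the sed substitution entirely by letting printf assemble the payload (same device path as above; inspect the result before publishing):

```shell
# Build the JSON payload with printf - the "%s" slot takes the long device
# path, so no quoting gymnastics are needed.
long=/dev/serial/by-id/usb-FTDI_FT232R_USB_UART_A40117UK-if00-port0
payload=$(printf '{"device":"%s","sendto":"logger/jeelink"}' "$long")
echo "$payload"
# once verified, publish it with:  jet pub serial/jeelink "$payload"
```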

To close a serial port, send an empty payload. This can be done with JET’s “delete” command:

jet delete serial/jeelink

These commands can be sent at any time. There is no need to stop-and-restart the hub.

Persistent settings

The above messages sent to MQTT are once-only, i.e. on restart the hub won’t reopen serial ports. But there is a simple solution for this in MQTT: the RETAIN flag. By adding the -r flag to the above “jet pub” commands, the messages will be sent as before, but also stored and re-sent when the hub restarts and reconnects to MQTT at a later time:

jet pub -r serial/jeelink '{"device":"/dev/ttyUSB0","sendto":"logger/jeelink"}'
jet pub -r serial/arduino '{"device":"/dev/ttyUSB1","sendto":"logger/arduino"}'

The RETAIN flag is also sent by “jet delete”, i.e. a deletion / close request is also permanent. One little detail to keep in mind is that when a retained message is stored in MQTT, subsequent non-retained messages do not affect it: the original message will still be sent after a hub restart.

You can use “jet config” as a quick way to see all retained messages (i.e. persistent settings).

Managing JET packs

Apart from routing some messages to and from serial ports, logging, and a few other built-in features, one of the hub’s main tasks is to manage “JET packs”, i.e. separate processes (in any language) which are tied into the system as equal citizens and communicate through MQTT.

As with serial ports, the hub supports starting and stopping JET packs at any time. And again, this is driven via MQTT messages. Here is the bird’s eye view:

Here is how to add a new JET pack and manage it with the hub:

  • place the JET pack’s executable or a small shell script wrapper in the “packs” directory - it must have the executable bit set (“chmod +x”), and in the case of a shell script, it must also start with the line “#!/bin/sh” so the hub can launch it
  • send an MQTT message to “packs/<name>” with as payload a JSON array, containing the name of the executable or shell script plus optional arguments

So for example, if packs/abc.sh exists, we can issue the following command:

jet pub packs/abc '["abc.sh","arg1","arg2"]'

The hub will report what it’s doing in the log, as well as any errors it runs into.

For security reasons, the hub will only launch packs present in the “packs“ directory. Path names are not accepted as first item of the JSON payload.

All stdout and stderr output from the pack is also reported by the hub and sent out as MQTT messages to “packs/<name>/log”. Output lines from stderr will be prefixed with "(stderr)".
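To make this concrete, here is what a minimal, purely hypothetical pack script could look like - it just reports its arguments and prints one line on each output channel:

```shell
#!/bin/sh
# A minimal illustrative pack - save as packs/abc.sh, "chmod +x" it, then
# launch it with:  jet pub packs/abc '["abc.sh","arg1","arg2"]'
msg="abc pack starting, args: [$*]"
echo "$msg"                                 # forwarded to packs/abc/log
echo "abc pack: a diagnostic message" >&2   # forwarded with "(stderr)" prefix
```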

Send an empty message to stop the pack again, e.g. “jet delete packs/abc”. If the pack is running when another launch request comes in, the old one will be killed first (using SIGQUIT). If you want to temporarily prevent a pack from starting up, you can remove its executable bit (“chmod -x packs/abc.sh”) and add it back in later (“chmod +x packs/abc.sh”).

As with serial ports, JET pack launch requests only persist across hub restarts if you include the RETAIN flag by using “jet pub -r ...”.

Categories: Community Blog posts

How to install JET/Hub

JeeLabs - Sat, 30/01/2016 - 01:00

Installation… ah, the “joys” of modern computing!

1. Prerequisites

To try out JET, you need:

  • a Raspberry Pi, Odroid, OLinuXino, or similar ARM-based Linux board
  • a working Linux setup on your particular board, e.g. Raspbian or Ubuntu
  • the ability to login on your system via SSH over the network
  • some basic familiarity with Linux and the “shell” command line
  • a link to the latest JET release on GitHub: here it is
  • a bit of patience to skim through this setup guide (yep, go right ahead!)

Because JET/Hub is going to run as an unattended background task (next to the MQTT server), we need to set up a few things to automate this process (marked in bold in the diagram below):

And since the idea is to make this as painless as possible: let’s get started!

2. Hardware setup

This will depend on what board you have and is beyond the scope of this guide, but here are some links you might want to check out to get going:

You need to get to a state where your board is running properly, is connected to the internet (because you’ll need to fetch a few files), and you are logged in with the ability to become “superuser” via sudo or su (again, because you’ll need to install a package into the system).

If Linux is new to you: there are lots of ways to get familiar with it, for example with this book.

3. Software setup

You’ll need to get two packages installed and running: the JET/Hub core and the MQTT server. Start with the latter: for MQTT, install a package called Mosquitto - it should be available as standard package, so on systems such as Raspbian, Debian, or Ubuntu, just run this command:

sudo apt-get install mosquitto

Press “y” when asked to accept the installation. If all goes well, Mosquitto will be installed and set up to automatically run in the background, also after reboots of the system.

Part two is to download and run the latest JET/Hub binary from the Releases page on GitHub. You should create a fresh directory, download the package, and unpack as follows:

mkdir ~/jet-v4
cd ~/jet-v4
wget <url-of-hub-linux-arm.gz>   # from the above Releases page
gunzip hub-linux-arm.gz
chmod +x hub-linux-arm
ls -l hub-linux-arm

That last command should produce output similar to this:

-rwxr-xr-x 1 jcw jcw 6664656 Jan 26 11:56 hub-linux-arm

That’s it. The essential parts of JET are now installed.

4. Starting JET for the first time

It’s time to launch the hub (Mosquitto will already be running, see above), and the first time around you should do it manually by entering this command:

cd ~/jet-v4
./hub-linux-arm
Here is what should appear (this was done with a slightly older version):

2016/01/26 11:57:28 [JET/Hub] v4.0-2-gd80ce94 2016-01-26 (go1.5.3)
2016/01/26 11:57:28 connected as hub/384389 to tcp://localhost:1883
2016/01/26 11:57:28 opening data store: store.db

Are you seeing something similar? Then all is well - congratulations! If not, please verify all the above steps, and if you can’t fix the problem, get in touch via the Forum, Gitter, or GitHub.

At this point the hub is running, but if you press CTRL-c or log out, it will stop again.

5. Launching JET automatically

We need a more permanent setup, which doesn’t require login or manually starting up the JET system. One way to do this is via the “crontab” utility, which can set up commands to launch the moment the system starts up, even when you are not logged in.

Let’s edit our “crontab” entry (every user has a different one). Enter the following command:

crontab -e

An editor screen pops up, allowing you to edit text. Add the following line at the end of the file:

@reboot sleep 5 && cd ~/jet-v4 && ./hub-linux-arm >out.log 2>&1

Then save these changes and exit the editor. With the default “nano” editor, instructions for this will be on-screen (to change the default editor: “update-alternatives --config editor”).

The “sleep 5” adds a little time for the rest of the system startup to complete (such as MQTT).

From now on, the hub will automatically start when you power up your board. Log output will be saved in a file called out.log (use “tail -f ~/jet-v4/out.log” to watch the latest output).

There’ll now be an MQTT server running in the background on port 1883.

6. JET admin utility

There is one last step to make it easier to tinker with a running JET system. See this example shell script - which is a special wrapper to control a running system, called - drumroll - “jet” !

When properly set up, you get some nice conveniences for use from the command line, such as:

  • jet - display JET’s exact version and build details
  • jet config - list the persistent configuration, i.e. all retained messages
  • jet pub <topic> <value> - a way to manually publish a message to MQTT
  • jet sub <topic> - subscribe to an MQTT topic (for example: “jet sub '#'”)

More admin, control, and debugging options will be added once the hub’s functionality grows.

To set this up, create a little wrapper script called “jet” with the following contents:

#!/bin/sh
# jet -- Command-line admin interface to a running JET/Hub instance
# Note: this command never starts up a new hub, it's ONLY an admin front-end!
exec "$HOME/jet-v4/hub-linux-arm" -admin tcp:// "$@"

Then do “chmod +x jet” to make the script executable and move it to a directory on your $PATH (perhaps $HOME/bin), so you can use it as “jet” instead of “./jet”.

This utility can also be used from a different machine, i.e. you can perform the above actions without logging in to your Linux box by adjusting the “-admin” arg to the remote IP address. Note that you’ll need to install a second copy of the hub, but built for the originating machine! (never mind if this doesn’t make sense yet - for local use the above instructions should be fine)

7. Security

Once installed, an internet connection is no longer strictly required, but at a minimum you’ll probably want to keep the local network enabled for SSH and web browser access (unless you use a keyboard and monitor, and attach all your devices directly to your board). JET will never connect to anything “out there”, nor accept any incoming connections - unless you tell it to.

JET does not require superuser privileges, but you may have to fix some permissions to enable permanent access to the serial ports (tip: try “sudo usermod -aG dialout <username>” if you run into this particular issue). JET should run fine with just standard user permissions.

The current JET setup does not have any access control. Authentication and TLS will be added later, both in MQTT (via Mosquitto’s config file) and in the hub.

Categories: Community Blog posts

Timestamps and logging

JeeLabs - Fri, 29/01/2016 - 01:00

Keeping track of time across different systems is not as straightforward as it might seem: every clock has its own notion of time, and always drifts around to a certain extent. There’s an entire community around the challenge of trying to keep track of time as accurately as possible. In an absolute sense, you can’t actually win in this game - it’s very Heisenberg-like, in that there will always be some uncertainty, even if only at the femtosecond level!

Fortunately, this isn’t a big deal for simple home monitoring and automation scenarios - except that with battery-powered nodes lacking an accurate crystal, this issue needs addressing.

The simplest solution is to timestamp all data in a single place, based on a single clock source - preferably a stable one, obviously. In Linux, tracking real time is well understood - when network-connected, all you need are the ntp, ntpdate, or chrony tools (thanks for the tip, Thomas!) to keep the time-of-day clock accurate to within a millisecond or so.

Hub timestamping service

JET/Hub provides a simple millisecond-timestamping service for anything arriving via MQTT: each message published to “logger/<device>” is re-published to “logger/<device>/<millis>”, where <millis> is the hub’s current time in milliseconds since Jan 1st, 1970.

The <device> part can be used to distinguish between the different message sources.

The contents of these messages can be anything - they pass through without change. Note that the timestamped topics change endlessly - these messages should not be published with the RETAIN flag set, else the MQTT server would have to store an unbounded number of messages!

Logging raw incoming data

A mechanism introduced in JeeMon and HouseMon many years ago was the “logger”, which takes all incoming (presumably plain text) data and saves it to disk. This mechanism is also included in the hub, and serves two purposes:

  • as a future-proof permanent record of every “raw” message, independent of the processing applied to it later - i.e. even before these messages get “decoded”

  • as a way to “replay” the original data at a later time, for testing or simulation purposes, but also to be able to completely rebuild all time-series and redo all statistics and other processing - this has proven invaluable to support major changes to the software, since a new installation can quickly be fed all historical data as if it were coming in real-time

The structure and format of these log files have remained the same for many years now:

  • there is one log file for every day, running from midnight to midnight (UTC!)
  • the name of these log files is “YYYYMMDD.txt”, e.g. “20160128.txt“
  • each entry in the log is one line of text
  • the format of each line is: “L HH:MM:SS.TTT <device> <message...>“

Here is an example of some log entries, taken from the “20160128.txt” file:

L 12:27:51.979 usb-USB0 OK 3 65 112 8 0
L 12:27:52.941 usb-USB0 OK 9 161 25 58 235 31 159 228 5 13 219 234 62
L 12:27:55.937 usb-USB0 OK 9 161 25 58 235 33 159 230 5 13 219 234 62
L 12:27:58.936 usb-USB0 OK 9 161 25 58 235 35 159 224 5 14 219 198 61
L 12:28:00.574 usb-USB0 OK 19 96 16 1 28 13 28 0
L 12:28:01.934 usb-USB0 OK 9 161 25 58 235 37 159 224 5 14 219 198 61
L 12:28:03.080 usb-USB0 OK 6 199 149 143 0

This functionality is included in the hub - it subscribes to MQTT topic “logger/+/+”, causing it to receive all timestamped messages (and only those), which it then saves to the proper log file, with automatic daily roll-over to a new log file at 0:00 UTC. The file paths include the year, so that no more than a few hundred text files end up in a single directory - the above example was actually copied from the file located at “logger/2016/20160128.txt”.

The hub has a heartbeat

For completeness, this is probably a good place to mention that the hub also implements a one-tick-per-second “heartbeat” - i.e. a periodic message, published to the “hub/1hz” topic. The value of this text message is the current time in milliseconds since Jan 1st, 1970 (as commonly used in JavaScript). The hub will continuously adjust its heartbeat timing to happen exactly on the second mark, as you can see in this example:

$ jet sub hub/1hz
hub/1hz = 1453988511000
hub/1hz = 1453988512000
hub/1hz = 1453988513000
hub/1hz = 1453988514000
hub/1hz = 1453988515000
hub/1hz = 1453988516000
hub/1hz = 1453988517000
...

An issue to watch out for is that these messages can end up a few milliseconds late - even before propagating through MQTT - since Unix / Linux offers no real-time guarantees.

One use for this heartbeat is to detect and track clock skew in other machines on the network.

Tying it all together

The serial port interface, timestamping, and raw logging services are built into the hub but fully independent of each other. And - by design - they can be chained together very nicely:

  • a retained message at “serial/mydev” sets up the serial port and starts the hub’s listener
  • sendto is set to “logger/mydev”, directing all incoming data to the timestamping service
  • these messages will then be re-published as “logger/mydev/1453986012808”, etc.
  • since the logger is listening to “logger/+/+”, it gets a copy of each timestamped message
  • as a result, all this data also ends up being stored in daily log files

This addresses a major design goal for the hub: to keep raw serial data collection and logging going at all times, even when other parts of JET are in development, being replaced, or failing.

Categories: Community Blog posts

Connecting to serial ports

JeeLabs - Thu, 28/01/2016 - 01:00

The main mechanisms for communicating between devices around the house in JET are via serial communication and via Ethernet (LAN or WLAN) - often in the form of USB devices acting as virtual serial interfaces. For simple radio modules used for a Wireless Sensor Network, we can then use a USB or WiFi “bridge”.

Other (less common) options are: I2C and SPI hardware interfaces, and direct I/O via either digital or analog “pins”. For these special-purpose interfaces, there is the WiringPi library, which is often also ported to other boards than the Raspberry Pi for which it was originally conceived. In this case, a small C program can be used as bridge to MQTT.

Network connections are simple in Linux, and in particular in Go, and will not be covered here right now. Besides, with MQTT in the mix, network access is essentially solved if the other end knows how to act as an MQTT client. Getting data into JET via the network is easy - even if some massaging is needed, it can be done later by including some custom code for it in a JET pack.

Serial ports are a different story. There are many serial conventions in use, with all sorts of settings that you need to get just right to send messages across: baud rate, parity, stop bits, h/w or s/w handshake - it can be quite a puzzle. Just run “man stty” on Linux to see how many different configuration choices have been added over time…

Fortunately, a few common serial interface conventions will cover the majority of today’s cases, when it comes to “hooking up” a serial device such as a USB-to-serial “dongle”. And Linux tends to have excellent support out of the box for all the different brands and vendors out there. All we need is some glue code in the hub, and we should be able to get serial data in and out.

And yet that’s just step one in this story. Welcome to the “interfacing to the real world” puzzle!

We also need to match the serial data-format choices of the device. Is it text? Is each line one message? Or if it’s binary data: how do we know where a message ends and the next one begins? There are so many different “framing” and other protocol conventions, that this is probably best handled in a custom pack - at least for more complex cases. But even then we need a serial driver which is able to pass all the information faithfully across to that JET pack.

Another area of concern is with the I/O pins other than the send and receive lines: do we need to connect any of the RTS, CTS, DTR, DSR pins? Do they need to be in a certain state?

Eventually, many of these use cases will need to be addressed. For now, let’s just focus on a basic subset and aim for the following scenario:

  • plugging in a serial USB adapter, or an Arduino / JeeLink based on one
  • opening and closing a specific serial port on request, via MQTT
  • being able to receive plain text, line by line, and to send arbitrary text
  • adjusting the state of the DTR and RTS pins, for reset / upload control
  • configuring standard baud rates, from at least 4,800 to 230,400 baud
  • inserting brief delays of a few milliseconds, up to perhaps a few seconds

What about the actual serial data I/O?

This is where MQTT’s pub-sub can help a lot: we can subscribe to a fixed/known topic for each interface, and pass each incoming message to the serial port. The advantage over plain serial is that any number of processes can do so - if more than one sends at the same time, the output will get inter-mixed, but that’s fine as long as each message is a self-contained outbound “packet”.

On the incoming side, there are two use cases:

  1. pick up each message and pass in along to anyone interested - this is the most natural mode for MQTT and matches exactly with its pub-sub semantics

  2. briefly claim the output while a client “takes control”, which it then relinquishes once done - this is useful for an “uploader”: switch the attached device to firmware-upgrade mode, send new firmware, after which normal operation resumes, with all listeners getting new incoming data

Here is a design, currently being implemented, which supports both modes:

  • each serial interface listens to a fixed topic, i.e. the driver for interface “xyz” subscribes to serial/xyz to receive all incoming requests and data
  • if the message is a JSON object (i.e. {...}), it is treated as a new serial port open request
  • if the message is a JSON array (i.e. [...]), then it’s a list of interface change requests
  • a JSON string is parsed (with escapes properly replaced) and sent as is
  • everything else will probably be treated as an error or be ignored

Serial port open requests

Request format, in JavaScript notation:

{
  device: <path>,        // the serial device to open
  baud: <baudrate>,      // connection speed, default is 57600 baud
  init: [<commands...>], // initialisation commands and settings
  sendto: <topic>        // the topic to forward incoming data to
}

Example (JSON), with illustrative values:

{"device":"/dev/ttyUSB0","baud":57600,"sendto":"logger/mydev"}
New open requests implicitly close the serial port first, if previously open.

To close the serial port and not re-open it, we can send an empty message. This is not valid JSON, but will be recognised as a special “close-only” request.

Serial interface requests

Interface change requests are inside a JSON array and get processed in order of appearance:

  • "+dtr" - assert the ¬DTR line (i.e. a low “0” logical level)
  • "-dtr" - de-assert the ¬DTR line (i.e. a high “1” logical level)
  • "+rts" - assert the ¬RTS line (i.e. a low “0” logical level)
  • "-rts" - de-assert the ¬RTS line (i.e. a high “1” logical level)
  • "=text" - send some text as-is, for use between the other requests
  • <number> - delay for the specified number of milliseconds (1..10000)

(note: “¬” denotes a pin with inverted logic: “1” is inactive, “0” is active!)

Additional requests could be added later, e.g. to switch between text and binary mode, to set a receive timeout, and to encode/decode hex, base64, etc.

Power-up behaviour

While the above is sufficient to use serial ports, it does not address what happens on power-up or after a system restart. Ah, but wait… meet MQTT’s “RETAIN” flag:

  • when the RETAIN flag is set in a message, a copy of the message is stored in the MQTT server, and a duplicate is automatically sent whenever a matching subscription is set up

  • by setting RETAIN on the serial port-open request, we indicate that this request is to persist across reboots of the system

  • only one RETAIN message is kept (the last one), it overwrites any older one

  • an empty message with the RETAIN flag set removes the last one from the server

In other words: to configure our system, we merely need to send the proper open request for each serial port once - with the RETAIN flag set. The MQTT server then acts as a persistent configuration store for each of these settings.

Other messages, without the RETAIN flag set, pass through as is - they won’t affect the storage of prior retained messages. Normal outbound data should therefore be published without the RETAIN flag. Likewise for interface change requests: they must be processed, but not stored.

To permanently open a serial port in a different way, we can simply send a new open request message, with RETAIN set, and replace the previous one.

Incoming data

The open request includes a sendto: field, which specifies the topic where every incoming message is sent. In the initial implementation, data is expected to come in line-by-line, and each line will be re-published to the given topic.

By using open requests without the RETAIN flag, we can play tricks and briefly re-open a serial port for a special case, with a different sendto value. Then, once ready, we simply re-open again with the original topic, and data will start getting dispatched as before.

As already mentioned, the above mechanisms are currently being implemented and will be included in the hub once the software is working and stable. For real code, see GitHub.

Categories: Community Blog posts

An introduction to JET/Hub

JeeLabs - Wed, 27/01/2016 - 01:00

This is the start of a series describing “JET v4”, and in particular the “hub” subsystem. JET is a system which is intended to bring together all the home monitoring and automation tasks here at JeeLabs, for the next 10 years…

As mentioned before - and as its name indicates - “JET/Hub” is the centrepiece of this architecture. It’s the switchboard which sits between the various (and changing) components of the system: on one side serial ports, bridges to Wireless Sensor Networks, and directly-attached I/O hardware; on the other, software-based functionality, e.g. real-time collection of sensor readings, sending out commands to control lights, heating, and appliances in the house, calculating trends and other statistics, and presenting this information in a web browser. Eventually it will also manage potentially large rule sets to guard and even control certain actions in and around the house. The hub is where everything passes through - it’s also the autonomous “always-on” part of JET.

That’s quite a mouthful. In a nutshell: JET is for home monitoring and automation, whereby JET/Hub takes care of orchestrating the entire system.

To expand a bit on an earlier set of requirements, the hub should:

  • make minimal demands on the hardware and run well on a small Linux board
  • allow continuous development on a “live” system, without constant restarts
  • be flexible enough to support a very wide range of home sensors and actuators
  • make installation very straightforward, and likewise for subsequent updates
  • support remote management and avoid the need to log in and enter commands

Yet Another HAS

There are many Home Automation Systems. Even at JeeLabs there have been several major iterations: JeeMon/JeeBus (Tcl), HouseMon 0.7 (Node.js, with “Briqs”), HouseMon 0.8 (Node.js, using “Primus”), and HouseMon 0.9 (with Go, using “Flow”). But hey, that’s just the JeeLabs code - there are tons (hundreds?) of other OSS automation projects on GitHub alone.

The reason for JET is really that nothing else seems to fit the (perhaps not totally conventional) requirements here at JeeLabs:

  • it should be extremely lightweight, to run on a small Linux system with very low energy demands (50 W for an always-on system still wastes over 430 kWh each year)
  • it should support continuous development, since the system needs to remain useful - and usable - for at least a decade, and preferably well beyond that
  • it should make few assumptions in the core about technology, since the needs and available solutions are bound to change drastically over time

JET is not about making the flashiest choices today - it’s about picking a limited set of design guidelines and adopting “minimally viable” conventions. And based on that: implementing a small core to keep around for a long time.

JET’s design choices

It’s time to cut through the vague hand-waving fog, and make some hard choices:

  • all subsystems of JET will run as separate processes: 1 “hub” and N “packs”
  • the hub stays running 24 / 7, and manages the lifetimes of all the JET packs
  • all communication between hub and packs goes through an MQTT server (broker)
  • MQTT’s “topics” lend themselves well to designing a clear naming hierarchy
  • the payload of messages going through MQTT should be JSON in most cases

The MQTT server of choice right now is Mosquitto, which is open source, highly standardised, and well-tested. Furthermore, it scales well and it’s widely available on all major platforms.

The hub subsystem is implemented in the Go language, which is also open source, portable, in active development yet very robust, and extremely well-suited for network-oriented applications. Being statically compiled (yet supporting flexible dynamic typing) means that the hub code can be built and installed as executable with zero external package dependencies.

The major functions included in the hub (as opposed to being implemented in JET packs) are:

  • simple communication with the MQTT server/broker
  • connecting to serial ports (incl. USB) to capture data and emit commands
  • a built-in fast and scalable key-value data store
  • a built-in web server with websockets, for efficient web browser access
  • installing / removing JET packs, and a way of starting and stopping them
  • registration and discovery services to let packs work together
  • a robust upgrade mechanism for packs, as well as for the hub itself
  • supervising all running packs, with configurable automatic restart-on-failure
  • watchdogs to detect system anomalies and report / act upon them
  • basic system and error logging to stdout/stderr
  • everything else is configurable, and can evolve massively over time…
  • as long as the hub can launch it, a pack can be built in any way you like

A brief attempt to include the MQTT broker inside the hub as well has been abandoned for now, since the SurgeMQ package is not quite ready for prime time. For now, JET will rely on both the hub and the MQTT broker running alongside each other as separate processes.

Oh, and by the way: this will be called “JET version 4.0” … gotta start somewhere, right?

Categories: Community Blog posts

What's in a hub?

JeeLabs - Wed, 27/01/2016 - 01:00

The restart of the JET project is progressing nicely. This week’s episode is about installing a first version as a basic-yet-functional new core system and describing / documenting some of the central features being built into the hub.

So here goes, one story per day, about what JET/Hub is all about:

Yes, that’s a lot of articles for one week. Because there’s a lot to describe!

Here’s another little diagram, this one is from 2014 - even older than last week’s:

And yet it’s still mostly applicable to JET …

Categories: Community Blog posts

Architecture: it's all about data!

JeeLabs - Fri, 22/01/2016 - 01:00

Code vs. data… there are many ways to go into this oh so important dichotomy.

Here is a famous quote from a famous book, now over four decades old (as you can tell from its somewhat different terminology, i.e. flowcharts <=> code, tables <=> data):

Show me your flowcharts, and conceal your tables, and I shall continue to be mystified; show me your tables and I won’t usually need your flowcharts: they’ll be obvious. – Fred Brooks, “The Mythical Man Month”

Data really tends to be the most important aspect of a long-term design process, because:

  • code matters while our program is executing, data is what stays around when it is not
  • code is what we invent and produce to deal with a task, data is what comes in as facts
  • code evolves as we better understand a task, data needs to be saved and kept intact

Very often, software development is like a constant shuffle: we write code, we run it, it collects, generates, and transforms some data, we save that data to disk, we stop the code and replace it with an improved version, and then the whole process starts over again. We’re continuously alternating between run mode (with the code frozen) and stop mode (with the data frozen):

There are clearly exceptions to this view of the software development process: when we store HTML pages as data, it really is part of the software design, in the same way that our code is.

But the model breaks down with JET, which needs to be running 24 hours a day, 7 days a week. As far as the hub is concerned, there is no stop mode. We don’t want to lose incoming data.

This means the design of the central data structures and formats must be frozen from day one. Of course we’ll need to be able to add, change, and remove any data flowing through the system, but its shape and semantics should be fixed, as far as the logic and code in the hub is concerned.

This is not as hard as it may seem. The hub is a switchboard. There is very little data which it needs to understand. If it can collect data, pass it around, and save it, it won’t care what that data is. And that’s where MQTT’s “pub/sub” and Bolt’s “key/value” concepts make things easy:

  • there are topics (a plain text string, with slashes and some minor conventions)
  • these topics determine the routing of incoming and outgoing messages
  • and there are values (message “payloads” in MQTT terminology)
  • for MQTT, the mechanism is called publish-subscribe, or “pub/sub” in short
  • for Bolt, the topic is the (hierarchical) key under which a value is stored on disk
  • the values can be anything and can often be treated as an opaque collection of bytes

The only exceptions are the messages which control the behaviour and operation of the hub itself. These need to be specified early on, and frozen - hopefully in such a way that all further changes can remain 100% backwards-compatible. Again, this is not necessarily a very hard requirement to meet: if we start off with a truly minimally-viable set of special hub messages, then every subsequent change can be about adding new message conventions for the hub.

Adding message types, formats, rules, and semantics to a running system is far less intrusive than changing what is already in use. Even if the first hub can only pass messages as-is through MQTT and not save them in Bolt, quite a few features in JET can be tried and built already. As we figure out the best messaging design for this, we can start by implementing this in a separate JET Pack before messing with the hub. This can be done on our development machine, as a pack which includes Bolt and connects to the rest of the system like any other pack: over MQTT.

With respect to data formats, one more design decision will be imposed: the values / payloads which need to be processed and understood by the hub will use JSON formatting. It may end up getting used in lots of places, but that’s not a hard requirement as far as the hub is concerned.

Messaging is the heart of JET (i.e. data) - not logic or processing (code) !

What about sensors, actuators, and tying into the physical world? - Same story, really: we can implement it first as a separate pack, and then choose to move that functionality into the hub, if it works well, is super-robust, and if it simplifies the flow and structure of the entire setup.

What about the front-end then, the web server which lets us see what’s going on in our house, control appliances, and define automation rules? - Again: we can start with a separate pack.

You might recognise some concepts from an old project at JeeLabs, called JeeBus - and many of the design aspects of JET are indeed similar, even if based on different technology which didn’t even exist at the time. It’ll be interesting to see how this approach plays out this time around.

As an architecture, JET embraces decoupled development, because this will allow the city-like properties mentioned in the initial requirement specs. If JET is about evolving software over a long time span, then it has to be able to evolve from a tiny nucleus (the hub) right from the start.

In a nutshell: JET/Hub is the place where data thrives - JET Packs are where code thrives.

Categories: Community Blog posts

Ongoing software development

JeeLabs - Thu, 21/01/2016 - 01:00

Ok, with those (somewhat vaguely-defined) requirements down, how do we even start to move in this direction?

Well, we’re going to have to make a number of decisions. We will need to build at least one program / app, of course, and we’re going to need to run it on some hardware. In a product setting (which this is not!), the steps would involve one or more people planning, designing, implementing, and testing the product-to-be. Iteration, waterfalls, agility, that sort of thing.

JET is different. It’s almost more about the process itself than about the resulting system, because 1) the specs are going to be really fluid (we’ll get new ideas as the project grows and evolves), 2) the timeline is wide open (we’re trying to accomplish something, but the moment we do, we’ll come up with more things to do/add/try/fix/replace), and 3) there is no hard deadline.

Different as it may be, there is a good reason for doing things this way: let’s not constrain what is possible, and leave the door open for unpredictable new ideas, devices, and demands. A home should be a fluid and organic living space. The last thing we want, is to turn our home into an industrial design task. JET will be much better off in permanent research & exploration mode.

And then there’s history: are you going to throw out everything you have and replace it with new technology, just because you could? Is everyone going to dump their fridge because there is now a shiny new “IoT model” for sale? (apart from even wanting such a thing in the first place…)

We need a different model. Some people like to mess with their houses, and are always tinkering with it (even if perhaps not a perfect choice for every spouse…). We need a setup which works yet can evolve while being in use. We don’t want to distinguish production from dev-mode.

If you’re familiar with the Erlang programming language, then you’ll know how some aspects of its design make it eminently suitable for such a task: in Erlang, every piece of code can be replaced without restarting the system (compare that to going through a “minor” Windows update!). Erlang was designed at Ericsson for its telephone exchanges, i.e. always-on hardware.

But we don’t have to adopt Erlang, a functional programming language which can be quite a challenge to learn (and perhaps also a bit heavy for a Raspberry Pi). What we can do, is to design a minimal core process, in such a way that it doesn’t need to be stopped and restarted during development. There’s a little chicken-and-egg issue here, since obviously we will have to build, and test, and restart that core process first. But the trick is to make the core agnostic about our particular application domain: if the core contains only general-purpose code which has next to nothing to do with JET’s home monitoring and automation, then it also won’t need to change as we start augmenting it with features.

Let’s try and work this out in some more detail:

  • JET/Hub is the main core process to rule all the others
  • it is launched once and then never ever quits or runs out of memory
  • it launches everything else as one or more child processes
  • it supervises these processes, taking action when they exit or fail
  • it provides some communication mechanism(s) between itself and the rest
  • it may contain some more key (but generic) functionality, such as logging
  • it may include a (generic and robust!) database, just to simplify the system
  • it may also include a (generic and robust!) web server, again to simplify

With JET/Hub running, even before we have any monitoring or automation code, we can then start to design and implement one or more child processes, which will be called JET Packs. The implementation language for these packs need not be the same as the hub’s; all that matters is that starting and stopping adhere to a few conventions, and that all communication with the hub is well defined and fully compatible across every pack that will ever be created.

(This diagram is slightly dated, but still matches most of the current design)

So how will the hub be implemented then? Answer #1: you couldn’t care less. Answer #2: using a language which is very easy to port and install, and which is robust and able to handle messaging extremely well. Answer #3: using the programming language Go, with MQTT for “pub-sub” messaging, Bolt as key-value store database, and Go’s standard net/http package as web server.

JET/Hub has been implemented in an hour, and can be found in the dev branch on GitHub. Well… that’s a bit of an exaggeration: this is mostly a prototype for the actual code. Things haven’t been tied together yet, making the code so far next to useless. We will need a way to make it all work together and there’s no supervisor logic to manage the JET packs yet. But it’s safe to say that the entire hub can probably be built with under 1,000 lines of custom Go code. Which is not too bad for such a key architectural component of JET!

With the hub running (forever, on Mars, remember?), we still need a way to set up packs, add them to the mix, replace them, and make them do interesting stuff. But apart from these basic requests (over MQTT), there is nothing else we need in the hub to be able to start thinking about the real task at hand: designing and implementing features as a JET pack, using some language.

Some interesting properties emerge out of this approach:

  • we can run packs on our own local machine during development (even if that means the hub can’t supervise and launch them for us) - as far as the actual operation is concerned, there will be no difference and all messaging will take place via socket-based MQTT, the same as other packs

  • there is no preferred programming language - the only thing which matters is the protocol and semantics of all the message exchanges over MQTT

  • authentication can be enforced via MQTT / HTTPS sessions and SSL certificates

  • a different MQTT implementation can be used, if the hub’s one is not desired

  • other services from the hub, i.e. web servers and database storage also need not be used, if not needed or if alternatives are preferred: it’s all optional

So in a way there is no core - there are only conventions: MQTT and choices for its topic names and message formats. If Go or any of the packages used in the hub were to vanish or stop being supported, we can look for alternatives as needed and replace the entire hub with a different implementation. This would not affect the overall architecture of JET.

We have now defined a central infrastructure, yet we haven’t really made any limiting choices. Which was exactly the purpose of this whole exercise…

Categories: Community Blog posts

JET, as seen from 9,144 meters

JeeLabs - Wed, 20/01/2016 - 01:00

The JET project name is an acronym for “JeeLabs Embello Toolkit”. And with that out of the way, please forget it again and never look back. A few more acronyms will be explained when the subsystems are introduced - none of them particularly fancy or catchy - it’s just because we can’t move forward without naming stuff. Names and terms will be introduced in bold.

As will become clear, this is a long-term project, but not necessarily a big one: this is not about creating a huge system. It’s more about making it last…

This has implications for the choices made, but even more for the choices left out, i.e. the options being kept open for future extension.

First off: JET is neither about picking a single favourite language or tool and imposing it on everything, nor about limiting the platform on which it can be used. Any such choices made today would look stale a few years from now.

But evidently we do need to pick something to work with. These choices will be made based on requirements as well as current availability and stability. With luck, we can avoid having to combine a dozen different languages & tools. But choices made for one part of the system won’t impose restrictions on the rest. How this is possible will become clear in the upcoming articles.

So what is JET?

  • JET is an architecture, in the sense that it covers a whole set of systems: a central machine, all sorts of remote “nodes” which sense the state of the home and collect environmental data, nodes which control lights, appliances, curtains, etc, and a variety of controls and displays, from switches to mobile screens

  • JET is like a city - pieces come, pieces go, everything evolves as needed, and most importantly: the design embraces variation - old and new - it’s all about being able to use a heterogeneous mix of technology, because over the years hardware and software are bound to change, again and again

  • JET is a conceptual framework, allowing us to implement a real working setup, without having to constantly revisit the choices made so far - there needs to be some level of consistency for different parts to be designed and built over time, in such a way that everything continues to inter-operate nicely

Some technical design requirements which will greatly affect the choices made:

  • the core of JET is always on: it needs to run 24 hours a day, 7 days a week
  • the (realistically) expected lifetime of this core must be at least 10 years
  • it has to run on low-cost commodity hardware, e.g. Raspberry Pi, Odroid, etc.
  • JET must be extensible enough to connect with just about anything out there
  • there has to be a good way to evolve or migrate from one setup to the next
  • all admin tasks must go through authenticated access to the central system

Here are a few more requirements, some of which are perhaps less conventional:

  • the system must be self-diagnosing and self-healing, wherever possible
  • it must be possible to run in unattended mode without any user interface
  • web access is optional, with possibly more than one server setup in parallel
  • support for reduced-functionality mode when the central machine is down

One could almost compare this to designing for a car or a comms exchange…

Or to put it in a somewhat different perspective: JET is for real use on Earth, by real humans, but its core has to be written as if it will run on Mars, in a very unforgiving environment!
