Space tools

JeeLabs - Tue, 01/09/2015 - 23:01

It’s a worrisome sign when people start to talk about tools. No real work to report on?

With that out of the way, let’s talk about tools :) – programming tools.

Everyone has their favourite programmer’s editor and operating system. Mine happens to be Vim (MacVim) and Mac OSX. Yours will likely be different. Whatever works, right?

Having said that, I found myself a bit between a rock and a hard place lately, while trying out ClojureScript, that Lisp’y programming language I mentioned last week. The thing is that Lispers tend to use something called the REPL – constantly so, during editing in fact.

What’s a REPL for?

Most programming languages use a form of development based on frequent restarts: edit your code, save it, then re-run the app, re-run the test suite, or refresh the browser. Some development setups have turned this into a very streamlined and convenient fine art. This works well – after all, why else would everybody be doing things this way, right?

But there’s a drawback: when you have to stop the world and restart it, it takes some effort to get back to the exact context you’re working on right now. Either by creating a good set of tests, with “mocks” and “spies” to isolate and analyse the context, or by repeating the steps to get to that specific state in case of interactive GUI- or browser-based apps.

Another workaround, depending on the programming language support for it, is to use a debugger, with “breakpoints” and “watchpoints” set to stop the code just where you want it.

But what if you could keep your application running – assuming it hasn’t locked up, that is? So it’s still running, but just not yet doing what it should. What if we could change a few lines of code and see if that fixes the issue? What if we could edit inside a running app?

What if we could in fact build an app from scratch this way? Take a small empty app, define a function, load it in, see if it works, perhaps call the function from a console-style session running inside the application? And then iterate, extend, tweak, fix, add code… live?

This is what people have been doing with Lisp for over half a century. With a “REPL”:

A similar approach has been possible for some time in a few other languages (such as Tcl). But it’s unfortunately not mainstream. It can take quite some machinery to make it work.

While a traditional edit-save-run cycle takes a few seconds, REPL-based coding is instant.

A nice example of this in action is in Tim Baldridge’s videos about Clojure. He never starts up an application in fact: he just fires up the REPL in an editor window, and then starts writing little pieces of code. To try it out, he hits a key combination which sends the parenthesised form currently under the cursor to the REPL, and that’s it. Errors in the code can be fixed and resent at will. Definitions, but also little test calls, anything.

More substantial bits of code are “require’d” in as needed. So what you end up with is keeping a REPL context running at all times, and loading stuff into it. This isn’t limited to server-side code, it also works in the browser: enter “(js/alert "Hello")” and up pops a dialog. All it takes is the REPL to be running inside the browser, and some websocket magic. In the browser, it’s a bit like typing everything into the developer console, but unlike that setup, you get to keep all the code and trials you write – in the editor, with all its conveniences.
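To make the idea a bit more concrete in a language used elsewhere on this site, here’s a toy Python sketch of “editing inside a running app”: the program keeps running while a replacement definition is pushed into its live namespace, which is roughly what an editor-connected REPL does (the names here are made up):

```python
# the "app" is already running, with some live state
app_state = {"count": 0}

def step():
    app_state["count"] += 1

step()                      # count is now 1 - the app keeps running

# "send" a replacement definition into the live namespace,
# the way a REPL-connected editor would
exec("def step():\n    app_state['count'] += 10", globals())

step()                      # the new code takes effect, no restart
print(app_state["count"])   # 11
```

No state was lost in the process: the new definition simply took over from the old one.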


Another recent development in ClojureScript land is Figwheel by Bruce Hauman. There’s a 6-min video showing an example of use, and a very nice 45-min video where he goes into things in a lot more detail.

In essence, Figwheel is a file-driven hot reloader: you edit some code in your editor, you save the file, and Figwheel forces the browser (or node.js) to reload the code of just that file. The implementation is very different, but the effect is similar to Dan Abramov’s React Hot Loader – which works for JavaScript in the browser, when combined with React.

There are some limitations for what you can do in both the REPL-based and the Figwheel approach, but if all else fails you can always restart things and have a clean slate again.

The impact of these two approaches on the development process is hard to overstate: it’s as if you’re inside the app, looking at things and tweaking it as it runs. App restarts are far less common, which means server-side code can just keep running as you develop pieces of it further. Likewise, browser side, you can navigate to a specific page and context, and change the code while staying on that page and in that context. Even a scroll position or the contents of an input box will stay the same as you edit and reload code.

For an example Figwheel + REPL setup running both in the browser and in node.js at the same time, see this interesting project on GitHub. It’s able to do hot reloads on the server as well as on (any number of) browsers – whenever code changes. Here’s a running setup:

And here’s what I see when typing “(fig-status)” into Figwheel’s REPL:

    Figwheel System Status
    ----------------------------------------------------
    Autobuilder running? : true
    Focusing on build ids: app, server
    Client Connections
         server: 1 connection
         app: 1 connection
    ----------------------------------------------------

This uses two processes: a Figwheel-based REPL (JVM), and a node-based server app (v8). And then of course a browser, and an editor for actual development. Both Node.js and the browser(s) connect into the Figwheel JVM, which also lets you type in ClojureScript.


So what do we need to work in this way? Well, for one, the language needs to support it and someone needs to have implemented this “hot reload” or “live code injection” mechanism.

For Figwheel, that’s about it. You need to write your code files in a certain way, allowing it to reload what matters without messing up the current state – “defonce” does most of this.
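The “defonce” idea – bind a name only the first time, so a reload doesn’t clobber live state – can be mimicked in a line of Python (a rough analogy, not how ClojureScript implements it):

```python
def defonce(ns, name, value):
    # bind name in ns only if it's absent, so re-executing the
    # same definition after a reload leaves existing state alone
    return ns.setdefault(name, value)

ns = {}
defonce(ns, "counter", 0)   # first load: counter is created
ns["counter"] += 5          # the running app mutates its state
defonce(ns, "counter", 0)   # "reload": this is now a no-op
print(ns["counter"])        # 5 - state survived the reload
```

Everything else in the file can be redefined freely on each reload; only the state containers need this protection.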

But the real gem is the REPL: having a window into a running app, and peeking and poking at its innards while in flight. If “REPL” sounds funny, then just think of it as “interactive command prompt”. Several scripting languages support this. Not C, C++, or Go, alas.

For this, the editor should offer some kind of support, so that a few keystrokes will let you push code into the app. Whether a function definition or a printf-type call, whatever.

And that’s where vim felt a bit inadequate: there are a few plugins which try to address this, but they all have to work around the limitation that vim has no built-in terminal.

In Emacs-land, there has always been “SLIME” for traditional Lisp languages, and now there is “CIDER” for Clojure (hey, I didn’t make up those names, I just report them!). In a long-ago past, I once tried to learn Emacs for a very intense month, but I gave up. The multi-key acrobatics is not for me, and I have tons of vim key shortcuts stashed into muscle memory by now. Some people even point to research to say that vim’s way works better.

For an idea of what people can do when they practically live inside their Emacs editor, see this 18-min video. Bit hard to follow, but you can see why some people call Emacs an OS…

Anyway, I’m not willing to unlearn those decades of vim conventions by now. I have used many other editors over the years (including TextMate, Sublime Text, and recently Atom), but I always end up going back. The mouse has no place in editing, and no matter how hard some editors try to offer a “vim emulation mode”, they all fail in very awkward ways.

And then I stumbled upon this thing. All I can say is: “Vim, reloaded”.

Wow – a 99% complete emulation, perhaps one or two keystrokes which work differently. And then it adds a whole new set of commands (based on the space bar, hence the name), incredibly nice pop-up help as you type the shortcuts, and… underneath, it’s… Emacs ???

Spacemacs comes with a ton of nice default configuration settings and plugins. Other than some font changes and some extra language bindings, I hardly change it. My biggest config tweak so far has been to make it start up with a fixed position and window size.

So there you have it. I’m switching my world over to ClojureScript as main programming language (which sounds more dramatic than it is, since it’s still JavaScript + browser + node.js in the end), and I’m switching my main development tool to Emacs (but that too is less invasive than it sounds, since it’s Vim-like and I can keep using vim on remote boxes).

Categories: Community Blog posts

Clojure and ClojureScript

JeeLabs - Tue, 25/08/2015 - 23:01

I’m in awe. There’s a (family of) programming languages which solves everything. Really.

  • it works on the JVM, V8, and CLR, and it interoperates with what already exists
  • it’s efficient, it’s dynamic, and it has parallelism built in (threaded or cooperative)
  • it’s so malleable, that any sort of DSL can trivially be created on top of it

As this fella says at this very point in his video: “State. You’re doing it wrong.”

I’ve been going about programming in the wrong way for decades (as a side note: the Tcl language did get it right, up to a point, despite some other troublesome shortcomings).

The language I’m talking about re-uses the best of what’s out there, and even embraces it. All the existing libraries in JavaScript can be used when running in the browser or in Node.js, and similarly for Java or C# when running in those contexts. The VMs, as I already mentioned, also get reused, which means that decades of research and optimisation are taken advantage of.

There’s even an experimental version of this (family of) programming languages for Go, so there again, it becomes possible to add this approach to whatever already exists out there, or is being introduced now or in the future.

Due to the universal reach of JavaScript these days – on browsers, servers, and even on some embedded platforms – that target interests me most, so what I’ve been sinking my teeth into recently is “ClojureScript”, which specifically targets JavaScript.

Let me point out that ClojureScript is not another “pre-processor” like CoffeeScript.

“State. You’re doing it wrong.”

As Rich Hickey, who spoke those words in the above video quickly adds: “which is ok, because I was doing it wrong too”. We all took a wrong turn a few decades ago.

The functional programming (FP) people got it right… Haskell, ML, that sort of thing.

Or rather: they saw the risks and went to a place where few people could follow (monads?).

What Clojure and ClojureScript do, is to bring a sane level of FP into the mix, with “immutable persistent datastructures”, which makes it all very practical and far easier to build with and reason about. Code is a transformation: take stuff, do things with it, and return derived / modified / updated / whatever results. But don’t change the input data.

Why does this matter?

Let’s look at a recent project taking the world by storm: React, yet another library for building user interfaces (in the browser and on mobile). The difference with AngularJS is the conceptual simplicity. To borrow another image from a similar approach in CycleJS:

Things happen in a loop: the computer shows stuff on the screen, the user responds, and the computer updates its state. In a talk by CycleJS author Andre Staltz, he actually goes so far as to treat the user as a function: screen in, key+mouse actions out. Interesting concept!

Think about it:

  • facts are stored on the disk, somewhere on a network, etc
  • a program is launched which presents (some of it) on the screen
  • the user interface leads us, the monkeys, to respond and type and click
  • the program interprets these as intentions to store / change something
  • it sends out stuff to the network, writes changes to disk (perhaps via a database)
  • these changes lead to changes to what’s shown on-screen, and the cycle repeats

Even something as trivial as scrolling down is a change to a scroll position, which translates to a different part of a list or page being shown on the screen. We’ve been mixing up the view side of things (what gets shown) with the state (some would say “model”) side, which in this case is the scroll position – a simple number. The moment you take them apart, the view becomes nothing more than a function of that value. New value -> new view. Simple.
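As a tiny sketch (in Python, purely illustrative), the visible part of a list really is nothing more than a function of the scroll value:

```python
def view(scroll_pos, items, page_size=3):
    # the visible page is purely a function of the scroll value:
    # same inputs, same output - no hidden state anywhere
    return items[scroll_pos : scroll_pos + page_size]

items = list("abcdefg")
page0 = view(0, items)      # ['a', 'b', 'c']
page2 = view(2, items)      # new value -> new view
```

Change the number, get a new view; nothing else needs to know how scrolling happened.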

Nowhere in this story is there a requirement to tie state into the logic. It didn’t really help that object orientation (OO) taught us to always combine and even hide state inside logic.

Yet I (we?) have been programming with variables which remember / change and loops which iterate and increment, all my life. Because that’s how programming works, right?

Wrong. This model leads to madness. Untraceable, undebuggable, untestable, unverifiable.

In a way, Test-Driven-Design (TDD) shows us just how messy it got: we need to explicitly compare what a known input leads to with the expected outcome. Which is great, but writing code which is testable becomes a nightmare when there is state everywhere. So we invented “mocks” and “spies” and what-have-you-not, to be able to isolate that state again.

What if everything we implemented in code were easily reducible to small steps which cleanly compose into larger units? Each step being a function which takes one or more values as state and produces results as new values? Without side-effects or state variables?

Then again, purely functional programming with no side-effects at all is silly in a way: if there are zero side-effects, then the screen wouldn’t change, and the whole computation would be useless. We do need side-effects, because they lead to a screen display, physical-computing stuff such as motion & sound, saved results, messages going somewhere, etc.

What we don’t need, is state sprinkled across just about every single line of our code…

To get back to React: that’s exactly where it revolutionises the world of user interfaces. There’s a central repository of “the truth”, which is in fact usually nothing more than a deeply nested JavaScript data structure, from which everything shown on the web page is derived. No more messing with the DOM, putting all sorts of state into it, having to update stuff everywhere (and all the time!) for dynamic real-time apps.

React (a.k.a. ReactJS) treats an app as a pipeline: state => view => DOM => screen. The programmer designs and writes the first two, React takes care of the DOM and screen.

I’ll get back to ClojureScript, please hang in there…

What’s missing in the above, is user interaction. We’re used to the following:

mouse/keyboard => DOM => controller => state

That’s the Model-View-Controller (MVC) approach, as pioneered by Smalltalk in the 80’s. In other words: user interaction goes in the opposite direction, traversing all those steps we already have in reverse, so that we end up with modified state all the way back to the disk.

This is where AngularJS took off. It was founded on the concept of bi-directional bindings, i.e. creating an illusion that variable changes end up on the screen, and screen interactions end up back in those same variables – automatically (i.e. all taken care of by Angular).

But there is another way.

Enter “reactive programming” (RP) and “functional reactive programming” (FRP). The idea is that user interaction still needs to be interpreted and processed, but that the outcome of such processing completely bypasses all the above steps. Instead of bubbling back up the chain, we take the user interaction, define what effect it has on the original central-repository-of-the-truth, period. No figuring out what our view code needs to do.

So how do we update what’s on screen? Easy: re-create the entire view from the new state.

That might seem ridiculously inefficient: recreating a complete screen / web-page layout from scratch, as if the app was just started, right? But the brilliance of React (and several designs before it, to be fair) is that it actually manages to do this really efficiently.

Amazingly so in fact. React is faster than Angular.

Let’s step back for a second. We have code which takes input (the state) and generates output (some representation of the screen, DOM, etc). It’s a pure function, i.e. it has no side effects. We can write that code as if there is no user interaction whatsoever.

Think – just think – how much simpler code is if it only needs to deal with the one-way task of rendering: what goes where, how to visualise it – no clicks, no events, no updates!

Now we need just two more bits of logic and code:

  1. we tell React which parts respond to events (not what they do, just that they do)

  2. separately, we implement the code which gets called whenever these events fire, grab all relevant context, and report what we need to change in the global state

That’s it. The concepts are so incredibly transparent, and the resulting code so unbelievably clean, that React and its very elegant API is literally taking the Web-UI world by storm.
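Those two pieces of logic can be caricatured in a few lines of Python (a sketch of the concept, not of React’s actual API): rendering stays pure, and an event handler merely reports the next state:

```python
state = {"count": 0}

def render(s):
    # pure: no events, no updates, just state in -> view out
    return "count = {}".format(s["count"])

def on_click(s):
    # an event handler reports the new state and nothing else;
    # the old state value is left untouched
    return {**s, "count": s["count"] + 1}

frames = [render(state)]
state = on_click(state)         # an event fired
frames.append(render(state))    # re-create the view from new state
print(frames)                   # ['count = 0', 'count = 1']
```

The view code never hears about clicks; it only ever sees state values.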

Back to ClojureScript

So where does ClojureScript fit in, then? Well, to be honest: it doesn’t. Most people seem to be happy just learning “The React Way” in normal main-stream JavaScript. Which is fine.

There are some very interesting projects on top of React, such as Redux and React Hot Loader. This “hot loading” is something you have to see to believe: editing code, saving the file, and picking up the changes in a running browser session without losing context. The effect is like editing in a running app: no compile-run-debug cycle, instant tinkering!

Interestingly, Tcl also supported hot-loading. Not sure why the rest of the world didn’t.

Two weeks ago I stumbled upon ClojureScript. Sure enough, they are going wild over React as well (with Om and Reagent as the main wrappers right now). And with good reason: it looks like Om (built on top of React) is actually faster than React used from JavaScript.

The reason for this is their use of immutable data structures, which forces you to not make changes to variables, arrays, lists, maps, etc. but to return updated copies (which are very efficient through a mechanism called “structural sharing”). As it so happens, this fits the circular FRP / React model like a glove. Shared trees are ridiculously easy to diff, which is the essence of why and how React achieves its good performance. And undo/redo is trivial.
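The diffing trick can be illustrated with a shallow copy in Python (a loose analogy – Clojure’s persistent data structures do this at every level of the tree): unchanged branches are shared between old and new versions, so a simple identity check is enough to skip them.

```python
old = {"header": {"title": "hi"}, "body": {"items": [1, 2, 3]}}

new = dict(old)                          # shallow copy shares branches
new["body"] = {"items": [1, 2, 3, 4]}    # only this branch is replaced

unchanged = new["header"] is old["header"]   # shared: skip in O(1)
changed = new["body"] is not old["body"]     # different: descend here
```

A diff only has to walk the branches that fail the identity check, which is why it stays cheap even for deep trees.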

Hot-loading is normal in the Clojure & ClojureScript world. Which means that editing in a running app is not a novelty at all, it’s business as usual. As with any Lisp with a REPL.

Ah, yes. You see, Clojure and ClojureScript are Lisp-like in their notation. The joke used to be that LISP stands for: “Lots of Irritating Little Parentheses”. When you get down to it, it turns out that there are not really many more of those than parens / braces in JavaScript.

But notation is not what this is all about. It’s the concepts and the design which matter.

Clojure (and ClojureScript) seem to be born out of necessity. It’s fully open source, driven by a small group of people, and evolving in a very nice way. The best introduction I’ve found is in the first 21 minutes of the same video linked to at the start of this post.

And if you want to learn more: just keep watching that same video, 2:30 hours of goodness. Better still: this 1 hour video, which I think summarises the key design choices really well.

No static typing as in Go, but I found myself often fighting it (and type hints can be added back in where needed). No callback hell as in JavaScript & Node.js, because Clojure has implemented Go’s CSP, with channels and go-routines as a library. Which means that even in the browser, you can write code as if there were multiple processes, communicating via channels in either synchronous or asynchronous fashion. And yes, it really works.
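A rough analogue of that CSP style in Python (threads plus a blocking queue standing in for go-routines plus a channel – not how core.async is implemented):

```python
import queue
import threading

ch = queue.Queue()                  # the "channel"

def producer():
    for i in range(3):
        ch.put(i)                   # like a blocking channel send
    ch.put(None)                    # sentinel: no more values

threading.Thread(target=producer).start()

received = []
while True:
    v = ch.get()                    # blocking receive
    if v is None:
        break
    received.append(v)
print(received)                     # [0, 1, 2]
```

The two sides never touch shared mutable state directly; all coordination goes through the channel.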

All the libraries from the browser + Node.js world can be used in ClojureScript without special tricks or wrappers, because – as I said – CLJ & CLJS embrace their host platforms.

The big negative is that CLJ/CLJS are different and not main-stream. But frankly, I don’t care at this point. Their conceptual power is that of Lisp and functional programming combined, and this simply can’t be retrofitted into the popular languages out there.

A language that doesn’t affect the way you think about programming, is not worth knowing — Alan J. Perlis

I’ve been watching many 15-minute videos on Clojure by Tim Baldridge (it costs $4 to get access to all of them), and this really feels like it’s lightyears ahead of everything else. The amazing bit is that a lot of that (such as “core.async”) catapults into plain JavaScript.

As you can probably tell, I’m sold. I’m willing to invest a lot of my time in this. I’ve been doing things all wrong for a couple of decades (CLJ only dates from 2007), and now I hope to get a shot at mending my ways. I’ll report my progress here in a couple of months.

It’s not for the faint of heart. It’s not even easy (but it is simple!). Life’s too short to keep programming without the kind of abstractions CLJ & CLJS offer. Eh… In My Opinion.

Categories: Community Blog posts

A feel for numbers

JeeLabs - Tue, 18/08/2015 - 23:01

It’s often really hard to get a meaningful sense of what numbers mean – especially huge ones.

What is a terabyte? A billion euro? A megawatt? Or a thousand people, even?

I recently got our yearly gas bill, and saw that our consumption was about 1600 m3 – roughly the same as last year. We’ve insulated the house, we keep the thermostat set fairly low (19°C), and there is little more we can do – at least in terms of low-hanging fruit. Since the house has an open stairway to the top floors, it’s not easy to keep the heat localised.

But what does such a gas consumption figure mean?

For one, those 1600 m3/y are roughly 30,000 m3 in the next twenty years, which comes to about €20,000, assuming Dutch gas prices will stay the same (a big “if”, obviously).
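The arithmetic, with the gas price as an explicit assumption (around €0.65 per m3 – adjust to taste):

```python
yearly_m3 = 1600
total_m3 = yearly_m3 * 20       # 32,000 m3 - "roughly 30,000"
price = 0.65                    # assumed Dutch gas price, EUR per m3
cost = total_m3 * price
print(cost)                     # about 20,800 EUR over twenty years
```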

That 30,000 m3 sounds like a huge amount of gas, for just two people to be burning up.

Then again, a volume of 31 x 31 x 31 m sounds a lot less ridiculous, doesn’t it?

Now let’s tackle it from another angle, using the Wolfram Alpha “computational knowledge engine”, which is a really astonishing free service on the internet, as you’ll see.

How much gas is estimated to be left on this planet? Wolfram Alpha has the answer:

How many people are there in the world?

Ok, let’s assume we give everyone today an equal amount of those gas reserves:

Which means that we will reach our “allowance” (for 2) 30 years from now. Now that is a number I can grasp. It does mean that in 30 years or so it’ll all be gone. Totally. Gone.

I don’t think our children and all future generations will be very pleased with this…

Oh, and for the geeks in us: note how incredibly easy it is to get at some numerical facts, and how accurately and easily Wolfram Alpha handles all the unit conversions. We now live in a world where the well-off western part of the internet-connected crowd has instant and free access to all the knowledge we’ve amassed (Wikipedia + Google + Wolfram Alpha).

Facts are no longer something you have to learn – just pick up your phone / tablet / laptop!

But let’s not stop at this gloomy result. Here’s another, more satisfying, calculation using figures from an interesting UK site, called Electropedia (thanks, Ard!):

[…] the total Sun’s power intercepted by the Earth is 1.740×10^17 Watts

When accounting for the earth’s rotation, seasonal and climatic effects, this boils down to:

[…] the actual power reaching the ground generally averages less than 200 Watts per square meter

Aha, that’s a figure I can relate to again, unlike the “10^17” figure in the total above.

Let’s google for “heat energy radiated by one person”, which leads to this page, and on it:

As I recall, a typical healthy adult human generates in the neighborhood of 90 watts.

Interesting. Now an average adult’s calorie intake of 2400 kcal/day translates to 2.8 kWh. Note how this nicely matches up (at least roughly): 2.8 kWh/day is 116 watt, continuously. So yes, since we humans just burn stuff, it’s bound to end up as mostly heat, right?
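The unit conversion, spelled out:

```python
kcal_per_day = 2400
joules = kcal_per_day * 4184        # 1 kcal = 4184 J
kwh_per_day = joules / 3.6e6        # 1 kWh = 3.6 MJ -> about 2.8 kWh
watts = kwh_per_day * 1000 / 24     # spread over 24 h -> about 116 W
print(round(watts))                 # 116
```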

But there is more to be said about the total solar energy reaching our little blue planet:

Integrating this power over the whole year the total solar energy received by the earth will be: 25,400 TW X 24 X 365 = 222,504,000 TeraWatthours (TWh)

Yuck, those incomprehensible units again. Luckily, Electropedia continues, and says:

[…] the available solar energy is over 10,056 times the world’s consumption. The solar energy must of course be converted into electrical energy, but even with a low conversion efficiency of only 10% the available energy will be 22,250,400 TWh or over a thousand times the consumption.

That sounds promising: we “just” need to harvest it, and end all fossil fuel consumption.

And to finish it off, here’s a simple calculation which also very much surprised me:

  • take a world population of 7.13 billion people (2013 figures, but good enough)
  • place each person on his/her own square meter
  • put everyone together in one spot (tight, but hey, the subway is a lot tighter!)
  • what you end up with is of course 7.13 billion square meters, i.e. 7,130,000,000 m2
  • sounds like a lot? how about an area of 70 by 100 km? (1/6th of the Netherlands)
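The back-of-the-envelope version:

```python
people = 7.13e9
area_km2 = people * 1.0 / 1e6   # one m2 each -> 7,130 km2
print(area_km2)                 # 7130.0, i.e. roughly 71 x 100 km

# compare against the Netherlands (about 41,500 km2)
fraction = area_km2 / 41500     # roughly 1/6th
```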

Then, googling again, I found out that 71% of the surface of our planet is water.

And with a little more help from Wolfram Alpha, I get this result:

That’s 144 x 144 meters per person, for everyone on this planet. Although not every spot is inhabitable, of course. But at least these are figures I can fit into my head and grasp!
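The calculation behind that figure (using a rounded total surface area of 510 million km2 for the planet):

```python
import math

earth_km2 = 510.1e6                  # total surface of the planet
land_m2 = earth_km2 * (1 - 0.71) * 1e6   # the 29% that isn't water
per_person_m2 = land_m2 / 7.13e9
side = math.sqrt(per_person_m2)      # about 144 m on each side
```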

Now if only I could understand why we can’t solve this human tragedy. Maths won’t help.

Categories: Community Blog posts

Lessons from history

JeeLabs - Tue, 11/08/2015 - 23:01

(No, not the kind of history lessons we all got treated to in school…)

What I’d like to talk about, is how to deal with sensor readings over time. As described in last week’s post, there’s the “raw” data:

    raw/rf12/868-5/3 "0000000000038d09090082666a"
    raw/rf12/868-5/3 "0000000000038e09090082666a"
    raw/rf12/868-5/3 "0000000000038f090900826666"

… and there’s the decoded data, i.e. in this case:

    sensor/BATT-2 {"node":"rf12/868-5/3","ping":592269,
      "vpre":152,"tag":"BATT-2","vbatt":63,"time":1435856290589}
    sensor/BATT-2 {"node":"rf12/868-5/3","ping":592270,
      "vpre":152,"tag":"BATT-2","vbatt":63,"time":1435856354579}
    sensor/BATT-2 {"node":"rf12/868-5/3","ping":592271,
      "vpre":152,"tag":"BATT-2","vbatt":60,"time":1435856418569}

In both cases, we’re in fact dealing with a series of readings over time. This aspect tends to get lost a bit when using MQTT, since each new reading is sent to the same topic, replacing the previous data. MQTT is (and should be) 100% real-time, but blissfully unaware of time.

The raw data is valuable information, because everything else derives from it. This is why in HouseMon I stored each entry as timestamped text in a logfile. With proper care, the raw data can be an excellent way to “replay” all received data, whether after a major database or other system failure, or to import all the data into a new software application.

So much for the raw data, and keeping a historical archive of it all – which is good practice, IMO. I’ve been saving raw data for some 8 years now. It requires relatively little storage when saved as daily text files and gzip-compressed: about 180 Mb/year nowadays.

Now let’s look a bit more at that decoded sensor data…

When working on HouseMon, I noticed that it’s incredibly useful to have access to both the latest value and the previous value. In the case of these “BATT-*” nodes, for example, having both allows us to determine the elapsed time since the previous reception (using the “time” field), or to check whether any packets have been missed (using the “ping” counter).

With readings of cumulative or aggregating values, the previous reading is in fact essential to be able to calculate an instantaneous rate (think: gas and electricity meters).

In the past, I implemented this by having each entry store a previous and a latest value (and time stamp), but with MQTT we could actually simplify this considerably.

The trick is to use MQTT’s brilliant “RETAIN” flag:

  • in each published sensor message, we set the RETAIN flag to true
  • this causes the MQTT broker (server) to permanently store that message
  • when a new client connects, it will get all saved messages re-sent to it the moment it subscribes to a corresponding topic (or wildcard topic)
  • such re-sent messages are flagged, and can be recognised as such by the client, to distinguish them from genuinely new real-time messages
  • in a way, retained message handling is a bit like a store-and-forward mechanism
  • … but do keep in mind that only the last message for each topic is retained

What’s the point? Ah, glad you asked :)

In MQTT, a RETAINed message is one which can very gracefully deal with client connects and disconnects: a client need not be connected or subscribed at the time such a message is published. With RETAIN, the client will receive the message the moment it connects and subscribes, even if this is after the time of publication.

In other words: RETAIN flags a message as representing the latest state for that topic.

The best example is perhaps a switch which can be either ON or OFF: whenever the switch is flipped we publish either “ON” or “OFF” to topic “my/switch”. What if the user interface app is not running at the time? When it comes online, it would be very useful to know the last published value, and by setting the RETAIN flag we make sure it’ll be sent right away.
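The behaviour is easy to model with a toy in-memory broker in Python (just to illustrate RETAIN semantics – not a real MQTT implementation):

```python
class TinyBroker:
    # toy model of MQTT's RETAIN flag: the last retained message
    # per topic is stored and replayed to late subscribers
    def __init__(self):
        self.retained = {}
        self.subscribers = []

    def publish(self, topic, payload, retain=False):
        if retain:
            self.retained[topic] = payload      # last message wins
        for cb in self.subscribers:
            cb(topic, payload)

    def subscribe(self, cb):
        self.subscribers.append(cb)
        for topic, payload in self.retained.items():
            cb(topic, payload)                  # replay retained state

broker = TinyBroker()
broker.publish("my/switch", "OFF", retain=True)
broker.publish("my/switch", "ON", retain=True)

seen = []
broker.subscribe(lambda t, p: seen.append((t, p)))  # late subscriber
print(seen)     # [('my/switch', 'ON')] - only the last state survives
```

A subscriber that connects long after the switch was flipped still learns its current position immediately.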

The collection of RETAINed messages can also be viewed as a simple key-value database.

For an excellent series of posts about MQTT, see this index page from HiveMQ.

But I digress – back to the history aspect of all this…

If every “sensor/…” topic has its RETAIN flag set, then we’ll receive all the last-known states the moment we connect and subscribe as MQTT client. We can then immediately save these in memory, as “previous” values.

Now, whenever a new value comes in:

  • we have the previous value available
  • we can do whatever we need to do in our application
  • when done, we overwrite the saved previous value with the new one

So in memory, our applications will have access to the previous data, but we don’t have to deal with this aspect in the MQTT broker – it remains totally ignorant of this mechanism. It simply collects messages, and pushes them to apps interested in them: pure pub-sub!
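The whole mechanism fits in a few lines of Python (a sketch only – the topic name is made up):

```python
previous = {}       # last-known value per topic, seeded by RETAIN

def on_message(topic, value):
    prev = previous.get(topic)      # the previous reading, if any
    previous[topic] = value         # overwrite for the next round
    # e.g. compute an instantaneous rate from a cumulative meter
    return None if prev is None else value - prev

on_message("sensor/gas", 100)           # first reading: no rate yet
rate = on_message("sensor/gas", 103)    # rate since last reading: 3
```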

Categories: Community Blog posts

My own cloud based version control tool

mharizanov - Thu, 06/08/2015 - 19:25

There’s no debate about the importance of version control: it is a must-have for any software project. The reversibility, concurrency and history of code edits is what makes it so crucial. I’ve been using mostly GitHub for the purpose; it seems to be the version control tool of choice for many these days. I have a number of personal projects that I want to keep private though, and with GitHub that is a premium option that costs $7/month. These are dollars well spent, however since I already have a cloud hosted VM, I figured I’d just use it for my own cloud version control tool. Being spoiled by GitHub’s intuitive web UI, I didn’t want to go for a raw git tool and was rather looking for a minimal change in experience.

My research ended up with me choosing GitLab CE, a powerful tool pretty much like GitHub, but with the ability to run off my own VM. I already run a $5/month 512MB RAM VPS at DigitalOcean; this blog runs off there. Adding GitLab to that machine is a bit of a stretch, as that is the absolute minimum required configuration. I added a swap file to be able to handle the increased resource demand. Since this will be a private repo, I don’t expect much added load anyway; it will be used just now and then, by yours truly only.

My next challenge was to decide on the installation type: GitLab runs on Nginx and PostgreSQL, and I didn’t want to mess up my current Apache+MySQL setup. Docker was the natural choice of tool to create an isolated environment, and with the very well implemented GitLab Docker image by Sameer Naik, I had my own private cloud version control tool a few minutes later.

To harden security, especially following my recent DDoS/hack attempt woes, I opened the GitLab HTTP port only to the specific IP addresses that I use.

Performance is excellent. I was worried about running at minimum specs, but my setup is evidently quite low on load and everything works smoothly.



Categories: Community Blog posts

Doodling with decoders

JeeLabs - Tue, 04/08/2015 - 23:01

With plenty of sensor nodes here at JeeLabs, I’ve been exploring and doodling a bit, to see how MQTT could fit into this. As expected, it’s all very simple and easy to do.

The first task at hand is to take all those “OK …” lines coming out of a JeeLink running RF12demo, and push them into MQTT. Here’s a quick solution, using Python for a change:

import serial
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    print("Connected with result code "+str(rc))
    #client.subscribe("#")

def on_message(client, userdata, msg):
    # TODO pick up outgoing commands and send them to serial
    print(msg.topic+" "+str(msg.payload))

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message

client.connect("localhost", 1883, 60) # TODO reconnect as needed
client.loop_start()

ser = serial.Serial('/dev/ttyUSB1', 57600)

while True:
    # read incoming lines and split on whitespace
    items = ser.readline().split()
    # only process lines starting with "OK"
    if len(items) > 1 and items[0] == 'OK':
        # convert each item string to an int
        bytes = [int(i) for i in items[1:]]
        # construct the MQTT topic to publish to
        topic = 'raw/rf12/868-5/' + str(bytes[0])
        # convert incoming bytes to a single hex string
        hexStr = ''.join(format(i, '02x') for i in bytes)
        # the payload has 4 extra prefix bytes and is a JSON string
        payload = '"00000010' + hexStr + '"'
        # publish the incoming message
        client.publish(topic, payload) #, retain=True)
        # debugging
        print topic, '=', hexStr

Trivial stuff, once you install this MQTT library. Here is a selection of the messages getting published to MQTT – these are for a bunch of nodes running radioBlip and radioBlip2:

raw/rf12/868-5/3 "0000000000038d09090082666a"
raw/rf12/868-5/3 "0000000000038e09090082666a"
raw/rf12/868-5/3 "0000000000038f090900826666"

What needs to be done next is to decode these into more meaningful results.

Due to the way MQTT works, we can perform this task in a separate process – so here’s a second Python script to do just that. Note that it subscribes and publishes to MQTT:

import binascii, json, struct, time
import paho.mqtt.client as mqtt

# raw/rf12/868-5/3 "0000000000030f230400"
# raw/rf12/868-5/3 "0000000000033c09090082666a"

# avoid having to use "obj['blah']", can use "obj.blah" instead
# see end of
C = type('type_C', (object,), {})

client = mqtt.Client()

def millis():
    return int(time.time() * 1000)

def on_connect(client, userdata, flags, rc):
    print("Connected with result code "+str(rc))
    client.subscribe("raw/#")

def batt_decoder(o, raw):
    o.tag = 'BATT-0'
    if len(raw) >= 10:
        o.ping = struct.unpack('<I', raw[6:10])[0]
    if len(raw) >= 13:
        o.tag = 'BATT-%d' % (ord(raw[10]) & 0x7F)
        o.vpre = 50 + ord(raw[11])
        if ord(raw[10]) >= 0x80:
            o.vbatt = o.vpre * ord(raw[12]) / 255
        elif ord(raw[12]) != 0:
            o.vpost = 50 + ord(raw[12])
    return True

def on_message(client, userdata, msg):
    o = C(); o.time = millis()
    o.node = msg.topic[4:]
    raw = binascii.unhexlify(msg.payload[1:-1])
    if msg.topic == "raw/rf12/868-5/3" and batt_decoder(o, raw):
        #print o.__dict__
        out = json.dumps(o.__dict__, separators=(',',':'))
        client.publish('sensor/' + o.tag, out) #, retain=True)

client.on_connect = on_connect
client.on_message = on_message

client.connect("localhost", 1883, 60)
client.loop_forever()

Here is what gets published, as a result of the above three “raw/…” messages:

sensor/BATT-2 {"node":"rf12/868-5/3","ping":592269,"vpre":152,"tag":"BATT-2","vbatt":63,"time":1435856290589}
sensor/BATT-2 {"node":"rf12/868-5/3","ping":592270,"vpre":152,"tag":"BATT-2","vbatt":63,"time":1435856354579}
sensor/BATT-2 {"node":"rf12/868-5/3","ping":592271,"vpre":152,"tag":"BATT-2","vbatt":60,"time":1435856418569}

So now, the incoming data has been turned into meaningful readings: it’s a node called “BATT-2”, the readings come in roughly every 64 seconds (as expected), and the received counter value is indeed incrementing with each new packet.
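As a quick sanity check of that 64-second claim, we can subtract the “time” fields (in milliseconds) of the three messages above:

```python
# "time" values from the three sensor/BATT-2 messages above
stamps = [1435856290589, 1435856354579, 1435856418569]
gaps = [(b - a) / 1000.0 for a, b in zip(stamps, stamps[1:])]
print(gaps)  # both gaps are 63.99 s, i.e. roughly every 64 seconds
```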

Using a dynamic scripting language such as Python (or Lua, or JavaScript) has the advantage that it will remain very simple to extend this decoding logic at any time.

But don’t get me wrong: this is just an exploration – it won’t scale well as it is. We really should deal with decoding logic as data, i.e. manage the set of decoders and their use by different nodes in a database. Perhaps even tie each node to a decoder pulled from GitHub?
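As a sketch of that decoders-as-data idea, here is a Python 3 port of the battery decoder above, with a plain dict acting as the decoder registry (the helper names and the registry itself are my own additions, not part of the scripts above):

```python
import binascii
import struct

def batt_decoder(raw):
    # same payload layout as in the script above
    out = {'tag': 'BATT-0'}
    if len(raw) >= 10:
        out['ping'] = struct.unpack('<I', raw[6:10])[0]
    if len(raw) >= 13:
        out['tag'] = 'BATT-%d' % (raw[10] & 0x7F)
        out['vpre'] = 50 + raw[11]
        if raw[10] >= 0x80:
            out['vbatt'] = out['vpre'] * raw[12] // 255
        elif raw[12] != 0:
            out['vpost'] = 50 + raw[12]
    return out

# decoding logic as data: adding a node type is one more entry here,
# or eventually one more row in a database
decoders = {
    'raw/rf12/868-5/3': batt_decoder,
}

def decode(topic, payload):
    raw = binascii.unhexlify(payload.strip('"'))
    fn = decoders.get(topic)
    return fn(raw) if fn else None
```

Feeding it the first raw message shown earlier yields the tag BATT-2, ping 592269, vpre 152 and vbatt 63 – the same values as in the published JSON.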

Categories: Community Blog posts

Could a coin cell be enough?

JeeLabs - Tue, 28/07/2015 - 23:01

To state the obvious: small wireless sensor nodes should be small and wireless. Doh.

That means battery-powered. But batteries run out. So we also want these nodes to last a while. How long? Well, if every node lasts a year, and there are a bunch of them around the house, we’ll need to replace (or recharge) some battery somewhere several times a year.

Not good.

The easy way out is a fat battery: either a decent-capacity LiPo battery pack or say three AA cells in series to provide us with a 3.6 .. 4.5V supply (depending on battery type).

But large batteries can be ugly and distracting – even a single AA battery is large when placed in plain sight on a wall in the living room, for example.

So… how far could we go on a coin cell?

Let’s define the arena a bit first: there are many types of coin cells. The smallest ones, a few mm in diameter and meant for hearing aids, hold only a few dozen mAh at most, which is not enough, as you will see shortly. Here are some coin cell examples, from Wikipedia:

The most common coin cell is the CR2032 – 20 mm diameter, 3.2 mm thick. It is listed here as having a capacity of about 200 mAh:

A really fat one is the CR2477 – 24 mm diameter, 7.7 mm thick – and has a whopping 1000 mAh of capacity. It’s far less common than the CR2032, though.

These coin cells supply about 3.0V, but that voltage varies: it can be up to 3.6V unloaded (i.e. when the µC is asleep), down to 2.0V when nearly discharged. This is usually fine with today’s µCs, but we need to be careful with all the other components, and if we’re doing analog stuff then these variations can in some cases really throw a wrench into our project.

Then there are the AAA and AA batteries of 1.2 .. 1.5V each, so we’ll need at least two and sometimes even three of them to make our circuits work across their lifetimes. An AAA cell of 10.5×44.5 mm has about 800..1200 mAh, whereas an AA cell of 14.5×50.5 mm has 1800..2700 mAh of energy. Note that this value doesn’t increase when placed in series!


Let’s see how far we could get with a CR2032 coin cell powering a µC + radio + sensors:

  • one year is 365 x 24 = 8,760 hours
  • one CR2032 coin cell can supply 200 mAh of energy
  • it will last one year if we draw under 23 µA on average
  • it will last two years if we draw under 11 µA on average
  • it will last four years if we draw under 5 µA on average
  • it will last ten years if we draw under 2 µA on average
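The bullet list above comes straight from dividing capacity by lifetime. As a one-line helper (numbers rounded, self-discharge ignored):

```python
def budget_ua(capacity_mah, years):
    # average current, in µA, that empties the cell in `years`
    return capacity_mah * 1000.0 / (years * 365 * 24)

print(budget_ua(200, 1))   # ~22.8 µA for one year on a CR2032
print(budget_ua(200, 10))  # ~2.3 µA for ten years
```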

An LPC8xx in deep sleep mode with its low-power wake-up timer kept running will draw about 1.1 µA when properly set up. The RFM69 draws 0.1 µA in sleep mode. That leaves us roughly a 10 µA margin for all attached sensors if we want to achieve a 2-year battery life.

This is doable. Many simple sensors for temperature, humidity, and pressure can be made to consume no more than a few µA in sleep mode. Or if they consume too much, we could tie their power supply pin to an output pin on the µC and completely remove power from them. This requires an extra I/O pin, and we’ll probably need to wait a bit longer for the chip to be ready if we have to power it up every time. No big deal – usually.

A motion sensor based on passive infrared detection (PIR) draws 60..300 µA however, so that would severely reduce the battery lifetime. Turning it off is not an option, since these sensors need about a minute to stabilise before they can be used.

Note that even a 1 MΩ resistor has a non-negligible 3 µA of constant current consumption. With ultra low-power sensor nodes, every part of the circuit needs to be carefully designed! Sometimes, unexpected consequences can have a substantial impact on battery life, such as grease, dust, or dirt accumulating on an openly exposed PCB over the years…

Door switch

What about sensing the closure of a mechanical switch?

In that case, we can in fact put the µC into deep power down without running the wake-up timer, and let the wake-up pin bring it back to life. Now, power consumption will drop to a fraction of a microamp, and battery life of the coin cell can be increased to over a decade.

Alternately, we could use a contact-less solution, in the form of a Hall effect sensor and a small magnet. No wear, and probably easier to install and hide out of sight somewhere.

The Seiko S-5712 series, for example, draws 1..4 µA when operated at low duty cycle (measuring 5 times per second should be more than enough for a door/window sensor). Its output could be used to wake up the µC, just as with a mechanical switch. Now we’re in the 5 µA ballpark, i.e. about 4 years on a CR2032 coin cell. Quite usable!

It can pay off to carefully review all possible options – for example, if we were to instead use a reed relay as door sensor, we might well end up with the best of both worlds: total shut-off via mechanical switching, yet reliable contact-less activation via a small magnet.

What about the radio

The RFM69 draws from 15 to 45 mA when transmitting a packet. Yet I’m not including this in the above calculations, for good reason:

  • it’s only transmitting for a few milliseconds
  • … and probably less than once every few minutes, on average
  • this means its duty cycle can stay well under 0.001%
  • which translates to less than 0.5 µA – again: on average

Transmitting a short packet only every so often is virtually free in terms of energy requirements. It’s a hefty burst, but it simply doesn’t amount to much – literally!
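The same back-of-the-envelope math, with assumed example numbers (a 2 ms burst at 45 mA, once every 5 minutes):

```python
def avg_current_ua(peak_ma, on_ms, period_s):
    # average draw of a radio that bursts for on_ms every period_s
    duty = (on_ms / 1000.0) / period_s
    return peak_ma * 1000.0 * duty

print(avg_current_ua(45, 2, 300))  # ~0.3 µA on average
```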


Aiming for wireless sensor nodes which never need to listen to incoming RF packets, and only send out brief ones very rarely, we can see that a coin cell such as the common CR2032 will be able to support nodes for several years. Assuming that the design of both hardware and software was properly done, of course.

And if the CR2032 doesn’t cut it – there’s always the CR2477 option to help us further.

Categories: Community Blog posts

Forth on a DIP

JeeLabs - Tue, 21/07/2015 - 23:01

In a recent article, I mentioned the Forth language and the Mecrisp implementation, which includes a series of builds for ARM chips. As it turns out, the mecrisp-stellaris-... archive on the download page includes a ready-to-run build for the 28-pin DIP LPC1114 µC, which I happened to have lying around:

It doesn’t take much to get this chip powered and connected through a modified BUB (set to 3.3V!) so it can be flashed with the Mecrisp firmware. Once that is done, you end up with a pretty impressive Forth implementation, with half of flash memory free for user code.

First thing I tried was to connect to it and list out all the commands it knows – known as “words” in Forth parlance, and listed by entering “words” + return:

$ lpc21isp -termonly -control x /dev/tty.usbserial-AH01A0EG 115200 0
lpc21isp version 1.97
Terminal started (press Escape to abort)

Mecrisp-Stellaris 2.1.3 with M0 core for LPC1114FN28 by Matthias Koch
words

--- Mecrisp-Stellaris Core ---
2dup 2drop 2swap 2nip 2over 2tuck 2rot 2-rot 2>r 2r> 2r@ 2rdrop d2/ d2* dshr dshl dabs dnegate d- d+ s>d um* m* ud* udm* */ */mod u*/ u*/mod um/mod m/mod ud/mod d/mod d/ f* f/ 2! 2@ du< du> d< d> d0< d0= d<> d= sp@ sp! rp@ rp! dup drop ?dup swap nip over tuck rot -rot pick depth rdepth >r r> r@ rdrop rpick true false and bic or xor not clz shr shl ror rol rshift arshift lshift 0= 0<> 0< >= <= < > u>= u<= u< u> <> = min max umax umin move fill @ ! +! h@ h! h+! c@ c! c+! bis! bic! xor! bit@ hbis! hbic! hxor! hbit@ cbis! cbic! cxor! cbit@ cell+ cells flash-khz 16flash! eraseflash initflash hflash! flushflash + - 1- 1+ 2- 2+ negate abs u/mod /mod mod / * 2* 2/ even base binary decimal hex hook-emit hook-key hook-emit? hook-key? hook-pause emit key emit? key? pause serial-emit serial-key serial-emit? serial-key? cexpect accept tib >in current-source setsource source query compare cr bl space spaces [char] char ( \ ." c" s" count ctype type hex. h.s u.s .s words registerliteral, call, literal, create does> <builds ['] ' postpone inline, ret, exit recurse state ] [ : ; execute immediate inline compileonly 0-foldable 1-foldable 2-foldable 3-foldable 4-foldable 5-foldable 6-foldable 7-foldable constant 2constant smudge setflags align aligned align4, align16, h, , ><, string, allot compiletoram? compiletoram compiletoflash (create) variable 2variable nvariable buffer: dictionarystart dictionarynext skipstring find cjump, jump, here flashvar-here then else if repeat while until again begin k j i leave unloop +loop loop do ?do case ?of of endof endcase token parse digit number .digit hold hold< sign #> f#S f# #S # <# f. f.n ud. d. u. . evaluate interpret hook-quit quit eint? eint dint ipsr nop unhandled reset irq-systick irq-fault irq-collection irq-adc irq-i2c irq-uart
--- Flash Dictionary ---
 ok.

That’s over 300 standard Forth words, including all the usual suspects (I’ve shortened the above to only show their names, as Mecrisp actually lists these words one per line).

Here’s a simple way of making it do something – adding 1 and 2 and printing the result:

  • type “1 2 + .” plus return, and it types ” 3 ok.” back at you

Let’s define a new “hello” word:

: hello ." Hello world!" ; ok.

We’ve extended the system! We can now type hello, and guess what comes out:

hello Hello world! ok.

Note the confusing output: we typed “hello” + a carriage return, and the system executed our definition of hello and printed the greeting right after it. Forth is highly interactive!

Here’s another definition, of a new word called “count-up”:

: count-up 0 do i . loop ; ok.

It takes one argument on the stack, so we can call it as follows:

5 count-up 0 1 2 3 4 ok.

Again, keep in mind that the ” 0 1 2 3 4 ok.” was printed out, not typed in. We’ve defined a loop which prints increasing numbers. But what if we forget to provide an argument?

count-up 0 1 2 [...] Stack underflow

Whoops. Not so good: stack underflow was properly detected, but not before the loop actually ran and printed out a bunch of numbers (how many depends on what value happened to be in memory). Luckily, a µC is easily reset!

Permanent code

This post isn’t meant to be an introduction to Mecrisp (or Forth), you’ll have to read other documentation for that. But one feature is worth exploring: the ability to interactively store code in flash memory and set up the system so it runs that code on power up. Here’s how:

compiletoflash  ok.
: count-up 0 do i . loop ;  ok.
: init 10 count-up ;  ok.

In a nutshell: 1) we instruct the system to permanently add new definitions to its own flash memory from now on, 2) we define the count-up word as before, and 3) we (re-)define the special init word which Mecrisp Forth will automatically run for us when it starts up.

Let’s try it, we’ll reset the µC and see what it types out:

$ lpc21isp -termonly -control x /dev/tty.usbserial-AH01A0EG 115200 0
lpc21isp version 1.97
Terminal started (press Escape to abort)

Mecrisp-Stellaris 2.1.3 with M0 core for LPC1114FN28 by Matthias Koch
0 1 2 3 4 5 6 7 8 9

Bingo! Our new code has been saved in flash memory, and starts running the moment the LPC1114 chip comes out of reset. Note that we can get rid of it again with “eraseflash”.

As you can see, it would be possible to write a full-blown application in Mecrisp Forth and end up with a standalone µC chip which then works as instructed every time it powers up.


Forth code runs surprisingly fast. Here is a delay loop which does nothing:

: delay 0 do loop ; ok.

And this code:

10000000 delay ok.

… takes about 3.5 seconds before printing out the final “ok.” prompt. That’s some 3 million iterations per second. Not too shabby, if you consider that the LPC1114 runs at 12 MHz!
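Working backwards from that timing (my arithmetic, not from the Mecrisp docs):

```python
iterations = 10000000
seconds = 3.5
per_sec = iterations / seconds    # ~2.86 million iterations/s
cycles = 12000000.0 / per_sec     # ~4.2 clock cycles per empty loop
print(per_sec, cycles)
```

So an empty do…loop costs only about four machine cycles per iteration on this M0 core.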

Categories: Community Blog posts

RFM69s, OOK, and antennas

JeeLabs - Tue, 14/07/2015 - 23:01

Recently, Frank @ SevenWatt has been doing a lot of very interesting work on getting the most out of the RFM69 wireless radio modules.

His main interest is in figuring out how to receive weak OOK signals from a variety of sensors in and around the house. So first, you’ll need to extract the OOK information – it turns out that there are several ways to do this, and when you get it right, the bit patterns that come out snap into very clear-cut 0/1 groups – which can then be decoded:

Another interesting bit of research went into comparing different boards and builds to see how the setups affect reception. The good news is that the RFM69 is fairly consistent (no extreme variations between different modules).

Then, with plenty of data collection skills and tools at hand, Frank has been investigating the effect of different antennas on reception quality – which is a combination of getting the strongest signal and the lowest “noise floor”, i.e. the level of background noise that every receiver has to deal with. Here are the different antenna setups being evaluated:

Last but not least, is an article about decoding packets from the ELV Cost Control with an RFM69 and some clever tricks. These units report power consumption every 5 seconds:

Each of these articles is worth a good read, and yes… the choice of antenna geometry, its build accuracy, the quality of cabling, and the distance to the µC … they all do matter!

Categories: Community Blog posts

How To Take Care Of Your Parrot

FairTradeElectronics - Fri, 10/07/2015 - 11:16

Parrots are birds that many people keep as pets. They have impressed human beings for centuries, and were kept by ancient kings, warlords, pirates and common people alike. The birds are admired for their colorful feathers, their high intelligence and their ability to talk. There are very many types of parrots, and each type has its own style and personality, quite distinct from the rest. Different types of parrots prefer certain foods and have their own particular ways of living. If you own a parrot, you should take care of the bird wisely and maintain it well, so it can enjoy life in your household. Some parrot breeds include the Colorful Macaw, the Comedic Cockatoo and the Majestic African Grey.

You can keep your parrot happy, playful and healthy by purchasing the most suitable cage for it. Different types of parrots thrive in different types of cages. Your parrot’s cage should offer enough room to play and exercise, since parrots are playful birds that need regular activity. The cage should let your parrot stretch its wings, give it enough space when feeding, and leave room for the parrot to play with its toys and preen its feathers. Many parrot cages are manufactured with a “play-top” that lets the parrot play, and some have an additional pull-out tray beneath the cage that makes collecting the litter easy.

Some pet shops sell parrot cages that you must assemble before putting your parrot in; these come with an assembly manual. You will be required to assemble the bottom stand first and finish by putting the perches and feeders in position. To keep your parrot healthy, you need to clean the cage regularly: replace the cage liners, wipe away food leftovers and waste, and wash the food and water dishes every day. The perches and toys should be thoroughly cleaned at least once a week, and the entire cage at least once a month. Occasionally you will need to dismantle the whole cage, wash every part well, and then re-assemble it.

Different types of parrot cages can be purchased in many stores and pet shops. You can also purchase the parrot cages from the online pet supermarkets.

Take care of your parrot and maintain it well so the bird stays happy and attractive: feed it well, spray it and clean it, and you will be assured of a very good pet.

Categories: Community Blog posts

Rediscover Your Music With the Sennheiser HD700 digital headphones

FairTradeElectronics - Sun, 28/06/2015 - 13:49

Music is only as good as the device you use to listen to it. There are a number of devices that allow you to experience music. One of these devices is the headphones: a pair of large electronic speakers mounted on a frame that goes over the skull and cups over the ears. There are many headphone brands in the world, and each one professes to make the highest quality headphones. Of them all, none is more luxurious, flamboyant and downright effective than the Sennheiser HD700 dynamic stereo headphones.

Introducing Sennheiser

These are an ultra modern pair of headphones. They are open circumaural headphones. This means that the headphone cups have a ventilated back casing. The cups feature a mesh construction that is beautiful to look at and also allows for full, transparent sound. Due to this type of cup construction, these headphones are able to produce warm and balanced music. The Sennheiser headphones are chock full of technology. The speakers in the cups are fitted with ventilated magnets. Thus, when you are listening to music using these headphones, you do not experience distortion of the music by air flowing around the cups. In addition to that, the acoustic casing of these headphones is angled. As a result, they provide superb projection of the music and the notes sound natural.

Sennheiser made use of some advanced drivers in these headphones. Not only are they modern and chic, they can produce high-pressure sound. These drivers also have a flat frequency response, which means your music has no distortion. To boost convenience, Sennheiser made the connector cable of these headphones completely detachable. The cable is made up of four wires of silver-plated, oxygen-free copper, so it conducts the music from your devices well even when played at high frequencies.

Technology meets design

The Sennheiser HD700 headphones are a beauty to look at. They have a space-age mesh on the outer parts of the cups, built to express the industrial processes that make these headphones a reality. The headband is coated in silicone and has a dual-material yoke, which is aesthetically very pleasing. In addition, the cups have a soft velour padding that goes all around your ears, which makes the music you are listening to sound crystal clear. Despite the large cups and strong headband frame, these headphones are super light, thanks to the aerospace-grade materials used in their construction.

Convenient construction

Few headphones in the world are built with convenience in mind. First of all, the cable can be removed and stored separately, and you can also upgrade it in case yours gets worn out. The cable has an indent in the rubber casing that helps you insert it into the headphone casing at the cups. The braided nature of the cable helps it survive the wear and tear of day-to-day use. Moreover, it shows that this is a pair of very high quality headphones.

Categories: Community Blog posts

Ultimate guide to buying a water softener system

FairTradeElectronics - Sun, 21/06/2015 - 04:42

There are a good number of water softener manufacturers on the market, which can make quality hard to judge. If there is one thing that should never be compromised, it is health. Drinking water should be clean, that is not debatable, and treated with the appropriate softener. Soft water will also save you a lot in soap, heating elements such as the kettle will not build up scale, and all water-related chores will go more smoothly. This article provides a guide to buying the best softener for your water from the many companies offering them.

The first thing you need to know is how much you will spend on the service; it doesn’t have to be expensive. You should not overpay for a water softener out of desperation to install the best one. You need to understand the pricing, so that you invest in an affordable system that can recoup the investment over time. The key point is to understand the costs of the various systems on the market.

How much do you need to efficiently and effectively run the system? The answer tells you which systems you can choose from. The energy needed to run the system should not greatly impact your budget. You can look into non-electrical systems if you realize that the water and electricity bills would strain your budget; if you are comfortable with the bills, then that is not a concern.

The size of the equipment is another pointer in this guide: you will need to take measurements of your home. Water usage and plumbing measurements will tell you whether the system you are going to purchase is adequate. You should not purchase a system that will be ineffective: if you have a bigger household with many water-consuming processes, an undersized system will starve your water needs.

How large is the space where you intend to install your system? If you over- or underestimate your space, you will incur unnecessary costs. You stand a better chance of purchasing a suitable system when you provide as much information as you can. Do not buy something too large for your home under the pressure of wanting a larger system; you can save money with an appropriately sized one.

Research! Research! Research! The key to finding the best company for the best water softener system is researching the available ones. First, it must be licensed, insured and bonded. Second, consider referrals from clients, and finally read the reviews left by customers who have installed the systems. Those who have had a system for a longer period will be in the best position to judge.

Are you buying the equipment yourself, or do you want your supplier to provide it? This is also a factor to consider: a plumber or dealer may be included with the equipment, while buying it online means installing it on your own. Choose a system after settling on a particular company; this should put you in a position to pick a water softening system that will serve you well.

Categories: Community Blog posts

Your Mercedes Dealer In Chelmsford

FairTradeElectronics - Sat, 20/06/2015 - 13:36

The Mercedes Benz of Chelmsford car dealership is part of the Jardine Motors Group in the UK, which trades under the brand name Lancaster. Jardine Motors Group UK has grown from a family-run car dealer into a large dealer representing 23 manufacturers all over Europe. It has 70 locations dealing in different car brands, including one in Chelmsford which specializes in Mercedes Benz retail; other sites are found at Lakeside, Southend, Colchester, Ipswich and elsewhere. The Jardine Motor Group is one of the largest and most prestigious automotive retail groups in the country, a reputable dealership that has kept a loyal client base and become the first choice for many customers.

Mercedes Chelmsford sells new and approved used Benz cars as well as their parts. It provides a friendly environment for its customers and offers impeccable care and service. The dealership is excellently stocked, with cars ranging from stunning previously owned ones to the very latest releases of the brand: over 550 Approved Used Mercedes Benz cars grace their extensive stock. The customer is given a variety of colours to choose from when shopping for a new car. To top it off, the dealer offers remarkably affordable prices with flexible finance plans, which opens up access to a wider choice of car models. The staff is not only friendly but also has deep expertise in the Mercedes Benz brand and is passionate about it; their knowledge is vast and can match the client’s requirements comfortably. Mercedes Chelmsford also has a MyService waiting area, complete with free Wi-Fi, so that you are occupied and entertained while waiting for your car to be serviced on site. There are quality refreshments for sale as well, and the Business Zone has an enclosed workspace that clients can use while waiting.
The Mercedes Benz of Chelmsford showroom is not hard to find: it is located on White Hart Lane in Springfield, Chelmsford, on the roundabout opposite Sainsbury’s. The airy and bright showroom is easy to spot from the road, with a forecourt for the new and approved used Mercedes Benz cars. The location is convenient, with close links to East London, Hatfield Peverel, Boreham, Witham, Woodham, Brentwood, Shenfield, Ingatestone, Broomfield and other places. Stock can be delivered across England, Wales, Scotland and Northern Ireland, whether for business or personal use. Their customer service is also exceptional: a full complement of aftersales services is offered in state-of-the-art facilities to keep your Mercedes Benz in excellent condition, and the expertly trained staff attend to your car only with Mercedes Genuine Parts, so you can be sure it is in the best condition and properly taken care of. Say goodbye to the hassle that comes with shopping for a good affordable car, whether new or a decent used Mercedes Benz, and give Mercedes Chelmsford a call.

Categories: Community Blog posts

Take a Look at the Best rice cookers

FairTradeElectronics - Fri, 05/06/2015 - 11:22

Rice is a type of grain that we can cook and consume with a stew of our choice. It is very versatile and can be cooked in various ways. Today, there are appliances that can cook rice automatically. They save us time and are very convenient. Here are the best rice cookers in the market.


Aroma rice cooker

The Aroma rice cooker is heralded as one of the best in the market. It is an affordable appliance that cooks your rice very quickly and makes it super delicious. It is simple to use and has a high capacity. The Aroma rice cookers come in 3 varieties: the 6 cup, 8 cup and 20 cup cookers. The 6 cup cooker costs $15 to $25, the 8 cup costs $25 to $35 and the 20 cup costs $30 to $100. All of them can easily steam food and are made of stainless steel. One major advantage of the Aroma rice cooker is that you can cook the rice and another dish in it at the same time: you can steam some meat while the rice cooks under it. It also makes use of digital technology, so you can set it to activate itself and start cooking after a specific number of hours. Thus, you can load it with rice as you leave the house in the morning and find it already cooked when you get home.

Instant Pot

This is another type of rice cooker that is simply amazing to use. This rice cooker can make almost all types of rice for example Spanish rice. In addition to that, it can be programmed to cook your rice after a set number of hours. In addition to rice, this amazing cooker can prepare soups, beans, meat, poultry and stew. All these will get ready to eat in a very short amount of time. It has a capacity of 6 quarts. For only $135 to $235, the Instant Pot rice cooker is definitely one of the best available.

Cuckoo rice cooker

In terms of style and design, the cuckoo rice cooker bests its competitors by far. It has a round, ergonomic shape that is appealing to touch. Moreover, it is very capable in its ability to cook for you. It has a capacity of 6 cups. Its size is compact and it can easily be carried in luggage during trips. The cooker can easily and effectively cook brown and white rice. For an affordable price of $80 to $120, the Cuckoo rice cooker is a great choice of rice cooker.

Miracle rice cooker

Made of stainless steel and colored in pearl white, this rice cooker is one of shoppers’ favorites. It costs from $70 to $90. It includes a high quality steamer for vegetables as well as a stainless steel bowl, and has a capacity of 8 cups of rice. You can easily cook the rice and steam your vegetables at the same time, which will save you a lot of time when preparing meals since you can do it all in one go. Thus, it is a multitasking rice cooker for a very affordable price.

Visit this site for more detailed rice cooker reviews: and be quick, because those amazing deals don’t last forever!

Categories: Community Blog posts
Categories: Community Blog posts