Meet the Tinker Pico (again)

JeeLabs - Tue, 06/10/2015 - 23:01

It’s time to get some new hardware out the door, which I’ve been doodling with here at JeeLabs for quite some time, and which some of you might like to tinker with as well.

The first new board is the Tinker Pico, which was already pre-announced a while ago. Here’s the final PCB, which is being sent off to production now, soon to appear in the shop:

As before, each week consists of one announcement like this, and one or more articles with further details, ready for release on successive days of the week – three in this case:

This is merely a first step, running pretty basic demo code. Stay tuned for more options…

(For comments, visit the forum area)

Categories: Community Blog posts

Getting back in the groove

JeeLabs - Tue, 29/09/2015 - 23:01

This will be the last post in “summer mode”. Next week, I’ll start posting again with articles that will end up in the Jee Book, as before – i.e. trying to create a coherent story again.

The first step has just been completed: clearing up my workspace at JeeLabs. Two days ago, every flat surface in this area was covered with piles of “stuff”. Now it’s cleaned up:

On the menu for the rest of this year: new products, and lots of explorations / experiments in Physical Computing, I hope. I have an idea of where to go, but no definitive plans. There is a lot going on, and there’s a lot of duplication when you surf around on the web. But this weblog will always be about trying out new things, not just repeating what others are doing.

My focus will remain aimed at “Computing stuff tied to the physical world” as the JeeLabs byline says, in essentially two ways: 1) to improve our living environment in and around the house, and 2) to have fun and tinker with low-cost hardware and open source software.

For one, I’d like to replace the wireless sensor network I’ve been running here, or at least gradually evolve all of the nodes to new ARM-based designs. Not for the sake of change but to introduce new ideas and features, get even better battery lifetimes, and help me further in my quest to reduce energy consumption. I’d also like to replace my HouseMon 0.6 setup which has been running here for years now, but with virtually no change or evolution.

An idea I’d love to work on is to sprinkle lots of new room-node like sensors around the house, to find out where the heat is going – then correlate it to outside temperature and wind direction, for example. Is there some window we can replace, or some other measure we could take to reduce our (still substantial) gas consumption during the cold months? Perhaps the heat loss is caused by the cold rising from our garage, below the living room?

Another long-overdue topic is to start controlling some appliances wirelessly, not just collecting data from what are essentially send-only nodes. This is very different: there is usually power nearby for these nodes, and they need good security against replay attacks.

I’ll want to be able to see the basic “health” indicators of the house at a glance, perhaps shown inconspicuously on a screen on the wall somewhere (as well as on a mobile device).

As always, all my work at JeeLabs will be fully open source for anyone to inspect, adopt, re-use, extend, modify, whatever. You do what you like with it. If you learn from it and enjoy, that’d be wonderful. And if you share and give back your ideas, time, or code: better still!

Stay tuned. Lots of fun with bits, electrons, and molecules ahead :)

Categories: Community Blog posts

Shedding weight

JeeLabs - Tue, 22/09/2015 - 23:01

I’ve been on a weight loss diet lately. In more ways than one…

As an old fan of the Minimal Mac weblog (now extinct), I’ve always been intrigued by simplification. Fewer applications, a less cluttered desk (oops, not there yet!), simpler tools, and leaner workflows. And with every new laptop over the years, I’ve been toning down the use of tons of apps, widgets, RSS feeds, note taking systems, and reminders.

Life is about flow and zen, not about interruptions or being busy. Not for me, anyway.

One app for all my documents (DevonThink), one app for all my quick notes (nvAlt), one programming-editor convention (vim/spacemacs), one off-site backup system (Arq), one local backup (Time Machine), one app launcher / search tool (Spotlight) … and so on.

I’ve recently gone back to doing everything on a single (high-end Mac) laptop. No more tinkering with two machines, Dropbox, syncing, etc. Everything in one place, locally, with a nice monitor plugged in when at my desk. That’s 1920×1200 pixels when on the move, and 2560×1600 otherwise, all gorgeously retina-sharp. I find it amazing how much calmer life becomes when things remain the same every time you come back to it.

I don’t have a smartphone, which probably puts me in the freaky Luddite category. So be it. I now only keep a 4 mm thin credit-card sized junk phone in my pocket for emergency use.

We’ve gone from an iPad each to a shared one for my wife Liesbeth and me. It’s mostly used for internet access and stays in the living room, like newspapers did in the old days.

I’ve gone back to using an e-paper based reader for when I want to sit on the couch or go outside and read. It’s better than an iPad because it’s smaller, lighter, and it’s passively lit, which is dramatically better in daylight than an LCD screen. At night I read less, because in the end it’s much nicer to wake up early and go enjoy daylight again. What a concept, eh?

While reading, I regularly catch myself wanting to access internet. Oops, can’t do. Great!

As for night-time habits: it’s astonishing how much better I sleep when not looking at that standard blueish LCD screen in the evening. Sure, I do still burn the midnight oil banging away on the keyboard, but thanks to a utility called f.lux the screen white balance follows the natural reddening colour shift of the sun across the day. Perfect for a healthy sleep!

Our car sits unused for weeks on end sometimes, as we take the bike and train for almost everything nowadays. It’s too big a step to get rid of it – maybe in a few years from now. So there’s no shedding weight there yet, other than in terms of reducing our CO2 footprint.

And then there’s the classical weight loss stuff. For a few months now, I’ve been following the practice of intermittent fasting, combined with picking up my old habit of going out running again, 2..3 times per week. With these two combined, losing real weight has become ridiculously easy – I’ve shed 5 kg, with 4 more to go until the end of the year.

Eat less and move more – who would have thought that it actually works, eh?

But hey, let me throw in some geek notes as well. Today, I received the Withings Pulse Ox:

(showing the heart rate sensor on the back – the front has an OLED + touch display)

It does exactly what I want: tell the time, count my steps, and measure my running activity, all in a really small package which should last well over a week between charges. It sends its collected data over BLE to a mobile device (i.e. our iPad), with tons of statistics.

Time will tell, but I think this is precisely the one gadget I want to keep in my pocket at all times. And when on the move: keys, credit cards, and that tiny usually-off phone, of course.

Except for one sick detail: why does the Withings “Health Mate” app insist on sending out all my personal fitness tracking data to their website? It’s not a show-stopper, but I hate it. This means that Withings knows all about my activity, and whenever I sync: my location.

So here’s an idea for anyone looking for an interesting privacy-oriented challenge: set up a Raspberry Pi as a firewall + proxy which logs all the information leaking out of the house. It won’t address mobile use, but it ought to provide some interesting data for analysis over a period of a few months. What sort of info is being “shared” by all the apps and tools we’ve come to rely on? Unfortunately, it won’t be of much use with SSL-based sessions.

Categories: Community Blog posts

Bandwagons and islands

JeeLabs - Tue, 15/09/2015 - 23:01

I’ve always been a fan of the Arduino ecosystem, hook, line, and sinker: that little board with its AVR microcontroller, the extensibility through those headers and shields, and the multi-platform IDE with its simple runtime library and access to all the essential hardware.

So much so, that the complete range of JeeNode products has been derived from it.

But I wanted a remote node, a small size, a wireless radio, flexible sensor options, and better battery lifetimes, which is why several trade-offs came out differently: the much smaller physical dimension, the RFM radio, the JeePort headers, and the FTDI interface as alternative for a built-in USB bridge. JeeNodes owe a lot to the Arduino ecosystem.

That’s the thing with big (even at the time) “standards”: they create a common ground, around which lots of people can flock, form a community, and extend it all in often quite surprising and innovative ways. Being able to acquire and re-use knowledge is wonderful.

The Arduino “platform” has a bandwagon effect, whereby synergy and cross-pollination of ideas lead to a huge explosion of projects and add-ons, on the hardware as well as the software side. Just google for “Arduino” … need I say more?

Yet sometimes, being part of the mainstream and building on what has become the “baseline” can be limiting: the 5V convention of early Arduinos doesn’t play well with most of the newer sensor chips these days, nor is it optimal for ultra low-power use. Furthermore, the Wiring library on which the Arduino IDE’s runtime is based is not terribly modular or well suited to today’s newer µCs. And to be honest, the Arduino IDE itself is really quite limited compared to many other editors and IDEs. Last but definitely not least, C++ support in the IDE is severely crippled by the pre-processing applied to turn .ino files into normal .cpp files before compilation.

It’s easy to look back and claim 20-20 vision in hindsight, so in a way most of these issues are simply the result of a platform which has evolved far beyond the original designer’s wildest dreams. No one could have predicted today’s needs at that point in time.

There is also another aspect to point out: there is in fact a conflict w.r.t. what this ecosystem is for. Should it be aimed at the non-techie creative artist, who just wants to get some project going without becoming an embedded microelectronics engineer? Or is it a playground for the tech geek, exploring the world of physical computing, diving in to learn how it works, tinkering with every aspect of this playground, and tracing / extending the boundaries of the technology to expand the user’s horizon?

I have decades of software development experience under my belt (and by now probably another decade of physical computing), so for me the Arduino and JeeNode ecosystem has always been about the latter. I don’t want a setup which has been “dumbed down” to hide the details. Sure, I crave abstraction, to not always have to think about all the low-level stuff, but the fascination for me is that it’s truly open all the way down. I want to be able to understand what’s under the hood, and if necessary tinker with it.

The Arduino technology doesn’t have that many secrets any more for me, I suspect. I think I understand how the chips work, how the entire circuit works, how the IDE is set up, how the runtime library is structured, how all the interrupts work together, yada, yada, yada.

And some of it I’m no longer keen to stick to: the basic editing + compilation setup (“any editor + makefiles” would be far more flexible), the choice of µC (there are so many more fascinating ARM variants out there than what Atmel is offering), and in fact the whole edit-compile-upload-run cycle seems limiting (over-the-air uploads or visual system construction, anyone?).

Which is why for the past year or so, I’ve started bypassing that oh-so-comfy Arduino ecosystem for my new explorations, starting from scratch with an ARM gcc “toolchain”, simple “makefiles”, and using the command-line to drive everything.

Jettisoning everything on the software side has a number of implications. First of all, things become simpler and faster: fewer tools to use, (much) shorter startup delays, and a new runtime library which is small enough to show the essence of what a runtime is. No more.

A nice benefit is that the resulting builds are considerably smaller. Which was an important issue when writing code for that lovely small LPC810 ARM chip, all in an 8-pin DIP.

Another aspect I very much liked is that this has allowed me to learn, and subsequently write about, how the inside of a runtime library really works and how you actually set up a serial port, or a timer, or a PWM output. Even just setting up an I/O pin is closer to the silicon than the digitalWrite(...) abstraction provided by the Arduino runtime.
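As an illustration of what “closer to the silicon” means, here is a minimal sketch of register-level pin setup in C. The register addresses are taken from NXP’s UM10601 user manual for the LPC8xx family – treat them as assumptions and verify against the datasheet for your exact chip:

```c
#include <stdint.h>

/* LPC8xx register addresses, per NXP's UM10601 user manual
   (illustrative -- verify against the datasheet for your chip). */
#define SYSAHBCLKCTRL ((volatile uint32_t *) 0x40048080)
#define GPIO_DIR0     ((volatile uint32_t *) 0xA0002000)
#define GPIO_NOT0     ((volatile uint32_t *) 0xA0002300)

/* The underlying operation is a plain read-modify-write on a 32-bit
   word; taking the register as a parameter keeps the logic testable. */
static inline void regSetBit (volatile uint32_t *reg, int bit) {
    *reg |= (uint32_t) 1 << bit;
}

void pinSetOutput (int pin) {
    regSetBit(SYSAHBCLKCTRL, 6);      /* enable the clock to the GPIO block */
    regSetBit(GPIO_DIR0, pin);        /* configure the pin as an output */
}

void pinToggle (int pin) {
    *GPIO_NOT0 = (uint32_t) 1 << pin; /* writing a 1 flips the output latch */
}
```

Compare this with digitalWrite(): same effect, but every step – clock gating, pin direction, the toggle register – is explicit.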

… but that’s also the flip side of this whole coin: ya gotta dive very deep!

By starting from scratch, I’ve had to figure out all the nitty gritty details of how to control the hardware peripherals inside the µC, tweaking bit settings in some very specific way before it all started to work. Which was often quite a trial-and-error ordeal, since there is nothing you can do other than to (re-) read the datasheet and look at proven example code. Tinker till your hair falls out, and then (if you’re lucky) all of a sudden it starts to work.

The reward for me, was a better understanding, which is indeed what I was after. And for you: working examples, with minimal code, and explained in various weblog posts.

Most of all this deep-diving and tinkering can now be found in the embello repository on GitHub, and this will grow and extend further over time, as I learn more tricks.

Embello is also a bit of an island, though. It’s not used or known widely, and it’s likely to stay that way for some time to come. It’s not intended to be an alternative to the Arduino runtime, it’s not even intended to become the ARM equivalent of JeeLib – the library which makes it easy to use the ATMega-based JeeNodes with the Arduino IDE.

As I see it, Embello is a good source of fairly independent examples for the LPC8xx series of ARM µC’s, small enough to be explored in full detail when you want to understand how such things are implemented at the lowest level – and guess what: it all includes a simple Makefile-based build system, plus all the ready-to-upload firmware.bin binary images. With the weblog posts and the Jee Book as “all-in-one” PDF/ePub documentation.

Which leaves me at a bit of a bifurcation point as to where to go from here. I may have to row back from this “Embello island” approach to the “Arduino mainland” world. It’s no doubt a lot easier for others to “just fire up the Arduino IDE” and load a library for the new developments here at JeeLabs planned for later this year. Not everyone is willing to learn how to use the command line, just to be able to power up a node and send out wireless radio packets as part of a sensor network. Even if that means making the code a bit bulkier.

At the same time, I really want to work without having to use the Arduino IDE + runtime. And I suspect there are others who do too. Once you’ve developed other software for a while, you probably have adopted a certain work style and work environment which makes you productive (I know I have!). Being able to stick to it for new embedded projects as well makes it possible to retain that investment (in routine, knowledge, and muscle memory).

Which is why I’m now looking for a way to get the best of both worlds: retain my own personal development preferences (which a few of you might also prefer), while making it easy for everyone else to re-use my code and projects in that mainstream roller coaster fashion called “the Arduino ecosystem”. The good news is that the Arduino IDE has finally evolved to the point where it can actually support alternate platforms, including ARM.

We’ll see how it goes… all suggestions and pointers welcome!

Categories: Community Blog posts

Heat Pump Performance Monitoring Examples

John Cantor - Wed, 09/09/2015 - 10:11
On 10th September I will be giving a brief presentation at the Ground Source Heat Pump Expo at the Ricoh Arena on the topic ‘Energy or Performance Monitoring’, so it’s timely to write a little blog post here to elaborate on some of the examples I will be showing from some OpenEnergyMonitor dashboards I have been using.
Note: the examples below are just for this blog, and don’t necessarily show the whole story.
I have been working with OpenEnergyMonitor for some time, and now have various installations using OEM kit. In brief, most of the systems I have installed use around 8 temperature sensors, CT power measurement with voltage sensing (real power, including power factor), and/or pulse counting from a standard electrical kWh meter. I have used Grundfos VFS flow sensors, but we are currently working on direct interrogation of a Kamstrup heat meter, giving heat output.
Data is sent via Ethernet and displayed on dashboards (samples below). These real-time graphs are a fantastic tool for the installer and the home owner. They show exactly what is happening now, and what has happened over the previous hour, week, month, or year. The information can be used to improve the design of a system, and also to fine-tune the user settings.
Let’s start with a SIMPLE dashboard example
This type of dashboard can be accessed on any internet-connected computer.
The dashboard above shows a bar graph of daily energy input to the heat pump. This can be checked periodically for unusual values; it shows high use on 15th March. By mousing over the graph we can see that 24.2 kWh were used on that day. The reason for this high use could be investigated. The time period can easily be changed to anything you wish by zooming in or out. Below this is the outside temperature. This might be interesting in its own right, but is more interesting when compared to energy used per day. To the right are a few useful dials and figures – cylinder temperature, room and outside temperature – things any home owner might like to know. We can also see that from 6th April the system is switched off. This type of simple dashboard is ideal for the home owner, but we can make as many dashboards as we like, of varying complexity and detail. These are very useful for installers and designers, and allow a far more in-depth analysis.
Next, a good example of a GSHP connected to underfloor heating.
This is a 12 kW (max output) inverter-driven GSHP operating over a 40-minute period here. The green area represents electrical input and the purple represents heat (a direct reading from the Kamstrup meter). The ratio of these two areas gives the COP. We can see the flow and return temperatures slowly ramping up to a final flow temperature of only 32°C. Since this is September, the ground collector is exceptionally warm. This, along with the low flow temperature, explains why the COP is currently almost 6. Current conditions are ideal, but from tests earlier in the year, we expect to see average COPs for heating in excess of 4. This graph also shows a healthy flow-return dt of 6 degrees, and how nicely the speed of the compressor drops in response to the rising flow temperature.
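The COP arithmetic behind that “ratio of two areas” is simple enough to sketch in a couple of lines of C (the kWh figures in the comment are invented for illustration, not read off the graph):

```c
/* COP over any interval is just heat delivered divided by electricity
   consumed -- i.e. the ratio of the two shaded "areas" on the graph. */
double cop (double heatKWh, double electricKWh) {
    return heatKWh / electricKWh;
}

/* e.g. 4.8 kWh of heat from 0.8 kWh of electricity -> COP of 6 */
```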

Below is another snippet, this time from a fixed-speed GSHP. This one shows the source temperature too.
This shows a period of about a third of a day, with approx. 30-minute run durations, which is quite acceptable. The flow and return are nice and low, with an average flow temperature around 30°C. The underfloor is a good design here, but the source is dipping below zero. This is not ideal, but a zoom-out of yearly temperatures and knowledge of the total heat used would give a better understanding. In this case, since the underfloor is so good, it may be acceptable to have a slightly inferior ground source.
Next, a good example of a GSHP heating a domestic hot water cylinder. The cylinder is copper, with the heat exchanger coil in the bottom section. The heat pump is only 3.5 kW and the coil is a nice large 3 m².

This example shows the heat pump electrical input power as the shaded yellow area (no heat meter fitted). The heating period starts at about 1.3 kW input and finishes at around 1.7 kW. It also shows four temperatures: cylinder top and bottom, and the flow and return temperatures.
This graph shows the early evening heating period, the system having been off on a time clock. As we can see, the top of the cylinder is still at a usable 50°C before heating, but the bottom has dropped to 40°C. The 24-minute heating period shown here starts by heating the bottom water from 40°C. Indeed, this system has been set up carefully to ensure it heats from a lower starting temperature. The heat pump ‘sees’ flow and return temperatures of only 45/40°C at the start. The 40°C cylinder bottom (not very hot) ‘pulls down’ the heat pump’s working temperatures, resulting in high energy efficiency. By looking up the heat pump’s performance data, we can estimate the average COP with reasonable accuracy; here it is about 3.5 at the start of the heating-up cycle.

As the cylinder warms, we can observe the point just before 18:00 where the bottom is becoming warmer than the top, and natural convection causes the top of the cylinder to rise along with the bottom. After about 25 minutes the whole cylinder has reached about 53°C. At the end, the heat pump ‘sees’ temperatures of 55/52°C. This is getting quite hot, and close to the limit of the heat pump’s comfort zone. The COP here may be about 2.8 (taken from heat pump data). We can then look at the time the heat pump has spent at different COPs, and estimate the COP for the whole DHW heating session. It’s somewhere around 3.05.
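Estimating the COP for a whole session like this amounts to a weighted average: weight each operating condition by the electrical energy spent there. A small sketch in C (the segment values in the usage note are illustrative, not the actual logged data):

```c
/* Average COP over a session: total heat delivered divided by total
   electricity consumed, accumulated across operating segments. */
double sessionCop (const double powerKw[], const double copSeg[],
                   const double hours[], int n) {
    double heat = 0.0, elec = 0.0;
    for (int i = 0; i < n; ++i) {
        double e = powerKw[i] * hours[i];  /* electrical energy, kWh */
        elec += e;
        heat += e * copSeg[i];             /* heat delivered, kWh */
    }
    return heat / elec;
}
```

With two equal-length segments at 1.3 kW / COP 3.5 and 1.7 kW / COP 2.8, for example, this lands a little above 3 – consistent with the rough 3.05 estimate above.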
If the system were enabled 24/7, and the sensor position not optimised, the lower cylinder would not drop so far, so the cylinder would heat more frequently from a higher starting point. The average working temperature would be much higher, so the COP would be lower. At worst, the COP could be not much better than 2.8. Added to this, losses from the pipe run and start-up losses could result in worse performance.
We can therefore use the monitor to set the system to operate at a low average temperature, while the cylinder top remains at a useful temperature (say 47°C). We can also see how close the final cylinder temperature (53°C) is to the maximum flow temperature of the heat pump (55°C). This minimises the need for the immersion heater (with its COP of only 1). In this case, the compact copper heat exchanger is exceptionally large compared to the heat pump size, and the coil occupies only the lower section of the cylinder. This gives exceptionally good results, and allows us to heat some of the water in a ‘batch’ from a colder starting point.
For the next example we have a complete contrast. This is a very inefficient system!
This one is a 14 kW ASHP. The heat pump is fine, and functions exceptionally well, but the cylinder heat exchanger is arguably a little small for this big heat pump.
Looking at the graph, heating starts when the middle of the cylinder is 48°C. The flow temperature runs up to 60°C within 15 minutes, at which point the input power drops and the heat pump ‘tracks’ the 60°C flow temperature. After 30 minutes running, the cylinder is 55°C. The flow temperature here is considerably higher than in our previous example, partly because the heat pump does NOT reduce its speed, and partly because of the smaller heat exchanger coil, but it has not done too badly. However, the period after 55°C is clearly grossly inefficient. We can see that the compressor switches off frequently and spends the next 3.5 hours(!) attempting to achieve 60°C. The other thing to mention here is that the distance between the heat pump and cylinder is around 15 m: what is actually happening is that most of the heat is simply being lost from the pipe run. The energy consumed is shown by the yellow area of the power plot. The final 5 degrees (to 60°C) use several times the power (area) of the first section from 1 to 2. The biggest problem here is poor use of the controls: clearly, it would make a lot of sense to adjust the hot water setting to 55°C so that the heat pump stops.

The final ‘floor heating’ period is just as terrible as the DHW period. Here, only 1 or 2 underfloor zones are open, so the flow rate is far too low, as can be seen from the large temperature difference between the flow and return – in excess of 10 degrees. Again, the heat dissipated by the floor is far too small for this large heat pump. This is a clear case of an over-sized heat pump connected to a cylinder and emitter system; a smaller unit would work far better.

Finally, just to top everyone up with a little heat pump theory, I am adding a graph that I use to illustrate heat pump efficiency vs. output temperature. If there is one thing to learn about heat pumps – LEARN THIS.
Here we have the characteristics of 2 sample heat pumps. The vertical Y axis shows the efficiency, the COP. A 3 kW immersion heater gives 3 kW of heat: it has a COP of 1. Heat pumps give out more heat than they consume because they extract heat from outside. The X axis shows operating output temperatures, ranging from tepid on the left to very hot on the right.
I am showing 2 typical heat pumps. A typical (R407C refrigerant) unit can reach say 55°C, whilst a ‘high temperature’ (R134a refrigerant) unit may achieve 65°C. Anyhow, whatever the type, we can see that to the left, where the water is lukewarm, the heat pump has an easy time, hence the COP is very high (1 kW in for 4.5 kW of heat). I liken this to driving a car up a slight incline: we should get good fuel economy here, maybe 50 mpg. However, if we heat up to 65°C, the temperature ‘lift’ is great, and this is a little like driving a car up a steep incline – we are in a low gear and the MPG is only 20! In the same way that you will NEVER get good fuel economy when driving up a very long steep hill, you will not get a good COP when heating to a high temperature. That said, it should always be better than using an immersion heater.
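The shape of those curves follows from basic thermodynamics: the ideal (Carnot) COP is T_hot / (T_hot − T_cold) in kelvin, and real heat pumps achieve some fraction of that ideal – very roughly 0.4 to 0.5 for typical units. A rough illustrative model in C (the 0.45 fraction in the usage note is an assumption for illustration, not measured data):

```c
/* Estimate COP from source and output temperatures (in Celsius) using
   the Carnot limit scaled by an assumed "fraction of Carnot".
   An illustrative model only -- real units vary. */
double estCop (double sourceC, double outputC, double carnotFraction) {
    double th = outputC + 273.15;   /* condenser side, kelvin */
    double tc = sourceC + 273.15;   /* evaporator side, kelvin */
    return carnotFraction * th / (th - tc);
}
```

With a 5°C source and a 0.45 fraction, this gives roughly 4.6 at a 35°C output but only about 2.5 at 65°C – the ‘slight incline’ versus ‘steep hill’ of the car analogy.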
So, the performance of any heat pump should be understood, and data should be available for all models relating output temperatures to specific ground-source or air-source temperatures.
If you have a high temperature heat pump, it doesn’t mean you always have to operate it at a high temperature. If you operate it at lower temperatures, the performance should be far better. It is, however, always a good idea to find out the working limits of your unit.
Some of you may have wondered about the flattening of the curve at very low temperatures. I have drawn it that way because some heat pumps are not very good at the extremities (limits) of their performance range. However, I am finding (and partly guessing) that most inverter heat pumps with electronic expansion valves work very well over a very wide range, so some heat pumps will easily exceed COPs of 5 in ideal conditions (usually late spring or autumn, when not much heating is needed). Never forget mid-winter conditions – this is when we need the most heat, so it is the operational area to focus on.

So, it is with an understanding of the characteristics of heat pumps that performance monitoring can be used to great advantage. In general, we want heat pumps to spend as much time as possible at lower output temperatures (and higher source temperatures).

Categories: Community Blog posts


JeeLabs - Tue, 08/09/2015 - 23:01

No techie post this time, just some pictures from a brief trip last week to Magdeburg:

… and on the inside, even more of a little playful fantasy world:

This was designed by the architect Friedensreich Hundertwasser at the turn of this century. It was the last project he worked on, and the building was in fact completed after his death.

Feels a bit like an Austrian (and more restrained) reincarnation of Antoni Gaudí to me.

A playful note added to a utilitarian construction – I like it!

Categories: Community Blog posts

Space tools

JeeLabs - Tue, 01/09/2015 - 23:01

It’s a worrisome sign when people start to talk about tools. No real work to report on?

With that out of the way, let’s talk about tools :) – programming tools.

Everyone has their favourite programmer’s editor and operating system. Mine happens to be Vim (MacVim) and Mac OSX. Yours will likely be different. Whatever works, right?

Having said that, I found myself a bit between a rock and a hard place lately, while trying out ClojureScript, that Lisp’y programming language I mentioned last week. The thing is that Lispers tend to use something called the REPL – constantly so, during editing in fact.

What’s a REPL for?

Most programming languages use a form of development based on frequent restarts: edit your code, save it, then re-run the app, re-run the test suite, or refresh the browser. Some development setups have turned this into a very streamlined and convenient fine art. This works well – after all, why else would everybody be doing things this way, right?

But there’s a drawback: when you have to stop the world and restart it, it takes some effort to get back to the exact context you’re working on right now. Either by creating a good set of tests, with “mocks” and “spies” to isolate and analyse the context, or by repeating the steps to get to that specific state in case of interactive GUI- or browser-based apps.

Another workaround, depending on the programming language support for it, is to use a debugger, with “breakpoints” and “watchpoints” set to stop the code just where you want it.

But what if you could keep your application running – assuming it hasn’t locked up, that is? So it’s still running, but just not yet doing what it should. What if we could change a few lines of code and see if that fixes the issue? What if we could edit inside a running app?

What if we could in fact build an app from scratch this way? Take a small empty app, define a function, load it in, see if it works, perhaps call the function from a console-style session running inside the application? And then iterate, extend, tweak, fix, add code… live?

This is what people have been doing with Lisp for over half a century. With a “REPL”:

A similar approach has been possible for some time in a few other languages (such as Tcl). But it’s unfortunately not mainstream. It can take quite some machinery to make it work.

While a traditional edit-save-run cycle takes a few seconds, REPL-based coding is instant.

A nice example of this in action is in Tim Baldridge’s videos about Clojure. He never starts up an application in fact: he just fires up the REPL in an editor window, and then starts writing little pieces of code. To try it out, he hits a key combination which sends the parenthesised form currently under the cursor to the REPL, and that’s it. Errors in the code can be fixed and resent at will. Definitions, but also little test calls, anything.

More substantial bits of code are “require”d in as needed. So what you end up with is keeping a REPL context running at all times, and loading stuff into it. This isn’t limited to server-side code, it also works in the browser: enter “(js/alert "Hello")” and up pops a dialog. All it takes is the REPL running inside the browser, and some websocket magic. In the browser, it’s a bit like typing everything into the developer console, but unlike that setup, you get to keep all the code and trials you write – in the editor, with all its conveniences.


Another recent development in ClojureScript land is Figwheel by Bruce Hauman. There’s a 6-min video showing an example of use, and a very nice 45-min video where he goes into things in a lot more detail.

In essence, Figwheel is a file-driven hot reloader: you edit some code in your editor, you save the file, and Figwheel forces the browser (or node.js) to reload the code of just that file. The implementation is very different, but the effect is similar to Dan Abramov’s React Hot Loader – which works for JavaScript in the browser, when combined with React.

There are some limitations for what you can do in both the REPL-based and the Figwheel approach, but if all else fails you can always restart things and have a clean slate again.

The impact of these two approaches on the development process is hard to overstate: it’s as if you’re inside the app, looking at things and tweaking it as it runs. App restarts are far less common, which means server-side code can just keep running as you develop pieces of it further. Likewise, browser side, you can navigate to a specific page and context, and change the code while staying on that page and in that context. Even a scroll position or the contents of an input box will stay the same as you edit and reload code.

For an example Figwheel + REPL setup running both in the browser and in node.js at the same time, see this interesting project on GitHub. It’s able to do hot reloads on the server as well as on (any number of) browsers – whenever code changes. Here’s a running setup:

And here’s what I see when typing “(fig-status)” into Figwheel’s REPL:

Figwheel System Status
----------------------------------------------------
Autobuilder running? : true
Focusing on build ids: app, server
Client Connections
     server: 1 connection
     app: 1 connection
----------------------------------------------------

This uses two processes: a Figwheel-based REPL (JVM), and a node-based server app (v8). And then of course a browser, and an editor for actual development. Both Node.js and the browser(s) connect into the Figwheel JVM, which also lets you type in ClojureScript.


So what do we need to work in this way? Well, for one, the language needs to support it and someone needs to have implemented this “hot reload” or “live code injection” mechanism.

For Figwheel, that’s about it. You need to write your code files in a certain way, allowing it to reload what matters without messing up the current state – “defonce” does most of this.
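A sketch of what “defonce” amounts to, translated into JavaScript terms (the names are hypothetical): state defined “once” survives a reload, while everything below it can be redefined freely.

```javascript
// On a hot reload the file's code runs again from the top, but state
// defined "once" is kept as-is instead of being reset.
globalThis.appState = globalThis.appState ?? { counter: 0 }; // like defonce

// Code below this point can be reloaded at will - it only *uses* the state.
function render(state) {
  return `counter is ${state.counter}`;
}

globalThis.appState.counter += 1;
// Re-running the first line (as a reload would) leaves appState intact.
```

That separation is what lets Figwheel swap code without messing up the current state.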

But the real gem is the REPL: having a window into a running app, and peeking and poking at its innards while in flight. If “REPL” sounds funny, then just think of it as “interactive command prompt”. Several scripting languages support this. Not C, C++, or Go, alas.

For this, the editor should offer some kind of support, so that a few keystrokes will let you push code into the app. Whether a function definition or a printf-type call, whatever.

And that’s where vim felt a bit inadequate: there are a few plugins which try to address this, but they all have to work around the limitation that vim has no built-in terminal.

In Emacs-land, there has always been “SLIME” for traditional Lisp languages, and now there is “CIDER” for Clojure (hey, I didn’t make up those names, I just report them!). In a long-ago past, I once tried to learn Emacs for a very intense month, but I gave up. The multi-key acrobatics are not for me, and I have tons of vim key shortcuts stashed in muscle memory by now. Some people even point to research to argue that vim’s way works better.

For an idea of what people can do when they practically live inside their Emacs editor, see this 18-min video. Bit hard to follow, but you can see why some people call Emacs an OS…

Anyway, I’m not willing to unlearn those decades of vim conventions by now. I have used many other editors over the years (including TextMate, Sublime Text, and recently Atom), but I always end up going back. The mouse has no place in editing, and no matter how hard some editors try to offer a “vim emulation mode”, they all fail in very awkward ways.

And then I stumbled upon this thing. All I can say is: “Vim, reloaded”.

Wow – a 99% complete emulation, perhaps one or two keystrokes which work differently. And then it adds a whole new set of commands (based on the space bar, hence the name), incredibly nice pop-up help as you type the shortcuts, and… underneath, it’s… Emacs ???

Spacemacs comes with a ton of nice default configuration settings and plugins. Other than some font changes and some extra language bindings, I hardly change it. My biggest config tweak so far has been to make it start up with a fixed window position and size.

So there you have it. I’m switching my world over to ClojureScript as main programming language (which sounds more dramatic than it is, since it’s still JavaScript + browser + node.js in the end), and I’m switching my main development tool to Emacs (but that too is less invasive than it sounds, since it’s Vim-like and I can keep using vim on remote boxes).

Categories: Community Blog posts

Clojure and ClojureScript

JeeLabs - Tue, 25/08/2015 - 23:01

I’m in awe. There’s a (family of) programming languages which solves everything. Really.

  • it works on the JVM, V8, and CLR, and it interoperates with what already exists
  • it’s efficient, it’s dynamic, and it has parallelism built in (threaded or cooperative)
  • it’s so malleable, that any sort of DSL can trivially be created on top of it

As this fella says at this very point in his video: “State. You’re doing it wrong.”

I’ve been going about programming in the wrong way for decades (as a side note: the Tcl language did get it right, up to a point, despite some other troublesome shortcomings).

The language I’m talking about re-uses the best of what’s out there, and even embraces it. All the existing libraries in JavaScript can be used when running in the browser or in Node.js, and similarly for Java or C# when running in those contexts. The VMs, as I already mentioned, also get reused, which means that decades of research and optimisation are taken advantage of.

There’s even an experimental version of this (family of) programming languages for Go, so there again, it becomes possible to add this approach to whatever already exists out there, or is being introduced now or in the future.

Due to the universal reach of JavaScript these days, on browsers, servers, and even on some embedded platforms, that is the target which interests me most, so what I’ve been sinking my teeth into recently is “ClojureScript”, which specifically targets JavaScript.

Let me point out that ClojureScript is not another “pre-processor” like CoffeeScript.

“State. You’re doing it wrong.”

As Rich Hickey, who spoke those words in the above video quickly adds: “which is ok, because I was doing it wrong too”. We all took a wrong turn a few decades ago.

The functional programming (FP) people got it right… Haskell, ML, that sort of thing.

Or rather: they saw the risks and went to a place where few people could follow (monads?).

What Clojure and ClojureScript do, is to bring a sane level of FP into the mix, with “immutable persistent datastructures”, which makes it all very practical and far easier to build with and reason about. Code is a transformation: take stuff, do things with it, and return derived / modified / updated / whatever results. But don’t change the input data.
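A minimal JavaScript sketch of that transformation style (the order object is just an illustration): derive a new value, leave the input alone.

```javascript
// Code as transformation: take stuff, return a derived result,
// never mutate the input.
const order = { id: 7, paid: false };

function markPaid(o) {
  return { ...o, paid: true }; // a fresh object; the argument is untouched
}

const paidOrder = markPaid(order);
console.log(order.paid, paidOrder.paid); // the original is still unpaid
```

Nothing downstream can be surprised by `order` changing behind its back, which is exactly what makes such code easy to reason about.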

Why does this matter?

Let’s look at a recent project taking the world by storm: React, yet another library for building user interfaces (in the browser and on mobile). The difference with AngularJS is the conceptual simplicity. To borrow another image from a similar approach in CycleJS:

Things happen in a loop: the computer shows stuff on the screen, the user responds, and the computer updates its state. In a talk by CycleJS author Andre Staltz, he actually goes so far as to treat the user as a function: screen in, key+mouse actions out. Interesting concept!

Think about it:

  • facts are stored on the disk, somewhere on a network, etc
  • a program is launched which presents (some of it) on the screen
  • the user interface leads us, the monkeys, to respond and type and click
  • the program interprets these as intentions to store / change something
  • it sends out stuff to the network, writes changes to disk (perhaps via a database)
  • these changes lead to changes to what’s shown on-screen, and the cycle repeats

Even something as trivial as scrolling down is a change to a scroll position, which translates to a different part of a list or page being shown on the screen. We’ve been mixing up the view side of things (what gets shown) with the state (some would say “model”) side, which in this case is the scroll position – a simple number. The moment you take them apart, the view becomes nothing more than a function of that value. New value -> new view. Simple.
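That scroll example can be written down directly – the view is nothing more than a function of one number (a sketch, with made-up data):

```javascript
// The visible part of a list is purely a function of the scroll position.
const lines = ['one', 'two', 'three', 'four', 'five'];

function visible(scrollPos) {
  return lines.slice(scrollPos, scrollPos + 3); // new value -> new view
}

console.log(visible(0)); // top of the list
console.log(visible(2)); // scrolled down two lines
```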

Nowhere in this story is there a requirement to tie state into the logic. It didn’t really help that object orientation (OO) taught us to always combine and even hide state inside logic.

Yet I (we?) have been programming with variables which remember / change and loops which iterate and increment, all my life. Because that’s how programming works, right?

Wrong. This model leads to madness. Untraceable, undebuggable, untestable, unverifiable.

In a way, Test-Driven Development (TDD) shows us just how messy it got: we need to explicitly compare what a known input leads to with the expected outcome. Which is great, but writing code which is testable becomes a nightmare when there is state everywhere. So we invented “mocks” and “spies” and what-have-you, to be able to isolate that state again.

What if everything we implemented in code were easily reducible to small steps which cleanly compose into larger units? Each step being a function which takes one or more values as state and produces results as new values? Without side-effects or state variables?

Then again, purely functional programming with no side-effects at all is silly in a way: if there are zero side-effects, then the screen wouldn’t change, and the whole computation would be useless. We do need side-effects, because they lead to a screen display, physical-computing stuff such as motion & sound, saved results, messages going somewhere, etc.

What we don’t need, is state sprinkled across just about every single line of our code…

To get back to React: that’s exactly where it revolutionises the world of user interfaces. There’s a central repository of “the truth”, which is in fact usually nothing more than a deeply nested JavaScript data structure, from which everything shown on the web page is derived. No more messing with the DOM, putting all sorts of state into it, having to update stuff everywhere (and all the time!) for dynamic real-time apps.

React (a.k.a. ReactJS) treats an app as a pipeline: state => view => DOM => screen. The programmer designs and writes the first two, React takes care of the DOM and screen.
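The first two stages of that pipeline can be sketched as plain functions, without React (the todo data is made up):

```javascript
// state => view: a pure function from a data structure to a description
// of what should be on screen. React takes it from here to the real DOM.
const state = { todos: ['insulate house', 'fix node 3'] };

function view(s) { // pure: same state in, same markup out
  return `<ul>${s.todos.map(t => `<li>${t}</li>`).join('')}</ul>`;
}

console.log(view(state));
```

Change `state`, call `view` again, and you have the next screen – no in-place DOM surgery in sight.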

I’ll get back to ClojureScript, please hang in there…

What’s missing in the above, is user interaction. We’re used to the following:

mouse/keyboard => DOM => controller => state

That’s the Model-View-Controller (MVC) approach, as pioneered by Smalltalk in the 80’s. In other words: user interaction goes in the opposite direction, traversing all those steps we already have in reverse, so that we end up with modified state all the way back to the disk.

This is where AngularJS took off. It was founded on the concept of bi-directional bindings, i.e. creating an illusion that variable changes end up on the screen, and screen interactions end up back in those same variables – automatically (i.e. all taken care of by Angular).

But there is another way.

Enter “reactive programming” (RP) and “functional reactive programming” (FRP). The idea is that user interaction still needs to be interpreted and processed, but that the outcome of such processing completely bypasses all the above steps. Instead of bubbling back up the chain, we take the user interaction, define what effect it has on the original central-repository-of-the-truth, period. No figuring out what our view code needs to do.

So how do we update what’s on screen? Easy: re-create the entire view from the new state.

That might seem ridiculously inefficient: recreating a complete screen / web-page layout from scratch, as if the app was just started, right? But the brilliance of React (and several designs before it, to be fair) is that it actually manages to do this really efficiently.

Amazingly so in fact. React is faster than Angular.

Let’s step back for a second. We have code which takes input (the state) and generates output (some representation of the screen, DOM, etc). It’s a pure function, i.e. it has no side effects. We can write that code as if there is no user interaction whatsoever.

Think – just think – how much simpler code is if it only needs to deal with the one-way task of rendering: what goes where, how to visualise it – no clicks, no events, no updates!

Now we need just two more bits of logic and code:

  1. we tell React which parts respond to events (not what they do, just that they do)

  2. separately, we implement the code which gets called whenever these events fire, grab all relevant context, and report what we need to change in the global state

That’s it. The concepts are so incredibly transparent, and the resulting code so unbelievably clean, that React and its very elegant API is literally taking the Web-UI world by storm.
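Point 2 can be sketched as a single pure function: the handler only reports what the next global state should be, it never touches the view (Redux calls this a reducer; the event shape here is illustrative):

```javascript
// An event handler as a pure function: (state, event) => new state.
function handle(state, event) {
  switch (event.type) {
    case 'add-todo':
      return { ...state, todos: [...state.todos, event.text] };
    default:
      return state;
  }
}

const before = { todos: ['a'] };
const after = handle(before, { type: 'add-todo', text: 'b' });
// "before" is unchanged; the view is simply re-rendered from "after".
```

Because the handler is pure, it can be unit-tested with plain equality checks – no mocks or spies needed.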

Back to ClojureScript

So where does ClojureScript fit in, then? Well, to be honest: it doesn’t. Most people seem to be happy just learning “The React Way” in normal main-stream JavaScript. Which is fine.

There are some very interesting projects on top of React, such as Redux and React Hot Loader. This “hot loading” is something you have to see to believe: editing code, saving the file, and picking up the changes in a running browser session without losing context. The effect is like editing in a running app: no compile-run-debug cycle, instant tinkering!

Interestingly, Tcl also supported hot-loading. Not sure why the rest of the world didn’t.

Two weeks ago I stumbled upon ClojureScript. Sure enough, they are going wild over React as well (with Om and Reagent as the main wrappers right now). And with good reason: it looks like Om (built on top of React) is actually faster than React used from JavaScript.

The reason for this is their use of immutable data structures, which forces you to not make changes to variables, arrays, lists, maps, etc. but to return updated copies (which are very efficient through a mechanism called “structural sharing”). As it so happens, this fits the circular FRP / React model like a glove. Shared trees are ridiculously easy to diff, which is the essence of why and how React achieves its good performance. And undo/redo is trivial.
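Structural sharing can be shown in miniature with plain objects (this illustrates the principle, not React’s or Clojure’s actual implementation):

```javascript
// The updated tree reuses every unchanged subtree, so "did this part
// change?" becomes a single reference comparison.
const tree = { header: { title: 'Hi' }, body: { text: 'old' } };
const next = { ...tree, body: { text: 'new' } }; // header is shared, not copied

console.log(next.header === tree.header); // true  -> subtree unchanged, skip it
console.log(next.body === tree.body);     // false -> only this part re-renders
```

Undo/redo falls out for free: just hold on to the old root, which still shares most of its structure with the new one.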

Hot-loading is normal in the Clojure & ClojureScript world. Which means that editing in a running app is not a novelty at all, it’s business as usual. As with any Lisp with a REPL.

Ah, yes. You see, Clojure and ClojureScript are Lisp-like in their notation. The joke used to be that LISP stands for “Lots of Irritating Little Parentheses”. When you get down to it, it turns out that there are not really many more of them than there are parens and braces in JavaScript.

But notation is not what this is all about. It’s the concepts and the design which matter.

Clojure (and ClojureScript) seem to be born out of necessity. It’s fully open source, driven by a small group of people, and evolving in a very nice way. The best introduction I’ve found is in the first 21 minutes of the same video linked to at the start of this post.

And if you want to learn more: just keep watching that same video, 2:30 hours of goodness. Better still: this 1 hour video, which I think summarises the key design choices really well.

No static typing as in Go, which I often found myself fighting anyway (and type hints can be added back in where needed). No callback hell as in JavaScript & Node.js, because Clojure has implemented Go’s CSP, with channels and go-routines as a library. Which means that even in the browser, you can write code as if there were multiple processes, communicating via channels in either synchronous or asynchronous fashion. And yes, it really works.
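The channel idea can be approximated in a few lines of JavaScript (a toy sketch, nothing like the real core.async implementation): a queue plus promises gives a synchronous-looking “take” against an asynchronous “put”.

```javascript
// A toy unbuffered-ish channel: puts are queued until someone takes,
// takes park (as a pending promise) until someone puts.
function channel() {
  const values = [];
  const takers = [];
  return {
    put(v) {
      takers.length ? takers.shift()(v) : values.push(v);
    },
    take() {
      return values.length
        ? Promise.resolve(values.shift())
        : new Promise((resolve) => takers.push(resolve));
    },
  };
}

// Two "processes" talking over the channel:
const ch = channel();
ch.take().then((v) => console.log('got', v)); // parks until a value arrives
ch.put(42);
```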

All the libraries from the browser + Node.js world can be used in ClojureScript without special tricks or wrappers, because – as I said – CLJ & CLJS embrace their host platforms.

The big negative is that CLJ/CLJS are different and not main-stream. But frankly, I don’t care at this point. Their conceptual power is that of Lisp and functional programming combined, and this simply can’t be retrofitted into the popular languages out there.

A language that doesn’t affect the way you think about programming, is not worth knowing — Alan J. Perlis

I’ve been watching many 15-minute videos on Clojure by Tim Baldridge (it costs $4 to get access to all of them), and this really feels like it’s lightyears ahead of everything else. The amazing bit is that a lot of that (such as “core.async”) catapults into plain JavaScript.

As you can probably tell, I’m sold. I’m willing to invest a lot of my time in this. I’ve been doing things all wrong for a couple of decades (CLJ only dates from 2007), and now I hope to get a shot at mending my ways. I’ll report my progress here in a couple of months.

It’s not for the faint of heart. It’s not even easy (but it is simple!). Life’s too short to keep programming without the kind of abstractions CLJ & CLJS offer. Eh… In My Opinion.

Categories: Community Blog posts

A feel for numbers

JeeLabs - Tue, 18/08/2015 - 23:01

It’s often really hard to get a meaningful sense of what numbers mean – especially huge ones.

What is a terabyte? A billion euro? A megawatt? Or a thousand people, even?

I recently got our yearly gas bill, and saw that our consumption was about 1600 m3 – roughly the same as last year. We’ve insulated the house, we keep the thermostat set fairly low (19°C), and there is little more we can do – at least in terms of low-hanging fruit. Since the house has an open stairway to the top floors, it’s not easy to keep the heat localised.

But what does such a gas consumption figure mean?

For one, those 1600 m3/y are roughly 30,000 m3 in the next twenty years, which comes to about €20,000, assuming Dutch gas prices will stay the same (a big “if”, obviously).

That 30,000 m3 sounds like a huge amount of gas, for just two people to be burning up.

Then again, a volume of 31 x 31 x 31 m sounds a lot less ridiculous, doesn’t it?
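The arithmetic behind these estimates, for the record (the gas price here is an assumption, roughly the Dutch rate at the time):

```javascript
// Twenty years of gas at the current rate, in volume and euros.
const perYear = 1600;               // m3 of gas per year
const total = perYear * 20;         // 32,000 m3 - "roughly 30,000"
const pricePerM3 = 0.65;            // EUR, assumed Dutch gas price
const cost = total * pricePerM3;    // on the order of 20,000 EUR
const cubeEdge = Math.cbrt(30000);  // ~31 m: the 31 x 31 x 31 m cube

console.log(total, Math.round(cost), cubeEdge.toFixed(1));
```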

Now let’s tackle it from another angle, using the Wolfram Alpha “computational knowledge engine”, which is a really astonishing free service on the internet, as you’ll see.

How much gas is estimated to be left on this planet? Wolfram Alpha has the answer:

How many people are there in the world?

Ok, let’s assume we give everyone today an equal amount of those gas reserves:

Which means that we will reach our “allowance” (for 2) 30 years from now. Now that is a number I can grasp. It does mean that in 30 years or so it’ll all be gone. Totally. Gone.

I don’t think our children and all future generations will be very pleased with this…

Oh, and for the geeks among us: note how incredibly easy it is to get at some numerical facts, and how accurately and easily Wolfram Alpha handles all the unit conversions. We now live in a world where the well-off western part of the internet-connected crowd has instant and free access to all the knowledge we’ve amassed (Wikipedia + Google + Wolfram Alpha).

Facts are no longer something you have to learn – just pick up your phone / tablet / laptop!

But let’s not stop at this gloomy result. Here’s another, more satisfying, calculation using figures from an interesting UK site, called Electropedia (thanks, Ard!):

[…] the total Sun’s power intercepted by the Earth is 1.740×10^17 Watts

When accounting for the earth’s rotation, seasonal and climatic effects, this boils down to:

[…] the actual power reaching the ground generally averages less than 200 Watts per square meter

Aha, that’s a figure I can relate to again, unlike the “10^17” metric in the total above.

Let’s google for “heat energy radiated by one person”, which leads to this page, and on it:

As I recall, a typical healthy adult human generates in the neighborhood of 90 watts.

Interesting. Now an average adult’s calorie intake of 2400 kcal/day translates to 2.8 kWh. Note how this nicely matches up (at least roughly): 2.8 kWh/day is 116 watts, continuously. So yes, since we humans just burn stuff, it’s bound to end up as mostly heat, right?
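That check, written out (using 1 kcal = 4184 J):

```javascript
// 2400 kcal/day expressed as energy per day and as continuous power.
const joulesPerDay = 2400 * 4184;          // 1 kcal = 4184 J
const kWhPerDay = joulesPerDay / 3.6e6;    // 1 kWh = 3.6 MJ -> ~2.8 kWh
const watts = joulesPerDay / (24 * 3600);  // ~116 W, same ballpark as 90 W

console.log(kWhPerDay.toFixed(1), Math.round(watts));
```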

But there is more to be said about the total solar energy reaching our little blue planet:

Integrating this power over the whole year the total solar energy received by the earth will be: 25,400 TW X 24 X 365 = 222,504,000 TeraWatthours (TWh)

Yuck, those incomprehensible units again. Luckily, Electropedia continues, and says:

[…] the available solar energy is over 10,056 times the world’s consumption. The solar energy must of course be converted into electrical energy, but even with a low conversion efficiency of only 10% the available energy will be 22,250,400 TWh or over a thousand times the consumption.

That sounds promising: we “just” need to harvest it, and end all fossil fuel consumption.

And to finish it off, here’s a simple calculation which also very much surprised me:

  • take a world population of 7.13 billion people (2013 figures, but good enough)
  • place each person on his/her own square meter
  • put everyone together in one spot (tight, but hey, the subway is a lot tighter!)
  • what you end up with, is of course 7.13 billion square meters, i.e. 7,130,000,000 m2
  • sounds like a lot? how about an area of 70 by 100 km? (1/6th of the Netherlands)
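The same back-of-the-envelope in code:

```javascript
// One square meter per person, for everyone on the planet.
const people = 7.13e9;         // world population, 2013
const areaKm2 = people / 1e6;  // m2 -> km2: 7,130 km2
const rectangle = 70 * 100;    // a 70 x 100 km rectangle is 7,000 km2
// the Netherlands is about 41,500 km2, so this is roughly 1/6th of it

console.log(areaKm2, rectangle);
```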

Then, googling again, I found out that 71% of the surface of our planet is water.

And with a little more help from Wolfram Alpha, I get this result:

That’s 144 x 144 meters per person, for everyone on this planet. Although not every spot is inhabitable, of course. But at least these are figures I can fit into my head and grasp!
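And a rough check of that last figure (Earth’s surface area taken as about 510 million km2, with 29% of it land):

```javascript
// Land area per person, as a square with side "side" meters.
const surfaceM2 = 510e6 * 1e6;      // Earth's surface in m2
const landM2 = surfaceM2 * 0.29;    // the 29% that isn't water
const perPerson = landM2 / 7.13e9;  // ~20,700 m2 for each of us
const side = Math.sqrt(perPerson);  // ~144 m

console.log(Math.round(side));
```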

Now if only I could understand why we can’t solve this human tragedy. Maths won’t help.

Categories: Community Blog posts

How To Take Care Of Your Parrot

FairTradeElectronics - Fri, 10/07/2015 - 11:16

Parrots are birds that are kept by many people as pets. These birds have impressed human beings for many centuries: parrots were kept by many ancient people, from kings, warlords and pirates to common people. The birds are admired for their colorful feathers, their high levels of intelligence and their talking ability.

There are very many types of parrots, and each type has its own style and personality that sets it apart from the rest. Different types of parrots love to eat certain types of food, and they love to live in their own unique manner compared to other types. If you own a parrot, you should take care of the bird wisely and maintain it very well to give it an enjoyable life in your household. Some of the breeds of parrots include the Colorful Macaw, the Comedic Cockatoo and the Majestic African Grey.

You can make your parrot very happy, playful and healthy by purchasing the most suitable parrot cage for it. Different types of parrots love to live in certain specific types of cages. The cage of your parrot should have enough space to give the bird room to play and exercise, since parrots are playful birds that love regular exercise. The cage should give the parrot room to stretch its wings, enough space when feeding, and enough space to play with its toys and preen its feathers. Most parrot cages are manufactured with a “play-top” that enables the parrot to play, and others have an additional pullout tray beneath the cage that enables you to collect the litter easily.

Some pet shops sell parrot cages that you will be required to assemble before putting your parrot in the cage. You will be supplied with a manual explaining how to assemble the cage: assemble the “bottom stand” first and finish by putting the perches and the feeders into position. For you to have a healthy parrot, you will always need to keep the cage clean. Replace the cage liners daily, wipe away all food leftovers and waste daily, and wash the food and water dishes daily. Thoroughly clean the perches and the toys at least once per week. A thorough cleaning of the entire cage should be done at least once every month. Sometimes you will need to dismantle the whole cage, wash every part very well and then re-assemble it.

Different types of parrot cages can be purchased in many stores and pet shops. You can also purchase the parrot cages from the online pet supermarkets.

You should take care of your parrot and maintain it very well for the bird to always be happy and attractive: feed it well, spray it and clean it, and you will be assured of a very good pet.

Categories: Community Blog posts

Rediscover Your Music With the Sennheiser HD700 digital headphones

FairTradeElectronics - Sun, 28/06/2015 - 13:49

Music is only as good as the device you use to listen to it. There are a number of devices that allow you to experience music; one of these is the headphones: a pair of large electronic speakers mounted on a frame that goes over the skull and cups over the ears. There are many headphone brands in the world, and each one professes to make the highest quality headphones. Of them all, none is as luxurious, flamboyant and downright effective as the Sennheiser HD700 dynamic stereo headphones.

Introducing Sennheiser

These are an ultra modern pair of headphones. They are open circumaural headphones. This means that the headphone cups have a ventilated back casing. The cups feature a mesh construction that is beautiful to look at and also allows for full, transparent sound. Due to this type of cup construction, these headphones are able to produce warm and balanced music. The Sennheiser headphones are chock full of technology. The speakers in the cups are fitted with ventilated magnets. Thus, when you are listening to music using these headphones, you do not experience distortion of the music by air flowing around the cups. In addition to that, the acoustic casing of these headphones is angled. As a result, they provide superb projection of the music and the notes sound natural.

Sennheiser made use of some advanced drivers in these headphones. Not only are they modern and chic, they can produce high-pressure sound. These drivers also have a flat frequency response, which means that your music has absolutely no distortions. To boost the convenience of using these headphones, Sennheiser made the connector cable completely detachable. The cable is made up of four wires of silver-plated, oxygen-free copper. Thus, it can conduct the music from your devices better, even when playing at high frequencies.

Technology meets design

The Sennheiser HD700 headphones are a beauty to look at. They have a space age mesh on the outer parts of the cups. The mesh is built to express the industrial processes that make these headphones a reality. The headband on these headphones is coated in silicone. In addition to that, the headband has a dual material yoke. This is aesthetically very pleasing. In addition to that, the cups on the headphones have a soft velour padding. This padding goes all around your ears. This makes the music you are listening to sound crystal clear. Despite the large cups and strong headband frame, these headphones are super light. This is due to the aerospace nature of the materials used to construct these headphones.

Convenient construction

Few headphones in the world are built with convenience in mind. First of all, the cable can be removed and stored separately. You can also replace it in case yours gets worn out. The cable actually has an indent in the rubber casing; this indent helps you insert the cable into the headphone casing at the cups. The braided nature of the cable helps it survive the wear and tear of day-to-day use. Moreover, it shows that this is a pair of very high quality headphones.

Categories: Community Blog posts

Ultimate guide to buying a water softener system

FairTradeElectronics - Sun, 21/06/2015 - 04:42

There are a good number of water softener manufacturing companies on the market, and this can compromise quality. If there is one thing that should never be compromised, it is health. Drinking water should be clean – that is not debatable – and treated with the appropriate softener. Moreover, soft water will save you a lot in terms of the soap used for cleaning, heating elements like the kettle will not form scale, and all water-related chores will be smoother. This article will provide a guide to buying the best softener for your water from the diverse companies providing them.

The first thing you need to know is how much you will spend on the service – it doesn’t have to be expensive. You should not overpay for a water softener out of desperation to install the best one. You need to understand the pricing so that you invest in an affordable system that can recoup the investment in future. Therefore, the key point is to understand the costs of the various systems on the market.

How much do you need to efficiently and effectively run the system? An answer to this question helps you decide which system to choose. The energy needed to run the system should not greatly impact your budget. You can decide to go for a non-electrical system if you realize that the water and electricity bills will strain your budget. However, if you are comfortable with the bills, then that should not be a concern.

Size of the equipment is another pointer in this guide: you will need to take measurements of your compound. Water usage and plumbing measurements will help you determine whether the system you are going to purchase is adequate. You should not purchase a system that will be ineffective. If you have a bigger compound with a lot of water-using processes, then a small system will definitely starve your water needs.

How large is the space you intend to install your system in? If you overestimate or underestimate your space, you will definitely incur costs. You stand a higher chance of purchasing a suitable system when you provide as much information as you can. Do not buy something too large for your compound under the pressure of needing a larger system. You can save money by having an appropriately sized system.

Research! Research! Research! The key to finding the best company for the best water softener system is researching the available ones. First, it must be licensed, insured and bonded. Secondly, you can consider referrals from clients, and finally read all the reviews left by customers who have installed the systems. Those who have had a system for longer periods should be in a position to judge better.

Are you buying the equipment yourself, or do you want your supplier to provide it? This is also a factor to consider, since a plumber or dealer may be included with the equipment, while buying the equipment online translates to installing it on your own. You choose a system after settling on a particular company. This should put you in a position to choose a water softening system that will serve you well.

Categories: Community Blog posts

Your Mercedes Dealer In Chelmsford

FairTradeElectronics - Sat, 20/06/2015 - 13:36

The Mercedes Benz of Chelmsford car dealership is part of the Jardine Motors Group in the UK, which trades under the brand name Lancaster. Jardine Motors Group UK has grown from a family-run car dealer into a large dealer representing 23 manufacturers all over Europe. It has 70 different locations dealing in different car brands, including one in Chelmsford which specializes in Mercedes Benz retail. Other sites are found at Lakeside, Southend, Colchester, Ipswich and elsewhere. The Jardine Motor Group is one of the largest and most prestigious automotive retail groups in the country. It is a reputable dealership that has kept a loyal client base and has become the first choice for many customers.

Mercedes Chelmsford sells new and approved used Benz cars as well as their parts. It provides a friendly environment for its customers and offers impeccable care and service. The dealership is excellently stocked, with cars ranging from stunning previously owned examples to the very latest releases of the brand; over 550 Approved Used Mercedes Benz grace their extensive stock. Customers are given a variety of colours to choose from when shopping for a new car. To top it all off, the dealer offers unbelievably affordable prices with flexible finance plans, which open up access to a wider choice of car models. The staff are not only friendly but also have expertise in the Mercedes Benz range and are passionate about the brand; their knowledge is vast and can match a client's requirements comfortably.

Mercedes Chelmsford also has a MyService waiting area, complete with free Wi-Fi, so that you are occupied and entertained while waiting for your car to be serviced on site. Quality refreshments are available for purchase while you wait, and the Business Zone has an enclosed workspace that clients can use.
The Mercedes Benz of Chelmsford showroom is not hard to find. It is located on White Hart Lane in Springfield, Chelmsford, on the roundabout opposite Sainsbury's. The airy and bright showroom is easy to spot from the road, with its forecourt of new and approved used Mercedes Benz cars. The location is convenient, with close links to East London, Hatfield Peverel, Boreham, Witham, Woodham, Brentwood, Shenfield, Ingatestone, Broomfield and others. Stock can be delivered across England, Wales, Scotland and Northern Ireland, whether for business or personal use.

Their customer service is also exceptional. Aftersales services are offered in full complement by their state-of-the-art facilities to keep your Mercedes Benz in excellent condition. Staff expertly trained on the Mercedes Benz attend to your car using only Mercedes Genuine Parts, so you can be assured that your car is in the best condition and being properly taken care of. Say goodbye to the hassle that comes with shopping for a good, affordable car, whether new or a decent used Mercedes Benz, and give Mercedes Chelmsford a call.

Categories: Community Blog posts

Take a Look at the Best rice cookers

FairTradeElectronics - Fri, 05/06/2015 - 11:22

Rice is a grain that can be cooked and eaten with a stew of your choice. It is very versatile and can be prepared in various ways. Today, there are appliances that cook rice automatically; they save time and are very convenient. Here are the best rice cookers on the market.

Aroma rice cooker
The Aroma rice cooker is heralded as one of the best on the market. It is an affordable appliance that cooks your rice very quickly and makes it super delicious. It is simple to use and has a high capacity. The Aroma rice cookers come in 3 varieties: 6 cup, 8 cup and 20 cup. The 6 cup cooker costs $15 to $25, the 8 cup $25 to $35, and the 20 cup $30 to $100. All of them can easily steam food and are made of stainless steel. One major advantage of the Aroma rice cooker is that you can cook rice and another dish in it at the same time: you can steam some meat while the rice cooks under it. It also makes use of digital technology, so you can set it to activate itself and start cooking after a specific number of hours. Thus, you can load it with rice as you leave the house in the morning, and when you get home you'll find it has already cooked for you.

Instant Pot

This is another rice cooker that is simply amazing to use. It can make almost all types of rice, such as Spanish rice, and it can be programmed to start cooking after a set number of hours. In addition to rice, this amazing cooker can prepare soups, beans, meat, poultry and stew, all ready to eat in a very short amount of time. It has a capacity of 6 quarts. At $135 to $235, the Instant Pot is definitely one of the best rice cookers available.

Cuckoo rice cooker

In terms of style and design, the Cuckoo rice cooker bests its competitors by far. It has a round, ergonomic shape that is appealing to the touch. Moreover, it is a very capable cooker, with a capacity of 6 cups. Its size is compact, so it can easily be carried in luggage on trips. It cooks both brown and white rice easily and effectively. At an affordable $80 to $120, the Cuckoo is a great choice of rice cooker.

Miracle rice cooker

Made of stainless steel and colored pearl white, this rice cooker is one of shoppers' favorites. It costs from $70 to $90. It includes a high-quality vegetable steamer as well as a stainless steel bowl, and it has a capacity of 8 cups of rice. You can easily cook rice and steam your vegetables at the same time, which will save you a lot of time when preparing meals since you can do it all in one go. Thus, it is a multitasking rice cooker for a very affordable price.

Visit this site for more detailed rice cooker reviews, and be quick because those amazing deals don't last forever!

Categories: Community Blog posts