WebStorm and TypeScript

Watching Uncle Bob's videos (those in particular where he refactors code with ease thanks to IntelliJ IDEA), I always wanted to try the IntelliJ IDEs out. Not for Java though, as I'm not a big fan of it. When JetBrains ran a special discount sale, I had to try one out: WebStorm for JavaScript, that is. And heck, it's good. It's also complicated to set up, but it figures out a lot of things by itself: it understands package.json files and the run scripts inside, it knows linters, transpilers and much more. But the learning curve is steep, and it takes about 20 seconds to start. Sublime Text starts way faster (less than 1 second), so that will stay my go-to choice for smaller things, but once a full project is set up, WebStorm is very helpful to have.

So this weekend I took some time to set up WebStorm for TypeScript with its transpiler, linter, unit tests etc. This video was a great help to get everything set up in a sensible way. I used another boilerplate setup for no particular reason. This is the result.

Once the repository is cloned and WebStorm has started, it'll ask to run "npm install". Let it do that. Open the file structure (top left) and you should see all files. Find "package.json" and double-click it. Go to the "scripts" section. You should find all scripts automatically marked with a triangle:
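
For reference, a "scripts" section that wires these tools together might look like this; the names and exact commands here are my guess, and the boilerplate's actual setup may differ:

{
  "scripts": {
    "lint": "tslint --project .",
    "build": "tsc",
    "test": "npm run lint && npm run build && jest --coverage"
  }
}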

If you click on the triangle for "test" and run it, WebStorm will:

  1. run tslint
  2. run tsc to transpile the TypeScript source (in ./src) and tests (in ./__test__) into JavaScript in ./build/
  3. run the tests incl. coverage checks
  4. report the result of all commands in the "run" pane
  5. add "test" to the run configurations:

Neat! I'm starting to like WebStorm, as I can see it's made to solve the little problems programmers have. But there are a lot of keyboard shortcuts to memorize...


Typhoon No. 12 and Air Pressure

I was measuring air pressure during Typhoon No. 12 this year, and here is a graph of the air pressure change while the typhoon was passing Tokyo: it dropped from 993 hPa to 984 hPa. While not a huge drop, it's still notable for its rate of change.

Here is the path; Tokyo is where the blue dot is. The typhoon stayed quite far away, and we didn't get much wind.


Espruino and InfluxDB

Espruino unexpectedly turned out to have a module to talk to InfluxDB directly: https://github.com/espruino/EspruinoDocs/blob/master/modules/InfluxDB.js. Given that it's just a simple HTTP POST request (see the previous blog entry), I should not have been surprised.

That simplifies data ingestion: no need for an MQTT broker, no need for an MQTT-to-InfluxDB converter. The InfluxDB instance has to be on the local network this way, since SSL is still not doable on an ESP8266.

This is the Espruino code:

// InfluxDB configuration
var influxDBParams = {
    influxDBHost: "192.168.1.14",
    influxPort: 8086,
    influxDBName: "demo",
    influxUserName: "submit",
    influxPassword: "submitpw",
    influxAgentName: "ESP32"
};

var influxDB = require("InfluxDB").setup(influxDBParams);

// bme is the BME280 sensor object (set up elsewhere)
let temperature = bme.readTemperature();

// InfluxDB line protocol: measurement "env", field "temperature"
let data = "env temperature=" + temperature;
influxDB.write(data);

That's it. As simple as MQTT.
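
To log continuously, the write can go into a timer. A minimal sketch, reusing the bme and influxDB objects from above (the 60-second interval is an arbitrary choice):

// Send a fresh reading every 60 seconds
setInterval(function() {
    var t = bme.readTemperature();
    influxDB.write("env temperature=" + t);
}, 60000);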


Data Logging and Displaying with InfluxDB and Grafana

I was able to collect sensor data (temperature, humidity, pressure) for a while and send the data out via MQTT. What I never finished is displaying the data in a nice graphical UI. Unfortunately, to do this via the Internet and any hosted service (e.g. Adafruit, AWS or Google), a requirement is to use TLS (HTTPS or MQTT over TLS). Not a problem in general, but for an ESP8266 running Espruino with about 20 KByte of RAM available for user programs, the TLS stack is way too large and RAM-hungry. See here. The ESP32 is better (512 KByte of RAM in total instead of the ESP8266's 80 KByte) and it could do TLS, but I have 1 ESP32 and about 10 ESP8266 modules...

Thus I am forced to use simple, unencrypted MQTT locally, with a local MQTT broker. One subscriber process can get the updates and send them to an Internet-connectable service (i.e. one of the many IoT-related services out there), where the data is stored and displayed.

But then I thought: instead of sending the data to another hosted DB, I can host a small DB myself and run a time series DB and a graphical front-end on my own, e.g. the ELK stack or InfluxDB & Grafana. See here for a short comparison of Elasticsearch vs. InfluxDB. For the latter I found a very useful Docker Compose setup here, so InfluxDB & Grafana it was!
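
The gist of such a Docker Compose setup is just two services; the image tags and port mappings here are my assumption, and the linked setup does more (volumes, credentials etc.):

version: "3"
services:
  influxdb:
    image: influxdb:1.8
    ports:
      - "8086:8086"
  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"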

How to create users in InfluxDB

Grafana should only have read-only access, and the submit process should only have write access. Create those users:

CREATE USER submit WITH PASSWORD 'submitpw';
GRANT WRITE ON demo TO submit;
CREATE USER grafanaro WITH PASSWORD 'testro';
GRANT READ ON demo TO grafanaro;
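
These GRANT statements assume the demo database already exists; if it doesn't, create it first:

CREATE DATABASE demo;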

Then there's of course the DB root account, which you can find in the docker-compose.yml file.

How to submit data into InfluxDB

This simple shell script will do, and it quickly generates some moderately useful data:

#!/bin/bash
# Average round-trip time (ms) to the router, taken from ping's
# summary line ("rtt min/avg/max/mdev = ...").
pingAvg=$(ping -c 5 router.lan | tail -1 | awk -F/ '{print $5}')
# One data point in InfluxDB line protocol: measurement "ping", field "router"
curl -i -XPOST 'http://influxdb.lan:8086/write?db=demo' -u submit:submitpw --data-binary "ping router=$pingAvg"

Run it at least twice, as Grafana won't display much if there's only one data point. Then configure a panel on a new dashboard in Grafana:
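
The query behind such a panel selects the mean of the router field from the ping measurement; in Grafana's raw InfluxQL editor it would look roughly like this ($timeFilter and $__interval are filled in by Grafana):

SELECT mean("router") FROM "ping" WHERE $timeFilter GROUP BY time($__interval)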

Reload the graph, set the time frame to the last 15 minutes, and you should see something that proves data ingestion works.

Next step

Create a small program that subscribes to data updates from the MQTT broker and sends them securely to InfluxDB.
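
A minimal sketch of what that bridge could look like in Node.js, using the mqtt package; the broker host, topic and credentials are placeholders, it assumes the MQTT payload already arrives in InfluxDB line protocol, and the "securely" part (TLS towards InfluxDB) is left out for brevity:

const mqtt = require('mqtt')
const http = require('http')

const client = mqtt.connect('mqtt://broker.lan')

client.on('connect', () => {
    client.subscribe('sensors/#')
})

client.on('message', (topic, message) => {
    // message is expected to be InfluxDB line protocol,
    // e.g. "env temperature=21.5"
    const req = http.request({
        host: 'influxdb.lan',
        port: 8086,
        path: '/write?db=demo',
        method: 'POST',
        auth: 'submit:submitpw'
    }, res => res.resume())
    req.end(message)
})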


AWS S3 Signed URLs

I saw some questions on the web regarding signed S3 URLs. Those allow someone else (not an AWS IAM user) to access S3 objects. E.g. if I have a program which has permission to access a given S3 object, I can create a signed URL which allows anyone with knowledge of that URL to (e.g.) read the object. Or write it. A simple example would be a video training web site: I could give the user a URL which is valid for 24h, so they can watch a video as many times as they like, but for 24h only. The alternative would be handing out the URL of the S3 object directly, which would require the object to be public.

There are many ways to solve this problem, but signed URLs are what AWS offers.

Since there were so many postings and questions around this, I wondered what the problem was. The documentation at https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#getSignedUrl-property certainly looked straightforward.

So I created a quick program:

const AWS = require('aws-sdk')

const s3 = new AWS.S3()
// The above picks up my API key credentials from ~/.aws/
// That API key obviously has permission to access the object.
// A normal web browser cannot access the S3 URL though, as the
// bucket is not public.

const myBucket = 'BUCKET'
const myKey = 'FILE.json'
const signedUrlExpireSeconds = 60 * 5 // 5min

const url = s3.getSignedUrl('getObject', {
    Bucket: myBucket,
    Key: myKey,
    Expires: signedUrlExpireSeconds
})

console.log(url)

and it all worked (AccessKeyId has access to the S3 object):

harald@blue:~/js/aws$ node sign.js 
https://BUCKET.s3.amazonaws.com/FILE.json?AWSAccessKeyId=AXXXXXXXXXXXXXXXXXXA&Expires=1529832632&Signature=D7eArF9AMFyWr%2FLoXcCQ0pA72i8%3D
harald@blue:~/js/aws$ curl "https://BUCKET.s3.amazonaws.com/FILE.json?AWSAccessKeyId=AXXXXXXXXXXXXXXXXXXA&Expires=1529832632&Signature=D7eArF9AMFyWr%2FLoXcCQ0pA72i8%3D"
{
      "AWSTemplateFormatVersion" : "2010-09-09",
      "Resources" : {
[...]
}

It's as easy as I thought.
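
Writing works the same way: use 'putObject' instead of 'getObject'. A minimal sketch with the same bucket and expiry as above (the upload key is a placeholder):

const putUrl = s3.getSignedUrl('putObject', {
    Bucket: myBucket,
    Key: 'UPLOAD.json',
    Expires: signedUrlExpireSeconds
})
console.log(putUrl)
// Anyone with this URL can now upload for the next 5 minutes:
//   curl -X PUT --data-binary @UPLOAD.json "$putUrl"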
