Backlog#

Iteration +0#

Iteration +1#

Iteration +2#

  • [o] Data plane refactoring

    • Fused/combined data model of sensors vs. data/readings

  • [x] IRCELINE: Progressive/chunked fetching of timeseries information instead of all at once.

  • [o] Refactor Markdown documentation in Grafana Dashboards

  • [o] Improve IRCELINE data processing efficiency

  • [o] Error with invalid timestamps when requesting IRCELINE: “statusCode” error

  • [o] Grafana: Drill down to detail view via circle https://vmm.hiveeyes.org/grafana/d/gG-dP2kWk/luftdaten-viewer-ldi-trend?var-ldi_station_id=8667

  • [o] Grafana: Enhanced popover with structured (meta)data transfer

  • [o] Grafana: Drill down to detail view via popover

  • [o] As the Panodata Map Panel (ex. Grafana Worldmap Panel) decodes the geohash to lat/lon using decodeGeoHash() anyway, let’s go back to storing the position as lat/lon again.

  • [o] Use Grafana Folder “Luftdatenpumpe” for storing dashboards.

  • [o] When acquiring data from specific sensors, use API endpoints like http://api.luftdaten.info/v1/sensor/25735/.

  • [o] Look at world air quality data
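
Since the Panodata Map Panel decodes geohashes back to lat/lon with decodeGeoHash() anyway, the round trip can be sketched in plain Python. This is an illustrative, dependency-free reimplementation; Luftdatenpumpe itself delegates encoding to the geohash2 library in geo.py.

```python
# Minimal geohash round trip: encode a lat/lon pair into a geohash
# string and decode it back to the center of the resulting cell.
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash_encode(latitude, longitude, precision=12):
    lat_range, lon_range = [-90.0, 90.0], [-180.0, 180.0]
    geohash, bits, ch, even = [], 0, 0, True
    while len(geohash) < precision:
        # Alternate between refining the longitude and latitude interval.
        interval, value = (lon_range, longitude) if even else (lat_range, latitude)
        mid = (interval[0] + interval[1]) / 2
        if value >= mid:
            ch = (ch << 1) | 1
            interval[0] = mid
        else:
            ch <<= 1
            interval[1] = mid
        even = not even
        bits += 1
        if bits == 5:
            # Five bits make one base32 character.
            geohash.append(BASE32[ch])
            bits, ch = 0, 0
    return "".join(geohash)

def geohash_decode(geohash):
    lat_range, lon_range = [-90.0, 90.0], [-180.0, 180.0]
    even = True
    for character in geohash:
        value = BASE32.index(character)
        for shift in (4, 3, 2, 1, 0):
            bit = (value >> shift) & 1
            interval = lon_range if even else lat_range
            mid = (interval[0] + interval[1]) / 2
            if bit:
                interval[0] = mid
            else:
                interval[1] = mid
            even = not even
    return sum(lat_range) / 2, sum(lon_range) / 2
```

Storing lat/lon directly, as proposed above, simply skips this decode step on the panel side.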

Iteration +3#

IRCELINE#

We are supporting the Flanders Environment Agency (VMM). Thank you likewise for supporting us.

http://shiny.irceline.be/examples/

Documentation#

  • [o] Adjust inline Markdown: Rename “About Luftdatenpumpe” to “Luftdaten-Viewer » About” and rephrase content appropriately.

  • [o] By default, only the last reading is acquired. Querying IRCELINE without a timestamp yields a whole week of data. IRCELINE uses hourly measurement intervals, while LDI uses 5-minute intervals.

  • [o] Improve inline documentation on irceline.py

  • [o] Add “read this section carefully” to documentation pages

  • [o] Add Sphinx documentation renderer, publish to hiveeyes.org/doc/luftdatenpumpe

  • [o] Add more “About us” to luftdaten-info-trend dashboard

  • [o] Cross-reference map- and trend-dashboards

  • [o] Link to https://pypi.org/project/luftdaten/ and https://github.com/dr-1/airqdata

  • [o] Remark about database license from https://github.com/dr-1/airqdata

  • [o] Decrease logo size: https://pypi.org/project/luftdatenpumpe/

  • [o] Announce @ GitHub: luftdatenpumpe readings --station=28 --target=mqtt://mqtt.example.org/luftdaten.info/testdrive --target=stream://sys.stdout | jq .
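
The timestamp item above (a query without a timestamp pulls a whole week of hourly IRCELINE data) suggests splitting requested ranges into smaller windows. A minimal sketch of such a window generator, with illustrative names rather than actual Luftdatenpumpe API:

```python
from datetime import datetime, timedelta

def time_windows(start, end, step=timedelta(days=1)):
    """Yield (window_start, window_end) pairs covering [start, end),
    so each upstream request only asks for one small chunk."""
    cursor = start
    while cursor < end:
        upper = min(cursor + step, end)
        yield cursor, upper
        cursor = upper

# A one-week range decomposes into seven day-sized requests.
windows = list(time_windows(datetime(2019, 1, 1), datetime(2019, 1, 8)))
```

Each pair would then be translated into the corresponding start/stop request parameters.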

Iteration +4#

Spatial index on a geography table#

CREATE INDEX nyc_subway_stations_geog_gix
ON nyc_subway_stations_geog USING GIST (geog);

http://postgis.net/workshops/postgis-intro/geography.html
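
For reference, a typical query that benefits from this GiST index (adapted from the PostGIS workshop linked above; table and column names follow its example):

```sql
-- Find subway stations within 500 m of a point. The GiST index on
-- "geog" lets PostGIS prune candidates instead of scanning the table.
SELECT name
FROM nyc_subway_stations_geog
WHERE ST_DWithin(geog, ST_MakePoint(-73.99, 40.75)::geography, 500);
```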

Iteration +5#

Iteration +6#

  • [o] Use https://grafana.com/grafana/plugins/ryantxu-ajax-panel/ to show other content

  • [o] What to do with high P1/P2 values of 1,000 and more?

  • [o] CSV import: Add more sensor types

  • [o] Link from sticky overlay to station trend dashboard

  • [o] Refactor for handling multiple data sources and targets

  • [o] Record a metric for the total count of measurements per feed action

  • [o] Use more export formats from tablib

  • [o] Output data in tabular, markdown or rst formats

  • [o] Publish to MQTT with separate topics

  • [o] Store “boundingbox” attribute to RDBMS database

  • [o] Dry-run for RDBMS storage

  • Command line filters

    • [o] by sensor type

    • [o] by time range, e.g. for CSV file import

  • Panodata Map Panel

    • [o] Handle multiple languages with Nominatim. Use English as default.

    • [o] Get English (or configurable) country labels from Nominatim

    • [o] JSON endpoint: Add formatter jq '[ .[] | {key: .station_id | tostring, name: .name} ]'

    • [o] JSON endpoint: Map by geohash only

    • [o] Link to Nominatim place_id, see https://nominatim.hiveeyes.org/details.php?place_id=8110875

  • [o] Migration documentation from https://getkotori.org/docs/applications/luftdaten.info/

  • [o] Mention other projects

  • [o] How to prevent the Panodata Map Panel JSON document from becoming stale? /public/json/ldi-stations.json?_cache=4

  • [o] Check out wizzy for Grafana provisioning? https://github.com/utkarshcmu/wizzy

  • [o] Docs? https://github.com/grafana/worldmap-panel/issues/176
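
The open question above about implausibly high P1/P2 readings could be addressed with a simple plausibility filter. The 1,000 µg/m³ cutoff below is an assumption for illustration only; the project has not settled on a threshold, and the function name is hypothetical:

```python
# Drop particulate-matter fields above an (assumed) plausibility threshold.
PLAUSIBLE_MAX = 1000.0

def filter_implausible(reading, fields=("P1", "P2"), maximum=PLAUSIBLE_MAX):
    """Return a copy of the reading dict without implausibly high PM values."""
    cleaned = dict(reading)
    for field in fields:
        value = cleaned.get(field)
        if value is not None and value > maximum:
            del cleaned[field]
    return cleaned

sample = {"P1": 1834.2, "P2": 12.4, "humidity": 81.4}
cleaned = filter_implausible(sample)
```

Whether such values should be dropped, capped, or merely flagged is exactly the open question.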

Email address for Nominatim#

email=<valid email address>

If you are making large numbers of requests, please include a valid email address, or alternatively include your email address as part of the User-Agent string. This information will be kept confidential and only used to contact you in the event of a problem; see the Usage Policy for more details.

https://wiki.openstreetmap.org/wiki/Nominatim
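
Attaching the email parameter to a Nominatim request can be sketched as follows; the email address is a placeholder, and the helper name is illustrative rather than actual Luftdatenpumpe API:

```python
from urllib.parse import urlencode

def nominatim_reverse_url(latitude, longitude, email):
    """Build a Nominatim reverse-geocoding URL including the "email"
    parameter, as the Nominatim usage policy asks for bulk usage."""
    params = {
        "format": "json",
        "lat": latitude,
        "lon": longitude,
        "email": email,
    }
    return "https://nominatim.openstreetmap.org/reverse?" + urlencode(params)

url = nominatim_reverse_url(52.52, 13.40, "ops@example.org")
```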

Iteration +7#

Iteration +8#

Grafana#

Appendix#

Add text widget containing total number of stations in database.

Variable ``station_count``::

    SHOW TAG VALUES CARDINALITY WITH KEY = station_id;

Documentation#

  • [x] poe docs-html

  • [x] poe docs-linkcheck: cd docs; sphinx-build -b linkcheck . _build

  • [x] Update links in README.rst

  • [x] Development: Add README and CHANGELOG

  • [x] Section about development / contributions

  • [x] Add and update CONTRIBUTORS

  • [x] Testimonials => Gallery. Fix links to https://vmm.panodata.net/

  • [x] --help => Usage

  • [x] Refer to PostgreSQL “trust”-based authentication

  • [x] Change copyright name

  • [x] Add sphinx-copybutton and sphinx-tabs

  • [x] Remove version number at left top?

  • [/] Trim left-hand menu

  • [x] Copyright year

  • [x] Other projects: Add SCxxx

  • [x] Interlink with forum

  • [/] Use sphinx-inline-tabs

  • [x] Add sphinxext-opengraph

  • [x] Improve gallery

  • [x] Add a bit of eye candy to the landing page

Done#

All the machinery#

  • [x] Download cache for data feed (5 minutes)

  • [x] Write metadata directly to Postgres

  • [x] Redesign commandline interface

  • [x] Create CHANGES.rst, update documentation and repository (tags)

  • [x] Add tooling for packaging

  • [x] Publish to PyPI

  • [x] Write measurement data directly to InfluxDB

  • [x] Store stations / data while processing

  • [x] Make a sensor type chooser in Grafana. How would that actually select multiple(!) stations by id through Grafana?

  • [x] Store Geohash into InfluxDB database again. Check for sensor_id.

  • [x] Probe Redis when starting

  • [x] Add Grafana assets

  • [x] Import historical data from http://archive.luftdaten.info/

  • [x] Check User-Agent settings

  • [x] Overhaul station metadata process:

    1. Collect station information from API or CSV into PostgreSQL

    2. Export station information from PostgreSQL as JSON, optionally in format suitable for Panodata Map Panel.

  • [x] Improve README

    • [x] Add link to Demo #5

    • [x] Mention InfluxDB storage and historical data

    • [x] Add some screenshots

  • [x] Add more sensors:

    • archive.luftdaten.info/2017-10-08/2017-10-08_pms3003_sensor_366.csv

    • archive.luftdaten.info/2017-10-08/2017-10-08_pms7003_sensor_5920.csv

    • archive.luftdaten.info/2017-11-25/2017-11-25_hpm_sensor_7096.csv

    • archive.luftdaten.info/2017-11-26/2017-11-26_bmp280_sensor_2184.csv

    • archive.luftdaten.info/2017-11-26/2017-11-26_htu21d_sensor_2875.csv

  • [x] Speed up CSV data import using UDP?

  • [x] Add PostgreSQL view “ldi_view” with ready-computed name+station_id things and more

  • [x] Improve RDBMS database schema

    • [x] Rename “weatherbase” to “weatherbase”

    • [x] Rename id => station_id

    • [x] Rename osm => osm_*

    • [x] Rename ldi_view => ldi_network

  • [x] Fix Grafana vt+kn exports

  • [x] Overhaul Grafana dashboards

  • [x] Display number of sensors per family

  • [x] Remove --help from README

  • [x] Improve README re. setup

  • [x] Entrypoints for rendering Grafana JSONs

  • [x] New sensor type DS18B20, e.g. WARNING: Skip import of /var/spool/archive.luftdaten.info/2019-01-01/2019-01-01_ds18b20_sensor_11301.csv. Unknown sensor type

  • [x] Add station_id to “choose multiple stations” chooser

  • [x] Add GRANT SQL statements and bundle "--create-view" into "--setup-database"

  • [x] Progressbar for emitting data to target subsystems

  • [x] Data plane refactoring

    • Put “sensor_id” into “data/reading” item

    • Streamline processing of multiple readings

More#

  • [x] Fixed:

    2019-01-21 02:54:44,787 [luftdatenpumpe.core           ] WARNING: Could not make reading from {'sensordatavalues': [{'value': '81.40', 'value_type': 'humidity', 'id': 5790214143}, {'value': '0.20', 'value_type': 'temperature', 'id': 5790214142}], 'sensor': {'sensor_type': {'name': 'DHT22', 'manufacturer': 'various', 'id': 9}, 'pin': '7', 'id': 19755}, 'timestamp': '2019-01-21 01:50:56', 'id': 2724801826, 'location': {'longitude': '', 'latitude': '47.8120', 'altitude': '58.0', 'country': 'DE'}, 'sampling_rate': None}.
    Traceback (most recent call last):
      File "/opt/luftdatenpumpe/luftdatenpumpe/core.py", line 230, in request_live_data
        reading = self.make_reading(item)
      File "/opt/luftdatenpumpe/luftdatenpumpe/core.py", line 290, in make_reading
        self.enrich_station(reading.station)
      File "/opt/luftdatenpumpe/luftdatenpumpe/core.py", line 308, in enrich_station
        station.position.geohash = geohash_encode(station.position.latitude, station.position.longitude)
      File "/opt/luftdatenpumpe/luftdatenpumpe/geo.py", line 351, in geohash_encode
        geohash = geohash2.encode(float(latitude), float(longitude))
    TypeError: float() argument must be a string or a number, not 'NoneType'
    
  • [x] Spotted this:

        2019-01-23 16:08:45,230 [luftdatenpumpe.core           ] WARNING: Could not make reading from {'location': {'latitude': 48.701, 'longitude': 9.316}, 'timestamp': '2018-11-03T02:51:15', 'sensor': {'sensor_type': {'name': 'BME280'}, 'id': 17950}}.
        Traceback (most recent call last):
          File "/home/elmyra/develop/luftdatenpumpe/lib/python3.5/site-packages/luftdatenpumpe/core.py", line 510, in csv_reader
            if not self.csvdata_to_reading(record, reading, fieldnames):
          File "/home/elmyra/develop/luftdatenpumpe/lib/python3.5/site-packages/luftdatenpumpe/core.py", line 538, in csvdata_to_reading
            reading.data[fieldname] = float(value)
        ValueError: could not convert string to float: '985.56 1541213415071633'
    
        2019-01-23 16:08:45,282 [luftdatenpumpe.core           ] WARNING: Could not make reading from {'location': {'latitude': 48.701, 'longitude': 9.316}, 'timestamp': '2018-11-03T08:52:15', 'sensor': {'sensor_type': {'name': 'BME280'}, 'id': 17950}}.
        Traceback (most recent call last):
          File "/home/elmyra/develop/luftdatenpumpe/lib/python3.5/site-packages/luftdatenpumpe/core.py", line 510, in csv_reader
            if not self.csvdata_to_reading(record, reading, fieldnames):
          File "/home/elmyra/develop/luftdatenpumpe/lib/python3.5/site-packages/luftdatenpumpe/core.py", line 538, in csvdata_to_reading
            reading.data[fieldname] = float(value)
        ValueError: could not convert string to float: '985.97 1541235075187801'
    
    Update: Seems to work already, see ``luftdatenpumpe readings --network=ldi --sensor=17950 --reverse-geocode``.
    

IRCELINE#

  • [x] Add IRCELINE SOS data plane

  • [x] Add IRCELINE SOS to Grafana and documentation

  • [x] Add filtering for SOS API, esp. by station id

  • [x] Add time control, date => start, stop parameters or begin/end

  • [x] Fix slugification of IRCELINE name “wind-speed-scalar-”

  • [x] Ignore --country=BE when operating on IRCELINE