Issue starting docker image

Hi all.

Trying to install the 1.1 Docker image via Portainer (and OpenMediaVault) on a Raspberry Pi 4, and getting an error when it tries to start the container. Portainer etc. is up to date.

The error log shows the following:

+ WEBTHINGS_HOME=/home/node/.webthings
+ args=
+ start_task=run-only
+ is_container
+ '[' -f /.dockerenv ']'
+ return 0
++ node --version
++ egrep -o '[0-9]+'
++ head -n1
+ _node_version=12
+ [[ ! -f /home/node/.webthings/.node_version ]]
+ cd /home/node/webthings/gateway
+ mkdir -p /home/node/.webthings/config
+ ./tools/update-addons.sh
Opening database: /home/node/.webthings/config/db.sqlite3
/home/node/webthings/gateway/build/models/things.js:377
addon_manager_1.default.on(Constants.THING_ADDED, (thing) => {
                        ^
TypeError: Cannot read property 'on' of undefined
    at Object.<anonymous> (/home/node/webthings/gateway/build/models/things.js:377:25)
    at Module._compile (internal/modules/cjs/loader.js:999:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1027:10)
    at Module.load (internal/modules/cjs/loader.js:863:32)
    at Function.Module._load (internal/modules/cjs/loader.js:708:14)
    at Module.require (internal/modules/cjs/loader.js:887:19)
    at require (internal/modules/cjs/helpers.js:74:18)
    at Object.<anonymous> (/home/node/webthings/gateway/build/models/actions.js:40:34)
    at Module._compile (internal/modules/cjs/loader.js:999:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1027:10)

Don’t really know where to start looking to see what is wrong/missing.

Version 1.0 worked fine :slight_smile:

See also: https://github.com/WebThingsIO/gateway/issues/3043

It appears you’re not the only person having this problem, but 1.1 docker images are working for some people so I’m not yet sure what the issue is.

Edit: Sorry, I pasted the wrong link.

I was wondering whether it's because I started from scratch, and something isn't defined that would be if there were a fully working config there to start with.
My only other thought was that something in my Pi/Docker setup is at a different version.

I am facing the same issue as you. The Gateway 1.0 image works fine in Docker, but 1.1 did not. What's your docker-compose/Portainer stack config?
Mine is:

version: '4'
services:
  webthings:
    container_name: webthings
    image: "webthingsio/gateway:1.0.0"
    volumes:
      - /home/dell/docker/webthings/:/home/node/.webthings
      - /etc/localtime:/etc/localtime
    network_mode: host
    restart: always
    logging:
      options:
        max-size: "1m"
        max-file: "10"

My Docker WT 1.0 on an RPI 2B+ upgraded correctly to the 1.1 image after about a 5-6 minute delay. I freaked out a bit, as there were copious errors in the log for the first 5 minutes. Eventually a few addons updated and loaded correctly (DateTime, Zwave, X10, etc.) and my 20 or so Things were shown. It seems to be operating correctly after a few days.

OK, so after a few attempts I found out where the problem was: an incorrect config in the Portainer stack with the timezone settings. The image with Gateway 1.1.0 is fully working now! The correct config for a Portainer stack or docker-compose is:

version: '4'
services:
  webthings:
    container_name: webthings
    image: "webthingsio/gateway"
    volumes:
      - /home/dell/docker/webthings/:/home/node/.webthings
    network_mode: host
    environment:
      - "TZ=Europe/Prague"
    restart: always
    logging:
      options:
        max-size: "1m"
        max-file: "10"

Does not appear to be my problem: using either method of setting the timezone (the /etc/localtime mount or the TZ environment variable) makes no difference to the error generated when trying to start the container.

For reference, here is my working docker script I use to start WT on my RPI. You may want to try starting it manually.

It’s advisable to:

  • Copy the latest working .webthings folder to a release-specific version each time you upgrade (see the backup sketch after the run commands below)

  • Tag and execute images using a release so you can revert to running any previous release using a backup copy

  • If on an RPI, mount a USB stick and put the WT volume on it to reduce disk writes

    docker rm gateway110

    docker run -d --restart unless-stopped --net=host \
      -e TZ=America/Chicago \
      --device /dev/ttyACM0:/dev/ttyACM0 \
      --log-opt max-size=1m \
      --log-opt max-file=5 \
      -v /mnt/EdbergFS/mozilla-iot:/home/node/.mozilla-iot \
      --name gateway110 webthingsio/gateway:1.1.0
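As a rough sketch of the first point above, a release-specific backup before an upgrade could look something like this (the paths and backup folder name are just examples in the style of my setup, not anything the image requires):

    # stop the running container and snapshot the data volume
    docker stop gateway110
    cp -a /mnt/EdbergFS/mozilla-iot /mnt/EdbergFS/mozilla-iot-1.0.0
    docker rm gateway110
    # then start the new tagged release as shown above

If the new release misbehaves, you can run the previous tag against the backup copy instead.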

Same here.
Could not get the V1.1 Docker image running with either docker run or docker compose.

There’s some evidence that the 1.1 image only works for upgrades, not fresh installs.

Has anyone successfully installed the 1.1 image as a fresh install?

I created an empty folder to act as the persistent Docker volume for the .webthings folder and started 1.1.0. The following errors were immediately displayed in the Docker logs.

I’m not sure how to “correctly” start a new docker webthings image but will try again if alternate instructions are provided.
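For reference, the start itself was just something along these lines (flags approximate, container name arbitrary; the host folder is the one in the listing further down):

    # empty host folder used as the persistent volume for /home/node/.webthings
    mkdir /home/webthingsnew
    docker run -d --net=host \
      -v /home/webthingsnew:/home/node/.webthings \
      --name webthings-new webthingsio/gateway:1.1.0
    docker logs webthings-new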

Do I need to have a core set of addons in .webthings?

I seem to remember this error when performing the real upgrade from the previous version to 1.1.0. Not sure what was going on in the background, but addon and/or SW upgrades were taking place. It took 5 minutes before my production system upgrade actually worked again.

ele@elelinux:~$ docker logs 5719e2334a27
+ WEBTHINGS_HOME=/home/node/.webthings
+ args=
+ start_task=run-only
+ is_container
+ '[' -f /.dockerenv ']'
+ return 0
++ node --version
++ egrep -o '[0-9]+'
++ head -n1
+ _node_version=12
+ [[ ! -f /home/node/.webthings/.node_version ]]
+ cd /home/node/webthings/gateway
+ mkdir -p /home/node/.webthings/config
+ ./tools/update-addons.sh
Creating database: /home/node/.webthings/config/db.sqlite3
/home/node/webthings/gateway/build/models/things.js:377
addon_manager_1.default.on(Constants.THING_ADDED, (thing) => {
                        ^
TypeError: Cannot read property 'on' of undefined
    at Object.<anonymous> (/home/node/webthings/gateway/build/models/things.js:377:25)
    at Module._compile (internal/modules/cjs/loader.js:999:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1027:10)
    at Module.load (internal/modules/cjs/loader.js:863:32)
    at Function.Module._load (internal/modules/cjs/loader.js:708:14)
    at Module.require (internal/modules/cjs/loader.js:887:19)
    at require (internal/modules/cjs/helpers.js:74:18)
    at Object.<anonymous> (/home/node/webthings/gateway/build/models/actions.js:40:34)
    at Module._compile (internal/modules/cjs/loader.js:999:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1027:10)
Process 10 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid)

It created the sqlite db before core dumping:

ele@elelinux:~$ ls -ld /home/webthingsnew/
drwxrwxr-x 3 ele ele 4096 Jan 19 06:36 /home/webthingsnew/
ele@elelinux:~$ ls -ld /home/webthingsnew/config
drwxr-xr-x 2 ele ele 4096 Jan 19 06:36 /home/webthingsnew/config
ele@elelinux:~$ ls -l /home/webthingsnew/config
total 0
-rw-r--r-- 1 ele ele 0 Jan 19 06:36 db.sqlite3

So, I ran up a container from a 1.0 image with a persistent volume for /home/node/.webthings and got to the login (tried to set up the domain, knowing it would fail, and created a user).
Then I deleted the container and created a new one using 1.1 and the volume from above. The container starts fine.
I then deleted the contents of the persistent volume so it was empty (trying to see if it was a specific folder it was having problems with) and started the container again. Everything works fine: it creates all the folders, starts from the beginning of the process, and lets me reclaim the domain.
I don’t have time at the moment, but I’ll do some more tests starting from scratch with a new image and the persistent volume with various bits deleted and see what happens. My first thought is that it needs something on that volume to get going, but once it has got going it no longer needs it.
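In plain docker run terms, the equivalent of what I did would be something like this (volume path and container name borrowed from the compose examples earlier in the thread, not my exact Portainer setup):

    # start 1.0 against a fresh volume and complete the first-run setup in the UI
    docker run -d --net=host \
      -v /home/dell/docker/webthings/:/home/node/.webthings \
      --name webthings webthingsio/gateway:1.0.0
    # then swap the container to 1.1.0, reusing the same volume
    docker rm -f webthings
    docker run -d --net=host \
      -v /home/dell/docker/webthings/:/home/node/.webthings \
      --name webthings webthingsio/gateway:1.1.0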

Since we’re using Docker, there is no way to store any information locally after a persistent volume is deleted. Each time the container starts, it depends on the info in the volume.

I wonder if there is some process in the WT cloud that is not initialized correctly for new installations, e.g. it returns different information for new and previously installed/upgraded environments.

Think I have narrowed it down. After a lot of trial and error, I have found that the 1.1.0 image will start correctly on a new install
IF
the .node_version file is created in /home/node/.webthings with 12 as its contents.
You don’t need anything else in there, just the .node_version file.
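If anyone wants to try this on a fresh install, creating the file in the host folder you mount as /home/node/.webthings before the first start should be enough. Roughly (path and container name borrowed from the compose examples above; adjust to your own volume):

    mkdir -p /home/dell/docker/webthings
    echo "12" > /home/dell/docker/webthings/.node_version
    docker run -d --net=host \
      -v /home/dell/docker/webthings/:/home/node/.webthings \
      --name webthings webthingsio/gateway:1.1.0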


I confirm that restoring the missing “.node_version” file allows a new docker 1.1.0 image to start correctly. PS: I like Docker. It took me 2 minutes to stop and start images to test the work-around, and it will take me another 2 to revert to the real production 1.1.0 image (using its dedicated volume).

@tim_holden: do you want to create a new GitHub issue specifying the work-around?

I had the same problem. Thank you for finding the solution :slight_smile:

A month ago I built a new WT / Wireguard gateway at my remote residence and fretted when the WT Docker image would not start on that brand-new RPi 4. It took me an hour to remember the missing .node_version file, which caused that fault. Manually creating/initializing the file allowed WT to start correctly once again…

Thank you for the updates. Would someone be willing to create a PR with a fix so that we can push out a new Docker image?