How does the assistant work?

The Mozilla IoT user guide mentions the Assistant feature, which is cool, and I want to understand it better. I didn’t find many details about it in a cursory search.

I see a completed issue that begins to help, but doesn’t point to any documentation.

I’d love to know how to control Things with this (and would happily contribute to the mycroft mozilla skill if I understood this better).

Reading into it a bit more, it seems like the Adapt parser gets trained to recognize the title of each button.

It’d be cool if it were possible to specify extra utterances in a config that could get passed to Adapt as well.

The vocabulary is fairly limited right now. Here’s a small bit of documentation: https://github.com/mozilla-iot/gateway/blob/master/src/controllers/commands_controller.js#L6-L16
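To make the "trained on thing titles" idea concrete, here is a minimal, stdlib-only sketch of Adapt-style keyword matching. The real gateway uses Mycroft's Adapt parser; the verb list and thing titles below are illustrative stand-ins, not the gateway's actual vocabulary.

```python
# Illustrative sketch of Adapt-style keyword matching: a small fixed verb
# vocabulary plus entity keywords "trained" from each thing's title.
# (Hypothetical data; the real gateway feeds titles to Mycroft's Adapt.)

VERBS = {"turn on": "on", "turn off": "off"}  # limited vocabulary
THING_TITLES = ["light", "power"]             # taken from thing titles

def parse(utterance: str):
    """Return (action, thing) if both a verb and a thing title match."""
    text = utterance.lower()
    action = next((a for v, a in VERBS.items() if v in text), None)
    thing = next((t for t in THING_TITLES if t in text), None)
    if action and thing:
        return (action, thing)
    return None

print(parse("please turn on the light"))  # ('on', 'light')
print(parse("turn off the heater"))       # None (unknown thing)
```

This also shows why the vocabulary feels limited: anything not in the verb table or the set of titles simply fails to match.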

Essentially, the flow is this:

  1. The command is POSTed to the commands controller.
  2. The command is passed into the intent parser model.
  3. The command is passed to the intent parser server.
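The three steps above could be sketched roughly like this. All function names and the payload shape here are illustrative assumptions, not the gateway's actual code; each step is a trivial stand-in.

```python
# Sketch of the flow: controller -> model -> parser server.
# (Illustrative names only; not the gateway's real implementation.)

def intent_parser_server(command: str) -> dict:
    # Step 3: the intent parser server turns raw text into an intent.
    # Trivial stand-in for the real Adapt-based parsing.
    words = command.lower().split()
    return {"action": words[0], "thing": words[-1]}

def intent_parser_model(command: str) -> dict:
    # Step 2: the model hands the command on to the parser server.
    return intent_parser_server(command)

def commands_controller(payload: dict) -> dict:
    # Step 1: a command is POSTed to the commands controller,
    # which extracts the text and starts the chain.
    return intent_parser_model(payload["text"])

print(commands_controller({"text": "turn on the light"}))
```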

From a “user” point of view, the “assistant” within the gateway software itself works OK both on a phone and in a real browser (I tried a few!).
That is really good work…
Now, I have been playing with Mycroft.ai (Picroft, technically) on and off for years, and setting up the skill for Mycroft to pass utterances over to the Mozilla IoT gateway was pretty easy… It was not “mainstream” because it failed the first time almost every time (I have now set it up from scratch three times!), but it works exactly as advertised. :slight_smile:
The vocabulary is very limited, but I only have two switches (called “light” and “power”), and they are working like a bought one!

Whilst I can’t code, I am pretty good at breaking stuff and testing for weirdness, and I will continue to do that! So I am here to help test.

Oh okay, that makes sense now that I’ve read all three parts of it. Thanks!

So if the titles are currently what are being passed through to the parser, would it be possible to also check each thing for specific attributes like extra_setter_keywords, extra_getter_keywords, extra_entity_keywords and pass those through too?

It already seems like it works for a lot of cases, just thinking out loud. It might be over-engineering, but it seems like allowing adapters easy ways to extend this functionality might be easier than adding new keywords to the IntentParser.
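As a sketch of that idea: each thing's description could carry the proposed extra keyword lists, and the gateway would merge them into the parser's vocabulary alongside the title. The property names below are the ones suggested in this thread, not an existing gateway API.

```python
# Sketch of the proposal: merge hypothetical extra_entity_keywords from
# each thing's description into the entity vocabulary. (The property
# name comes from this thread's suggestion, not a real gateway field.)

things = [
    {"title": "light", "extra_entity_keywords": ["lamp", "bulb"]},
    {"title": "power", "extra_entity_keywords": ["outlet"]},
]

def build_vocabulary(things):
    vocab = set()
    for thing in things:
        vocab.add(thing["title"])                             # current behaviour
        vocab.update(thing.get("extra_entity_keywords", []))  # proposed extension
    return vocab

print(sorted(build_vocabulary(things)))
# ['bulb', 'lamp', 'light', 'outlet', 'power']
```

The nice property is that adapters (or users) could widen the vocabulary per thing without the IntentParser itself needing new hard-coded keywords.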

We’ve actually been considering making the entire assistant piece a standalone add-on, so the architecture is kind of up in the air right now. I like what you’re getting at, though.

Very cool, I’ll be curious to follow along!