@Nadahar Nadahar commented Mar 5, 2025

Rule file provider

  • Create rule file provider for JSON files found in conf/automation/rules
  • Create rule file provider for YAML files in accordance with the "new YAML format" (RFC: YAML configuration #3666)
  • Create rule template provider for YAML files in accordance with the "new YAML format" (RFC: YAML configuration #3666)
  • Fix DELETE event bug in FolderObserver

Description

This PR has changed quite a bit from when it was originally created, so I've updated the description.

The primary task of this PR is to provide file-based YAML and JSON rules. The JSON parser supports multiple rules in one file if they are supplied within an array, which means enclosed in [ and ] and separated by ,.
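As a minimal sketch of such a file, here are two hypothetical rules wrapped in one array. The field names mirror the usual openHAB Rule object structure as shown in the UI's "Code" tab; the uids, Item names and scripts are made up for illustration:

```json
[
  {
    "uid": "demo_rule_1",
    "name": "Demo rule 1",
    "triggers": [
      {
        "id": "1",
        "type": "core.ItemStateChangeTrigger",
        "configuration": { "itemName": "DemoSwitch" }
      }
    ],
    "conditions": [],
    "actions": [
      {
        "id": "2",
        "type": "script.ScriptAction",
        "configuration": {
          "type": "application/javascript",
          "script": "console.log('DemoSwitch changed');"
        }
      }
    ]
  },
  {
    "uid": "demo_rule_2",
    "name": "Demo rule 2",
    "triggers": [
      {
        "id": "1",
        "type": "timer.GenericCronTrigger",
        "configuration": { "cronExpression": "0 0 8 * * ?" }
      }
    ],
    "conditions": [],
    "actions": [
      {
        "id": "2",
        "type": "script.ScriptAction",
        "configuration": {
          "type": "application/javascript",
          "script": "console.log('Good morning');"
        }
      }
    ]
  }
]
```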

YAML files also support rule templates; both rules and rule templates follow the "new YAML format" developed in #3666. A separate RFC, #4797, discusses how to define the syntax for rules and rule templates. Rules use the rules element, while rule templates use the ruleTemplates element.
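As a purely illustrative sketch of how that could look: the rules/ruleTemplates element names come from the text above, but the nested layout and all names here are assumptions, since the final syntax is being settled in #4797:

```yaml
# Hypothetical layout only; the final syntax is discussed in RFC #4797.
rules:
  demo_rule:
    name: Demo rule
    triggers:
      - id: "1"
        type: core.ItemStateChangeTrigger
        configuration:
          itemName: DemoSwitch
    actions:
      - id: "2"
        type: script.ScriptAction
        configuration:
          type: application/javascript
          script: |
            console.log('DemoSwitch changed');

ruleTemplates:
  demo_template:
    label: Demo template
    # a rule template is essentially a rule with some
    # configuration placeholders not yet filled in
```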

This PR is very loosely coupled to #4591 in that they are both split out as PRs from the same "work branch". The FolderObserver file watcher DELETE fix has been included here, since it was discovered after #4591 was submitted.

It should also be mentioned that the documentation currently claims support for JSON based rules, but no evidence that this exists, or has existed in previous versions, has been found in the code:

The automation engine reads rule json files from the {openhab-dir}/automation/*.json directory.

The documentation should probably be updated to specify {openhab-dir}/automation/rules/*.json

Original PR description below:

Rule file provider

  • Create rule file provider for JSON and YAML files
  • Fix DELETE event bug in FolderObserver

Description

The primary task of this PR is to provide file-based YAML and JSON rules. It supports multiple rules in one file if they are supplied within an array, which means enclosed in [ and ] and separated by ,.

This PR is very loosely coupled to #4591 in that they are both split out as PRs from the same "work branch", and this PR uses the AbstractJacksonYAMLParser introduced in #4591. The FolderObserver file watcher DELETE fix has been included here, since it was discovered after #4591 was submitted.

There is one issue that might need to be discussed, and that is the location to look for the YAML and JSON files. There are two "obvious candidates", conf/automation/rules and conf/rules. The latter is the folder already in use for Rules DSL files, while the former isn't currently used (or automatically created when OH is installed). In either case, some minor changes to the documentation should be made as well, but it's hard to write that until the final location has been decided.

Currently, this PR uses conf/rules, but that's very easy to change. From a user's perspective, this might be the most logical choice, as they are still rules, although the format is different. It doesn't seem to cause any conflicts, but a debug log message is created both by the Rules DSL parser (that it ignores the .yaml or .json file) and by the AbstractFileProvider (that it doesn't have a parser for .rules files). Neither is shown by default. The involved classes could be modified to not log these cases, as they are expected, but both are "generic in nature" and introducing exceptions for concrete file extensions doesn't really "fit".

If conf/automation/rules is used instead, the "conflict" of two parsers monitoring the same folder would be avoided, but it might be more confusing for the end user to have to place rule files in two different locations. It should also be considered whether this folder should be created when the installation is set up, like many of the other folders.

It should also be mentioned that the documentation currently claims support for JSON based rules, but no evidence that this exists, or has existed in previous versions, has been found in the code:

The automation engine reads rule json files from the {openhab-dir}/automation/*.json directory.

If this actually is wrong, the documentation should be corrected.

@Nadahar Nadahar requested a review from a team as a code owner March 5, 2025 16:24
@Nadahar Nadahar force-pushed the rule-file-provider branch from ba2b127 to 7fde6c3 on March 5, 2025 16:28

lolodomo commented Mar 9, 2025

The primary task of this PR is to provide file-based YAML and JSON rules.

We are already in the process of adding YAML support for different kinds of things, with the ability to put everything in the same file.
I don't know exactly what you are doing with this PR, but we have to take care to keep things globally consistent, and also to use the same YAML library.
In case it makes no sense to consider the other discussions about YAML in the current context, you can forget my message.


Nadahar commented Mar 9, 2025

We are already in the process of adding YAML support for different kinds of things, with the ability to put everything in the same file.
I don't know exactly what you are doing with this PR, but we have to take care to keep things globally consistent, and also to use the same YAML library.

I'm not familiar with this effort and exactly what it will end up doing, but this PR adds a rule file provider for both JSON and YAML. The YAML parser is the same as the one already used for rule templates, and it is the same YAML library (Jackson) that I've found in other places where YAML is parsed in Core.

This isn't related to any UI or how MainUI deals with YAML. It simply deserializes an "object" in either YAML or JSON into an "OH Rule object".

@Nadahar Nadahar force-pushed the rule-file-provider branch from 7fde6c3 to cd52101 on March 12, 2025 19:36

Nadahar commented Mar 13, 2025

What I need is some feedback on which folder to use. If conf/rules is preferred, this PR is ready. If not, I need to make some modifications.

@lolodomo

You complain about getting no feedback on your PRs, but I see I already answered you. I am still thinking, and I am not alone, that we should have a global approach for YAML; the idea is the ability to put all sorts of things in our new YAML file format, including items and things, but also rules. If you propose to create another YAML file format, I am not sure it is the best idea.
Typically, we should simply add a "rules" element in our new YAML format.
That may be the reason you got no other feedback.

Regarding your last question about the folder, sorry, I don't have the answer; rule management is one of the OH parts I know least.


Nadahar commented Apr 28, 2025

You complain about getting no feedback on your PRs, but I see I already answered you. I am still thinking, and I am not alone, that we should have a global approach for YAML; the idea is the ability to put all sorts of things in our new YAML file format, including items and things, but also rules. If you propose to create another YAML file format, I am not sure it is the best idea.
Typically, we should simply add a "rules" element in our new YAML format.
That may be the reason you got no other feedback.

If that's the case, it should be made much more clear. I interpreted your comment more as a "please remember to think of this", not as a blocker.

To explain, this is a part of a larger work that I have isolated out as a separate PR to make things more manageable to review. So, I have other code that builds on the capabilities this provides.

Currently, there are only two sources of rules in standard OH: Rules DSL files and managed rules created in the UI. In addition, add-ons can of course inject rules any way they like, but I don't think this is done much except by the scripting add-ons.

This means that there is no way to provide unmanaged rules unless they are written in Rules DSL. The Rules DSL syntax doesn't support rule templates, so if you want to use rule templates, you must use managed rules only. This PR provides a way to create unmanaged rules of any type; it's not bound to Rules DSL (although the rules CAN use Rules DSL or any other scripting language) and it supports rule templates.

According to the current documentation, this already exists for JSON - but I've both tested it and searched everywhere in core for a trace of this, and it's simply not there. So, this in fact provides functionality that's already claimed to be there.

The reason for adding YAML in addition to JSON is that there's a "unique problem" with JSON when dealing with scripts. Almost all rules will contain at least one script, and since JSON doesn't allow multi-line strings, you have to provide the whole script as one long line with the line breaks encoded as \n. That's not really practical, which is why YAML is very helpful, because it does have multi-line string support. If it weren't for that, I'd rather see YAML burn in a certain place, because I just can't stand whitespace-based formats. I think they are highly "volatile" to work with, and not any easier to read or write. It takes very little time to get used to a properly formatted syntax, and people should invest that little time instead of going down the path of all the problems whitespace-based formats cause.
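To make the contrast concrete, here is the same (made-up) two-line script as it has to be written in JSON versus how YAML's literal block scalar allows it to be written:

```yaml
# In JSON the whole script must be a single line:
#   "script": "var item = items.getItem('Demo');\nconsole.log(item.state);"
# In YAML a literal block scalar (|) preserves the line breaks:
script: |
  var item = items.getItem('Demo');
  console.log(item.state);
```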

The YAML parser used for this is the same YAML parser already used for rule templates. Rules and rule templates are "almost the same thing", a rule template is a rule with some placeholders not yet filled in. So, it would be very logical, to me at least, that they support the same syntax and have the same possibilities.

I tried to find information about this "global approach for YAML", and I didn't really find it. I read through some PRs that I thought might be related, I searched the forum, but I still don't know what the idea really is. As such, it's difficult for me to know how these things would work together or conflict.

If the idea is that you can have one huge YAML file with all the different kinds of configuration in it, I'd say that I personally wouldn't find that attractive. It's comparable to writing a program as one huge class/unit of continuous code instead of separating it into a logical structure that keeps unrelated things apart. I can see that it can be handy in some cases, for example to define a Thing and the corresponding Item(s) in the same file. That's fine, but to go from there to forcing "everything" into the same file seems strange to me. I don't know if that's the idea, though.

But, in any case, this new "global parser" would have to implement a huge number of Providers. As such, it would have to implement RuleProvider to support rules. There's no conflict between having a standalone rule provider and one "combined provider" that also provides rules. Are you planning to remove all the existing standalone providers for the different things when this "combined provider" is in place? If so, the rule provider in this PR could be removed along with all the others. I still don't quite see the conflict.


lolodomo commented Apr 29, 2025

This means that there is no way to provide unmanaged rules unless they are written in Rules DSL. The Rules DSL syntax doesn't support rule templates, so if you want to use rule templates, you must use managed rules only. This PR provides a way to create unmanaged rules of any type; it's not bound to Rules DSL (although the rules CAN use Rules DSL or any other scripting language) and it supports rule templates.
...
The YAML parser used for this is the same YAML parser already used for rule templates. Rules and rule templates are "almost the same thing", a rule template is a rule with some placeholders not yet filled in. So, it would be very logical, to me at least, that they support the same syntax and have the same possibilities.

The DSL rule syntax should have been enhanced to support templates. As you explained it, there is almost no difference between a rule and a rule template. This is something I could have a look at (but not soon).

I tried to find information about this "global approach for YAML", and I didn't really find it. I read through some PRs that I thought might be related, I searched the forum, but I still don't know what the idea really is. As such, it's difficult for me to know how these things would work together or conflict.

There is mainly a very big issue/RFC, #3666; if I am not wrong, you even posted a message in it.

If the idea is that you can have one huge YAML file with all the different kinds of configuration in it, I'd say that I personally wouldn't find that attractive. It's comparable to writing a program as one huge class/unit of continuous code instead of separating it into a logical structure that keeps unrelated things apart. I can see that it can be handy in some cases, for example to define a Thing and the corresponding Item(s) in the same file. That's fine, but to go from there to forcing "everything" into the same file seems strange to me. I don't know if that's the idea, though.

Putting several kinds of things in the same YAML file will just be an option, of course. You could also have one file per item, per thing, or per rule, if that is your preference.
I am pinging @rkoshak as I believe he is one of the big sponsors of a YAML file format that could contain different kind of information and I believe his idea was also for rules.

But, in any case, this new "global parser" would have to implement a huge number of Providers. As such, it would have to implement RuleProvider to support rules. There's no conflict between having a standalone rule provider and one "combined provider" that also provides rules. Are you planning to remove all the existing standalone providers for the different things when this "combined provider" is in place? If so, the rule provider in this PR could be removed along with all the others. I still don't quite see the conflict.

No, we add something new but there is no plan to remove existing providers.

My point is that you want to define a new YAML file format dedicated to rules and rule templates, and I am just not sure it should be done outside our YAML model repository. The syntax should also be discussed to make it as user friendly as possible. I see in your PR that you made it aligned with the JSON object.
@rkoshak: what is your feeling with that?

My suggestion at this time would be to just consider a JSON provider plus a potential extension of DSL rules to support rule templates. But once again, that is only MY suggestion and not a decision against your proposal. Maybe there is something I don't understand well, as I am not familiar with rule templates.

I am finishing these days the support of items/metadata/channel links in the new YAML file format. The next step is probably the support of rules and rule templates. The syntax will certainly be discussed in #3666 .


Nadahar commented Apr 29, 2025

The DSL rule syntax should have been enhanced to support templates. As you explained it, there is almost no difference between a rule and a rule template. This is something I could have a look at (but not soon).

I don't think that would be very practical, but I guess I'd have to make some examples to show what I mean. However, this PR allows the use of templates both for Rules DSL and other scripts. The reason I don't think it's "practical" is that there's nowhere to store the "rule stub"/"rule configuration" in Rules DSL. This is stored in the configuration object of the rule itself, which doesn't "exist" in a Rules DSL context. Remember that a Rules DSL file is parsed into a Rule, and during that parsing, the content is split into trigger(s) and an action. The Rules DSL format is quite limited in that it doesn't support conditions and doesn't support multiple actions, for example. Neither does it allow storage of, for example, the configuration object. But the resulting object that is created by parsing a Rules DSL script does. With this PR, you can supply such an "already parsed" Rules DSL rule, where you can have as many triggers, conditions and actions as you want, in addition to said configuration.
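A sketch of what such an "already parsed" rule could look like in a file, with multiple triggers, a condition, multiple actions and a configuration object. All names and values here are invented for illustration; the module type ids are the standard openHAB ones:

```yaml
uid: demo_multi_module
name: Rule with two triggers, a condition and two actions
configuration:        # stored on the Rule itself; has no counterpart in a .rules file
  threshold: 21
triggers:
  - id: "1"
    type: core.ItemStateChangeTrigger
    configuration:
      itemName: LivingRoomTemperature
  - id: "2"
    type: timer.GenericCronTrigger
    configuration:
      cronExpression: "0 0/15 * * * ?"
conditions:
  - id: "3"
    type: core.ItemStateCondition
    configuration:
      itemName: HeatingEnabled
      operator: "="
      state: "ON"
actions:
  - id: "4"
    type: script.ScriptAction
    configuration:
      type: application/javascript
      script: |
        console.log('first action');
  - id: "5"
    type: script.ScriptAction
    configuration:
      type: application/javascript
      script: |
        console.log('second action');
```

A .rules file could not express the condition, the second action or the configuration block, while the parsed Rule object can hold all of them.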

There is mainly a very big issue/RFC, #3666; if I am not wrong, you even posted a message in it.

OK, then I failed to grasp the full intention of #3666. I tried to read it, but it is extremely long, so I couldn't keep "full focus" all the way.

Putting several kind of things in the same YAML file will just be an option, of course. You could also have one file per item, per thing, per rule, if that is your preference.
I am pinging @rkoshak as I believe he is one of the big sponsors of a YAML file format that could contain different kind of information and I believe his idea was also for rules.

Yes, but fundamentally, to make it possible to put several things in the same YAML file, you must define some syntax that allows them to be parsed as different objects. This PR does no such thing; it only tries to deserialize a Rule from either YAML or JSON. As such, I still don't think there is much conflict, although you could maybe argue that this provider might become redundant when #3666 is in place, depending on how it is actually implemented. But I still think this provider has its purpose as a complement to the rule template parser that is already there, not to mention that the documentation states that this functionality (minus YAML) is already in OH.

My point is that you want to define a new YAML file format dedicated to rules and rule templates,

Again, I don't see it as "defining a format", as it's just a "raw mapping" of the actual structure of a Rule. There is no syntax with any meaning specific to this provider; there is nothing for anybody to "learn". You can copy what you find in the "Code" tab in the UI and paste it into a file that will then be picked up by this provider.

My suggestion would be at this time to just consider a JSON provider + a potential extension of the DSL rule to support rule template.

The problem with that, as I see it, is how terribly inconvenient it is to work with scripts in JSON, having to edit/maintain an entire script on one line, with line breaks encoded as \n. Humans aren't really suited for that, so it would probably mean having to write the script in YAML or some other format and then converting it to JSON manually, outside of OH. It makes much more sense for OH to do that conversion for you.


rkoshak commented Apr 29, 2025

OK, then I failed to grasp the full intention of #3666. I tried to read it, but it is extremely long, so I couldn't keep "full focus" all the way.

#3666 is creating a standardized way to define everything that is currently definable via DSL (i.e. .items, .things, .persist, and .rules) using YAML. It's coupled with other PRs which added the ability to have OH translate between the formats, and UI updates which allow one to see the DSL version of a Thing (for example) and not just a YAML representation of the JSON.

Anything done on this PR needs to be reconciled with what is going on in #3666, because we don't want to inadvertently introduce multiple YAML approaches.

You can copy what you find in the "Code" tab in the UI and paste it in a file that will then be picked up by this provider.

With #3666 and related PRs the YAML you find in the code tab will not necessarily be a straight mapping of the JSON any more. It might not even be YAML any more.

The problem with that, as I see it, is how terribly inconvenient it is to work with scripts in JSON, having to edit/maintain an entire script on one line, with line breaks encoded as \n. Humans aren't really suited for that, so it would probably mean having to write the script in YAML or some other format and then converting it to JSON manually, outside of OH. It makes much more sense for OH to do that conversion for you.

I think @lolodomo's intent is to suggest that the YAML part of this waits until more progress is made on #3666 and then support for YAML templates would be added as part of #3666 when they get to that point.

I am pinging @rkoshak as I believe he is one of the big sponsors of a YAML file format that could contain different kind of information and I believe his idea was also for rules.

I definitely want rules to be included as part of #3666, and I would very much like the ability to define, load, and use rule templates as well.

My point is that you want to define a new YAML file format dedicated to rules and rule templates, and I am just not sure it should be done outside our YAML model repository. The syntax should also be discussed to make it as user friendly as possible. I see in your PR that you made it aligned with the JSON object.
@rkoshak: what is your feeling with that?

I agree. I think we have an opportunity to take advantage of both PRs and come to a unified solution.

I think the main problem here is that the two efforts have been going on in parallel without coordination. I don't think we necessarily need to wait for one to finish before working on the other, but I think more coordination is needed to come out with a unified approach.


Nadahar commented Apr 29, 2025

Anything done on this PR needs to be reconciled with what is going on on #3666 because we don't want to inadvertantly introduce multiple YAML approaches.

I get that, I'm just not sure I understand exactly how to "reconcile" it. #3666 is about a new format based on YAML, if I understand it correctly, while this PR merely converts YAML to JSON, just like what is already done with rule templates. I think there should be parity between rules and rule templates as well.

Are there any examples of the formats that have already been implemented by #3666, so that I can get a better understanding of the idea?

The current implementation for rule templates doesn't really "bind" much, since rule templates as a whole are undocumented. As such, very few know that this even exists or how to use it. #3666 could probably simply replace the YAML parsing for rule templates without actually "breaking" anything out there. The question is how far into the future this is.

It could be argued that the same could be done with this. Merge it now with the "raw mapping" and then remove the YAML parsing when the result of #3666 is ready. As long as it's not documented, it won't see much use anyway, but it will at least be "usable" without having to do YAML -> JSON using an external converter for any change to the rules.


Nadahar commented Apr 29, 2025

These are some of the test files I've used; it's the same two rules in JSON and YAML. See how utterly useless JSON is for scripts..

JSON:

[
  {
    "actions": [
      {
        "configuration": {
          "script": "// Version 1.0\nvar {TimerMgr, helpers} = require('openhab_rules_tools');\nconsole.loggerName = 'org.openhab.automation.rules_tools.TimeStateMachine';\n//osgi.getService('org.apache.karaf.log.core.LogService').setLevel(console.loggerName, 'DEBUG');\n\nhelpers.validateLibraries('4.2.0', '2.0.3');\n\nconsole.debug('Starting state machine in ten seconds...');\n\n// Properties\nvar STATE_ITEM  = \"DemoDateTime\";\nvar DT_GROUP    = \"DemoSwitchGroup\";\nvar DAY_TYPES   = ['custom', 'holiday', 'dayset', 'weekend', 'weekday', 'default'];\nvar NAMESPACE   = 'tsm';\nvar USAGE =   'Time Based State Machine Usage:\\n'\n            + 'All date times must be a member of ' + DT_GROUP + '.\\n'\n            + 'Each member of the Group must have ' + NAMESPACE + ' Item metadata of the following format:\\n'\n            + '  .items file: ' + NAMESPACE +'=\"STATE\"[type=\"daytype\", set=\"dayset\", file=\"uri\"]\\n'\n            + \"  UI YAML: use '\" + NAMESPACE + \"' for the namespace and metadata format:\\n\"\n            + '    value: STATE\\n'\n            + '    config:\\n'\n            + '      type: daytype\\n'\n            + '      set: dayset\\n'\n            + '      file: uri\\n'\n            + 'Where \"STATE\" is the state machine state that begins at the time stored in that Item, '\n            + '\"daytype\" is one of \"default\", \"weekday\", \"weekend\", \"dayset\", \"holiday\", or \"custom\". '\n            + 'If \"dayset\" is chosen for the type, the \"set\" property is required indicating the name of the '\n            + 'custom dayset configured in Ephemeris. If \"custom\" is chosen as the type, the \"file\" property '\n            + 'is required and should be the fully qualified path the the Ephemeris XML file with the custom '\n            + 'holidays defined. 
The \"set\" and \"file\" properties are invalid when choosing any of the other '\n            + '\"types\".';\n\n/**\n * Validates the passed in Item has valid NAMESPACE metadata.\n *\n * @param {string} itemName name of the Item to check\n * @throws exception if the metadata doesn't exist or is invalid\n */\nvar validateItemConfig = (itemName) => {\n  const md = items[itemName].getMetadata()[NAMESPACE];\n\n  if(md.value === undefined || md.value === null || md.value === '') {\n    throw itemName + ' has malformed ' + NAMESPACE + ' metadata, no value found!';\n  }\n\n  const dayType = md.configuration['type'];\n  if(!dayType) {\n    throw itemName + ' has malformed ' + NAMESPACE + ' metadata, required \"type\" property is not found!';\n  }\n\n  if(dayType == 'dayset' && !md.configuration['set']) {\n    throw itemName + ' has malformed ' + NAMESPACE + ' metadata, type is \"dayset\" but required \"set\" property is not found!';\n  }\n\n  if(dayType == 'custom' && !md.configuration['file']) {\n    throw itemName + ' has malformed ' + NAMESPACE + ' metadata, type is \"custom\" but required \"file\" property is not found!';\n  }\n\n  if(!items[itemName].type.startsWith('DateTime')) {\n    throw itemName + ' is not a DateTime Item!';\n  }\n\n  if(items[itemName].isUninitialized) {\n    throw itemName + \" is not initialized!: \" + items[itemName].state;\n  }\n\n  console.debug(itemName+ ' is valid');\n};\n\n/**\n * Return all members of the DT_GROUP that has a \"type\" metadata configuration property that\n * matches the passed in type.\n *\n * @param {string} type the day type defined in the metadata we want to get the Items for\n * @returns {Array} all the Items with the matching type in the metadata\n */\nvar getItemsOfType = (type) => {\n  const allItems = items[DT_GROUP].members;\n  return allItems.filter( item => item.getMetadata()[NAMESPACE].configuration['type'] == type);\n};\n\n/**\n * Returns true if all the Items of the given type have a unique \"state\" 
value\n * in the metadata.\n *\n * @param {string} the day type\n * @returns {boolean} true if all states are unique, false otherwise\n */\nvar checkUniqueStates = (type) => {\n  const allItems = getItemsOfType(type);\n  const states = new Set(allItems.map(i => { return i.getMetadata()[NAMESPACE].value; }));\n  return !allItems.length || allItems.length == states.size;\n};\n\n/**\n * Check that all Items are configured correctly.\n */\nvar validateAllConfigs = () => {\n  console.debug('Validating Item types, Item metadata, and Group membership');\n\n  // Check that all members of the Group have metadata\n  const itemsWithMD = items[DT_GROUP].members.filter(item => item.getMetadata(NAMESPACE)).length;\n  if(itemsWithMD != items[DT_GROUP].members.length) {\n    const noMdItems = items[DT_GROUP].members.filter(item => !item.getMetadata(NAMESPACE));\n    console.warn('The following Items do not have required ' + NAMESPACE + ' metadata: ' + noMdItems.map(item => item.name).join(', '));\n    return false; // no sense on performing any additional tests\n  }\n\n  // Check each Item's metadata\n  let isGood = helpers.checkGrpAndMetadata(NAMESPACE, DT_GROUP, validateItemConfig, USAGE);\n\n  // Check the state item\n  if(!items[STATE_ITEM]){\n    console.warn('The state Item ' + STATE_ITEM + ' does not exist!');\n    isGood = false;\n  }\n\n  if(!items[STATE_ITEM].type.startsWith('String')) {\n    console.warn('The state Item ' + STATE_ITEM + ' is not a String Item!');\n    isGood = false;\n  }\n\n  // Check to see if we have a default set of Items\n  if(!getItemsOfType('default')) {\n    console.warn('There are no \"default\" day type Items defined! 
Make sure you have all day types covered!');\n    // we'll not invalidate if there are no \"default\" items\n  }\n\n  // Check that each data set has a unique state for each Item\n  DAY_TYPES.forEach(type => {\n    if(!checkUniqueStates(type)) {\n      console.warn('Not all the metadata values for Items of type ' + type + ' are unique!');\n      isGood = false;\n    }\n  })\n\n  // Report if all configs are good or not\n  if(isGood) {\n    console.debug('All ' + NAMESPACE + ' Items are configured correctly');\n  }\n  return isGood;\n};\n\n/**\n * Pull the set of Items for today based on Ephemeris. The Ephemeris hierarchy is\n * - custom\n * - holiday\n * - dayset\n * - weeekend\n * - weekday\n * - default\n *\n * If there are no DateTime Items defined for today's type, null is returned.\n */\nvar getTodayItems = () => {\n  // Get all the DateTime Items that might apply to today given what type of day it is\n  // For example, if it's a weekend, there will be no weekday Items pulled. Whether or not\n  // the entry in this dict has an array of Items determines whether today is of that day\n  // type.\n  const startTimes = [\n    { 'type': 'custom',  'times' : getItemsOfType('custom').filter(item => actions.Ephemeris.isBankHoliday(0, item.getMetadata()[NAMESPACE].configuration['file'])) },\n    { 'type': 'holiday', 'times' : (actions.Ephemeris.isBankHoliday()) ? getItemsOfType('holiday') : [] },\n    { 'type': 'dayset',  'times' : getItemsOfType('dayset').filter(item => actions.Ephemeris.isInDayset(items.getMetadata()[NAMESPACE].configuration['set'])) },\n    { 'type': 'weekend', 'times' : (actions.Ephemeris.isWeekend()) ? getItemsOfType('weekend') : [] },\n    { 'type': 'weekday', 'times' : (!actions.Ephemeris.isWeekend()) ? 
getItemsOfType('weekday') : [] },\n    { 'type': 'default', 'times' : getItemsOfType('default') }\n  ];\n\n  // Go through startTimes in order and choose the first one that has a non-empty list of Items\n  const dayStartTimes = startTimes.find(dayset => dayset.times.length);\n\n  if(dayStartTimes === null) {\n    console.warn('No DateTime Items found for today');\n    return null;\n  }\n  else {\n    console.info('Today is a ' + dayStartTimes.type + ' day.');\n    return dayStartTimes.times;\n  }\n};\n\n/**\n * Returns a function called to transition the state machine from one state to the next\n *\n * @param {string} state the new state to transition to\n * @param {function} the function that transitions the state\n */\nvar stateTransitionGenerator = (state) => {\n  return function() {\n    console.info('Transitioning Time State Machine from ' + items[STATE_ITEM].state + ' to ' + state);\n    items[STATE_ITEM].sendCommand(state);\n  }\n}\n\n/**\n * Returns a function that generates the timers for all the passed in startTimes\n *\n * @param {Array} startTimes list of today's state start times\n * @param {timerMgr.TimerMgr} timers collection of timers\n * @returns {function} called to generate the timers to transition between the states\n */\nvar createTimersGenerator = (timers) => {\n  return function() {\n\n    if(validateAllConfigs()) {\n\n      // Cancel the timers, skipping the debounce timer\n      console.debug('Cancelling existing timers');\n      timers.cancelAll();\n\n      // Get the set of Items for today's state machine\n      console.debug(\"Acquiring today's state start times\");\n      const startTimes = getTodayItems();\n\n      // Get the state and start time, sort them ignoring the date, skip the ones that have\n      // already passed and create a timer to transition for the rest.\n      console.debug('Creating timers for times that have not already passed');\n      var mapped = startTimes.map(i => { return { 'state': 
i.getMetadata()[NAMESPACE].value,\n                                                  'time' : time.toZDT(i.state).toToday() } });\n      mapped.sort((a,b) => {\n               if(a.time.isBefore(b.time)) return -1;\n                else if(a.time.isAfter(b.time)) return 1;\n                else return 0;\n              })\n              .filter(tod => tod.time.isAfter(time.toZDT()))\n              .forEach(tod => {\n                // TODO: see if we can move to rules instead of timers\n                console.debug('Creating timer for ' + tod.state + ' at ' + tod.time);\n                timers.check(tod.state, tod.time.toString(), stateTransitionGenerator(tod.state));\n              });\n\n      // Figure out the current time of day and move to that state if necessary\n      var beforeTimes = mapped.sort((a,b) => {\n                                if(a.time.isAfter(b.time)) return -1;\n                                else if(a.time.isBefore(b.time)) return 1;\n                                else return 0;\n                              })\n                              .filter(tod => tod.time.isBefore(time.toZDT()));\n      if(!beforeTimes.length) {\n        console.debug(\"There is no date time for today before now, we can't know what the current state is, keeping the current time of day state of \" + items[STATE_ITEM].state + \".\");\n      }\n      else {\n        const currState = beforeTimes[0].state\n        const stateItem = items[STATE_ITEM];\n        console.info('The current state is ' + currState);\n        if(stateItem.state != currState) stateItem.sendCommand(currState)\n      }\n    }\n    else {\n      console.warn('The config is not valid, cannot proceed!');\n    }\n\n  };\n};\n\nvar timers = cache.private.get('timers', () => TimerMgr());\n\n// Wait a minute after the last time the rule is triggered to make sure all Items are done changing (e.g.\n// Astro Items) before calculating the new state.\ntimers.check('debounce',\n             'PT10S',\n   
          createTimersGenerator(timers),\n             true,\n             () => { console.debug('Flapping detected, waiting before creating timers for today'); });\n",
          "type": "application/javascript"
        },
        "id": "3",
        "inputs": {
        },
        "type": "script.ScriptAction"
      }
    ],
    "conditions": [
    ],
    "configDescriptions": [
      {
        "context": "item",
        "description": "String Item that holds the current time of day's state.",
        "filterCriteria": [
          {
            "name": "type",
            "value": "String"
          }
        ],
        "label": "Time of Day State Item",
        "name": "timeOfDay",
        "required": true,
        "type": "TEXT"
      },
      {
        "context": "item",
        "description": "Has as members all the DateTime Items that define time of day states.",
        "filterCriteria": [
          {
            "name": "type",
            "value": "Group"
          }
        ],
        "label": "Times of Day Group",
        "name": "timesOfDayGrp",
        "required": true,
        "type": "TEXT"
      },
      {
        "description": "The Item metadata namespace (e.g. \"tsm\").",
        "label": "Time of Day Namespace",
        "name": "namespace",
        "required": true,
        "type": "TEXT"
      }
    ],
    "description": "Creates timers to transition a state Item to a new state at defined times of day.",
    "name": "Time Based State Machine rule",
    "triggers": [
      {
        "configuration": {
          "groupName": "DemoSwitchGroup"
        },
        "id": "1",
        "type": "core.GroupStateChangeTrigger"
      },
      {
        "configuration": {
          "startlevel": 100
        },
        "id": "2",
        "type": "core.SystemStartlevelTrigger"
      },
      {
        "configuration": {
          "time": "00:05"
        },
        "id": "4",
        "type": "timer.TimeOfDayTrigger"
      }
    ],
    "uid": "rules_tools:tsm4"
  },
  {
    "actions": [
      {
        "configuration": {
          "script": "var from = parseFloat(oldState.toString().split(' ')[0]);\nvar to = parseFloat(newState.toString().split(' ')[0]);\n\nprint(from + '>' + to);\n\nif (to < 2 && from >= 2) {\n  events.sendCommand('DemoSwitch', 'LSELECT');\n}\n",
          "type": "application/javascript"
        },
        "id": "2",
        "inputs": {
        },
        "type": "script.ScriptAction"
      }
    ],
    "conditions": [
    ],
    "configDescriptions": [
      {
        "context": "item",
        "description": "Item that holds the power (in watts) of the washing machine. Can be a quantity type (Number:Power).",
        "label": "Power Item",
        "name": "powerItem",
        "required": true,
        "type": "TEXT"
      },
      {
        "defaultValue": 2,
        "description": "When the power measurement was at or above the threshold and crosses below it, trigger the alert.",
        "label": "Threshold",
        "name": "threshold",
        "required": true,
        "type": "DECIMAL"
      },
      {
        "context": "item",
        "description": "Item to send a command to when the measured power gets below the threshold. For instance, a Hue light advanced Alert channel.",
        "label": "Alert Item",
        "name": "alertItem",
        "required": true,
        "type": "TEXT"
      },
      {
        "defaultValue": "LSELECT",
        "description": "Command to send to the alert item (for an item linked to a Hue light alert channel, LSELECT will flash the light for a few seconds).",
        "label": "Alert Command",
        "name": "alertCommand",
        "required": true,
        "type": "TEXT"
      }
    ],
    "description": "This will monitor the power consumption of a washing machine and send an alert command when it gets below a threshold, meaning it has finished.",
    "name": "Alert when Washing Machine Finished rule",
    "triggers": [
      {
        "configuration": {
          "itemName": "CurrentPower",
          "state": ""
        },
        "id": "1",
        "type": "core.ItemStateChangeTrigger"
      }
    ],
    "uid": "ysc:washing_machine_alert2.4"
  }
]

YAML:

- actions:
    - configuration:
        script: >
          // Version 1.0

          var {TimerMgr, helpers} = require('openhab_rules_tools');

          console.loggerName =
          'org.openhab.automation.rules_tools.TimeStateMachine';

          //osgi.getService('org.apache.karaf.log.core.LogService').setLevel(console.loggerName,
          'DEBUG');


          helpers.validateLibraries('4.2.0', '2.0.3');


          console.debug('Starting state machine in ten seconds...');


          // Properties

          var STATE_ITEM  = "DemoDateTime";

          var DT_GROUP    = "DemoSwitchGroup";

          var DAY_TYPES   = ['custom', 'holiday', 'dayset', 'weekend',
          'weekday', 'default'];

          var NAMESPACE   = 'tsm';

          var USAGE =   'Time Based State Machine Usage:\n'
                      + 'All date times must be a member of ' + DT_GROUP + '.\n'
                      + 'Each member of the Group must have ' + NAMESPACE + ' Item metadata of the following format:\n'
                      + '  .items file: ' + NAMESPACE +'="STATE"[type="daytype", set="dayset", file="uri"]\n'
                      + "  UI YAML: use '" + NAMESPACE + "' for the namespace and metadata format:\n"
                      + '    value: STATE\n'
                      + '    config:\n'
                      + '      type: daytype\n'
                      + '      set: dayset\n'
                      + '      file: uri\n'
                      + 'Where "STATE" is the state machine state that begins at the time stored in that Item, '
                      + '"daytype" is one of "default", "weekday", "weekend", "dayset", "holiday", or "custom". '
                      + 'If "dayset" is chosen for the type, the "set" property is required indicating the name of the '
                      + 'custom dayset configured in Ephemeris. If "custom" is chosen as the type, the "file" property '
                      + 'is required and should be the fully qualified path to the Ephemeris XML file with the custom '
                      + 'holidays defined. The "set" and "file" properties are invalid when choosing any of the other '
                      + '"types".';

          /**
           * Validates the passed in Item has valid NAMESPACE metadata.
           *
           * @param {string} itemName name of the Item to check
           * @throws exception if the metadata doesn't exist or is invalid
           */
          var validateItemConfig = (itemName) => {
            const md = items[itemName].getMetadata()[NAMESPACE];

            if(md.value === undefined || md.value === null || md.value === '') {
              throw itemName + ' has malformed ' + NAMESPACE + ' metadata, no value found!';
            }

            const dayType = md.configuration['type'];
            if(!dayType) {
              throw itemName + ' has malformed ' + NAMESPACE + ' metadata, required "type" property is not found!';
            }

            if(dayType == 'dayset' && !md.configuration['set']) {
              throw itemName + ' has malformed ' + NAMESPACE + ' metadata, type is "dayset" but required "set" property is not found!';
            }

            if(dayType == 'custom' && !md.configuration['file']) {
              throw itemName + ' has malformed ' + NAMESPACE + ' metadata, type is "custom" but required "file" property is not found!';
            }

            if(!items[itemName].type.startsWith('DateTime')) {
              throw itemName + ' is not a DateTime Item!';
            }

            if(items[itemName].isUninitialized) {
              throw itemName + " is not initialized!: " + items[itemName].state;
            }

            console.debug(itemName+ ' is valid');
          };


          /**
           * Return all members of the DT_GROUP that has a "type" metadata configuration property that
           * matches the passed in type.
           *
           * @param {string} type the day type defined in the metadata we want to get the Items for
           * @returns {Array} all the Items with the matching type in the metadata
           */
          var getItemsOfType = (type) => {
            const allItems = items[DT_GROUP].members;
            return allItems.filter( item => item.getMetadata()[NAMESPACE].configuration['type'] == type);
          };


          /**
           * Returns true if all the Items of the given type have a unique "state" value
           * in the metadata.
           *
           * @param {string} type the day type
           * @returns {boolean} true if all states are unique, false otherwise
           */
          var checkUniqueStates = (type) => {
            const allItems = getItemsOfType(type);
            const states = new Set(allItems.map(i => { return i.getMetadata()[NAMESPACE].value; }));
            return !allItems.length || allItems.length == states.size;
          };


          /**
           * Check that all Items are configured correctly.
           */
          var validateAllConfigs = () => {
            console.debug('Validating Item types, Item metadata, and Group membership');

            // Check that all members of the Group have metadata
            const itemsWithMD = items[DT_GROUP].members.filter(item => item.getMetadata(NAMESPACE)).length;
            if(itemsWithMD != items[DT_GROUP].members.length) {
              const noMdItems = items[DT_GROUP].members.filter(item => !item.getMetadata(NAMESPACE));
              console.warn('The following Items do not have required ' + NAMESPACE + ' metadata: ' + noMdItems.map(item => item.name).join(', '));
              return false; // no sense on performing any additional tests
            }

            // Check each Item's metadata
            let isGood = helpers.checkGrpAndMetadata(NAMESPACE, DT_GROUP, validateItemConfig, USAGE);

            // Check the state item
            if(!items[STATE_ITEM]){
              console.warn('The state Item ' + STATE_ITEM + ' does not exist!');
              isGood = false;
            }

            else if(!items[STATE_ITEM].type.startsWith('String')) { // 'else' avoids a TypeError when the Item doesn't exist
              console.warn('The state Item ' + STATE_ITEM + ' is not a String Item!');
              isGood = false;
            }

            // Check to see if we have a default set of Items
            if(!getItemsOfType('default').length) { // an empty array is truthy, so test its length
              console.warn('There are no "default" day type Items defined! Make sure you have all day types covered!');
              // we'll not invalidate if there are no "default" items
            }

            // Check that each data set has a unique state for each Item
            DAY_TYPES.forEach(type => {
              if(!checkUniqueStates(type)) {
                console.warn('Not all the metadata values for Items of type ' + type + ' are unique!');
                isGood = false;
              }
            })

            // Report if all configs are good or not
            if(isGood) {
              console.debug('All ' + NAMESPACE + ' Items are configured correctly');
            }
            return isGood;
          };


          /**
           * Pull the set of Items for today based on Ephemeris. The Ephemeris hierarchy is
           * - custom
           * - holiday
           * - dayset
           * - weekend
           * - weekday
           * - default
           *
           * If there are no DateTime Items defined for today's type, null is returned.
           */
          var getTodayItems = () => {
            // Get all the DateTime Items that might apply to today given what type of day it is
            // For example, if it's a weekend, there will be no weekday Items pulled. Whether or not
            // the entry in this dict has an array of Items determines whether today is of that day
            // type.
            const startTimes = [
              { 'type': 'custom',  'times' : getItemsOfType('custom').filter(item => actions.Ephemeris.isBankHoliday(0, item.getMetadata()[NAMESPACE].configuration['file'])) },
              { 'type': 'holiday', 'times' : (actions.Ephemeris.isBankHoliday()) ? getItemsOfType('holiday') : [] },
              { 'type': 'dayset',  'times' : getItemsOfType('dayset').filter(item => actions.Ephemeris.isInDayset(item.getMetadata()[NAMESPACE].configuration['set'])) },
              { 'type': 'weekend', 'times' : (actions.Ephemeris.isWeekend()) ? getItemsOfType('weekend') : [] },
              { 'type': 'weekday', 'times' : (!actions.Ephemeris.isWeekend()) ? getItemsOfType('weekday') : [] },
              { 'type': 'default', 'times' : getItemsOfType('default') }
            ];

            // Go through startTimes in order and choose the first one that has a non-empty list of Items
            const dayStartTimes = startTimes.find(dayset => dayset.times.length);

            if(dayStartTimes === undefined) { // Array.find() returns undefined, not null, when nothing matches
              console.warn('No DateTime Items found for today');
              return null;
            }
            else {
              console.info('Today is a ' + dayStartTimes.type + ' day.');
              return dayStartTimes.times;
            }
          };


          /**
           * Returns a function called to transition the state machine from one state to the next
           *
           * @param {string} state the new state to transition to
           * @returns {function} the function that transitions the state
           */
          var stateTransitionGenerator = (state) => {
            return function() {
              console.info('Transitioning Time State Machine from ' + items[STATE_ITEM].state + ' to ' + state);
              items[STATE_ITEM].sendCommand(state);
            }
          }


          /**
            * Returns a function that generates the timers for all of today's state start times
            *
           * @param {timerMgr.TimerMgr} timers collection of timers
           * @returns {function} called to generate the timers to transition between the states
           */
          var createTimersGenerator = (timers) => {
            return function() {

              if(validateAllConfigs()) {

                // Cancel the timers, skipping the debounce timer
                console.debug('Cancelling existing timers');
                timers.cancelAll();

                // Get the set of Items for today's state machine
                console.debug("Acquiring today's state start times");
                const startTimes = getTodayItems();
                if(startTimes === null) return; // getTodayItems() returns null when today has no DateTime Items

                // Get the state and start time, sort them ignoring the date, skip the ones that have
                // already passed and create a timer to transition for the rest.
                console.debug('Creating timers for times that have not already passed');
                var mapped = startTimes.map(i => { return { 'state': i.getMetadata()[NAMESPACE].value,
                                                            'time' : time.toZDT(i.state).toToday() } });
                mapped.sort((a,b) => {
                         if(a.time.isBefore(b.time)) return -1;
                          else if(a.time.isAfter(b.time)) return 1;
                          else return 0;
                        })
                        .filter(tod => tod.time.isAfter(time.toZDT()))
                        .forEach(tod => {
                          // TODO: see if we can move to rules instead of timers
                          console.debug('Creating timer for ' + tod.state + ' at ' + tod.time);
                          timers.check(tod.state, tod.time.toString(), stateTransitionGenerator(tod.state));
                        });

                // Figure out the current time of day and move to that state if necessary
                var beforeTimes = mapped.sort((a,b) => {
                                          if(a.time.isAfter(b.time)) return -1;
                                          else if(a.time.isBefore(b.time)) return 1;
                                          else return 0;
                                        })
                                        .filter(tod => tod.time.isBefore(time.toZDT()));
                if(!beforeTimes.length) {
                  console.debug("There is no date time for today before now, we can't know what the current state is, keeping the current time of day state of " + items[STATE_ITEM].state + ".");
                }
                else {
                  const currState = beforeTimes[0].state
                  const stateItem = items[STATE_ITEM];
                  console.info('The current state is ' + currState);
                  if(stateItem.state != currState) stateItem.sendCommand(currState)
                }
              }
              else {
                console.warn('The config is not valid, cannot proceed!');
              }

            };
          };


          var timers = cache.private.get('timers', () => TimerMgr());


          // Wait ten seconds after the last time the rule is triggered to make
          sure all Items are done changing (e.g.

          // Astro Items) before calculating the new state.

          timers.check('debounce',
                       'PT10S',
                       createTimersGenerator(timers),
                       true,
                       () => { console.debug('Flapping detected, waiting before creating timers for today'); });
        type: application/javascript
      id: '3'
      inputs: {}
      type: script.ScriptAction
  conditions: []
  configDescriptions:
    - context: item
      description: String Item that holds the current time of day's state.
      filterCriteria:
        - name: type
          value: String
      label: Time of Day State Item
      name: timeOfDay
      required: true
      type: TEXT
    - context: item
      description: Has as members all the DateTime Items that define time of day states.
      filterCriteria:
        - name: type
          value: Group
      label: Times of Day Group
      name: timesOfDayGrp
      required: true
      type: TEXT
    - description: The Item metadata namespace (e.g. "tsm").
      label: Time of Day Namespace
      name: namespace
      required: true
      type: TEXT
  description: >-
    Creates timers to transition a state Item to a new state at defined times of
    day.
  name: Time Based State Machine rule
  triggers:
    - configuration:
        groupName: DemoSwitchGroup
      id: '1'
      type: core.GroupStateChangeTrigger
    - configuration:
        startlevel: 100
      id: '2'
      type: core.SystemStartlevelTrigger
    - configuration:
        time: '00:05'
      id: '4'
      type: timer.TimeOfDayTrigger
  uid: 'rules_tools:tsm5'
- actions:
    - configuration:
        script: |
          var from = parseFloat(oldState.toString().split(' ')[0]);
          var to = parseFloat(newState.toString().split(' ')[0]);

          print(from + '>' + to);

          if (to < 2 && from >= 2) {
            events.sendCommand('DemoSwitch', 'LSELECT');
          }
        type: application/javascript
      id: '2'
      inputs: {}
      type: script.ScriptAction
  conditions: []
  configDescriptions:
    - context: item
      description: >-
        Item that holds the power (in watts) of the washing machine. Can be a
        quantity type (Number:Power).
      label: Power Item
      name: powerItem
      required: true
      type: TEXT
    - defaultValue: 2
      description: >-
        When the power measurement was at or above the threshold and crosses
        below it, trigger the alert.
      label: Threshold
      name: threshold
      required: true
      type: DECIMAL
    - context: item
      description: >-
        Item to send a command to when the measured power gets below the
        threshold. For instance, a Hue light advanced Alert channel.
      label: Alert Item
      name: alertItem
      required: true
      type: TEXT
    - defaultValue: LSELECT
      description: >-
        Command to send to the alert item (for an item linked to a Hue light
        alert channel, LSELECT will flash the light for a few seconds).
      label: Alert Command
      name: alertCommand
      required: true
      type: TEXT
  description: >-
    This will monitor the power consumption of a washing machine and send an
    alert command when it gets below a threshold, meaning it has finished.
  name: Alert when Washing Machine Finished rule
  triggers:
    - configuration:
        itemName: CurrentPower
        state: ''
      id: '1'
      type: core.ItemStateChangeTrigger
  uid: 'ysc:washing_machine_alert2.5'

@rkoshak

rkoshak commented Apr 29, 2025

Are there any examples of the formats that have already been implemented by #3666, so that I can get a better understanding of the idea?

I think Items and Things are done but rules haven't been started yet AFAIK.

The question is how far into the future this is.

The goal is that #3666 will be part of OH 5.0. That seems achievable.

@Nadahar Nadahar force-pushed the rule-file-provider branch 2 times, most recently from 5e607ab to 2c52e91 Compare May 3, 2025 12:49
@lolodomo
Contributor

lolodomo commented May 3, 2025

Are there any examples of the formats that have already been implemented by #3666, so that I can get a better understanding of the idea?

Yes, you can find it in #4691 and #4776.

The question is how far into the future this is.

The goal is that #3666 will be part of OH 5.0. That seems achievable.

Yes, but you will not have everything supported in the YAML format in OH 5.0. The remaining time is too short.

It could be argued that the same could be done with this. Merge it now with the "raw mapping" and then remove the YAML parsing when the result of #3666 is ready. As long as it's not documented, it won't see much use anyway, but it will at least be "usable" without having to do YAML -> JSON using an external converter for any change to the rules.

That is a possible option.

@Nadahar Nadahar force-pushed the rule-file-provider branch from 2c52e91 to fd4cc51 Compare May 3, 2025 13:25
@Nadahar
Contributor Author

Nadahar commented May 3, 2025

Yes, you can find it in #4691 and #4776.

Thanks.

I'm trying to debug YamlModelRepositoryImpl to see how the files are parsed (I find that easier than browsing through related and unrelated code to get my bearings), but I can't get it to start (breakpoints don't trigger, and there's nothing logged by it in the log). Is it disabled somehow in current main, and if so, how do I "enable" it?

@lolodomo
Contributor

lolodomo commented May 3, 2025

In Eclipse, you need to add org.openhab.core.model.yaml
Probably something to change so it's included by default.

@Nadahar
Contributor Author

Nadahar commented May 3, 2025

In Eclipse, you need to add org.openhab.core.model.yaml

Add where? To the "demo app" run requirements?

@lolodomo
Contributor

lolodomo commented May 3, 2025

The best is to discuss the syntax first to get an agreement.

@Nadahar
Contributor Author

Nadahar commented May 3, 2025

The best is to discuss the syntax first to get an agreement.

I just want to run/debug to understand how it works, including how the files are parsed.

@lolodomo
Contributor

lolodomo commented May 3, 2025

(screenshot: the Eclipse launch configuration with org.openhab.core.model.yaml added to the Run Requirements)

@lolodomo
Contributor

lolodomo commented May 3, 2025

I just want to run/debug to understand how it works, including how the files are parsed.

Files are parsed in the class YamlModelRepositoryImpl, and the resulting DTOs for each type of element (things, items, tags, ...) are "distributed" to providers.
So to add a new kind of element, you mainly need to add a DTO class plus a provider in charge of handling the DTOs and providing the final elements (things, items, ...) to one of our existing registries.
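To make the shape of that concrete, here is a minimal, hypothetical sketch — the class and method names are illustrative, not the actual openHAB API. In openHAB the DTO class would additionally carry the @YamlElementName annotation so the repository knows which top-level YAML key it owns:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical DTO for a top-level "rules" element; in openHAB this class
// would be annotated with something like @YamlElementName("rules").
class YamlRuleDTO {
    public String uid;
    public String name;
    public List<Map<String, Object>> triggers = new ArrayList<>();
    public List<Map<String, Object>> actions = new ArrayList<>();
}

// Hypothetical provider: it receives the DTOs parsed from a model file and
// exposes the mapped elements (here simply the DTOs, keyed by uid) so a
// registry can pick them up.
class YamlRuleProvider {
    private final Map<String, YamlRuleDTO> rules = new HashMap<>();

    void addedModel(String modelName, List<YamlRuleDTO> elements) {
        for (YamlRuleDTO dto : elements) {
            rules.put(dto.uid, dto); // real code would map each DTO to a Rule here
        }
    }

    YamlRuleDTO get(String uid) {
        return rules.get(uid);
    }
}
```

The actual repository also notifies providers about updated and removed models; this sketch only shows the "added" path.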

@Nadahar
Contributor Author

Nadahar commented May 3, 2025

Files are parsed in the class YamlModelRepositoryImpl, and the resulting DTOs for each type of element (things, items, tags, ...) are "distributed" to providers.
So to add a new kind of element, you mainly need to add a DTO class plus a provider in charge of providing elements to a registry.

Yes, I've gotten the basic idea of this. There are some details that aren't clear to me, though, which I think I will figure out quickly using the debugger. That is, things like the "watch folder" where these files are picked up, and how the object itself is deserialized. I mean whether it only supports "declared" fields, or will accept anything that can be deserialized. Likewise, whether there is some kind of default mechanism for unspecified fields.

When it comes to supporting e.g. rules, using @YamlElementName("rules") is in itself quite obvious, so I'm more interested in the details. I usually find that figuring those things out using a debugger is more efficient than asking lots and lots of questions 😉

@Nadahar
Contributor Author

Nadahar commented May 3, 2025

Adding it to "Run Requirements" did it, thanks.

I now see that it's watching the whole of conf for .yaml files, except the automation subfolder. Has this been discussed and decided to be the "best" solution, or is it somewhat preliminary?

I'm thinking that it might lead to some potential trouble if this parser claims "all" YAML files, given that YAML is used for so many things these days. There can be e.g. addons that want to use YAML for configuration or similar. Given that this is a "specific format" built upon YAML, it might be better to come up with an entirely new extension and leave .yaml to general use. That way, this parser could "more rightfully" require that the specific structure with element names etc. is followed for these files, instead of throwing a tantrum over files that are meant for other things.

@lolodomo
Contributor

lolodomo commented May 4, 2025

I now see that it's watching the whole of conf for .yaml files, except the automation subfolder. Has this been discussed and decided to be the "best" solution, or is it somewhat preliminary?

It was done because YAML files could be encountered with one of the rule engines, I believe, but I don't remember the details.
My idea was rather to watch the usual sub-folders like conf/tags, conf/items, conf/things, ... but as there was a wish to put potentially different stuff in the same YAML file, a YAML file containing items and things for example could be put anywhere in the conf folder.

Normally, this conf folder should contain only configuration files for openHAB and we know which ones.

There can be e.g. addons that want to use YAML for configuration or similar.

But addons never read from the conf folder. Is it something required for automation engines?

Given that this is a "specific format" built upon YAML, it might be better to come up with an entirely new extension and leave .yaml to general use.

I am not in favour of that, because having the yaml extension has many advantages, like being properly supported by editors.

@Nadahar
Contributor Author

Nadahar commented May 4, 2025

Normally, this conf folder should contain only configuration files for openHAB and we know which ones.

Yes, my point is that this isn't YAML the "format", it's a particular form of YAML with very specific rules. By claiming all YAML files in the whole configuration tree, you basically exclude using YAML for any other kind of configuration in the future.

But addons never read in conf folder. Is it something required for automation engines ?

Some certainly do. I don't have a great overview of add-ons, but I've seen several of the scripting languages read from conf subfolders (which is probably why the automation folder was excluded). But, in addition to what exists now, why make it so that this won't be a possibility in the future?

I am not in favour of that because having the yaml extension has many advantages like properly supported by editors.

I see that point, although most editors are easy to tell what format to "interpret as" if they come across an unknown extension. But maybe the files read by this parser could instead live in a dedicated subfolder of conf? Users could create further subfolders there to organize things as they like.

@Nadahar
Contributor Author

Nadahar commented Jul 23, 2025

I feel there's a need to try to explain the idea behind ModularDTO. It might also be helpful to look at the commit where it is introduced, to separate it from other code: 581a3af

Unfortunately, it's been so long since I wrote this that I don't remember all the factors that went into the decisions, but I'll try my best to explain from what I still remember.

The "challenge" is that when you use e.g. Jackson to deserialize a document into a Java object, you have very little flexibility: it must basically match the target class perfectly. To allow the parsing to be more flexible, it's better to parse the document into a node hierarchy first, where you can place logic like generating IDs or inferring information from other fields, before applying the result to the target object. This is mostly solved in OH using a static "map" method, often in a separate "mapper class".

The problem is when DTOs contain other DTOs (that might contain other DTOs again), and all (or several) of these require "customized" mapping. I wanted to try to find a way to "chain" DTOs so that you can write the mapping logic for a class once, and then use that mapping everywhere where this conversion is needed, without requiring intimate knowledge of the DTO in question. Thus, ModularDTO.

To make this "generic" so that it can be used without knowing the details, I needed to move the "mapping" function to the DTO itself. Unfortunately, inheritance and interfaces don't work with static methods, which is a pity, because the mapping "should" really have been static. So, what I made is a "compromise" where you, in rare circumstances, must instantiate an object just to call the mapping function.

I tried various other designs, but everything I tried ended up not working for one reason or another, which is why I ended up with the current design. I'm not really happy with it because the mapping method "should have been static", but it's the best I managed to come up with.
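To make the idea concrete, here is a minimal sketch with illustrative names and signatures (not the actual openHAB API): the document is first parsed into a generic tree, and each DTO owns its own mapping logic, so a parent DTO can delegate nested mapping to a child DTO without knowing its internals.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.UUID;

// Illustrative sketch only; the parsed document is simplified to Map/List
// instead of a Jackson JsonNode tree.
interface ModularDTO<T> {
    // Would ideally be static, but Java interfaces can't declare overridable
    // static methods - hence the "instantiate just to map" compromise.
    T fromTree(Map<String, Object> node);
}

class TriggerDTO implements ModularDTO<TriggerDTO> {
    String id;
    String type;

    @Override
    public TriggerDTO fromTree(Map<String, Object> node) {
        TriggerDTO dto = new TriggerDTO();
        dto.type = (String) node.get("type");
        // Custom logic that plain data binding can't express: generate an ID
        // when the document doesn't supply one.
        Object id = node.get("id");
        dto.id = id instanceof String s ? s : UUID.randomUUID().toString();
        return dto;
    }
}

class RuleDTO implements ModularDTO<RuleDTO> {
    String uid;
    List<TriggerDTO> triggers = new ArrayList<>();

    @Override
    @SuppressWarnings("unchecked")
    public RuleDTO fromTree(Map<String, Object> node) {
        RuleDTO dto = new RuleDTO();
        dto.uid = (String) node.get("uid");
        // Delegate nested mapping to the child DTO without knowing its details.
        for (Object child : (List<Object>) node.getOrDefault("triggers", List.of())) {
            dto.triggers.add(new TriggerDTO().fromTree((Map<String, Object>) child));
        }
        return dto;
    }
}
```

The point of the pattern is the delegation in `RuleDTO.fromTree()`: the parent only knows that the child is a `ModularDTO` and can map itself.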

@Nadahar
Contributor Author

Nadahar commented Jul 23, 2025

Your "rebase" leads to a loss of the history of your changes, which is definitely a nightmare for reviewers, as I now have absolutely no idea what your changes were since my last review and have to restart a full review! For the next steps, please just push additional commits, keeping what was previously pushed unchanged. Keep in mind that when the PR is merged, all your commits will be squashed.

I know, that's why I didn't want to do it. But you said that it was too disturbing that the stuff that is no longer a part of this PR was there.

Are PRs always squashed? That's a bad practice IMO.

}
YamlRuleDTO other = (YamlRuleDTO) obj;
return Objects.equals(actions, other.actions) && Objects.equals(conditions, other.conditions)
&& Objects.equals(config, other.config) && Objects.equals(configDescriptions, other.configDescriptions)
Contributor

Use the method in util class for config ?

Contributor Author

Actually, when thinking of it, I'm not sure that it's safe to use equalsConfig() for this. The rule is that hashCode() and equals() must be consistent. See the JavaDocs for Object.hashCode() where all this is explained, specifically:

If two objects are equal according to the equals(Object) method, then calling the hashCode() method on each of the two objects must produce the same integer result.

The reason for this is how HashMap and Hashtable, which are both fundamental to Java and used in a lot of places, use hashCode(). Two "equal" objects must produce the same hash code.

I can't see how that is possible to fulfill when using the utility method.

I also don't know why you're converting lists into arrays for comparison in equalsConfigValue(); it's just a waste of CPU cycles and memory as far as I can tell.

If you look at the definition of Map.equals(), you will see that it already does what your utility method does. It will make sure that:

     (e1.getKey()==null ?
      e2.getKey()==null : e1.getKey().equals(e2.getKey()))  &&
     (e1.getValue()==null ?
      e2.getValue()==null : e1.getValue().equals(e2.getValue()))

In other words, a Map doesn't require the order to be the same, only that the entries are equal. A List additionally requires that the order is the same.
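The contract can be illustrated with a minimal sketch (a hypothetical class, not the actual DTO): equals() and hashCode() must be computed from the same set of fields, otherwise two "equal" objects can end up in different HashMap buckets.

```java
import java.util.Map;
import java.util.Objects;

// Hypothetical illustration: equals() and hashCode() derived from the same
// fields, so "equal implies same hash code" holds.
final class RuleKey {
    final String uid;
    final Map<String, Object> config;

    RuleKey(String uid, Map<String, Object> config) {
        this.uid = uid;
        this.config = config;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) {
            return true;
        }
        if (!(obj instanceof RuleKey other)) {
            return false;
        }
        // Map.equals() compares entries without regard to order.
        return Objects.equals(uid, other.uid) && Objects.equals(config, other.config);
    }

    @Override
    public int hashCode() {
        // Same fields as equals(): equal objects produce equal hash codes.
        return Objects.hash(uid, config);
    }
}
```

If equals() used a looser comparison than the fields that are hashed, two objects could compare equal yet hash differently, and a HashMap could silently store both.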

return false;
}
YamlRuleDTO other = (YamlRuleDTO) obj;
return Objects.equals(actions, other.actions) && Objects.equals(conditions, other.conditions)
Contributor

For List, I have a doubt whether Objects.equals works as you expect.

Contributor Author

@Nadahar Nadahar Jul 23, 2025

Why? This is the JavaDoc:

Compares the specified object with this list for equality. Returns true if and only if the specified object is also a list, both lists have the same size, and all corresponding pairs of elements in the two lists are equal. (Two elements e1 and e2 are equal if Objects.equals(e1, e2).) In other words, two lists are defined to be equal if they contain the same elements in the same order. This definition ensures that the equals method works properly across different implementations of the List interface.

This is the way I compare all Collections for equality, and as I understand it, it's the way it's intended to be done.

Again, I use Eclipse to generate these methods and only modify them if I see something wrong. As far as I know, this is the "standard way" to do it. The caveat is of course that if the objects in the list don't implement equals(), equality is reduced to a simple "same instance" (==) check, but there's nothing one can do about that except to also implement hashCode() and equals() for the "child objects". Sometimes comparing instances is OK though; it depends on the nature of the objects.
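The semantics quoted from the JavaDoc can be shown in a few lines: list equality is order-sensitive but implementation-agnostic.

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;
import java.util.Objects;

// Demonstrates List.equals() semantics: same elements in the same order,
// regardless of the concrete List implementation.
public class ListEqualsDemo {
    public static void main(String[] args) {
        List<String> a = new ArrayList<>(List.of("x", "y"));
        List<String> b = new LinkedList<>(List.of("x", "y"));

        System.out.println(Objects.equals(a, b));        // true: same order, different implementations
        System.out.println(a.equals(List.of("y", "x"))); // false: order matters for lists
    }
}
```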

Comment on lines +260 to +262
// Check that the rule either has configuration (rule stub) or that it has at least one module
if ((config == null || config.isEmpty()) && (triggers == null || triggers.isEmpty())
&& (conditions == null || conditions.isEmpty()) && (actions == null || actions.isEmpty())) {
Contributor

If there is no config, you always need at least one action, no?
What would be a rule without an action?


While it would indeed be meaningless, the actions are optional, so a rule without actions would be valid. The same goes for triggers and conditions, though a rule without those makes more sense.

In the UI this makes sense because you may save the rule incrementally before you get to the actions.

I have no strong opinions as to whether a rule without actions should be allowed in file based rules. I just wanted to give some context.

Contributor Author

@Nadahar Nadahar Jul 23, 2025

I don't see any reason to impose arbitrary restrictions. It's hard to see that a rule without an Action is useful, but it isn't "harmful" to the system and doesn't cause any trouble, so if somebody wants to do it, I see no reason to deny it. I did look at other places in the code originally to try to figure out what "constitutes a valid rule", and came to the conclusion that the definition was "has at least one module". I don't remember how I came to this now, but I don't see any problem with the definition itself.

You could always argue that there could be a warning if a rule is created that has no Action (not that I think it's a big point), but in that case I'd argue that this shouldn't be in the parser, but in the "rule validation" that takes place when rules are created, regardless of the source of the rule.

Comment on lines +313 to +314
return Objects.hash(actions, conditions, configDescriptions, description, label, tags, triggers, uid,
visibility);
Contributor

I would rather use getVisibility()

Contributor Author

@Nadahar Nadahar Jul 23, 2025

Why? The hashCode() and equals() methods are called implicitly quite a lot (by collections, streams etc.). They should be as cheap as possible to call, so calling other methods when it isn't necessary just makes everything slower.

&& Objects.equals(configDescriptions, other.configDescriptions)
&& Objects.equals(description, other.description) && Objects.equals(label, other.label)
&& Objects.equals(tags, other.tags) && Objects.equals(triggers, other.triggers)
&& Objects.equals(uid, other.uid) && visibility == other.visibility;
Contributor

I would rather compare getVisibility()

Contributor Author

The same argument as above applies... I don't agree. Also, I use Eclipse to generate hash() and equals(), so this "is considered the correct way" by whoever wrote the generator functions for Eclipse.

return ok;
}

private boolean enumerateModuleIds(@Nullable List<@NonNull ? extends YamlModuleDTO> modules,
Contributor

This method is the same as in YamlRuleDTO.
You could create a utility method to avoid redundant code?

Contributor Author

I don't really consider it redundant. The error message is different, and although rules and rule templates are extremely similar, that might not be true in the future. The methods are almost identical as a consequence of the fact that the classes they work with are almost identical - but crucially they are not the same. As such, I consider it a "logical error" to use a single piece of code for both, because it implies that they have a direct relationship. The only relationship between them is that rule templates are used to generate rules, from which it follows that they will contain very similar information.

If a need arises to treat them differently in the future, for whatever reason, one would have to split the "common code" again to allow individual treatment. I think this is mostly a "philosophical issue", but I feel that it's better to treat them as different entities.

@lolodomo
Contributor

I know, that's why I didn't want to do it. But you said that it was too disturbing that the stuff that is no longer a part of this PR was there.

You could have removed the code in a new commit and the history would not have been lost.
But that is OK, I have restarted a quick and partial check of everything I have previously reviewed.

@Nadahar
Contributor Author

Nadahar commented Jul 23, 2025

You could have removed the code in a new commit and the history will not have been lost.

It's "against my nature" to work with such a "messy commit history". I always clean up commits and try to make the changes as isolated and logical as possible. It's nearly impossible to understand the history later if stuff has been added, changed, and removed without any order. Therefore, I always go back and apply the changes to where they should originally have been.

@Nadahar
Contributor Author

Nadahar commented Jul 23, 2025

Regarding the squashing of PRs: if that is the rule, then CONTRIBUTING.md should be changed. I've been working according to what is stated there:

Commit messages must start with a capitalized and short summary (max. 50 chars) written in the imperative, followed by an optional, more detailed explanatory text which is separated from the summary by an empty line.

Code review comments may be added to your pull request. Discuss, then make the suggested modifications and push additional commits to your feature branch. Be sure to post a comment after pushing. The new commits will show up in the pull request automatically, but the reviewers will not be notified unless you comment.

Before the pull request is merged, make sure that you squash your commits into logical units of work using git rebase -i and git push -f. After every commit the test suite should be passing. Include documentation changes in the same commit so that a revert would remove all traces of the feature or fix.

Commits that fix or close an issue should include a reference like Closes #XXX or Fixes #XXX, which will automatically close the issue when merged.

All the above is pointless if commits are squashed upon merge. It even says that you should organize the commits into "logical units of work", just like I do.

@wborn wborn requested a review from Copilot July 25, 2025 15:28
@Copilot Copilot AI left a comment

Pull Request Overview

This PR adds file-based rule providers for YAML and JSON files in openHAB automation, implementing support for both rules and rule templates. The implementation includes a comprehensive YAML format for rules and templates following the "new YAML format" specification, along with JSON support for rules through file providers. Additionally, it fixes a bug in the FolderObserver DELETE event handling.

Key changes:

  • Implements YAML rule and rule template providers with comprehensive DTO mapping and validation
  • Adds JSON/YAML rule file providers with proper watch service integration
  • Introduces modular DTO pattern for flexible deserialization of complex rule structures
  • Includes module type aliases and MIME type aliases for improved user experience

Reviewed Changes

Copilot reviewed 29 out of 29 changed files in this pull request and generated no comments.

Show a summary per file:

  • SerializationException.java: New exception class for DTO serialization/deserialization errors
  • ModularDTO.java: Interface for DTOs that can handle complex deserialization from tree nodes
  • YamlRuleProvider.java: YAML rule provider implementation with model listener integration
  • YamlRuleTemplateProvider.java: YAML rule template provider for automation templates
  • YamlRuleDTO.java: Data transfer object for YAML rule serialization with validation
  • YamlRuleTemplateDTO.java: Data transfer object for YAML rule template serialization
  • RuleFileProvider.java: Abstract base class for file-based rule providers
  • ModuleTypeAliases.java: Utility class providing user-friendly aliases for automation module types
  • MIMETypeAliases.java: Utility class for scripting language MIME type aliases
Comments suppressed due to low confidence (4)

bundles/org.openhab.core.model.yaml/src/main/java/org/openhab/core/model/yaml/internal/rules/YamlRuleTemplateDTO.java:202

  • The variable name 'config' is confusing as it shadows the module instance variable. Consider using a more descriptive name like 'moduleConfig'.
        if ((config = module.config) != null && config.containsKey("script")

bundles/org.openhab.core.model.yaml/src/main/java/org/openhab/core/model/yaml/internal/rules/YamlRuleDTO.java:209

  • The variable name 'config' is confusing as it shadows the module instance variable. Consider using a more descriptive name like 'moduleConfig'.
        if ((config = module.config) != null && config.containsKey("script")

bundles/org.openhab.core.model.yaml/src/main/java/org/openhab/core/model/yaml/internal/rules/YamlRuleTemplateDTO.java:143

  • [nitpick] The variable name 'config' is used in multiple contexts within this method. Consider using more specific names like 'moduleConfig' for module configurations to improve clarity.
            Map<@NonNull String, @NonNull Object> config;

bundles/org.openhab.core.model.yaml/src/main/java/org/openhab/core/model/yaml/internal/rules/YamlRuleDTO.java:207

  • [nitpick] The variable name 'config' is used in multiple contexts within this method. Consider using more specific names like 'moduleConfig' for module configurations to improve clarity.
        Map<@NonNull String, @NonNull Object> config;

@Nadahar
Contributor Author

Nadahar commented Jul 25, 2025

For info: I intentionally didn't sign the "Fixes" commit because it contains changes based on review feedback. Once the changes are final, I will rebase these changes into their appropriate commits that are signed.

@openhab-bot
Collaborator

This pull request has been mentioned on openHAB Community. There might be relevant details there:

https://community.openhab.org/t/rules-and-concurrency/166577/11

@Nadahar
Contributor Author

Nadahar commented Oct 6, 2025

@lolodomo I need to rebase this because of changes in main. We should complete/agree on your current review comments first, or they will probably be hard to figure out. Could you say if you have anything more to add to the current comments, or if they are "resolved" as you see it?
