diff --git a/SOURCES/rsyslog-8.24.0-doc-rhbz1459896-queues-defaults.patch b/SOURCES/rsyslog-8.24.0-doc-rhbz1459896-queues-defaults.patch
new file mode 100644
index 0000000..e7bdc91
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-doc-rhbz1459896-queues-defaults.patch
@@ -0,0 +1,138 @@
+From c8be9a713a57f07311560af50c24267b30bef21b Mon Sep 17 00:00:00 2001
+From: Jiri Vymazal <jvymazal@redhat.com>
+Date: Tue, 29 Aug 2017 16:32:15 +0200
+Subject: [PATCH] fixed queue default values
+
+---
+ source/concepts/queues.rst                                       | 7 +++----
+ source/configuration/global/index.rst                            | 6 +++---
+ source/configuration/global/options/rsconf1_mainmsgqueuesize.rst | 2 +-
+ source/rainerscript/queue_parameters.rst                         | 15 ++++++++++++---
+ source/configuration/action/index.rst                            | 12 ++++++------
+ 5 files changed, 24 insertions(+), 16 deletions(-)
+
+diff --git a/source/concepts/queues.rst b/source/concepts/queues.rst
+index c71413c..9b41128 100644
+--- a/source/concepts/queues.rst
++++ b/source/concepts/queues.rst
+@@ -273,10 +273,9 @@ actually needed.
+ The water marks can be set via the "*$<object>QueueHighWatermark*\ "
+ and  "*$<object>QueueLowWatermark*\ " configuration file directives.
+ Note that these are actual numbers, not percentages. Be sure they make
+-sense (also in respect to "*$<object>QueueSize*\ "), as rsyslodg does
+-currently not perform any checks on the numbers provided. It is easy to
+-screw up the system here (yes, a feature enhancement request is filed
+-;)).
++sense (also with respect to "*$<object>QueueSize*\ "). Rsyslog does
++perform some checks on the numbers provided, and issues a warning when
++the numbers are "suspicious".
+ 
+ Limiting the Queue Size
+ -----------------------
+diff --git a/source/configuration/global/index.rst b/source/configuration/global/index.rst
+index 2738f21..a53ef23 100644
+--- a/source/configuration/global/index.rst
++++ b/source/configuration/global/index.rst
+@@ -137,13 +137,13 @@ To understand queue parameters, read
+ -  **$MainMsgQueueDequeueSlowdown** <number> [number is timeout in
+    *micro*\ seconds (1000000us is 1sec!), default 0 (no delay). Simple
+    rate-limiting!]
+--  **$MainMsgQueueDiscardMark** <number> [default 9750]
++-  **$MainMsgQueueDiscardMark** <number> [default 98000]
+ -  **$MainMsgQueueDiscardSeverity** <severity> [either a textual or
+    numerical severity! default 4 (warning)]
+ -  **$MainMsgQueueFileName** <name>
+--  **$MainMsgQueueHighWaterMark** <number> [default 8000]
++-  **$MainMsgQueueHighWaterMark** <number> [default 80000]
+ -  **$MainMsgQueueImmediateShutdown** [on/**off**]
+--  **$MainMsgQueueLowWaterMark** <number> [default 2000]
++-  **$MainMsgQueueLowWaterMark** <number> [default 20000]
+ -  **$MainMsgQueueMaxFileSize** <size\_nbr>, default 1m
+ -  **$MainMsgQueueTimeoutActionCompletion** <number> [number is timeout in
+    ms (1000ms is 1sec!), default 1000, 0 means immediate!]
+diff --git a/source/configuration/global/options/rsconf1_mainmsgqueuesize.rst b/source/configuration/global/options/rsconf1_mainmsgqueuesize.rst
+index 050407c..3e902cf 100644
+--- a/source/configuration/global/options/rsconf1_mainmsgqueuesize.rst
++++ b/source/configuration/global/options/rsconf1_mainmsgqueuesize.rst
+@@ -3,7 +3,7 @@ $MainMsgQueueSize
+ 
+ **Type:** global configuration directive
+ 
+-**Default:** 10000
++**Default:** 100000
+ 
+ **Description:**
+ 
+diff --git a/source/rainerscript/queue_parameters.rst b/source/rainerscript/queue_parameters.rst
+index 4453721..3f2b7a2 100644
+--- a/source/rainerscript/queue_parameters.rst
++++ b/source/rainerscript/queue_parameters.rst
+@@ -33,8 +33,14 @@ read the :doc:`queues <../concepts/queues>` documentation.
+    For more information on the current status of this restriction see
+    the `rsyslog FAQ: "lower bound for queue
+    sizes" <http://www.rsyslog.com/lower-bound-for-queue-sizes/>`_.
++
++   The default depends on the queue type and, if you need
++   a specific value, please specify it. Otherwise rsyslog selects what
++   it considers appropriate. For example, ruleset queues have a default
++   size of 50000 and action queues that are configured to be non-direct
++   have a size of 1000.
+ -  **queue.dequeuebatchsize** number
+-   default 16
++   default 128
+ -  **queue.maxdiskspace** number
+    The maximum size that all queue files together will use on disk. Note
+    that the actual size may be slightly larger than the configured max,
+@@ -46,8 +47,9 @@ read the :doc:`queues <../concepts/queues>` documentation.
+    processing, because disk queue mode is very considerably slower than
+    in-memory queue mode. Going to disk should be reserved for cases
+    where an output action destination is offline for some period.
++   default 90% of queue size
+ -  **queue.lowwatermark** number
+-   default 2000
++   default 70% of queue size
+ -  **queue.fulldelaymark** number 
+    Number of messages when the queue should block delayable messages. 
+    Messages are NO LONGER PROCESSED until the queue has sufficient space 
+@@ -59,9 +61,11 @@ read the :doc:`queues <../concepts/queues>` documentation.
+    out of space. Please note that if you use a DA queue, setting the 
+    fulldelaymark BELOW the highwatermark makes the queue never activate 
+    disk mode for delayable inputs. So this is probably not what you want.
++   default 97% of queue size
+ -  **queue.lightdelaymark** number
++   default 70% of queue size
+ -  **queue.discardmark** number
+-   default 9750
++   default 80% of queue size
+ -  **queue.discardseverity** number
+    \*numerical\* severity! default 8 (nothing discarded)
+ -  **queue.checkpointinterval** number
+diff --git a/source/configuration/action/index.rst b/source/configuration/action/index.rst
+index 3e7cd24..9352866 100644
+--- a/source/configuration/action/index.rst
++++ b/source/configuration/action/index.rst
+@@ -163,18 +163,18 @@ following action, only. The next and all other actions will be
+ in "direct" mode (no real queue) if not explicitely specified otherwise.
+
+ -  **$ActionQueueCheckpointInterval** <number>
+--  **$ActionQueueDequeueBatchSize** <number> [default 16]
++-  **$ActionQueueDequeueBatchSize** <number> [default 128]
+ -  **$ActionQueueDequeueSlowdown** <number> [number is timeout in
+    *micro*\ seconds (1000000us is 1sec!), default 0 (no delay). Simple
+    rate-limiting!]
+--  **$ActionQueueDiscardMark** <number> [default 9750]
+--  **$ActionQueueDiscardSeverity** <number> [\*numerical\* severity! default
+-   4 (warning)]
++-  **$ActionQueueDiscardMark** <number> [default 80% of queue size]
++-  **$ActionQueueDiscardSeverity** <number> [\*numerical\* severity! default
++   8 (nothing discarded)]
+ -  **$ActionQueueFileName** <name>
+--  **$ActionQueueHighWaterMark** <number> [default 8000]
++-  **$ActionQueueHighWaterMark** <number> [default 90% of queue size]
+ -  **$ActionQueueImmediateShutdown** [on/**off**]
+ -  **$ActionQueueSize** <number>
+--  **$ActionQueueLowWaterMark** <number> [default 2000]
++-  **$ActionQueueLowWaterMark** <number> [default 70% of queue size]
+ -  **$ActionQueueMaxFileSize** <size\_nbr>, default 1m
+ -  **$ActionQueueTimeoutActionCompletion** <number> [number is timeout in ms
+    (1000ms is 1sec!), default 1000, 0 means immediate!]
diff --git a/SOURCES/rsyslog-8.24.0-doc-rhbz1507028-recover_qi.patch b/SOURCES/rsyslog-8.24.0-doc-rhbz1507028-recover_qi.patch
new file mode 100644
index 0000000..42a69b1
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-doc-rhbz1507028-recover_qi.patch
@@ -0,0 +1,27 @@
+From ff07a7cfc171dc2151cc8afe44776525d34a9e01 Mon Sep 17 00:00:00 2001
+From: jvymazal <jvymazal@redhat.com>
+Date: Tue, 3 Jan 2017 10:24:26 +0100
+Subject: [PATCH] Update queues.rst
+
+Update queues.rst
+---
+ source/concepts/queues.rst | 6 ++++++
+ 1 file changed, 6 insertions(+)
+
+diff --git a/source/concepts/queues.rst b/source/concepts/queues.rst
+index eb394e8..c71413c 100644
+--- a/source/concepts/queues.rst
++++ b/source/concepts/queues.rst
+@@ -153,6 +153,12 @@ can be requested via "*<object>QueueSyncQueueFiles on/off* with the
+ default being off. Activating this option has a performance penalty, so
+ it should not be turned on without reason.
+ 
++If you happen to lose (or otherwise lack) the housekeeping structures but
++still have all your queue chunks, you can use the perl script included in
++the rsyslog package to regenerate them.
++Usage: recover_qi.pl -w *$WorkDirectory* -f QueueFileName -d 8 > QueueFileName.qi
++
++
+ In-Memory Queues
+ ~~~~~~~~~~~~~~~~
+ 
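The recovery hint added above is terse; here is a hedged sketch of a matching disk-assisted queue and the corresponding recovery invocation (the work directory, queue name, and the assumption that rsyslogd is stopped during recovery are illustrative):

.. code-block:: none

    # illustrative disk-assisted queue; all names and paths are made up
    global(workDirectory="/var/lib/rsyslog")
    action(type="omfwd" target="logs.example.com"
           queue.type="LinkedList"
           queue.filename="fwdq")
    # If fwdq.qi is lost but the queue chunk files survive, regenerate it,
    # typically with rsyslogd stopped:
    #   recover_qi.pl -w /var/lib/rsyslog -f fwdq -d 8 > /var/lib/rsyslog/fwdq.qi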
diff --git a/SOURCES/rsyslog-8.24.0-doc-rhbz1507145-omelastic-client-cert-and-config.patch b/SOURCES/rsyslog-8.24.0-doc-rhbz1507145-omelastic-client-cert-and-config.patch
new file mode 100644
index 0000000..ae24862
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-doc-rhbz1507145-omelastic-client-cert-and-config.patch
@@ -0,0 +1,458 @@
+diff --git a/source/configuration/modules/omelasticsearch.rst b/source/configuration/modules/omelasticsearch.rst
+index 914fd67..4aee1ac 100644
+--- a/source/configuration/modules/omelasticsearch.rst
++++ b/source/configuration/modules/omelasticsearch.rst
+@@ -208,18 +208,354 @@ readability):
+   reconfiguration (e.g. dropping the mandatory attribute) a resubmit may
+   be succesful.
+ 
+-**Samples:**
++.. _tls.cacert:
++
++tls.cacert
++^^^^^^^^^^
++
++.. csv-table::
++   :header: "type", "default", "mandatory", "obsolete legacy directive"
++   :widths: auto
++   :class: parameter-table
++
++   "word", "none", "no", "none"
++
++This is the full path and file name of the file containing the CA cert for the
++CA that issued the Elasticsearch server cert.  This file is in PEM format.  For
++example: `/etc/rsyslog.d/es-ca.crt`
++
++.. _tls.mycert:
++
++tls.mycert
++^^^^^^^^^^
++
++.. csv-table::
++   :header: "type", "default", "mandatory", "obsolete legacy directive"
++   :widths: auto
++   :class: parameter-table
++
++   "word", "none", "no", "none"
++
++This is the full path and file name of the file containing the client cert for
++doing client cert auth against Elasticsearch.  This file is in PEM format.  For
++example: `/etc/rsyslog.d/es-client-cert.pem`
++
++.. _tls.myprivkey:
++
++tls.myprivkey
++^^^^^^^^^^^^^
++
++.. csv-table::
++   :header: "type", "default", "mandatory", "obsolete legacy directive"
++   :widths: auto
++   :class: parameter-table
++
++   "word", "none", "no", "none"
++
++This is the full path and file name of the file containing the private key
++corresponding to the cert `tls.mycert` used for doing client cert auth against
++Elasticsearch.  This file is in PEM format, and must be unencrypted, so take
++care to secure it properly.  For example: `/etc/rsyslog.d/es-client-key.pem`
++
++.. _omelasticsearch-bulkid:
++
++bulkid
++^^^^^^
++
++.. csv-table::
++   :header: "type", "default", "mandatory", "obsolete legacy directive"
++   :widths: auto
++   :class: parameter-table
++
++   "word", "none", "no", "none"
++
++This is the unique id to assign to the record.  The `bulk` part is misleading - this
++can be used in both bulk mode and in index
++(record at a time) mode.  Although you can specify a static value for this
++parameter, you will almost always want to specify a *template* for the value of
++this parameter, and set `dynbulkid="on"` :ref:`omelasticsearch-dynbulkid`.  NOTE:
++you must use `bulkid` and `dynbulkid` in order to use `writeoperation="create"`
++:ref:`omelasticsearch-writeoperation`.
++
++.. _omelasticsearch-dynbulkid:
++
++dynbulkid
++^^^^^^^^^
++
++.. csv-table::
++   :header: "type", "default", "mandatory", "obsolete legacy directive"
++   :widths: auto
++   :class: parameter-table
++
++   "binary", "off", "no", "none"
++
++If this parameter is set to `"on"`, then the `bulkid` parameter :ref:`omelasticsearch-bulkid`
++specifies a *template* to use to generate the unique id value to assign to the record.  If
++using `bulkid` you will almost always want to set this parameter to `"on"` to assign
++a different unique id value to each record.  NOTE:
++you must use `bulkid` and `dynbulkid` in order to use `writeoperation="create"`
++:ref:`omelasticsearch-writeoperation`.
++
++.. _omelasticsearch-writeoperation:
++
++writeoperation
++^^^^^^^^^^^^^^
++
++.. csv-table::
++   :header: "type", "default", "mandatory", "obsolete legacy directive"
++   :widths: auto
++   :class: parameter-table
++
++   "word", "index", "no", "none"
++
++The value of this parameter is either `"index"` (the default) or `"create"`.  If `"create"` is
++used, this means the bulk action/operation will be `create` - create a document only if the
++document does not already exist.  The record must have a unique id in order to use `create`.
++See :ref:`omelasticsearch-bulkid` and :ref:`omelasticsearch-dynbulkid`.  See
++:ref:`omelasticsearch-writeoperation-example` for an example.
++
++.. _omelasticsearch-retryfailures:
++
++retryfailures
++^^^^^^^^^^^^^
++
++.. csv-table::
++   :header: "type", "default", "mandatory", "obsolete legacy directive"
++   :widths: auto
++   :class: parameter-table
++
++   "binary", "off", "no", "none"
++
++If this parameter is set to `"on"`, then the module will look for an
++`"errors":true` in the bulk index response.  If found, each element in the
++response will be parsed to look for errors, since a bulk request may have some
++records which are successful and some which are failures.  Failed requests will
++be converted back into records and resubmitted back to rsyslog for
++reprocessing.  Each failed request will be resubmitted with a local variable
++called `$.omes`.  This is a hash consisting of the fields from the response.
++See below :ref:`omelasticsearch-retry-example` for an example of how retry
++processing works.
++*NOTE* The retried record will be resubmitted at the "top" of your processing
++pipeline.  If your processing pipeline is not idempotent (that is, your
++processing pipeline expects "raw" records), then you can specify a ruleset to
++redirect retries to.  See :ref:`omelasticsearch-retryruleset` below.
++
++`$.omes` fields:
++
++* writeoperation - the operation used to submit the request - for rsyslog
++  omelasticsearch this currently means either `"index"` or `"create"`
++* status - the HTTP status code - typically an error will have a `4xx` or `5xx`
++  code - of particular note is `429` - this means Elasticsearch was unable to
++  process this bulk record request due to a temporary condition e.g. the bulk
++  index thread pool queue is full, and rsyslog should retry the operation.
++* _index, _type, _id - the metadata associated with the request
++* error - a hash containing one or more, possibly nested, fields containing
++  more detailed information about a failure.  Typically there will be fields
++  `$.omes!error!type` (a keyword) and `$.omes!error!reason` (a longer string)
++  with more detailed information about the rejection.  NOTE: The format is
++  apparently not described in great detail, so code must not make any
++  assumption about the availability of `error` or any specific sub-field.
++
++There may be other fields too - the code just copies everything in the
++response.  Here is an example of a detailed error response, in JSON format, from
++Elasticsearch 5.6.9:
++
++.. code-block:: json
++
++    {"omes":
++      {"writeoperation": "create",
++       "_index": "rsyslog_testbench",
++       "_type": "test-type",
++       "_id": "92BE7AF79CD44305914C7658AF846A08",
++       "status": 400,
++       "error":
++         {"type": "mapper_parsing_exception",
++          "reason": "failed to parse [msgnum]",
++          "caused_by":
++            {"type": "number_format_exception",
++             "reason": "For input string: \"x00000025\""}}}}
++
++Reference: https://www.elastic.co/guide/en/elasticsearch/guide/current/bulk.html#bulk
++
++.. _omelasticsearch-retryruleset:
++
++retryruleset
++^^^^^^^^^^^^
++
++.. csv-table::
++   :header: "type", "default", "mandatory", "obsolete legacy directive"
++   :widths: auto
++   :class: parameter-table
++
++   "word", "", "no", "none"
++
++If `retryfailures` is not `"on"` (:ref:`omelasticsearch-retryfailures`) then
++this parameter has no effect.  This parameter specifies the name of a ruleset
++to use to route retries.  This is useful if you do not want retried messages to
++be processed starting from the top of your processing pipeline, or if you have
++multiple outputs but do not want to send retried Elasticsearch failures to all
++of your outputs, and you do not want to clutter your processing pipeline with a
++lot of conditionals.  See below :ref:`omelasticsearch-retry-example` for an
++example of how retry processing works.
++
++.. _omelasticsearch-ratelimit.interval:
++
++ratelimit.interval
++^^^^^^^^^^^^^^^^^^
++
++.. csv-table::
++   :header: "type", "default", "mandatory", "obsolete legacy directive"
++   :widths: auto
++   :class: parameter-table
++
++   "integer", "600", "no", "none"
++
++If `retryfailures` is not `"on"` (:ref:`omelasticsearch-retryfailures`) then
++this parameter has no effect.  Specifies the interval in seconds over which
++rate-limiting is to be applied. If more than ratelimit.burst messages are read
++during that interval, further messages up to the end of the interval are
++discarded. The number of messages discarded is emitted at the end of the
++interval (if there were any discards).
++Setting this to zero turns off rate limiting.
++
++.. _omelasticsearch-ratelimit.burst:
++
++ratelimit.burst
++^^^^^^^^^^^^^^^
++
++.. csv-table::
++   :header: "type", "default", "mandatory", "obsolete legacy directive"
++   :widths: auto
++   :class: parameter-table
++
++   "integer", "20000", "no", "none"
++
++If `retryfailures` is not `"on"` (:ref:`omelasticsearch-retryfailures`) then
++this parameter has no effect.  Specifies the maximum number of messages that
++can be emitted within the ratelimit.interval interval. For further information,
++see the description there.
++
++.. _omelasticsearch-statistic-counter:
++
++Statistic Counter
++=================
++
++This plugin maintains global statistics,
++which accumulate over all action instances. The statistic is named "omelasticsearch".
++The counters are:
++
++-  **submitted** - number of messages submitted for processing (with both
++   success and error result)
++
++-  **fail.httprequests** - the number of times an HTTP request failed. Note
++   that a single HTTP request may be used to submit multiple messages, so this
++   number may be (much) lower than fail.http.
++
++-  **fail.http** - number of message failures due to connection-like problems
++   (things like remote server down, broken link, etc.)
++
++-  **fail.es** - number of failures due to an Elasticsearch error reply; note that
++   this counter does NOT count the number of failed messages but the number of
++   times a failure occurred (a potentially much smaller number). Counting messages
++   would be quite performance-intensive and is thus not done.
++
++The following counters are available when `retryfailures="on"` is used:
++
++-  **response.success** - number of records successfully sent in bulk index
++   requests - counts the number of successful responses
++
++-  **response.bad** - number of times omelasticsearch received an entry in a
++   bulk index response that was unrecognized or could not be parsed.  This may
++   indicate that omelasticsearch is attempting to communicate with a version of
++   Elasticsearch that is incompatible, or is otherwise sending back data in the
++   response that cannot be handled
++
++-  **response.duplicate** - number of records in the bulk index request that
++   were duplicates of already existing records - this will only be reported if
++   using `writeoperation="create"` and `bulkid` to assign each record a unique
++   ID
++
++-  **response.badargument** - number of times omelasticsearch received a
++   response that had a status indicating omelasticsearch sent bad data to
++   Elasticsearch.  For example, status `400` and an error message indicating
++   omelasticsearch attempted to store a non-numeric string value in a numeric
++   field.
++
++-  **response.bulkrejection** - number of times omelasticsearch received a
++   response that had a status indicating Elasticsearch was unable to process
++   the record at this time - status `429`.  The record can be retried.
++
++-  **response.other** - number of times omelasticsearch received a
++   response not recognized as one of the above responses, typically some other
++   `4xx` or `5xx` http status.
++
++**The fail.httprequests and fail.http counters reflect only failures that
++omelasticsearch detected.** Once it detects problems, it (usually, depending on
++circumstances) tells the rsyslog core that it wants to be suspended until the
++situation clears (this is a requirement for rsyslog output modules). Once it is
++suspended, it does NOT receive any further messages. Depending on the user
++configuration, messages will be lost during this period. Those lost messages will
++NOT be counted by impstats (as it does not see them).
++
++Note that some previous (pre 7.4.5) versions of this plugin had different counters.
++These were experimental and confusing. The only ones really used were "submits",
++which was the number of successfully processed messages, and "connfail", which was
++equivalent to "fail.http".
++
++How Retries Are Handled
++=======================
++
++When using `retryfailures="on"` (:ref:`omelasticsearch-retryfailures`), the
++original `Message` object (that is, the original `smsg_t *msg` object) **is not
++available**.  This means none of the metadata associated with that object, such
++as various timestamps, hosts/IP addresses, etc., is available for the retry
++operation.  The only thing available is the original JSON string sent in the
++original request, and whatever data is returned in the error response, which
++will contain the Elasticsearch metadata about the index, type, and id, and will
++be made available in the `$.omes` fields.  To retry the message, the code
++will take the original JSON string and parse it back into an internal `Message`
++object.  This means you **may need to use a different template** to output
++messages for your retry ruleset.  For example, if you used the following
++template to format the Elasticsearch message for the initial submission:
++
++.. code-block:: none
++
++    template(name="es_output_template"
++             type="list"
++             option.json="on") {
++               constant(value="{")
++                 constant(value="\"timestamp\":\"")      property(name="timereported" dateFormat="rfc3339")
++                 constant(value="\",\"message\":\"")     property(name="msg")
++                 constant(value="\",\"host\":\"")        property(name="hostname")
++                 constant(value="\",\"severity\":\"")    property(name="syslogseverity-text")
++                 constant(value="\",\"facility\":\"")    property(name="syslogfacility-text")
++                 constant(value="\",\"syslogtag\":\"")   property(name="syslogtag")
++               constant(value="\"}")
++             }
++
++You would have to use a different template for the retry, since none of the
++`timereported`, `msg`, etc. fields will have the same values for the retry as
++for the initial try.
++
++Examples
++========
++
++Example 1
++^^^^^^^^^
+ 
+ The following sample does the following:
+ 
+ -  loads the omelasticsearch module
+ -  outputs all logs to Elasticsearch using the default settings
+ 
+-::
++.. code-block:: none
+ 
+     module(load="omelasticsearch")
+     *.*     action(type="omelasticsearch")
+ 
++Example 2
++^^^^^^^^^
++
+ The following sample does the following:
+ 
+ -  loads the omelasticsearch module
+@@ -246,7 +582,7 @@ The following sample does the following:
+    -  retry indefinitely if the HTTP request failed (eg: if the target
+       server is down)
+ 
+-::
++.. code-block:: none
+ 
+     module(load="omelasticsearch")
+     template(name="testTemplate"
+@@ -274,6 +610,87 @@ The following sample does the following:
+            queue.dequeuebatchsize="300"
+            action.resumeretrycount="-1")
+ 
++.. _omelasticsearch-writeoperation-example:
++
++Example 3
++^^^^^^^^^
++
++The following sample shows how to use :ref:`omelasticsearch-writeoperation`
++with :ref:`omelasticsearch-dynbulkid` and :ref:`omelasticsearch-bulkid`.  For
++simplicity, it assumes rsyslog has been built with `--enable-libuuid` which
++provides the `uuid` property for each record:
++
++.. code-block:: none
++
++    module(load="omelasticsearch")
++    set $!es_record_id = $uuid;
++    template(name="bulkid-template" type="list") { property(name="$!es_record_id") }
++    action(type="omelasticsearch"
++           ...
++           bulkmode="on"
++           bulkid="bulkid-template"
++           dynbulkid="on"
++           writeoperation="create")
++
++
++.. _omelasticsearch-retry-example:
++
++Example 4
++^^^^^^^^^
++
++The following sample shows how to use :ref:`omelasticsearch-retryfailures` to
++process, discard, or retry failed operations.  This uses
++`writeoperation="create"` with a unique `bulkid` so that we can check for and
++discard duplicate messages as successful.  The `try_es` ruleset is used both
++for the initial attempt and any subsequent retries.  The code in the ruleset
++assumes that if `$.omes!status` is set and is non-zero, this is a retry for a
++previously failed operation.  If the status was successful, or Elasticsearch
++said this was a duplicate, the record is already in Elasticsearch, so we can
++drop the record.  If there was some error processing the response
++e.g. Elasticsearch sent a response formatted in some way that we did not know
++how to process, then submit the record to the `error_es` ruleset.  If the
++response was a "hard" error like `400`, then submit the record to the
++`error_es` ruleset.  In any other case, such as a status `429` or `5xx`, the
++record will be resubmitted to Elasticsearch. In the example, the `error_es`
++ruleset just dumps the records to a file.
++
++.. code-block:: none
++
++    module(load="omelasticsearch")
++    module(load="omfile")
++    set $!es_record_id = $uuid;
++    template(name="bulkid-template" type="list") { property(name="$!es_record_id") }
++
++    ruleset(name="error_es") {
++	    action(type="omfile" template="RSYSLOG_DebugFormat" file="es-bulk-errors.log")
++    }
++
++    ruleset(name="try_es") {
++        if strlen($.omes!status) > 0 then {
++            # retry case
++            if ($.omes!status == 200) or ($.omes!status == 201) or (($.omes!status == 409) and ($.omes!writeoperation == "create")) then {
++                stop # successful
++            }
++            if ($.omes!writeoperation == "unknown") or (strlen($.omes!error!type) == 0) or (strlen($.omes!error!reason) == 0) then {
++                call error_es
++                stop
++            }
++            if ($.omes!status == 400) or ($.omes!status < 200) then {
++                call error_es
++                stop
++            }
++            # else fall through to retry operation
++        }
++        action(type="omelasticsearch"
++                  ...
++                  bulkmode="on"
++                  bulkid="bulkid-template"
++                  dynbulkid="on"
++                  writeoperation="create"
++                  retryfailures="on"
++                  retryruleset="try_es")
++    }
++    call try_es
+ 
+ This documentation is part of the `rsyslog <http://www.rsyslog.com/>`_
+ project.
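None of the samples in the patch above combine the three new `tls.*` parameters; a minimal sketch, assuming an Elasticsearch instance at a placeholder host and PEM files at made-up paths:

.. code-block:: none

    # hypothetical HTTPS setup; host name and certificate paths are placeholders
    module(load="omelasticsearch")
    action(type="omelasticsearch"
           server="es.example.com"
           serverport="9200"
           usehttps="on"
           tls.cacert="/etc/rsyslog.d/es-ca.crt"
           tls.mycert="/etc/rsyslog.d/es-client-cert.pem"
           tls.myprivkey="/etc/rsyslog.d/es-client-key.pem")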
diff --git a/SOURCES/rsyslog-8.24.0-doc-rhbz1538372-imjournal-duplicates.patch b/SOURCES/rsyslog-8.24.0-doc-rhbz1538372-imjournal-duplicates.patch
new file mode 100644
index 0000000..88e4859
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-doc-rhbz1538372-imjournal-duplicates.patch
@@ -0,0 +1,28 @@
+From 1dbb68f3dc5c7ae94bdea5ad37296cbc2224e92b Mon Sep 17 00:00:00 2001
+From: Jiri Vymazal <jvymazal@redhat.com>
+Date: Wed, 25 Jul 2018 14:24:57 +0200
+Subject: [PATCH] Added WorkAroundJournalBug parameter
+
+this is documentation for rsyslog/rsyslog#2543
+---
+ source/configuration/modules/imjournal.rst | 7 +++++++
+ 1 file changed, 7 insertions(+)
+
+diff --git a/source/configuration/modules/imjournal.rst b/source/configuration/modules/imjournal.rst
+index 2530ddfe..85ca9e7d 100644
+--- a/source/configuration/modules/imjournal.rst
++++ b/source/configuration/modules/imjournal.rst
+@@ -99,6 +99,13 @@ -  **usepidfromsystem** [**off**/on]
+    Retrieves the trusted systemd parameter, _PID, instead of the user 
+    systemd parameter, SYSLOG_PID, which is the default.
+ 
++-  **WorkAroundJournalBug** [**off**/on]
++
++    When the journald instance rotates its files, it is possible that duplicate records
++    appear in rsyslog. If you turn this option on, imjournal will keep track of the
++    cursor with each message to work around this problem. Be aware that in some cases
++    this might result in a performance hit for imjournal.
++
+ **Caveats/Known Bugs:**
+ 
+ - As stated above, a corrupted systemd journal database can cause major
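A minimal sketch of loading imjournal with the new parameter enabled (the state file name is an illustrative assumption, not part of the patch):

.. code-block:: none

    # sketch: enable the duplicate-record workaround for imjournal
    module(load="imjournal"
           StateFile="imjournal.state"
           WorkAroundJournalBug="on")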
diff --git a/SOURCES/rsyslog-8.24.0-doc-rhbz1539193-mmkubernetes-new-plugin.patch b/SOURCES/rsyslog-8.24.0-doc-rhbz1539193-mmkubernetes-new-plugin.patch
new file mode 100644
index 0000000..f2ad9f7
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-doc-rhbz1539193-mmkubernetes-new-plugin.patch
@@ -0,0 +1,384 @@
+diff --git a/source/configuration/modules/mmkubernetes.rst b/source/configuration/modules/mmkubernetes.rst
+new file mode 100644
+index 0000000..1cd3d2a
+--- /dev/null
++++ b/source/configuration/modules/mmkubernetes.rst
+@@ -0,0 +1,378 @@
++*****************************************
++Kubernetes Metadata Module (mmkubernetes)
++*****************************************
++
++===========================  ===========================================================================
++**Module Name:**             **mmkubernetes**
++**Author:**                  `Tomáš Heinrich`
++                             `Rich Megginson` <rmeggins@redhat.com>
++===========================  ===========================================================================
++
++Purpose
++=======
++
++This module is used to add `Kubernetes <https://kubernetes.io/>`_
++metadata to log messages logged by containers running in Kubernetes.
++It will add the namespace uuid, pod uuid, pod and namespace labels and
++annotations, and other metadata associated with the pod and
++namespace.
++
++.. note::
++
++   This **only** works with log files in `/var/log/containers/*.log`
++   (docker `--log-driver=json-file`), or with journald entries with
++   message properties `CONTAINER_NAME` and `CONTAINER_ID_FULL` (docker
++   `--log-driver=journald`), and when the application running inside
++   the container writes logs to `stdout`/`stderr`.  This **does not**
++   currently work with other log drivers.
++
++For json-file logs, you must use the `imfile` module with the
++`addmetadata="on"` parameter, and the filename must match the
++liblognorm rules specified by the `filenamerules`
++(:ref:`filenamerules`) or `filenamerulebase` (:ref:`filenamerulebase`)
++parameter values.
++
++For journald logs, there must be a message property `CONTAINER_NAME`
++which matches the liblognorm rules specified by the `containerrules`
++(:ref:`containerrules`) or `containerrulebase`
++(:ref:`containerrulebase`) parameter values. The record must also have
++the message property `CONTAINER_ID_FULL`.
++
++This module is implemented via the output module interface. This means
++that mmkubernetes should be called just like an action. After it has
++been called, there will be two new message properties: `kubernetes`
++and `docker`.  There will be subfields of each one for the various
++metadata items: `$!kubernetes!namespace_name`,
++`$!kubernetes!labels!this-is-my-label`, etc.  There is currently only
++one docker subfield: `$!docker!container_id`.  See
++https://github.com/ViaQ/elasticsearch-templates/blob/master/namespaces/kubernetes.yml
++and
++https://github.com/ViaQ/elasticsearch-templates/blob/master/namespaces/docker.yml
++for more details.
++
++Configuration Parameters
++========================
++
++.. note::
++
++   Parameter names are case-insensitive.
++
++Module Parameters and Action Parameters
++---------------------------------------
++
++.. _kubernetesurl:
++
++KubernetesURL
++^^^^^^^^^^^^^
++
++.. csv-table::
++   :header: "type", "default", "mandatory", "obsolete legacy directive"
++   :widths: auto
++   :class: parameter-table
++
++   "word", "https://kubernetes.default.svc.cluster.local:443", "yes", "none"
++
++The URL of the Kubernetes API server.  Example: `https://localhost:8443`.
++
++.. _mmkubernetes-tls.cacert:
++
++tls.cacert
++^^^^^^^^^^
++
++.. csv-table::
++   :header: "type", "default", "mandatory", "obsolete legacy directive"
++   :widths: auto
++   :class: parameter-table
++
++   "word", "none", "no", "none"
++
++Full path and file name of the file containing the CA cert of the
++Kubernetes API server cert issuer.  Example: `/etc/rsyslog.d/mmk8s-ca.crt`.
++This parameter is not mandatory if using an `http` scheme instead of `https` in
++`kubernetesurl`, or if using `allowunsignedcerts="yes"`.
++
++.. _tokenfile:
++
++tokenfile
++^^^^^^^^^
++
++.. csv-table::
++   :header: "type", "default", "mandatory", "obsolete legacy directive"
++   :widths: auto
++   :class: parameter-table
++
++   "word", "none", "no", "none"
++
++The file containing the token to use to authenticate to the Kubernetes API
++server.  One of `tokenfile` or `token` is required if Kubernetes is configured
++with access control.  Example: `/etc/rsyslog.d/mmk8s.token`
++
++.. _token:
++
++token
++^^^^^
++
++.. csv-table::
++   :header: "type", "default", "mandatory", "obsolete legacy directive"
++   :widths: auto
++   :class: parameter-table
++
++   "word", "none", "no", "none"
++
++The token to use to authenticate to the Kubernetes API server.  One of `token`
++or `tokenfile` is required if Kubernetes is configured with access control.
++Example: `UxMU46ptoEWOSqLNa1bFmH`
++
++.. _annotation_match:
++
++annotation_match
++^^^^^^^^^^^^^^^^
++
++.. csv-table::
++   :header: "type", "default", "mandatory", "obsolete legacy directive"
++   :widths: auto
++   :class: parameter-table
++
++   "array", "none", "no", "none"
++
++By default no pod or namespace annotations will be added to the
++messages.  This parameter is an array of patterns to match the keys of
++the `annotations` field in the pod and namespace metadata to include
++in the `$!kubernetes!annotations` (for pod annotations) or the
++`$!kubernetes!namespace_annotations` (for namespace annotations)
++message properties.  Example: `["k8s.*master","k8s.*node"]`
++
++.. _srcmetadatapath:
++
++srcmetadatapath
++^^^^^^^^^^^^^^^
++
++.. csv-table::
++   :header: "type", "default", "mandatory", "obsolete legacy directive"
++   :widths: auto
++   :class: parameter-table
++
++   "word", "$!metadata!filename", "no", "none"
++
++When reading json-file logs, with `imfile` and `addmetadata="on"`,
++this is the property where the filename is stored.
++
++.. _dstmetadatapath:
++
++dstmetadatapath
++^^^^^^^^^^^^^^^
++
++.. csv-table::
++   :header: "type", "default", "mandatory", "obsolete legacy directive"
++   :widths: auto
++   :class: parameter-table
++
++   "word", "$!", "no", "none"
++
++This is where the `kubernetes` and `docker` properties will be
++written.  By default, the module will add `$!kubernetes` and
++`$!docker`.
++
++.. _allowunsignedcerts:
++
++allowunsignedcerts
++^^^^^^^^^^^^^^^^^^
++
++.. csv-table::
++   :header: "type", "default", "mandatory", "obsolete legacy directive"
++   :widths: auto
++   :class: parameter-table
++
++   "boolean", "off", "no", "none"
++
++If `"on"`, this will set the curl `CURLOPT_SSL_VERIFYPEER` option to
++`0`.  You are strongly discouraged from setting this to `"on"`.  It is
++primarily useful only for debugging or testing.
++
++.. _de_dot:
++
++de_dot
++^^^^^^
++
++.. csv-table::
++   :header: "type", "default", "mandatory", "obsolete legacy directive"
++   :widths: auto
++   :class: parameter-table
++
++   "boolean", "on", "no", "none"
++
++When processing labels and annotations, if this parameter is set to
++`"on"`, the key strings will have their `.` characters replaced with
++the string specified by the `de_dot_separator` parameter.
++
++.. _de_dot_separator:
++
++de_dot_separator
++^^^^^^^^^^^^^^^^
++
++.. csv-table::
++   :header: "type", "default", "mandatory", "obsolete legacy directive"
++   :widths: auto
++   :class: parameter-table
++
++   "word", "_", "no", "none"
++
++When processing labels and annotations, if the `de_dot` parameter is
++set to `"on"`, the key strings will have their `.` characters replaced
++with the string specified by the string value of this parameter.
++
++.. _filenamerules:
++
++filenamerules
++^^^^^^^^^^^^^
++
++.. csv-table::
++   :header: "type", "default", "mandatory", "obsolete legacy directive"
++   :widths: auto
++   :class: parameter-table
++
++   "word", "SEE BELOW", "no", "none"
++
++.. note::
++
++    This directive is not supported with liblognorm 2.0.2 and earlier.
++
++When processing json-file logs, these are the lognorm rules to use to
++match the filename and extract metadata.  The default value is::
++
++    rule=:/var/log/containers/%pod_name:char-to:_%_%namespace_name:char-to:_%_%conta\
++    iner_name:char-to:-%-%container_id:char-to:.%.log
++
++.. note::
++
++    In the above rules, the slashes ``\`` ending each line indicate
++    line wrapping - they are not part of the rule.
++
++There are two rules because the `container_hash` is optional.
++
++.. _filenamerulebase:
++
++filenamerulebase
++^^^^^^^^^^^^^^^^
++
++.. csv-table::
++   :header: "type", "default", "mandatory", "obsolete legacy directive"
++   :widths: auto
++   :class: parameter-table
++
++   "word", "/etc/rsyslog.d/k8s_filename.rulebase", "no", "none"
++
++When processing json-file logs, this is the rulebase used to
++match the filename and extract metadata.  For the actual rules, see
++`filenamerules` above.
++
++.. _containerrules:
++
++containerrules
++^^^^^^^^^^^^^^
++
++.. csv-table::
++   :header: "type", "default", "mandatory", "obsolete legacy directive"
++   :widths: auto
++   :class: parameter-table
++
++   "word", "SEE BELOW", "no", "none"
++
++.. note::
++
++    This directive is not supported with liblognorm 2.0.2 and earlier.
++
++For journald logs, there must be a message property `CONTAINER_NAME`
++which has a value matching these rules specified by this parameter.
++The default value is::
++
++    rule=:%k8s_prefix:char-to:_%_%container_name:char-to:.%.%container_hash:char-to:\
++    _%_%pod_name:char-to:_%_%namespace_name:char-to:_%_%not_used_1:char-to:_%_%not_u\
++    sed_2:rest%
++    rule=:%k8s_prefix:char-to:_%_%container_name:char-to:_%_%pod_name:char-to:_%_%na\
++    mespace_name:char-to:_%_%not_used_1:char-to:_%_%not_used_2:rest%
++
++.. note::
++
++    In the above rules, the slashes ``\`` ending each line indicate
++    line wrapping - they are not part of the rule.
++
++There are two rules because the `container_hash` is optional.
++
++.. _containerrulebase:
++
++containerrulebase
++^^^^^^^^^^^^^^^^^
++
++.. csv-table::
++   :header: "type", "default", "mandatory", "obsolete legacy directive"
++   :widths: auto
++   :class: parameter-table
++
++   "word", "/etc/rsyslog.d/k8s_container_name.rulebase", "no", "none"
++
++When processing journald logs, this is the rulebase used to
++match the CONTAINER_NAME property value and extract metadata.  For the
++actual rules, see `containerrules`.
++
++Fields
++------
++
++These are the fields added from the metadata in the json-file filename, or from
++the `CONTAINER_NAME` and `CONTAINER_ID_FULL` fields from the `imjournal` input:
++
++`$!kubernetes!namespace_name`, `$!kubernetes!pod_name`,
++`$!kubernetes!container_name`, `$!docker!id`, `$!kubernetes!master_url`.
++
++If mmkubernetes can extract the above fields from the input, the following
++fields will always be present.  If they are not present, mmkubernetes
++failed to look up the namespace or pod in Kubernetes:
++
++`$!kubernetes!namespace_id`, `$!kubernetes!pod_id`,
++`$!kubernetes!creation_timestamp`, `$!kubernetes!host`
++
++The following fields may be present, depending on how the namespace and pod are
++defined in Kubernetes, and depending on the value of the directive
++`annotation_match`:
++
++`$!kubernetes!labels`, `$!kubernetes!annotations`, `$!kubernetes!namespace_labels`,
++`$!kubernetes!namespace_annotations`
++
++More fields may be added in the future.
++
++Example
++-------
++
++Assuming you have an `imfile` input reading from docker json-file container
++logs managed by Kubernetes, with `addmetadata="on"` so that mmkubernetes can
++get the basic necessary Kubernetes metadata from the filename:
++
++.. code-block:: none
++
++    input(type="imfile" file="/var/log/containers/*.log"
++          tag="kubernetes" addmetadata="on")
++
++and/or an `imjournal` input for docker journald container logs annotated by
++Kubernetes:
++
++.. code-block:: none
++
++    input(type="imjournal")
++
++Then mmkubernetes can be used to annotate log records like this:
++
++.. code-block:: none
++
++    module(load="mmkubernetes")
++
++    action(type="mmkubernetes")
++
++After this, you should have log records with fields described in the `Fields`
++section above.
++
++Credits
++-------
++
++This work is based on
++https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter
++and has many of the same features.
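As a hedged follow-up to the module's own example (the namespace value and output file below are made up), the extracted properties can then drive routing:

.. code-block:: none

    # hypothetical continuation: route annotated records by namespace
    module(load="mmkubernetes")
    module(load="omfile")
    action(type="mmkubernetes")
    if $!kubernetes!namespace_name == "production" then {
        action(type="omfile" file="/var/log/k8s-production.log")
    }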
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1459896-queues-defaults-doc.patch b/SOURCES/rsyslog-8.24.0-rhbz1459896-queues-defaults-doc.patch
deleted file mode 100644
index e7bdc91..0000000
--- a/SOURCES/rsyslog-8.24.0-rhbz1459896-queues-defaults-doc.patch
+++ /dev/null
@@ -1,138 +0,0 @@
-From c8be9a713a57f07311560af50c24267b30bef21b Mon Sep 17 00:00:00 2001
-From: Jiri Vymazal <jvymazal@redhat.com>
-Date: Tue, 29 Aug 2017 16:32:15 +0200
-Subject: [PATCH] fixed queue default values
-
----
- source/concepts/queues.rst                                       | 7 +++----
- source/configuration/global/index.rst                            | 6 +++---
- source/configuration/global/options/rsconf1_mainmsgqueuesize.rst | 2 +-
- source/rainerscript/queue_parameters.rst                         | 15 ++++++++++++---
- source/configuration/action/index.rst                            | 12 ++++++------
- 5 files changed, 24 insertions(+), 16 deletions(-)
-
-diff --git a/source/concepts/queues.rst b/source/concepts/queues.rst
-index c71413c..9b41128 100644
---- a/source/concepts/queues.rst
-+++ b/source/concepts/queues.rst
-@@ -273,10 +273,9 @@ actually needed.
- The water marks can be set via the "*$<object>QueueHighWatermark*\ "
- and  "*$<object>QueueLowWatermark*\ " configuration file directives.
- Note that these are actual numbers, not percentages. Be sure they make
--sense (also in respect to "*$<object>QueueSize*\ "), as rsyslodg does
--currently not perform any checks on the numbers provided. It is easy to
--screw up the system here (yes, a feature enhancement request is filed
--;)).
-+sense (also in respect to "*$<object>QueueSize*\ "). Rsyslodg does
-+perform some checks on the numbers provided, and issues warning when
-+numbers are "suspicious".
- 
- Limiting the Queue Size
- -----------------------
-diff --git a/source/configuration/global/index.rst b/source/configuration/global/index.rst
-index 2738f21..a53ef23 100644
---- a/source/configuration/global/index.rst
-+++ b/source/configuration/global/index.rst
-@@ -137,13 +137,13 @@ To understand queue parameters, read
- -  **$MainMsgQueueDequeueSlowdown** <number> [number is timeout in
-    *micro*\ seconds (1000000us is 1sec!), default 0 (no delay). Simple
-    rate-limiting!]
---  **$MainMsgQueueDiscardMark** <number> [default 9750]
-+-  **$MainMsgQueueDiscardMark** <number> [default 98000]
- -  **$MainMsgQueueDiscardSeverity** <severity> [either a textual or
-    numerical severity! default 4 (warning)]
- -  **$MainMsgQueueFileName** <name>
---  **$MainMsgQueueHighWaterMark** <number> [default 8000]
-+-  **$MainMsgQueueHighWaterMark** <number> [default 80000]
- -  **$MainMsgQueueImmediateShutdown** [on/**off**]
---  **$MainMsgQueueLowWaterMark** <number> [default 2000]
-+-  **$MainMsgQueueLowWaterMark** <number> [default 20000]
- -  **$MainMsgQueueMaxFileSize** <size\_nbr>, default 1m
- -  **$MainMsgQueueTimeoutActionCompletion** <number> [number is timeout in
-    ms (1000ms is 1sec!), default 1000, 0 means immediate!]
-diff --git a/source/configuration/global/options/rsconf1_mainmsgqueuesize.rst b/source/configuration/global/options/rsconf1_mainmsgqueuesize.rst
-index 050407c..3e902cf 100644
---- a/source/configuration/global/options/rsconf1_mainmsgqueuesize.rst
-+++ b/source/configuration/global/options/rsconf1_mainmsgqueuesize.rst
-@@ -3,7 +3,7 @@ $MainMsgQueueSize
- 
- **Type:** global configuration directive
- 
--**Default:** 10000
-+**Default:** 100000
- 
- **Description:**
- 
-diff --git a/source/rainerscript/queue_parameters.rst b/source/rainerscript/queue_parameters.rst
-index 4453721..3f2b7a2 100644
---- a/source/rainerscript/queue_parameters.rst
-+++ b/source/rainerscript/queue_parameters.rst
-@@ -33,8 +33,14 @@ read the :doc:`queues <../concepts/queues>` documentation.
-    For more information on the current status of this restriction see
-    the `rsyslog FAQ: "lower bound for queue
-    sizes" <http://www.rsyslog.com/lower-bound-for-queue-sizes/>`_.
-+
-+   The default depends on queue type and, if you need
-+   a specific value, please specify it. Otherwise rsyslog selects what
-+   it consideres appropriate. For example, ruleset queues have a default
-+   size of 50000 and action queues which are configured to be non-direct
-+   have a size of 1000.
- -  **queue.dequeuebatchsize** number
--   default 16
-+   default 128
- -  **queue.maxdiskspace** number
-    The maximum size that all queue files together will use on disk. Note
-    that the actual size may be slightly larger than the configured max,
-@@ -46,8 +47,9 @@ read the :doc:`queues <../concepts/queues>` documentation.
-    processing, because disk queue mode is very considerably slower than
-    in-memory queue mode. Going to disk should be reserved for cases
-    where an output action destination is offline for some period.
-+   default 90% of queue size
- -  **queue.lowwatermark** number
--   default 2000
-+   default 70% of queue size
- -  **queue.fulldelaymark** number 
-    Number of messages when the queue should block delayable messages. 
-    Messages are NO LONGER PROCESSED until the queue has sufficient space 
-@@ -59,9 +61,11 @@ read the :doc:`queues <../concepts/queues>` documentation.
-    out of space. Please note that if you use a DA queue, setting the 
-    fulldelaymark BELOW the highwatermark makes the queue never activate 
-    disk mode for delayable inputs. So this is probably not what you want.
-+   default 97% of queue size
- -  **queue.lightdelaymark** number
-+   default 70% of queue size
- -  **queue.discardmark** number
--   default 9750
-+   default 80% of queue size
- -  **queue.discardseverity** number
-    \*numerical\* severity! default 8 (nothing discarded)
- -  **queue.checkpointinterval** number
-diff --git a/source/configuration/action/index.rst b/source/configuration/action/index.rst
-index 3e7cd24..9352866 100644
---- a/source/configuration/action/index.rst
-+++ b/source/configuration/action/index.rst
-@@ -163,18 +163,18 @@ following action, only. The next and all other actions will be
- in "direct" mode (no real queue) if not explicitely specified otherwise.
-
- -  **$ActionQueueCheckpointInterval** <number>
---  **$ActionQueueDequeueBatchSize** <number> [default 16]
-+-  **$ActionQueueDequeueBatchSize** <number> [default 128]
- -  **$ActionQueueDequeueSlowdown** <number> [number is timeout in
-    *micro*\ seconds (1000000us is 1sec!), default 0 (no delay). Simple
-    rate-limiting!]
---  **$ActionQueueDiscardMark** <number> [default 9750]
---  **$ActionQueueDiscardSeverity** <number> [\*numerical\* severity! default
--   4 (warning)]
-+-  **$ActionQueueDiscardMark** <number> [default 80% of queue size]
-+-  **$ActionQueueDiscardSeverity** <number> [\*numerical\* severity! default
-+   8 (nothing discarded)]
- -  **$ActionQueueFileName** <name>
---  **$ActionQueueHighWaterMark** <number> [default 8000]
-+-  **$ActionQueueHighWaterMark** <number> [default 90% of queue size]
- -  **$ActionQueueImmediateShutdown** [on/**off**]
- -  **$ActionQueueSize** <number>
---  **$ActionQueueLowWaterMark** <number> [default 2000]
-+-  **$ActionQueueLowWaterMark** <number> [default 70% of queue size]
- -  **$ActionQueueMaxFileSize** <size\_nbr>, default 1m
- -  **$ActionQueueTimeoutActionCompletion** <number> [number is timeout in ms
-    (1000ms is 1sec!), default 1000, 0 means immediate!]
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1507028-recover_qi-doc.patch b/SOURCES/rsyslog-8.24.0-rhbz1507028-recover_qi-doc.patch
deleted file mode 100644
index 42a69b1..0000000
--- a/SOURCES/rsyslog-8.24.0-rhbz1507028-recover_qi-doc.patch
+++ /dev/null
@@ -1,27 +0,0 @@
-From ff07a7cfc171dc2151cc8afe44776525d34a9e01 Mon Sep 17 00:00:00 2001
-From: jvymazal <jvymazal@redhat.com>
-Date: Tue, 3 Jan 2017 10:24:26 +0100
-Subject: [PATCH] Update queues.rst
-
-Update queues.rst
----
- source/concepts/queues.rst | 6 ++++++
- 1 file changed, 6 insertions(+)
-
-diff --git a/source/concepts/queues.rst b/source/concepts/queues.rst
-index eb394e8..c71413c 100644
---- a/source/concepts/queues.rst
-+++ b/source/concepts/queues.rst
-@@ -153,6 +153,12 @@ can be requested via "*<object>QueueSyncQueueFiles on/off* with the
- default being off. Activating this option has a performance penalty, so
- it should not be turned on without reason.
- 
-+If you happen to lose or otherwise need the housekeeping structures and 
-+have all yours queue chunks you can use perl script included in rsyslog
-+package to generate it. 
-+Usage: recover_qi.pl -w *$WorkDirectory* -f QueueFileName -d 8 > QueueFileName.qi
-+
-+
- In-Memory Queues
- ~~~~~~~~~~~~~~~~
- 
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1507145-omelastic-client-cert.patch b/SOURCES/rsyslog-8.24.0-rhbz1507145-omelastic-client-cert.patch
new file mode 100644
index 0000000..84e9d0f
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1507145-omelastic-client-cert.patch
@@ -0,0 +1,208 @@
+From 02772eb5f28b3c3a98f0d739b6210ca82d58f7ee Mon Sep 17 00:00:00 2001
+From: Rich Megginson <rmeggins@redhat.com>
+Date: Thu, 8 Feb 2018 18:13:13 -0700
+Subject: [PATCH] omelasticsearch - add support for CA cert, client cert auth
+
+This allows omelasticsearch to perform client cert based authentication
+to Elasticsearch.
+Add the following parameters:
+`tls.cacert` - Full path and filename of the file containing the CA cert
+               for the CA that issued the Elasticsearch server(s) cert(s)
+`tls.mycert` - Full path and filename of the file containing the client
+               cert used to authenticate to Elasticsearch
+`tls.myprivkey` - Full path and filename of the file containing the client
+                  key used to authenticate to Elasticsearch
+---
+ plugins/omelasticsearch/omelasticsearch.c | 79 ++++++++++++++++++++++++++++---
+ 1 file changed, 73 insertions(+), 6 deletions(-)
+
+diff --git a/plugins/omelasticsearch/omelasticsearch.c b/plugins/omelasticsearch/omelasticsearch.c
+index 97d8fb233..88bd5e16c 100644
+--- a/plugins/omelasticsearch/omelasticsearch.c
++++ b/plugins/omelasticsearch/omelasticsearch.c
+@@ -110,6 +110,9 @@ typedef struct _instanceData {
+ 	size_t maxbytes;
+ 	sbool useHttps;
+ 	sbool allowUnsignedCerts;
++	uchar *caCertFile;
++	uchar *myCertFile;
++	uchar *myPrivKeyFile;
+ } instanceData;
+ 
+ typedef struct wrkrInstanceData {
+@@ -154,7 +157,10 @@ static struct cnfparamdescr actpdescr[] = {
+ 	{ "template", eCmdHdlrGetWord, 0 },
+ 	{ "dynbulkid", eCmdHdlrBinary, 0 },
+ 	{ "bulkid", eCmdHdlrGetWord, 0 },
+-	{ "allowunsignedcerts", eCmdHdlrBinary, 0 }
++	{ "allowunsignedcerts", eCmdHdlrBinary, 0 },
++	{ "tls.cacert", eCmdHdlrString, 0 },
++	{ "tls.mycert", eCmdHdlrString, 0 },
++	{ "tls.myprivkey", eCmdHdlrString, 0 }
+ };
+ static struct cnfparamblk actpblk =
+ 	{ CNFPARAMBLK_VERSION,
+@@ -168,6 +174,9 @@ BEGINcreateInstance
+ CODESTARTcreateInstance
+ 	pData->fdErrFile = -1;
+ 	pthread_mutex_init(&pData->mutErrFile, NULL);
++	pData->caCertFile = NULL;
++	pData->myCertFile = NULL;
++	pData->myPrivKeyFile = NULL;
+ ENDcreateInstance
+ 
+ BEGINcreateWrkrInstance
+@@ -216,6 +225,9 @@ CODESTARTfreeInstance
+ 	free(pData->timeout);
+ 	free(pData->errorFile);
+ 	free(pData->bulkId);
++	free(pData->caCertFile);
++	free(pData->myCertFile);
++	free(pData->myPrivKeyFile);
+ ENDfreeInstance
+ 
+ BEGINfreeWrkrInstance
+@@ -270,6 +282,9 @@ CODESTARTdbgPrintInstInfo
+ 	dbgprintf("\tinterleaved=%d\n", pData->interleaved);
+ 	dbgprintf("\tdynbulkid=%d\n", pData->dynBulkId);
+ 	dbgprintf("\tbulkid='%s'\n", pData->bulkId);
++	dbgprintf("\ttls.cacert='%s'\n", pData->caCertFile);
++	dbgprintf("\ttls.mycert='%s'\n", pData->myCertFile);
++	dbgprintf("\ttls.myprivkey='%s'\n", pData->myPrivKeyFile);
+ ENDdbgPrintInstInfo
+ 
+ 
+@@ -311,7 +326,7 @@ computeBaseUrl(const char*const serverParam,
+ 		r = useHttps ? es_addBuf(&urlBuf, SCHEME_HTTPS, sizeof(SCHEME_HTTPS)-1) :
+ 			es_addBuf(&urlBuf, SCHEME_HTTP, sizeof(SCHEME_HTTP)-1);
+ 
+-	if (r == 0) r = es_addBuf(&urlBuf, serverParam, strlen(serverParam));
++	if (r == 0) r = es_addBuf(&urlBuf, (char *)serverParam, strlen(serverParam));
+ 	if (r == 0 && !strchr(host, ':')) {
+ 		snprintf(portBuf, sizeof(portBuf), ":%d", defaultPort);
+ 		r = es_addBuf(&urlBuf, portBuf, strlen(portBuf));
+@@ -1296,7 +1311,7 @@ finalize_it:
+ }
+ 
+ static void
+-curlCheckConnSetup(CURL *handle, HEADER *header, long timeout, sbool allowUnsignedCerts)
++curlCheckConnSetup(CURL *handle, HEADER *header, long timeout, sbool allowUnsignedCerts, wrkrInstanceData_t *pWrkrData)
+ {
+ 	curl_easy_setopt(handle, CURLOPT_HTTPHEADER, header);
+ 	curl_easy_setopt(handle, CURLOPT_NOBODY, TRUE);
+@@ -1305,13 +1320,21 @@ curlCheckConnSetup(CURL *handle, HEADER *header, long timeout, sbool allowUnsign
+ 
+ 	if(allowUnsignedCerts)
+ 		curl_easy_setopt(handle, CURLOPT_SSL_VERIFYPEER, FALSE);
++	if(pWrkrData->pData->caCertFile)
++		curl_easy_setopt(handle, CURLOPT_CAINFO, pWrkrData->pData->caCertFile);
++	if(pWrkrData->pData->myCertFile)
++		curl_easy_setopt(handle, CURLOPT_SSLCERT, pWrkrData->pData->myCertFile);
++	if(pWrkrData->pData->myPrivKeyFile)
++		curl_easy_setopt(handle, CURLOPT_SSLKEY, pWrkrData->pData->myPrivKeyFile);
++	/* uncomment for in-depth debugging:
++	curl_easy_setopt(handle, CURLOPT_VERBOSE, TRUE); */
+ 
+ 	/* Only enable for debugging
+ 	curl_easy_setopt(curl, CURLOPT_VERBOSE, TRUE); */
+ }
+ 
+ static void
+-curlPostSetup(CURL *handle, HEADER *header, uchar* authBuf)
++curlPostSetup(CURL *handle, HEADER *header, uchar* authBuf, wrkrInstanceData_t *pWrkrData)
+ {
+ 	curl_easy_setopt(handle, CURLOPT_HTTPHEADER, header);
+ 	curl_easy_setopt(handle, CURLOPT_WRITEFUNCTION, curlResult);
+@@ -1322,6 +1345,12 @@ curlPostSetup(CURL *handle, HEADER *header, uchar* authBuf)
+ 		curl_easy_setopt(handle, CURLOPT_USERPWD, authBuf);
+ 		curl_easy_setopt(handle, CURLOPT_PROXYAUTH, CURLAUTH_ANY);
+ 	}
++	if(pWrkrData->pData->caCertFile)
++		curl_easy_setopt(handle, CURLOPT_CAINFO, pWrkrData->pData->caCertFile);
++	if(pWrkrData->pData->myCertFile)
++		curl_easy_setopt(handle, CURLOPT_SSLCERT, pWrkrData->pData->myCertFile);
++	if(pWrkrData->pData->myPrivKeyFile)
++		curl_easy_setopt(handle, CURLOPT_SSLKEY, pWrkrData->pData->myPrivKeyFile);
+ }
+ 
+ static rsRetVal
+@@ -1332,7 +1361,7 @@ curlSetup(wrkrInstanceData_t *pWrkrData, instanceData *pData)
+ 	if (pWrkrData->curlPostHandle == NULL) {
+ 		return RS_RET_OBJ_CREATION_FAILED;
+ 	}
+-	curlPostSetup(pWrkrData->curlPostHandle, pWrkrData->curlHeader, pData->authBuf);
++	curlPostSetup(pWrkrData->curlPostHandle, pWrkrData->curlHeader, pData->authBuf, pWrkrData);
+ 
+ 	pWrkrData->curlCheckConnHandle = curl_easy_init();
+ 	if (pWrkrData->curlCheckConnHandle == NULL) {
+@@ -1341,7 +1370,7 @@ curlSetup(wrkrInstanceData_t *pWrkrData, instanceData *pData)
+ 		return RS_RET_OBJ_CREATION_FAILED;
+ 	}
+ 	curlCheckConnSetup(pWrkrData->curlCheckConnHandle, pWrkrData->curlHeader,
+-		pData->healthCheckTimeout, pData->allowUnsignedCerts);
++		pData->healthCheckTimeout, pData->allowUnsignedCerts, pWrkrData);
+ 
+ 	return RS_RET_OK;
+ }
+@@ -1372,6 +1401,9 @@ setInstParamDefaults(instanceData *pData)
+ 	pData->interleaved=0;
+ 	pData->dynBulkId= 0;
+ 	pData->bulkId = NULL;
++	pData->caCertFile = NULL;
++	pData->myCertFile = NULL;
++	pData->myPrivKeyFile = NULL;
+ }
+ 
+ BEGINnewActInst
+@@ -1380,6 +1412,8 @@ BEGINnewActInst
+ 	struct cnfarray* servers = NULL;
+ 	int i;
+ 	int iNumTpls;
++	FILE *fp;
++	char errStr[1024];
+ CODESTARTnewActInst
+ 	if((pvals = nvlstGetParams(lst, &actpblk, NULL)) == NULL) {
+ 		ABORT_FINALIZE(RS_RET_MISSING_CNFPARAMS);
+@@ -1435,6 +1469,39 @@ CODESTARTnewActInst
+ 			pData->dynBulkId = pvals[i].val.d.n;
+ 		} else if(!strcmp(actpblk.descr[i].name, "bulkid")) {
+ 			pData->bulkId = (uchar*)es_str2cstr(pvals[i].val.d.estr, NULL);
++		} else if(!strcmp(actpblk.descr[i].name, "tls.cacert")) {
++			pData->caCertFile = (uchar*)es_str2cstr(pvals[i].val.d.estr, NULL);
++			fp = fopen((const char*)pData->caCertFile, "r");
++			if(fp == NULL) {
++				rs_strerror_r(errno, errStr, sizeof(errStr));
++				errmsg.LogError(0, RS_RET_NO_FILE_ACCESS,
++						"error: 'tls.cacert' file %s couldn't be accessed: %s\n",
++						pData->caCertFile, errStr);
++			} else {
++				fclose(fp);
++			}
++		} else if(!strcmp(actpblk.descr[i].name, "tls.mycert")) {
++			pData->myCertFile = (uchar*)es_str2cstr(pvals[i].val.d.estr, NULL);
++			fp = fopen((const char*)pData->myCertFile, "r");
++			if(fp == NULL) {
++				rs_strerror_r(errno, errStr, sizeof(errStr));
++				errmsg.LogError(0, RS_RET_NO_FILE_ACCESS,
++						"error: 'tls.mycert' file %s couldn't be accessed: %s\n",
++						pData->myCertFile, errStr);
++			} else {
++				fclose(fp);
++			}
++		} else if(!strcmp(actpblk.descr[i].name, "tls.myprivkey")) {
++			pData->myPrivKeyFile = (uchar*)es_str2cstr(pvals[i].val.d.estr, NULL);
++			fp = fopen((const char*)pData->myPrivKeyFile, "r");
++			if(fp == NULL) {
++				rs_strerror_r(errno, errStr, sizeof(errStr));
++				errmsg.LogError(0, RS_RET_NO_FILE_ACCESS,
++						"error: 'tls.myprivkey' file %s couldn't be accessed: %s\n",
++						pData->myPrivKeyFile, errStr);
++			} else {
++				fclose(fp);
++			}
+ 		} else {
+ 			dbgprintf("omelasticsearch: program error, non-handled "
+ 			  "param '%s'\n", actpblk.descr[i].name);
+-- 
+2.14.3
+
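+
+Usage sketch for the new action parameters (illustrative only: server
+name and paths are placeholders, other parameters are elided):
+
+	action(type="omelasticsearch" server="es.example.net"
+	       usehttps="on" tls.cacert="/etc/rsyslog.d/es-ca.pem"
+	       tls.mycert="/etc/rsyslog.d/es-client.pem"
+	       tls.myprivkey="/etc/rsyslog.d/es-client.key")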
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1511485-deserialize-property-name.patch b/SOURCES/rsyslog-8.24.0-rhbz1511485-deserialize-property-name.patch
new file mode 100644
index 0000000..2bf5f9e
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1511485-deserialize-property-name.patch
@@ -0,0 +1,59 @@
+From c49e42f4f8381fc8e92579c41cefb2c85fe45929 Mon Sep 17 00:00:00 2001
+From: Rainer Gerhards <rgerhards@adiscon.com>
+Date: Tue, 7 Feb 2017 13:09:40 +0100
+Subject: [PATCH] core: fix sequence error in msg object deserializer
+
+Corruption of the disk queue (or the disk part of a DA queue) always
+happens if the "json" property (message variables) is present and the
+"structured-data" property is also present. This causes rsyslog to
+serialize to the queue in the wrong property sequence, which leads to
+error -2308 on deserialization.
+
+This seems to be a long-standing bug. Depending on the version used,
+some or all messages in the disk queue may be lost.
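+
+To illustrate the sequence constraint (a simplified sketch of the
+deserializer's control flow, mirroring the code below): each isProp()
+branch consumes the current property and then reads the next one, so
+the checks must run in exactly the order the properties were written:
+
+	CHKiRet(objDeserializeProperty(pVar, pStrm)); /* read a property */
+	if(isProp("pszStrucData")) { /* must now be checked before "json" */
+		MsgSetStructuredData(pMsg, (char*) rsCStrGetSzStrNoNULL(pVar->val.pStr));
+		reinitVar(pVar);
+		CHKiRet(objDeserializeProperty(pVar, pStrm));
+	}
+	if(isProp("json")) {
+		/* ... */
+	}
+	/* a property met out of order fails with RS_RET_DS_PROP_SEQ_ERR (-2308) */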
+
+closes https://github.com/rsyslog/rsyslog/issues/1404
+---
+ runtime/msg.c | 14 ++++++++------
+ 1 file changed, 8 insertions(+), 6 deletions(-)
+
+diff --git a/runtime/msg.c b/runtime/msg.c
+index 7cfeca843..cfa95517e 100644
+--- a/runtime/msg.c
++++ b/runtime/msg.c
+@@ -1350,6 +1350,11 @@ MsgDeserialize(smsg_t * const pMsg, strm_t *pStrm)
+ 		reinitVar(pVar);
+ 		CHKiRet(objDeserializeProperty(pVar, pStrm));
+ 	}
++	if(isProp("pszStrucData")) {
++		MsgSetStructuredData(pMsg, (char*) rsCStrGetSzStrNoNULL(pVar->val.pStr));
++		reinitVar(pVar);
++		CHKiRet(objDeserializeProperty(pVar, pStrm));
++	}
+ 	if(isProp("json")) {
+ 		tokener = json_tokener_new();
+ 		pMsg->json = json_tokener_parse_ex(tokener, (char*)rsCStrGetSzStrNoNULL(pVar->val.pStr),
+@@ -1366,11 +1371,6 @@ MsgDeserialize(smsg_t * const pMsg, strm_t *pStrm)
+ 		reinitVar(pVar);
+ 		CHKiRet(objDeserializeProperty(pVar, pStrm));
+ 	}
+-	if(isProp("pszStrucData")) {
+-		MsgSetStructuredData(pMsg, (char*) rsCStrGetSzStrNoNULL(pVar->val.pStr));
+-		reinitVar(pVar);
+-		CHKiRet(objDeserializeProperty(pVar, pStrm));
+-	}
+ 	if(isProp("pCSAPPNAME")) {
+ 		MsgSetAPPNAME(pMsg, (char*) rsCStrGetSzStrNoNULL(pVar->val.pStr));
+ 		reinitVar(pVar);
+@@ -1401,8 +1401,10 @@ MsgDeserialize(smsg_t * const pMsg, strm_t *pStrm)
+ 	 * but on the other hand it works decently AND we will probably replace
+ 	 * the whole persisted format soon in any case. -- rgerhards, 2012-11-06
+ 	 */
+-	if(!isProp("offMSG"))
++	if(!isProp("offMSG")) {
++		DBGPRINTF("error property: %s\n", rsCStrGetSzStrNoNULL(pVar->pcsName));
+ 		ABORT_FINALIZE(RS_RET_DS_PROP_SEQ_ERR);
++	}
+ 	MsgSetMSGoffs(pMsg, pVar->val.num);
+ finalize_it:
+ 	if(pVar != NULL)
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1512551-caching-sockaddr.patch b/SOURCES/rsyslog-8.24.0-rhbz1512551-caching-sockaddr.patch
new file mode 100644
index 0000000..cf36062
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1512551-caching-sockaddr.patch
@@ -0,0 +1,70 @@
+From 5f828658a317c86095fc2b982801b58bf8b8ee6f Mon Sep 17 00:00:00 2001
+From: mosvald <mosvald@redhat.com>
+Date: Mon, 4 Dec 2017 08:10:37 +0100
+Subject: [PATCH] cache sin_addr instead of the whole sockaddr structure
+
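+The full sockaddr structure also carries the port (sin_port/sin6_port),
+padding bytes, and for IPv6 the flowinfo/scope-id fields. Hashing and
+comparing all of those bytes means, for example, that two datagrams
+arriving from 192.0.2.1 with source ports 514 and 40000 produced two
+distinct cache entries for the same address. Keying the cache on
+sin_addr (respectively sin6_addr) alone makes both map to one entry.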
+---
+ runtime/dnscache.c | 41 +++++++++++++++++++++++++++++++----------
+ 1 file changed, 31 insertions(+), 10 deletions(-)
+
+diff --git a/runtime/dnscache.c b/runtime/dnscache.c
+index 388a64f5e..d1d6e10a1 100644
+--- a/runtime/dnscache.c
++++ b/runtime/dnscache.c
+@@ -79,22 +79,43 @@ static prop_t *staticErrValue;
+ static unsigned int
+ hash_from_key_fn(void *k) 
+ {
+-    int len;
+-    uchar *rkey = (uchar*) k; /* we treat this as opaque bytes */
+-    unsigned hashval = 1;
+-
+-    len = SALEN((struct sockaddr*)k);
+-    while(len--)
+-        hashval = hashval * 33 + *rkey++;
++	int len = 0;
++	uchar *rkey; /* we treat this as opaque bytes */
++	unsigned hashval = 1;
++
++	switch (((struct sockaddr *)k)->sa_family) {
++		case AF_INET:
++			len = sizeof (struct in_addr);
++			rkey = (uchar*) &(((struct sockaddr_in *)k)->sin_addr);
++			break;
++		case AF_INET6:
++			len = sizeof (struct in6_addr);
++			rkey = (uchar*) &(((struct sockaddr_in6 *)k)->sin6_addr);
++			break;
++	}
++	while(len--)
++		hashval = hashval * 33 + *rkey++;
+ 
+-    return hashval;
++	return hashval;
+ }
+ 
+ static int
+ key_equals_fn(void *key1, void *key2)
+ {
+-	return (SALEN((struct sockaddr*)key1) == SALEN((struct sockaddr*) key2) 
+-		   && !memcmp(key1, key2, SALEN((struct sockaddr*) key1)));
++	int RetVal = 0;
++
++	if (((struct sockaddr *)key1)->sa_family != ((struct sockaddr *)key2)->sa_family)
++		return 0;
++	switch (((struct sockaddr *)key1)->sa_family) {
++		case AF_INET:
++			RetVal = !memcmp(&((struct sockaddr_in *)key1)->sin_addr, &((struct sockaddr_in *)key2)->sin_addr, sizeof (struct in_addr));
++			break;
++		case AF_INET6:
++			RetVal = !memcmp(&((struct sockaddr_in6 *)key1)->sin6_addr, &((struct sockaddr_in6 *)key2)->sin6_addr, sizeof (struct in6_addr));
++			break;
++	}
++
++	return RetVal;
+ }
+ 
+ /* destruct a cache entry.
+-- 
+2.14.3
+
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1531295-imfile-rewrite-with-symlink.patch b/SOURCES/rsyslog-8.24.0-rhbz1531295-imfile-rewrite-with-symlink.patch
new file mode 100644
index 0000000..d0f5721
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1531295-imfile-rewrite-with-symlink.patch
@@ -0,0 +1,3160 @@
+From: Jiri Vymazal <jvymazal@redhat.com>
+Date: Mon, 28 Jun 2018 15:07:55 +0100
+Subject: Imfile rewrite with symlink support
+
+This commit greatly refactors imfile's internal workings. It changes
+the handling of inotify, FEN, and polling modes. Mostly unchanged are
+the way a file is read and the way state files are kept.
+
+This is about a 50% rewrite of the module.
+
+Polling, inotify, and FEN modes now use largely unified code. Some
+differences still exist and may be removed in further commits. The
+internal handling of wildcards and file detection has been completely
+rewritten from scratch. For example, files matched by multi-level
+wildcards were previously not reliably detected. The code now also
+provides much the same functionality in all modes; most importantly,
+wildcards are now also supported in polling mode.
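+
+As an illustration (the path is a placeholder, other parameters are
+elided), a watch such as
+
+	input(type="imfile" File="/var/log/*/app/*.log" Tag="app:")
+
+now behaves the same in inotify, FEN, and polling modes and picks up
+matching files as they appear.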
+
+The refactoring lays the groundwork for further enhancements and
+smaller refactorings. This commit provides the same feature set that
+imfile had previously.
+
+Some specific changes:
+bugfix: imfile did not pick up all files when they were not present
+at startup
+
+bugfix: directories only supported the "*" wildcard, no others
+
+bugfix: parameter "sortfiles" only worked in FEN mode
+
+provides the ability to dynamically add and remove files via
+multi-level wildcards
+
+the state file name has been changed to the inode number
+
+We change it to json and also change the way it is stored and loaded.
+This lays the base for additional improvements in imfile.
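+
+For illustration: a state file is now named after the monitored file's
+inode (e.g. "imfile-state:1234567" in the work directory) and contains
+the json fields read back by the new code, roughly
+
+	{ "prev_was_nl": 0, "curr_offs": 8192, "strt_offs": 8192 }
+
+plus prev_line_segment/prev_msg_segment where applicable.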
+
+When imfile rewrites state files, it does not truncate previous
+content. If the new content is smaller than the existing one, the tail
+of the old content remains in the file, resulting in invalid json.
+That in turn can lead to further failures.
+
+This introduces symlink detection and following, as well as
+monitoring of changes to the symlinks themselves.
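+
+For example, if /var/log/app.log is a symlink to /data/logs/app.log,
+imfile now resolves and reads the link target and additionally watches
+the target's parent directory, so rotation of the real file is
+detected.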
+
+stream/bugfix: memory leak on stream open if the filename was already
+generated - this can happen if imfile reads a state file. On each open,
+the memory for the file name can be lost.
+
+(cherry picked from commit a03dccf8484d621fe06cb2d11816fbe921751e54 - https://gitlab.cee.redhat.com/rsyslog/rsyslog)
+---
+ plugins/imfile/imfile.c                          | 2264 ++++++++++++++++++++++---------------------
+ runtime/msg.c                                    |   22 ++++++++++++++++++++++
+ runtime/msg.h                                    |    1 +
+ runtime/stream.c                                 |  136 ++++++++++++++++++++-------
+ runtime/stream.h                                 |   17 ++++++++++++++---
+ 5 files changed, 1303 insertions(+), 1137 deletions(-)
+
+diff --git a/plugins/imfile/imfile.c b/plugins/imfile/imfile.c
+index b0bc860bcd16beaecd67ce1b7c61991356ea5471..f8225d7068d8fc98edde7bbed194be1105b1696b 100644
+--- a/plugins/imfile/imfile.c
++++ b/plugins/imfile/imfile.c
+@@ -35,6 +35,7 @@
+ #include <unistd.h>
+ #include <glob.h>
+ #include <poll.h>
++#include <json.h>
+ #include <fnmatch.h>
+ #ifdef HAVE_SYS_INOTIFY_H
+ #include <sys/inotify.h>
+@@ -56,6 +57,7 @@
+ #include "stringbuf.h"
+ #include "ruleset.h"
+ #include "ratelimit.h"
++#include "parserif.h"
+ 
+ #include <regex.h> // TODO: fix via own module
+ 
+@@ -77,50 +81,19 @@ static int bLegacyCnfModGlobalsPermitted;/* are legacy module-global config para
+ 
+ #define NUM_MULTISUB 1024 /* default max number of submits */
+ #define DFLT_PollInterval 10
+-
+-#define INIT_FILE_TAB_SIZE 4 /* default file table size - is extended as needed, use 2^x value */
+-#define INIT_FILE_IN_DIR_TAB_SIZE 1 /* initial size for "associated files tab" in directory table */
+ #define INIT_WDMAP_TAB_SIZE 1 /* default wdMap table size - is extended as needed, use 2^x value */
+-
+ #define ADD_METADATA_UNSPECIFIED -1
++#define CONST_LEN_CEE_COOKIE 5
++#define CONST_CEE_COOKIE "@cee:"
++
++/* If set to 1, fileTableDisplay will be compiled and used for debugging */
++#define ULTRA_DEBUG 0
++
++/* Define GLOB_BRACE as zero to disable brace support when it is not available on the current platform */
++#ifndef GLOB_BRACE
++	#define GLOB_BRACE 0
++#endif
+ 
+-/* this structure is used in pure polling mode as well one of the support
+- * structures for inotify.
+- */
+-typedef struct lstn_s {
+-	struct lstn_s *next, *prev;
+-	struct lstn_s *masterLstn;/* if dynamic file (via wildcard), this points to the configured
+-				 * master entry. For master entries, it is always NULL. Only
+-				 * dynamic files can be deleted from the "files" list. */
+-	uchar *pszFileName;
+-	uchar *pszDirName;
+-	uchar *pszBaseName;
+-	uchar *pszTag;
+-	size_t lenTag;
+-	uchar *pszStateFile; /* file in which state between runs is to be stored (dynamic if NULL) */
+-	int readTimeout;
+-	int iFacility;
+-	int iSeverity;
+-	int maxLinesAtOnce;
+-	uint32_t trimLineOverBytes;
+-	int nRecords; /**< How many records did we process before persisting the stream? */
+-	int iPersistStateInterval; /**< how often should state be persisted? (0=on close only) */
+-	strm_t *pStrm;	/* its stream (NULL if not assigned) */
+-	sbool bRMStateOnDel;
+-	sbool hasWildcard;
+-	uint8_t readMode;	/* which mode to use in ReadMulteLine call? */
+-	uchar *startRegex;	/* regex that signifies end of message (NULL if unset) */
+-	regex_t end_preg;	/* compiled version of startRegex */
+-	uchar *prevLineSegment;	/* previous line segment (in regex mode) */
+-	sbool escapeLF;	/* escape LF inside the MSG content? */
+-	sbool reopenOnTruncate;
+-	sbool addMetadata;
+-	sbool addCeeTag;
+-	sbool freshStartTail; /* read from tail of file on fresh start? */
+-	ruleset_t *pRuleset;	/* ruleset to bind listener to (use system default if unspecified) */
+-	ratelimit_t *ratelimiter;
+-	multi_submit_t multiSub;
+-} lstn_t;
+ 
+ static struct configSettings_s {
+ 	uchar *pszFileName;
+@@ -138,9 +111,11 @@ static struct configSettings_s {
+ 
+ struct instanceConf_s {
+ 	uchar *pszFileName;
++	uchar *pszFileName_forOldStateFile; /* we unfortunately need this to read old state files */
+ 	uchar *pszDirName;
+ 	uchar *pszFileBaseName;
+ 	uchar *pszTag;
++	size_t lenTag;
+ 	uchar *pszStateFile;
+ 	uchar *pszBindRuleset;
+ 	int nMultiSub;
+@@ -151,11 +126,15 @@ struct instanceConf_s {
+ 	sbool bRMStateOnDel;
+ 	uint8_t readMode;
+ 	uchar *startRegex;
++	regex_t end_preg;	/* compiled version of startRegex */
++	sbool discardTruncatedMsg;
++	sbool msgDiscardingError;
+ 	sbool escapeLF;
+ 	sbool reopenOnTruncate;
+ 	sbool addCeeTag;
+ 	sbool addMetadata;
+ 	sbool freshStartTail;
++	sbool fileNotFoundError;
+ 	int maxLinesAtOnce;
+ 	uint32_t trimLineOverBytes;
+ 	ruleset_t *pBindRuleset;	/* ruleset to bind listener to (use system default if unspecified) */
+@@ -163,9 +142,54 @@ struct instanceConf_s {
+ };
+ 
+ 
++/* file system objects */
++typedef struct fs_edge_s fs_edge_t;
++typedef struct fs_node_s fs_node_t;
++typedef struct act_obj_s act_obj_t;
++struct act_obj_s {
++	act_obj_t *prev;
++	act_obj_t *next;
++	fs_edge_t *edge;	/* edge which this object belongs to */
++	char *name;		/* full path name of active object */
++	char *basename;		/* only basename */ //TODO: remove when refactoring rename support
++	char *source_name;  /* if this object is target of a symlink, source_name is its name (else NULL) */
++	//char *statefile;	/* base name of state file (for move operations) */
++	int wd;
++	time_t timeoutBase; /* what time to calculate the timeout against? */
++	/* file dynamic data */
++	int in_move;	/* workaround for inotify move: if set, state file must not be deleted */
++	ino_t ino;	/* current inode nbr */
++	strm_t *pStrm;	/* its stream (NULL if not assigned) */
++	int nRecords; /**< How many records did we process before persisting the stream? */
++	ratelimit_t *ratelimiter;
++	multi_submit_t multiSub;
++	int is_symlink;
++};
++struct fs_edge_s {
++	fs_node_t *parent;
++	fs_node_t *node;	/* node this edge points to */
++	fs_edge_t *next;
++	uchar *name;
++	uchar *path;
++	act_obj_t *active;
++	int is_file;
++	int ninst;	/* nbr of instances in instarr */
++	instanceConf_t **instarr;
++};
++struct fs_node_s {
++	fs_edge_t *edges;
++	fs_node_t *root;
++};
++
++
+ /* forward definitions */
+-static rsRetVal persistStrmState(lstn_t *pInfo);
++static rsRetVal persistStrmState(act_obj_t *);
+ static rsRetVal resetConfigVariables(uchar __attribute__((unused)) *pp, void __attribute__((unused)) *pVal);
++static rsRetVal pollFile(act_obj_t *act);
++static int getBasename(uchar *const __restrict__ basen, uchar *const __restrict__ path);
++static void act_obj_unlink(act_obj_t *act);
++static uchar * getStateFileName(const act_obj_t *, uchar *, const size_t);
++static int getFullStateFileName(const uchar *const, uchar *const pszout, const size_t ilenout);
+ 
+ 
+ #define OPMODE_POLLING 0
+@@ -178,57 +200,23 @@ struct modConfData_s {
+ 	int readTimeout;
+ 	int timeoutGranularity;		/* value in ms */
+ 	instanceConf_t *root, *tail;
+-	lstn_t *pRootLstn;
+-	lstn_t *pTailLstn;
++	fs_node_t *conf_tree;
+ 	uint8_t opMode;
+ 	sbool configSetViaV2Method;
++	sbool sortFiles;
++	sbool normalizePath;	/* normalize file system paths (all start with root dir) */
+ 	sbool haveReadTimeouts;	/* use special processing if read timeouts exist */
++	sbool bHadFileData;	/* actually a global variable:
++				   1 - last call to pollFile() had data
++				   0 - last call to pollFile() had NO data
++				   Must be manually reset to 0 if desired. Helper for
++				   polling mode.
++				 */
+ };
+ static modConfData_t *loadModConf = NULL;/* modConf ptr to use for the current load process */
+ static modConfData_t *runModConf = NULL;/* modConf ptr to use for the current load process */
+ 
+ #ifdef HAVE_INOTIFY_INIT
+-/* support for inotify mode */
+-
+-/* we need to track directories */
+-struct dirInfoFiles_s { /* associated files */
+-	lstn_t *pLstn;
+-	int refcnt;	/* due to inotify's async nature, we may have multiple
+-			 * references to a single file inside our cache - e.g. when
+-			 * inodes are removed, and the file name is re-created BUT another
+-			 * process (like rsyslogd ;)) holds open the old inode.
+-			 */
+-};
+-typedef struct dirInfoFiles_s dirInfoFiles_t;
+-
+-/* This structure is a dynamic table to track file entries */
+-struct fileTable_s {
+-	dirInfoFiles_t *listeners;
+-	int currMax;
+-	int allocMax;
+-};
+-typedef struct fileTable_s fileTable_t;
+-
+-/* The dirs table (defined below) contains one entry for each directory that
+- * is to be monitored. For each directory, it contains array which point to
+- * the associated *active* files as well as *configured* files. Note that
+- * the configured files may currently not exist, but will be processed
+- * when they are created.
+- */
+-struct dirInfo_s {
+-	uchar *dirName;
+-	fileTable_t active; /* associated active files */
+-	fileTable_t configured; /* associated configured files */
+-};
+-typedef struct dirInfo_s dirInfo_t;
+-static dirInfo_t *dirs = NULL;
+-static int allocMaxDirs;
+-static int currMaxDirs;
+-/* the following two macros are used to select the correct file table */
+-#define ACTIVE_FILE 1
+-#define CONFIGURED_FILE 0
+-
+-
+ /* We need to map watch descriptors to our actual objects. Unfortunately, the
+  * inotify API does not provide us with any cookie, so a simple O(1) algorithm
+  * cannot be done (what a shame...). We assume that maintaining the array is much
+@@ -238,9 +226,7 @@ static int currMaxDirs;
+  */
+ struct wd_map_s {
+ 	int wd;		/* ascending sort key */
+-	lstn_t *pLstn;	/* NULL, if this is a dir entry, otherwise pointer into listener(file) table */
+-	int dirIdx;	/* index into dirs table, undefined if pLstn == NULL */
+-	time_t timeoutBase; /* what time to calculate the timeout against? */
++	act_obj_t *act; /* point to related active object */
+ };
+ typedef struct wd_map_s wd_map_t;
+ static wd_map_t *wdmap = NULL;
+@@ -257,6 +243,8 @@ static struct cnfparamdescr modpdescr[] = {
+ 	{ "pollinginterval", eCmdHdlrPositiveInt, 0 },
+ 	{ "readtimeout", eCmdHdlrPositiveInt, 0 },
+ 	{ "timeoutgranularity", eCmdHdlrPositiveInt, 0 },
++	{ "sortfiles", eCmdHdlrBinary, 0 },
++	{ "normalizepath", eCmdHdlrBinary, 0 },
+ 	{ "mode", eCmdHdlrGetWord, 0 }
+ };
+ static struct cnfparamblk modpblk =
+@@ -286,7 +274,8 @@ static struct cnfparamdescr inppdescr[] = {
+ 	{ "addceetag", eCmdHdlrBinary, 0 },
+ 	{ "statefile", eCmdHdlrString, CNFPARAM_DEPRECATED },
+ 	{ "readtimeout", eCmdHdlrPositiveInt, 0 },
+-	{ "freshstarttail", eCmdHdlrBinary, 0}
++	{ "freshstarttail", eCmdHdlrBinary, 0},
++	{ "filenotfounderror", eCmdHdlrBinary, 0}
+ };
+ static struct cnfparamblk inppblk =
+ 	{ CNFPARAMBLK_VERSION,
+@@ -297,18 +286,106 @@ static struct cnfparamblk inppblk =
+ #include "im-helper.h" /* must be included AFTER the type definitions! */
+ 
+ 
+-#ifdef HAVE_INOTIFY_INIT
+-/* support for inotify mode */
++/* Support for "old cruft" state files will potentially become optional in the
++ * future (hopefully). To prepare so, we use conditional compilation with a
++ * fixed-true condition ;-) -- rgerhards, 2018-03-28
++ * reason: https://github.com/rsyslog/rsyslog/issues/2231#issuecomment-376862280
++ */
++#define ENABLE_V1_STATE_FILE_FORMAT_SUPPORT 1
++#ifdef ENABLE_V1_STATE_FILE_FORMAT_SUPPORT
++static uchar *
++OLD_getStateFileName(const instanceConf_t *const inst,
++	 uchar *const __restrict__ buf,
++	 const size_t lenbuf)
++{
++	DBGPRINTF("OLD_getStateFileName trying '%s'\n", inst->pszFileName_forOldStateFile);
++	snprintf((char*)buf, lenbuf - 1, "imfile-state:%s", inst->pszFileName_forOldStateFile);
++	buf[lenbuf-1] = '\0'; /* be on the safe side... */
++	uchar *p = buf;
++	for( ; *p ; ++p) {
++		if(*p == '/')
++			*p = '-';
++	}
++	return buf;
++}
+ 
+-#if 0 /* enable if you need this for debugging */
++/* try to open an old-style state file for given file. If the state file does not
++ * exist or cannot be read, an error is returned.
++ */
++static rsRetVal
++OLD_openFileWithStateFile(act_obj_t *const act)
++{
++	DEFiRet;
++	strm_t *psSF = NULL;
++	uchar pszSFNam[MAXFNAME];
++	size_t lenSFNam;
++	struct stat stat_buf;
++	uchar statefile[MAXFNAME];
++	const instanceConf_t *const inst = act->edge->instarr[0];// TODO: same file, multiple instances?
++
++	uchar *const statefn = OLD_getStateFileName(inst, statefile, sizeof(statefile));
++	DBGPRINTF("OLD_openFileWithStateFile: trying to open state for '%s', state file '%s'\n",
++		  act->name, statefn);
++
++	/* Get full path and file name */
++	lenSFNam = getFullStateFileName(statefn, pszSFNam, sizeof(pszSFNam));
++
++	/* check if the file exists */
++	if(stat((char*) pszSFNam, &stat_buf) == -1) {
++		if(errno == ENOENT) {
++			DBGPRINTF("OLD_openFileWithStateFile: NO state file (%s) exists for '%s'\n",
++				pszSFNam, act->name);
++			ABORT_FINALIZE(RS_RET_FILE_NOT_FOUND);
++		} else {
++			char errStr[1024];
++			rs_strerror_r(errno, errStr, sizeof(errStr));
++			DBGPRINTF("OLD_openFileWithStateFile: error trying to access state "
++				"file for '%s':%s\n", act->name, errStr);
++			ABORT_FINALIZE(RS_RET_IO_ERROR);
++		}
++	}
++
++	/* If we reach this point, we have a state file */
++
++	DBGPRINTF("old state file found - instantiating from it\n");
++	CHKiRet(strm.Construct(&psSF));
++	CHKiRet(strm.SettOperationsMode(psSF, STREAMMODE_READ));
++	CHKiRet(strm.SetsType(psSF, STREAMTYPE_FILE_SINGLE));
++	CHKiRet(strm.SetFName(psSF, pszSFNam, lenSFNam));
++	CHKiRet(strm.SetFileNotFoundError(psSF, inst->fileNotFoundError));
++	CHKiRet(strm.ConstructFinalize(psSF));
++
++	/* read back in the object */
++	CHKiRet(obj.Deserialize(&act->pStrm, (uchar*) "strm", psSF, NULL, act));
++	free(act->pStrm->pszFName);
++	CHKmalloc(act->pStrm->pszFName = ustrdup(act->name));
++
++	strm.CheckFileChange(act->pStrm);
++	CHKiRet(strm.SeekCurrOffs(act->pStrm));
++
++	/* we now persist the new state file and delete the old one, so we will
++	 * never have to deal with the old one. */
++	persistStrmState(act);
++	unlink((char*)pszSFNam);
++
++finalize_it:
++	if(psSF != NULL)
++		strm.Destruct(&psSF);
++	RETiRet;
++}
++#endif /* #ifdef ENABLE_V1_STATE_FILE_FORMAT_SUPPORT */
++
++
++#ifdef HAVE_INOTIFY_INIT
++#if ULTRA_DEBUG == 1
+ static void
+-dbg_wdmapPrint(char *msg)
++dbg_wdmapPrint(const char *msg)
+ {
+ 	int i;
+ 	DBGPRINTF("%s\n", msg);
+ 	for(i = 0 ; i < nWdmap ; ++i)
+-		DBGPRINTF("wdmap[%d]: wd: %d, file %d, dir %d\n", i,
+-			  wdmap[i].wd, wdmap[i].fIdx, wdmap[i].dirIdx);
++		DBGPRINTF("wdmap[%d]: wd: %d, act %p, name: %s\n",
++			i, wdmap[i].wd, wdmap[i].act, wdmap[i].act->name);
+ }
+ #endif
+ 
+@@ -324,48 +401,10 @@ finalize_it:
+ 	RETiRet;
+ }
+ 
+-/* looks up a wdmap entry by dirIdx and returns it's index if found
+- * or -1 if not found.
+- */
+-static int
+-wdmapLookupListner(lstn_t* pLstn)
+-{
+-	int i = 0;
+-	int wd = -1;
+-	/* Loop through */
+-	for(i = 0 ; i < nWdmap; ++i) {
+-		if (wdmap[i].pLstn == pLstn)
+-			wd = wdmap[i].wd;
+-	}
+-
+-	return wd;
+-}
+-
+-/* compare function for bsearch() */
+-static int
+-wdmap_cmp(const void *k, const void *a)
+-{
+-	int key = *((int*) k);
+-	wd_map_t *etry = (wd_map_t*) a;
+-	if(key < etry->wd)
+-		return -1;
+-	else if(key > etry->wd)
+-		return 1;
+-	else
+-		return 0;
+-}
+-/* looks up a wdmap entry and returns it's index if found
+- * or -1 if not found.
+- */
+-static wd_map_t *
+-wdmapLookup(int wd)
+-{
+-	return bsearch(&wd, wdmap, nWdmap, sizeof(wd_map_t), wdmap_cmp);
+-}
+ 
+ /* note: we search backwards, as inotify tends to return increasing wd's */
+ static rsRetVal
+-wdmapAdd(int wd, const int dirIdx, lstn_t *const pLstn)
++wdmapAdd(int wd, act_obj_t *const act)
+ {
+ 	wd_map_t *newmap;
+ 	int newmapsize;
+@@ -375,7 +414,7 @@ wdmapAdd(int wd, const int dirIdx, lstn_t *const pLstn)
+ 	for(i = nWdmap-1 ; i >= 0 && wdmap[i].wd > wd ; --i)
+ 		; 	/* just scan */
+ 	if(i >= 0 && wdmap[i].wd == wd) {
+-		DBGPRINTF("imfile: wd %d already in wdmap!\n", wd);
++		LogError(0, RS_RET_INTERNAL_ERROR, "imfile: wd %d already in wdmap!", wd);
+ 		ABORT_FINALIZE(RS_RET_FILE_ALREADY_IN_TABLE);
+ 	}
+ 	++i;
+@@ -392,17 +431,59 @@ wdmapAdd(int wd, const int dirIdx, lstn_t *const pLstn)
+ 		memmove(wdmap + i + 1, wdmap + i, sizeof(wd_map_t) * (nWdmap - i));
+ 	}
+ 	wdmap[i].wd = wd;
+-	wdmap[i].dirIdx = dirIdx;
+-	wdmap[i].pLstn = pLstn;
++	wdmap[i].act = act;
+ 	++nWdmap;
+-	DBGPRINTF("imfile: enter into wdmap[%d]: wd %d, dir %d, lstn %s:%s\n",i,wd,dirIdx,
+-		  (pLstn == NULL) ? "DIRECTORY" : "FILE",
+-	          (pLstn == NULL) ? dirs[dirIdx].dirName : pLstn->pszFileName);
++	DBGPRINTF("add wdmap[%d]: wd %d, act obj %p, path %s\n", i, wd, act, act->name);
+ 
+ finalize_it:
+ 	RETiRet;
+ }
+ 
++/* return wd or -1 on error */
++static int
++in_setupWatch(act_obj_t *const act, const int is_file)
++{
++	int wd = -1;
++	if(runModConf->opMode != OPMODE_INOTIFY)
++		goto done;
++
++	wd = inotify_add_watch(ino_fd, act->name,
++		(is_file) ? IN_MODIFY|IN_DONT_FOLLOW : IN_CREATE|IN_DELETE|IN_MOVED_FROM|IN_MOVED_TO);
++	if(wd < 0) { /* There is high probability of selinux denial on top-level paths */
++		if (errno != EACCES)
++			LogError(errno, RS_RET_IO_ERROR, "imfile: cannot watch object '%s'", act->name);
++		else
++			DBGPRINTF("Access denied when creating watch on '%s'\n", act->name);
++		goto done;
++	}
++	wdmapAdd(wd, act);
++	DBGPRINTF("in_setupWatch: watch %d added for %s(object %p)\n", wd, act->name, act);
++done:	return wd;
++}
++
++/* compare function for bsearch() */
++static int
++wdmap_cmp(const void *k, const void *a)
++{
++	int key = *((int*) k);
++	wd_map_t *etry = (wd_map_t*) a;
++	if(key < etry->wd)
++		return -1;
++	else if(key > etry->wd)
++		return 1;
++	else
++		return 0;
++}
++/* looks up a wdmap entry and returns it's index if found
++ * or -1 if not found.
++ */
++static wd_map_t *
++wdmapLookup(int wd)
++{
++	return bsearch(&wd, wdmap, nWdmap, sizeof(wd_map_t), wdmap_cmp);
++}
++
++
+ static rsRetVal
+ wdmapDel(const int wd)
+ {
+@@ -427,46 +506,570 @@ finalize_it:
+ 	RETiRet;
+ }
+ 
+-#endif /* #if HAVE_INOTIFY_INIT */
++#endif // #ifdef HAVE_INOTIFY_INIT
++
++static void
++fen_setupWatch(act_obj_t *const __attribute__((unused)) act)
++{
++	DBGPRINTF("fen_setupWatch: DUMMY CALLED - not on Solaris?\n");
++}
++
++static void
++fs_node_print(const fs_node_t *const node, const int level)
++{
++	fs_edge_t *chld;
++	act_obj_t *act;
++	dbgprintf("node print[%2.2d]: %p edges:\n", level, node);
++
++	for(chld = node->edges ; chld != NULL ; chld = chld->next) {
++		dbgprintf("node print[%2.2d]:     child %p '%s' isFile %d, path: '%s'\n",
++			level, chld->node, chld->name, chld->is_file, chld->path);
++		for(int i = 0 ; i < chld->ninst ; ++i) {
++			dbgprintf("\tinst: %p\n", chld->instarr[i]);
++		}
++		for(act = chld->active ; act != NULL ; act = act->next) {
++			dbgprintf("\tact : %p\n", act);
++			dbgprintf("\tact : %p: name '%s', wd: %d\n",
++				act, act->name, act->wd);
++		}
++	}
++	for(chld = node->edges ; chld != NULL ; chld = chld->next) {
++		fs_node_print(chld->node, level+1);
++	}
++}
++
++/* add a new file system object if it does not yet exist; ignore the
++ * call if it already does.
++ */
++static rsRetVal
++act_obj_add(fs_edge_t *const edge, const char *const name, const int is_file,
++	const ino_t ino, const int is_symlink, const char *const source)
++{
++	act_obj_t *act;
++	char basename[MAXFNAME];
++	DEFiRet;
++	
++	DBGPRINTF("act_obj_add: edge %p, name '%s' (source '%s')\n", edge, name, source? source : "---");
++	for(act = edge->active ; act != NULL ; act = act->next) {
++		if(!strcmp(act->name, name)) {
++			if (!source || !act->source_name || !strcmp(act->source_name, source)) {
++				DBGPRINTF("active object '%s' already exists in '%s' - no need to add\n",
++					name, edge->path);
++				FINALIZE;
++			}
++		}
++	}
++	DBGPRINTF("add new active object '%s' in '%s'\n", name, edge->path);
++	CHKmalloc(act = calloc(sizeof(act_obj_t), 1));
++	CHKmalloc(act->name = strdup(name));
++	if (-1 == getBasename((uchar*)basename, (uchar*)name)) {
++		CHKmalloc(act->basename = strdup(name)); /* assume basename is same as name */
++	} else {
++		CHKmalloc(act->basename = strdup(basename));
++	}
++	act->edge = edge;
++	act->ino = ino;
++	act->is_symlink = is_symlink;
++	if (source) { /* we are target of symlink */
++		CHKmalloc(act->source_name = strdup(source));
++	} else {
++		act->source_name = NULL;
++	}
++	#ifdef HAVE_INOTIFY_INIT
++	act->wd = in_setupWatch(act, is_file);
++	#endif
++	fen_setupWatch(act);
++	if(is_file && !is_symlink) {
++		const instanceConf_t *const inst = edge->instarr[0];// TODO: same file, multiple instances?
++		CHKiRet(ratelimitNew(&act->ratelimiter, "imfile", name));
++		CHKmalloc(act->multiSub.ppMsgs = MALLOC(inst->nMultiSub * sizeof(smsg_t *)));
++		act->multiSub.maxElem = inst->nMultiSub;
++		act->multiSub.nElem = 0;
++		pollFile(act);
++	}
++
++	/* all well, add to active list */
++	if(edge->active != NULL) {
++		edge->active->prev = act;
++	}
++	act->next = edge->active;
++	edge->active = act;
++//dbgprintf("printout of fs tree after act_obj_add for '%s'\n", name);
++//fs_node_print(runModConf->conf_tree, 0);
++//dbg_wdmapPrint("wdmap after act_obj_add");
++finalize_it:
++	if(iRet != RS_RET_OK) {
++		if(act != NULL) {
++			free(act->name);
++			free(act);
++		}
++	}
++	RETiRet;
++}
++
++
++/* this walks an edges active list and detects and acts on any changes
++ * seen there. It does NOT detect newly appeared files, as they are not
++ * inside the active list!
++ */
++static void
++detect_updates(fs_edge_t *const edge)
++{
++	act_obj_t *act;
++	struct stat fileInfo;
++	int restart = 0;
++
++	for(act = edge->active ; act != NULL ; ) {
++		DBGPRINTF("detect_updates checking active obj '%s'\n", act->name);
++		const int r = lstat(act->name, &fileInfo);
++		if(r == -1) { /* object gone away? */
++			DBGPRINTF("object gone away, unlinking: '%s'\n", act->name);
++			act_obj_unlink(act);
++			restart = 1;
++			break;
++		}
++		// TODO: add inode check for change notification!
++
++		/* Note: active nodes may get deleted, so we need to do the
++		 * pointer advancement at the end of the for loop!
++		 */
++		act = act->next;
++	}
++	if (restart)
++		detect_updates(edge);
++}
++
++
++/* check if active files need to be processed. This is only needed in
++ * polling mode.
++ */
++static void
++poll_active_files(fs_edge_t *const edge)
++{
++	if(   runModConf->opMode != OPMODE_POLLING
++	   || !edge->is_file
++	   || glbl.GetGlobalInputTermState() != 0) {
++		return;
++	}
++
++	act_obj_t *act;
++	for(act = edge->active ; act != NULL ; act = act->next) {
++		fen_setupWatch(act);
++		DBGPRINTF("poll_active_files: polling '%s'\n", act->name);
++		pollFile(act);
++	}
++}
++
++static rsRetVal
++process_symlink(fs_edge_t *const chld, const char *symlink)
++{
++	DEFiRet;
++	char *target = NULL;
++	CHKmalloc(target = realpath(symlink, target));
++	struct stat fileInfo;
++	if(lstat(target, &fileInfo) != 0) {
++		LogError(errno, RS_RET_ERR, "imfile: process_symlink cannot stat file '%s' - ignored", target);
++		FINALIZE;
++	}
++	const int is_file = (S_ISREG(fileInfo.st_mode));
++	DBGPRINTF("process_symlink: found '%s', File: %d (config file: %d), symlink: %d\n",
++		target, is_file, chld->is_file, 0);
++	if (act_obj_add(chld, target, is_file, fileInfo.st_ino, 0, symlink) == RS_RET_OK) {
++		/* need to watch parent target as well for proper rotation support */
++		uint idx = ustrlen(chld->active->name) - ustrlen(chld->active->basename);
++		if (idx) { /* basename is different from name */
++			char parent[MAXFNAME];
++			memcpy(parent, chld->active->name, idx-1);
++			parent[idx-1] = '\0';
++			if(lstat(parent, &fileInfo) != 0) {
++				LogError(errno, RS_RET_ERR,
++						 "imfile: process_symlink: cannot stat directory '%s' - ignored", parent);
++				FINALIZE;
++			}
++			DBGPRINTF("process_symlink:	adding parent '%s' of target '%s'\n", parent, target);
++			act_obj_add(chld->parent->root->edges, parent, 0, fileInfo.st_ino, 0, NULL);
++		}
++	}
++
++finalize_it:
++	free(target);
++	RETiRet;
++}
++
++static void
++poll_tree(fs_edge_t *const chld)
++{
++	struct stat fileInfo;
++	glob_t files;
++	int issymlink;
++	DBGPRINTF("poll_tree: chld %p, name '%s', path: %s\n", chld, chld->name, chld->path);
++	detect_updates(chld);
++	const int ret = glob((char*)chld->path, runModConf->sortFiles|GLOB_BRACE, NULL, &files);
++	DBGPRINTF("poll_tree: glob returned %d\n", ret);
++	if(ret == 0) {
++		DBGPRINTF("poll_tree: processing %d files\n", (int) files.gl_pathc);
++		for(unsigned i = 0 ; i < files.gl_pathc ; i++) {
++			if(glbl.GetGlobalInputTermState() != 0) {
++				goto done;
++			}
++			char *const file = files.gl_pathv[i];
++			if(lstat(file, &fileInfo) != 0) {
++				LogError(errno, RS_RET_ERR,
++					"imfile: poll_tree cannot stat file '%s' - ignored", file);
++				continue;
++			}
++
++			if (S_ISLNK(fileInfo.st_mode)) {
++				rsRetVal slink_ret = process_symlink(chld, file);
++				if (slink_ret != RS_RET_OK) {
++					continue;
++				}
++				issymlink = 1;
++			} else {
++				issymlink = 0;
++			}
++			const int is_file = (S_ISREG(fileInfo.st_mode) || issymlink);
++			DBGPRINTF("poll_tree:  found '%s', File: %d (config file: %d), symlink: %d\n",
++				file, is_file, chld->is_file, issymlink);
++			if(!is_file && !S_ISDIR(fileInfo.st_mode)) {
++				LogMsg(0, RS_RET_ERR, LOG_WARNING,
++					"imfile: '%s' is neither a regular file, symlink, nor a "
++					"directory - ignored", file);
++				continue;
++			}
++			if(chld->is_file != is_file) {
++				LogMsg(0, RS_RET_ERR, LOG_WARNING,
++					"imfile: '%s' is %s but %s expected - ignored",
++					file, (is_file) ? "FILE" : "DIRECTORY",
++					(chld->is_file) ? "FILE" : "DIRECTORY");
++				continue;
++			}
++			act_obj_add(chld, file, is_file, fileInfo.st_ino, issymlink, NULL);
++		}
++		globfree(&files);
++	}
++
++	poll_active_files(chld);
++
++done:	return;
++}
++
++#ifdef HAVE_INOTIFY_INIT // TODO: shouldn't we use that in polling as well?
++static void
++poll_timeouts(fs_edge_t *const edge)
++{
++	if(edge->is_file) {
++		act_obj_t *act;
++		for(act = edge->active ; act != NULL ; act = act->next) {
++			if(strmReadMultiLine_isTimedOut(act->pStrm)) {
++				DBGPRINTF("timeout occurred on %s\n", act->name);
++				pollFile(act);
++			}
++		}
++	}
++}
++#endif
++
++
++/* destruct a single act_obj object */
++static void
++act_obj_destroy(act_obj_t *const act, const int is_deleted)
++{
++	uchar *statefn;
++	uchar statefile[MAXFNAME];
++	uchar toDel[MAXFNAME];
++
++	if(act == NULL)
++		return;
++
++	DBGPRINTF("act_obj_destroy: act %p '%s', (source '%s'), wd %d, pStrm %p, is_deleted %d, in_move %d\n",
++		act, act->name, act->source_name? act->source_name : "---", act->wd, act->pStrm, is_deleted, act->in_move);
++	if(act->is_symlink && is_deleted) {
++		act_obj_t *target_act;
++		for(target_act = act->edge->active ; target_act != NULL ; target_act = target_act->next) {
++			if(target_act->source_name && !strcmp(target_act->source_name, act->name)) {
++				DBGPRINTF("act_obj_destroy: unlinking symlink target %s of %s\n",
++						target_act->name, act->name);
++				act_obj_unlink(target_act);
++				break;
++			}
++		}
++	}
++	if(act->ratelimiter != NULL) {
++		ratelimitDestruct(act->ratelimiter);
++	}
++	if(act->pStrm != NULL) {
++		const instanceConf_t *const inst = act->edge->instarr[0];// TODO: same file, multiple instances?
++		pollFile(act); /* get any left-over data */
++		if(inst->bRMStateOnDel) {
++			statefn = getStateFileName(act, statefile, sizeof(statefile));
++			getFullStateFileName(statefn, toDel, sizeof(toDel));
++			statefn = toDel;
++		}
++		persistStrmState(act);
++		strm.Destruct(&act->pStrm);
++		/* we delete state file after destruct in case strm obj initiated a write */
++		if(is_deleted && !act->in_move && inst->bRMStateOnDel) {
++			DBGPRINTF("act_obj_destroy: deleting state file %s\n", statefn);
++			unlink((char*)statefn);
++		}
++	}
++	#ifdef HAVE_INOTIFY_INIT
++	if(act->wd != -1) {
++		wdmapDel(act->wd);
++	}
++	#endif
++	#if defined(OS_SOLARIS) && defined (HAVE_PORT_SOURCE_FILE)
++	if(act->pfinf != NULL) {
++		free(act->pfinf->fobj.fo_name);
++		free(act->pfinf);
++	}
++	#endif
++	free(act->basename);
++	free(act->source_name);
++	//free(act->statefile);
++	free(act->multiSub.ppMsgs);
++	#if defined(OS_SOLARIS) && defined (HAVE_PORT_SOURCE_FILE)
++		act->is_deleted = 1;
++	#else
++		free(act->name);
++		free(act);
++	#endif
++}
++
+ 
++/* destroy complete act list starting at given node */
++static void
++act_obj_destroy_all(act_obj_t *act)
++{
++	if(act == NULL)
++		return;
++
++	DBGPRINTF("act_obj_destroy_all: act %p '%s', wd %d, pStrm %p\n", act, act->name, act->wd, act->pStrm);
++	while(act != NULL) {
++		act_obj_t *const toDel = act;
++		act = act->next;
++		act_obj_destroy(toDel, 0);
++	}
++}
++
++#if 0
++/* debug: find if ptr is still present in list */
++static void
++chk_active(const act_obj_t *act, const act_obj_t *const deleted)
++{
++	while(act != NULL) {
++		DBGPRINTF("chk_active %p vs %p\n", act, deleted);
++		if(act->prev == deleted)
++			DBGPRINTF("chk_active %p prev points to %p\n", act, deleted);
++		if(act->next == deleted)
++			DBGPRINTF("chk_active %p next points to %p\n", act, deleted);
++		act = act->next;
++		DBGPRINTF("chk_active next %p\n", act);
++	}
++}
++#endif
++
++/* unlink act object from linked list and then
++ * destruct it.
++ */
++static void
++act_obj_unlink(act_obj_t *act)
++{
++	DBGPRINTF("act_obj_unlink %p: %s\n", act, act->name);
++	if(act->prev == NULL) {
++		act->edge->active = act->next;
++	} else {
++		act->prev->next = act->next;
++	}
++	if(act->next != NULL) {
++		act->next->prev = act->prev;
++	}
++	act_obj_destroy(act, 1);
++	act = NULL;
++//dbgprintf("printout of fs tree post unlink\n");
++//fs_node_print(runModConf->conf_tree, 0);
++//dbg_wdmapPrint("wdmap after");
++}
++
++static void
++fs_node_destroy(fs_node_t *const node)
++{
++	fs_edge_t *edge;
++	DBGPRINTF("node destroy: %p edges:\n", node);
++
++	for(edge = node->edges ; edge != NULL ; ) {
++		fs_node_destroy(edge->node);
++		fs_edge_t *const toDel = edge;
++		edge = edge->next;
++		act_obj_destroy_all(toDel->active);
++		free(toDel->name);
++		free(toDel->path);
++		free(toDel->instarr);
++		free(toDel);
++	}
++	free(node);
++}
++
++static void
++fs_node_walk(fs_node_t *const node,
++	void (*f_usr)(fs_edge_t*const))
++{
++	DBGPRINTF("node walk: %p edges:\n", node);
++
++	fs_edge_t *edge;
++	for(edge = node->edges ; edge != NULL ; edge = edge->next) {
++		DBGPRINTF("node walk: child %p '%s'\n", edge->node, edge->name);
++		f_usr(edge);
++		fs_node_walk(edge->node, f_usr);
++	}
++}
++
++
++
++/* add a file system object to config tree (or update existing node with new monitor)
++ */
++static rsRetVal
++fs_node_add(fs_node_t *const node, fs_node_t *const source,
++	const uchar *const toFind,
++	const size_t pathIdx,
++	instanceConf_t *const inst)
++{
++	DEFiRet;
++	fs_edge_t *newchld = NULL;
++	int i;
++
++	DBGPRINTF("fs_node_add(%p, '%s') enter, idx %zd\n",
++		node, toFind+pathIdx, pathIdx);
++	assert(toFind[0] != '\0');
++	for(i = pathIdx ; (toFind[i] != '\0') && (toFind[i] != '/') ; ++i)
++		/*JUST SKIP*/;
++	const int isFile = (toFind[i] == '\0') ? 1 : 0;
++	uchar ourPath[PATH_MAX];
++	if(i == 0) {
++		ourPath[0] = '/';
++		ourPath[1] = '\0';
++	} else {
++		memcpy(ourPath, toFind, i);
++		ourPath[i] = '\0';
++	}
++	const size_t nextPathIdx = i+1;
++	const size_t len = i - pathIdx;
++	uchar name[PATH_MAX];
++	memcpy(name, toFind+pathIdx, len);
++	name[len] = '\0';
++	DBGPRINTF("fs_node_add: name '%s'\n", name); node->root = source;
++
++	fs_edge_t *chld;
++	for(chld = node->edges ; chld != NULL ; chld = chld->next) {
++		if(!ustrcmp(chld->name, name)) {
++			DBGPRINTF("fs_node_add(%p, '%s') found '%s'\n", chld->node, toFind, name);
++			/* add new instance */
++			chld->ninst++;
++			CHKmalloc(chld->instarr = realloc(chld->instarr, sizeof(instanceConf_t*) * chld->ninst));
++			chld->instarr[chld->ninst-1] = inst;
++			/* recurse */
++			if(!isFile) {
++				CHKiRet(fs_node_add(chld->node, node, toFind, nextPathIdx, inst));
++			}
++			FINALIZE;
++		}
++	}
++
++	/* could not find node --> add it */
++	DBGPRINTF("fs_node_add(%p, '%s') did not find '%s' - adding it\n",
++		node, toFind, name);
++	CHKmalloc(newchld = calloc(sizeof(fs_edge_t), 1));
++	CHKmalloc(newchld->name = ustrdup(name));
++	CHKmalloc(newchld->node = calloc(sizeof(fs_node_t), 1));
++	CHKmalloc(newchld->path = ustrdup(ourPath));
++	CHKmalloc(newchld->instarr = calloc(sizeof(instanceConf_t*), 1));
++	newchld->instarr[0] = inst;
++	newchld->is_file = isFile;
++	newchld->ninst = 1;
++	newchld->parent = node;
++
++	DBGPRINTF("fs_node_add(%p, '%s') returns %p\n", node, toFind, newchld->node);
++
++	if(!isFile) {
++		CHKiRet(fs_node_add(newchld->node, node, toFind, nextPathIdx, inst));
++	}
++
++	/* link to list */
++	newchld->next = node->edges;
++	node->edges = newchld;
++finalize_it:
++	if(iRet != RS_RET_OK) {
++		if(newchld != NULL) {
++			free(newchld->name);
++			free(newchld->node);
++			free(newchld->path);
++			free(newchld->instarr);
++			free(newchld);
++		}
++	}
++	RETiRet;
++}
++
++/* Helper function to combine statefile and workdir
++ * This function is guaranteed to work only on config data and DOES NOT
++ * open or otherwise modify disk file state.
++ */
++static int
++getFullStateFileName(const uchar *const pszstatefile, uchar *const pszout, const size_t ilenout)
++{
++	int lenout;
++	const uchar* pszworkdir;
+ 
+-/* this generates a state file name suitable for the current file. To avoid
++	/* Get the raw workdir; if it is NULL we need to handle it properly */
++	pszworkdir = glblGetWorkDirRaw();
++
++	/* Construct file name */
++	lenout = snprintf((char*)pszout, ilenout, "%s/%s",
++			     (char*) (pszworkdir == NULL ? "." : (char*) pszworkdir), (char*)pszstatefile);
++
++	/* return out length */
++	return lenout;
++}
++
++
++/* this generates a state file name suitable for the given file. To avoid
+  * malloc calls, it must be passed a buffer which should be MAXFNAME large.
+  * Note: the buffer is not necessarily populated ... always ONLY use the
+  * RETURN VALUE!
++ * This function is guaranteed to work only on config data and DOES NOT
++ * open or otherwise modify disk file state.
+  */
+ static uchar *
+-getStateFileName(lstn_t *const __restrict__ pLstn,
++getStateFileName(const act_obj_t *const act,
+ 	 	 uchar *const __restrict__ buf,
+ 		 const size_t lenbuf)
+ {
+-	uchar *ret;
+-	if(pLstn->pszStateFile == NULL) {
+-		snprintf((char*)buf, lenbuf - 1, "imfile-state:%s", pLstn->pszFileName);
+-		buf[lenbuf-1] = '\0'; /* be on the safe side... */
+-		uchar *p = buf;
+-		for( ; *p ; ++p) {
+-			if(*p == '/')
+-				*p = '-';
+-		}
+-		ret = buf;
+-	} else {
+-		ret = pLstn->pszStateFile;
+-	}
+-	return ret;
++	DBGPRINTF("getStateFileName for '%s'\n", act->name);
++	snprintf((char*)buf, lenbuf - 1, "imfile-state:%lld", (long long) act->ino);
++	DBGPRINTF("getStateFileName: state file name now is %s\n", buf);
++	return buf;
+ }
+ 
+ 
+ /* enqueue the read file line as a message. The provided string is
+- * not freed - thuis must be done by the caller.
++ * not freed - this must be done by the caller.
+  */
+-static rsRetVal enqLine(lstn_t *const __restrict__ pLstn,
+-                        cstr_t *const __restrict__ cstrLine)
++#define MAX_OFFSET_REPRESENTATION_NUM_BYTES 20
++static rsRetVal
++enqLine(act_obj_t *const act,
++	cstr_t *const __restrict__ cstrLine,
++	const int64 strtOffs)
+ {
+ 	DEFiRet;
++	const instanceConf_t *const inst = act->edge->instarr[0];// TODO: same file, multiple instances?
+ 	smsg_t *pMsg;
++	uchar file_offset[MAX_OFFSET_REPRESENTATION_NUM_BYTES+1];
++	const uchar *metadata_names[2] = {(uchar *)"filename", (uchar *)"fileoffset"};
++	const uchar *metadata_values[2];
++	const size_t msgLen = cstrLen(cstrLine);
+ 
+-	if(rsCStrLen(cstrLine) == 0) {
++	if(msgLen == 0) {
+ 		/* we do not process empty lines */
+ 		FINALIZE;
+ 	}
+@@ -474,27 +1180,34 @@ static rsRetVal enqLine(lstn_t *const __restrict__ pLstn,
+ 	CHKiRet(msgConstruct(&pMsg));
+ 	MsgSetFlowControlType(pMsg, eFLOWCTL_FULL_DELAY);
+ 	MsgSetInputName(pMsg, pInputName);
+-	if (pLstn->addCeeTag) {
+-		size_t msgLen = cstrLen(cstrLine);
+-		const char *const ceeToken = "@cee:";
+-		size_t ceeMsgSize = msgLen + strlen(ceeToken) +1;
++	if(inst->addCeeTag) {
++		/* Make sure we account for terminating null byte */
++		size_t ceeMsgSize = msgLen + CONST_LEN_CEE_COOKIE + 1;
+ 		char *ceeMsg;
+ 		CHKmalloc(ceeMsg = MALLOC(ceeMsgSize));
+-		strcpy(ceeMsg, ceeToken);
++		strcpy(ceeMsg, CONST_CEE_COOKIE);
+ 		strcat(ceeMsg, (char*)rsCStrGetSzStrNoNULL(cstrLine));
+ 		MsgSetRawMsg(pMsg, ceeMsg, ceeMsgSize);
+ 		free(ceeMsg);
+ 	} else {
+-		MsgSetRawMsg(pMsg, (char*)rsCStrGetSzStrNoNULL(cstrLine), cstrLen(cstrLine));
++		MsgSetRawMsg(pMsg, (char*)rsCStrGetSzStrNoNULL(cstrLine), msgLen);
+ 	}
+ 	MsgSetMSGoffs(pMsg, 0);	/* we do not have a header... */
+ 	MsgSetHOSTNAME(pMsg, glbl.GetLocalHostName(), ustrlen(glbl.GetLocalHostName()));
+-	MsgSetTAG(pMsg, pLstn->pszTag, pLstn->lenTag);
+-	msgSetPRI(pMsg, pLstn->iFacility | pLstn->iSeverity);
+-	MsgSetRuleset(pMsg, pLstn->pRuleset);
+-	if(pLstn->addMetadata)
+-		msgAddMetadata(pMsg, (uchar*)"filename", pLstn->pszFileName);
+-	ratelimitAddMsg(pLstn->ratelimiter, &pLstn->multiSub, pMsg);
++	MsgSetTAG(pMsg, inst->pszTag, inst->lenTag);
++	msgSetPRI(pMsg, inst->iFacility | inst->iSeverity);
++	MsgSetRuleset(pMsg, inst->pBindRuleset);
++	if(inst->addMetadata) {
++		if (act->source_name) {
++			metadata_values[0] = (const uchar*)act->source_name;
++		} else {
++			metadata_values[0] = (const uchar*)act->name;
++		}
++		snprintf((char *)file_offset, MAX_OFFSET_REPRESENTATION_NUM_BYTES+1, "%lld", strtOffs);
++		metadata_values[1] = file_offset;
++		msgAddMultiMetadata(pMsg, metadata_names, metadata_values, 2);
++	}
++	ratelimitAddMsg(act->ratelimiter, &act->multiSub, pMsg);
+ finalize_it:
+ 	RETiRet;
+ }
+@@ -504,70 +1213,89 @@ finalize_it:
+  * exist or cannot be read, an error is returned.
+  */
+ static rsRetVal
+-openFileWithStateFile(lstn_t *const __restrict__ pLstn)
++openFileWithStateFile(act_obj_t *const act)
+ {
+ 	DEFiRet;
+-	strm_t *psSF = NULL;
+ 	uchar pszSFNam[MAXFNAME];
+-	size_t lenSFNam;
+-	struct stat stat_buf;
+ 	uchar statefile[MAXFNAME];
++	int fd = -1;
++	const instanceConf_t *const inst = act->edge->instarr[0];// TODO: same file, multiple instances?
+ 
+-	uchar *const statefn = getStateFileName(pLstn, statefile, sizeof(statefile));
+-	DBGPRINTF("imfile: trying to open state for '%s', state file '%s'\n",
+-		  pLstn->pszFileName, statefn);
+-	/* Construct file name */
+-	lenSFNam = snprintf((char*)pszSFNam, sizeof(pszSFNam), "%s/%s",
+-			     (char*) glbl.GetWorkDir(), (char*)statefn);
++	uchar *const statefn = getStateFileName(act, statefile, sizeof(statefile));
++
++	getFullStateFileName(statefn, pszSFNam, sizeof(pszSFNam));
++	DBGPRINTF("trying to open state for '%s', state file '%s'\n", act->name, pszSFNam);
+ 
+ 	/* check if the file exists */
+-	if(stat((char*) pszSFNam, &stat_buf) == -1) {
++	fd = open((char*)pszSFNam, O_CLOEXEC | O_NOCTTY | O_RDONLY, 0600);
++	if(fd < 0) {
+ 		if(errno == ENOENT) {
+-			DBGPRINTF("imfile: NO state file exists for '%s'\n", pLstn->pszFileName);
+-			ABORT_FINALIZE(RS_RET_FILE_NOT_FOUND);
++			DBGPRINTF("NO state file (%s) exists for '%s' - trying to see if "
++				"old-style file exists\n", pszSFNam, act->name);
++			CHKiRet(OLD_openFileWithStateFile(act));
++			FINALIZE;
+ 		} else {
+-			char errStr[1024];
+-			rs_strerror_r(errno, errStr, sizeof(errStr));
+-			DBGPRINTF("imfile: error trying to access state file for '%s':%s\n",
+-			          pLstn->pszFileName, errStr);
++			LogError(errno, RS_RET_IO_ERROR,
++				"imfile error trying to access state file for '%s'",
++			        act->name);
+ 			ABORT_FINALIZE(RS_RET_IO_ERROR);
+ 		}
+ 	}
+ 
+-	/* If we reach this point, we have a state file */
++	CHKiRet(strm.Construct(&act->pStrm));
+ 
+-	CHKiRet(strm.Construct(&psSF));
+-	CHKiRet(strm.SettOperationsMode(psSF, STREAMMODE_READ));
+-	CHKiRet(strm.SetsType(psSF, STREAMTYPE_FILE_SINGLE));
+-	CHKiRet(strm.SetFName(psSF, pszSFNam, lenSFNam));
+-	CHKiRet(strm.ConstructFinalize(psSF));
++	struct json_object *jval;
++	struct json_object *json = fjson_object_from_fd(fd);
++	if(json == NULL) {
++		LogError(0, RS_RET_ERR, "imfile: error reading state file for '%s'", act->name);
++	}
+ 
+-	/* read back in the object */
+-	CHKiRet(obj.Deserialize(&pLstn->pStrm, (uchar*) "strm", psSF, NULL, pLstn));
+-	DBGPRINTF("imfile: deserialized state file, state file base name '%s', "
+-		  "configured base name '%s'\n", pLstn->pStrm->pszFName,
+-		  pLstn->pszFileName);
+-	if(ustrcmp(pLstn->pStrm->pszFName, pLstn->pszFileName)) {
+-		errmsg.LogError(0, RS_RET_STATEFILE_WRONG_FNAME, "imfile: state file '%s' "
+-				"contains file name '%s', but is used for file '%s'. State "
+-				"file deleted, starting from begin of file.",
+-				pszSFNam, pLstn->pStrm->pszFName, pLstn->pszFileName);
++	/* we access some data items a bit dirty, as we need to refactor the whole
++	 * thing in any case - TODO
++	 */
++	 * Note: we ignore the filename property - it is just an aid to the user. Most
++	 * importantly it *is wrong* after a file move!
++	 */
++	fjson_object_object_get_ex(json, "prev_was_nl", &jval);
++	act->pStrm->bPrevWasNL = fjson_object_get_int(jval);
+ 
+-		unlink((char*)pszSFNam);
+-		ABORT_FINALIZE(RS_RET_STATEFILE_WRONG_FNAME);
++	fjson_object_object_get_ex(json, "curr_offs", &jval);
++	act->pStrm->iCurrOffs = fjson_object_get_int64(jval);
++
++	fjson_object_object_get_ex(json, "strt_offs", &jval);
++	act->pStrm->strtOffs = fjson_object_get_int64(jval);
++
++	fjson_object_object_get_ex(json, "prev_line_segment", &jval);
++	const uchar *const prev_line_segment = (const uchar*)fjson_object_get_string(jval);
++	if(jval != NULL) {
++		CHKiRet(rsCStrConstructFromszStr(&act->pStrm->prevLineSegment, prev_line_segment));
++		cstrFinalize(act->pStrm->prevLineSegment);
++		uchar *ret = rsCStrGetSzStrNoNULL(act->pStrm->prevLineSegment);
++		DBGPRINTF("prev_line_segment present in state file 2, is: %s\n", ret);
+ 	}
+ 
+-	strm.CheckFileChange(pLstn->pStrm);
+-	CHKiRet(strm.SeekCurrOffs(pLstn->pStrm));
++	fjson_object_object_get_ex(json, "prev_msg_segment", &jval);
++	const uchar *const prev_msg_segment = (const uchar*)fjson_object_get_string(jval);
++	if(jval != NULL) {
++		CHKiRet(rsCStrConstructFromszStr(&act->pStrm->prevMsgSegment, prev_msg_segment));
++		cstrFinalize(act->pStrm->prevMsgSegment);
++		uchar *ret = rsCStrGetSzStrNoNULL(act->pStrm->prevMsgSegment);
++		DBGPRINTF("prev_msg_segment present in state file 2, is: %s\n", ret);
++	}
++	fjson_object_put(json);
+ 
+-	/* note: we do not delete the state file, so that the last position remains
+-	 * known even in the case that rsyslogd aborts for some reason (like powerfail)
+-	 */
++	CHKiRet(strm.SetFName(act->pStrm, (uchar*)act->name, strlen(act->name)));
++	CHKiRet(strm.SettOperationsMode(act->pStrm, STREAMMODE_READ));
++	CHKiRet(strm.SetsType(act->pStrm, STREAMTYPE_FILE_MONITOR));
++	CHKiRet(strm.SetFileNotFoundError(act->pStrm, inst->fileNotFoundError));
++	CHKiRet(strm.ConstructFinalize(act->pStrm));
+ 
+-finalize_it:
+-	if(psSF != NULL)
+-		strm.Destruct(&psSF);
++	CHKiRet(strm.SeekCurrOffs(act->pStrm));
+ 
++finalize_it:
++	if(fd >= 0) {
++		close(fd);
++	}
+ 	RETiRet;
+ }
+ 
+@@ -576,30 +1304,32 @@ finalize_it:
+  * checked before calling it.
+  */
+ static rsRetVal
+-openFileWithoutStateFile(lstn_t *const __restrict__ pLstn)
++openFileWithoutStateFile(act_obj_t *const act)
+ {
+ 	DEFiRet;
+ 	struct stat stat_buf;
+ 
+-	DBGPRINTF("imfile: clean startup withOUT state file for '%s'\n", pLstn->pszFileName);
+-	if(pLstn->pStrm != NULL)
+-		strm.Destruct(&pLstn->pStrm);
+-	CHKiRet(strm.Construct(&pLstn->pStrm));
+-	CHKiRet(strm.SettOperationsMode(pLstn->pStrm, STREAMMODE_READ));
+-	CHKiRet(strm.SetsType(pLstn->pStrm, STREAMTYPE_FILE_MONITOR));
+-	CHKiRet(strm.SetFName(pLstn->pStrm, pLstn->pszFileName, strlen((char*) pLstn->pszFileName)));
+-	CHKiRet(strm.ConstructFinalize(pLstn->pStrm));
++	const instanceConf_t *const inst = act->edge->instarr[0];// TODO: same file, multiple instances?
++
++	DBGPRINTF("clean startup withOUT state file for '%s'\n", act->name);
++	if(act->pStrm != NULL)
++		strm.Destruct(&act->pStrm);
++	CHKiRet(strm.Construct(&act->pStrm));
++	CHKiRet(strm.SettOperationsMode(act->pStrm, STREAMMODE_READ));
++	CHKiRet(strm.SetsType(act->pStrm, STREAMTYPE_FILE_MONITOR));
++	CHKiRet(strm.SetFName(act->pStrm, (uchar*)act->name, strlen(act->name)));
++	CHKiRet(strm.SetFileNotFoundError(act->pStrm, inst->fileNotFoundError));
++	CHKiRet(strm.ConstructFinalize(act->pStrm));
+ 
+ 	/* As a state file not exist, this is a fresh start. seek to file end
+ 	 * when freshStartTail is on.
+ 	 */
+-	if(pLstn->freshStartTail){
+-		if(stat((char*) pLstn->pszFileName, &stat_buf) != -1) {
+-			pLstn->pStrm->iCurrOffs = stat_buf.st_size;
+-			CHKiRet(strm.SeekCurrOffs(pLstn->pStrm));
++	if(inst->freshStartTail){
++		if(stat((char*) act->name, &stat_buf) != -1) {
++			act->pStrm->iCurrOffs = stat_buf.st_size;
++			CHKiRet(strm.SeekCurrOffs(act->pStrm));
+ 		}
+ 	}
+-	strmSetReadTimeout(pLstn->pStrm, pLstn->readTimeout);
+ 
+ finalize_it:
+ 	RETiRet;
+@@ -608,17 +1338,18 @@ finalize_it:
+  * if so, reading it in. Processing continues from the last know location.
+  */
+ static rsRetVal
+-openFile(lstn_t *const __restrict__ pLstn)
++openFile(act_obj_t *const act)
+ {
+ 	DEFiRet;
++	const instanceConf_t *const inst = act->edge->instarr[0];// TODO: same file, multiple instances?
+ 
+-	CHKiRet_Hdlr(openFileWithStateFile(pLstn)) {
+-		CHKiRet(openFileWithoutStateFile(pLstn));
++	CHKiRet_Hdlr(openFileWithStateFile(act)) {
++		CHKiRet(openFileWithoutStateFile(act));
+ 	}
+ 
+-	DBGPRINTF("imfile: breopenOnTruncate %d for '%s'\n",
+-		pLstn->reopenOnTruncate, pLstn->pszFileName);
+-	CHKiRet(strm.SetbReopenOnTruncate(pLstn->pStrm, pLstn->reopenOnTruncate));
++	DBGPRINTF("breopenOnTruncate %d for '%s'\n", inst->reopenOnTruncate, act->name);
++	CHKiRet(strm.SetbReopenOnTruncate(act->pStrm, inst->reopenOnTruncate));
++	strmSetReadTimeout(act->pStrm, inst->readTimeout);
+ 
+ finalize_it:
+ 	RETiRet;
+@@ -638,58 +1369,72 @@ static void pollFileCancelCleanup(void *pArg)
+ }
+ 
+ 
+-/* poll a file, need to check file rollover etc. open file if not open */
+-#if !defined(_AIX)
+-#pragma GCC diagnostic ignored "-Wempty-body"
+-#endif
++/* pollFile needs to be split due to the unfortunate pthread_cancel_push() macros. */
+ static rsRetVal
+-pollFile(lstn_t *pLstn, int *pbHadFileData)
++pollFileReal(act_obj_t *act, cstr_t **pCStr)
+ {
+-	cstr_t *pCStr = NULL;
++	int64 strtOffs;
+ 	DEFiRet;
+-
+-	/* Note: we must do pthread_cleanup_push() immediately, because the POXIS macros
+-	 * otherwise do not work if I include the _cleanup_pop() inside an if... -- rgerhards, 2008-08-14
+-	 */
+-	pthread_cleanup_push(pollFileCancelCleanup, &pCStr);
+ 	int nProcessed = 0;
+-	if(pLstn->pStrm == NULL) {
+-		CHKiRet(openFile(pLstn)); /* open file */
++
++	DBGPRINTF("pollFileReal enter, pStrm %p, name '%s'\n", act->pStrm, act->name);
++	DBGPRINTF("pollFileReal enter, edge %p\n", act->edge);
++	DBGPRINTF("pollFileReal enter, edge->instarr %p\n", act->edge->instarr);
++
++	instanceConf_t *const inst = act->edge->instarr[0];// TODO: same file, multiple instances?
++
++	if(act->pStrm == NULL) {
++		CHKiRet(openFile(act)); /* open file */
+ 	}
+ 
+ 	/* loop below will be exited when strmReadLine() returns EOF */
+ 	while(glbl.GetGlobalInputTermState() == 0) {
+-		if(pLstn->maxLinesAtOnce != 0 && nProcessed >= pLstn->maxLinesAtOnce)
++		if(inst->maxLinesAtOnce != 0 && nProcessed >= inst->maxLinesAtOnce)
+ 			break;
+-		if(pLstn->startRegex == NULL) {
+-			CHKiRet(strm.ReadLine(pLstn->pStrm, &pCStr, pLstn->readMode, pLstn->escapeLF, pLstn->trimLineOverBytes));
++		if(inst->startRegex == NULL) {
++			CHKiRet(strm.ReadLine(act->pStrm, pCStr, inst->readMode, inst->escapeLF,
++				inst->trimLineOverBytes, &strtOffs));
+ 		} else {
+-			CHKiRet(strmReadMultiLine(pLstn->pStrm, &pCStr, &pLstn->end_preg, pLstn->escapeLF));
++			CHKiRet(strmReadMultiLine(act->pStrm, pCStr, &inst->end_preg,
++				inst->escapeLF, &strtOffs));
+ 		}
+ 		++nProcessed;
+-		if(pbHadFileData != NULL)
+-			*pbHadFileData = 1; /* this is just a flag, so set it and forget it */
+-		CHKiRet(enqLine(pLstn, pCStr)); /* process line */
+-		rsCStrDestruct(&pCStr); /* discard string (must be done by us!) */
+-		if(pLstn->iPersistStateInterval > 0 && pLstn->nRecords++ >= pLstn->iPersistStateInterval) {
+-			persistStrmState(pLstn);
+-			pLstn->nRecords = 0;
++		runModConf->bHadFileData = 1; /* this is just a flag, so set it and forget it */
++		CHKiRet(enqLine(act, *pCStr, strtOffs)); /* process line */
++		rsCStrDestruct(pCStr); /* discard string (must be done by us!) */
++		if(inst->iPersistStateInterval > 0 && ++act->nRecords >= inst->iPersistStateInterval) {
++			persistStrmState(act);
++			act->nRecords = 0;
+ 		}
+ 	}
+ 
+ finalize_it:
+-	multiSubmitFlush(&pLstn->multiSub);
+-	pthread_cleanup_pop(0);
++	multiSubmitFlush(&act->multiSub);
+ 
+-	if(pCStr != NULL) {
+-		rsCStrDestruct(&pCStr);
++	if(*pCStr != NULL) {
++		rsCStrDestruct(pCStr);
+ 	}
+ 
+ 	RETiRet;
+ }
+-#if !defined(_AIX)
+-#pragma GCC diagnostic warning "-Wempty-body"
+-#endif
++
++/* poll a file, need to check file rollover etc. open file if not open */
++static rsRetVal
++pollFile(act_obj_t *const act)
++{
++	cstr_t *pCStr = NULL;
++	DEFiRet;
++	if (act->is_symlink) {
++		FINALIZE;    /* no reason to poll symlink file */
++	}
++	/* Note: we must do pthread_cleanup_push() immediately, because the POSIX macros
++	 * otherwise do not work if I include the _cleanup_pop() inside an if... -- rgerhards, 2008-08-14
++	 */
++	pthread_cleanup_push(pollFileCancelCleanup, &pCStr);
++	iRet = pollFileReal(act, &pCStr);
++	pthread_cleanup_pop(0);
++finalize_it:
++	RETiRet;
++}
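
The split works around a real constraint: pthread_cleanup_push() and
pthread_cleanup_pop() are macros that expand to an unbalanced '{' ... '}'
pair, so both must appear in the same lexical scope, and no goto (as hidden
inside CHKiRet/FINALIZE) may jump across them. A minimal sketch of the
resulting wrapper pattern (illustrative; cleanupFn and doRealWork are
placeholder names):

    static rsRetVal
    wrapperSketch(void *arg)
    {
    	DEFiRet;
    	pthread_cleanup_push(cleanupFn, arg); /* opens a hidden scope */
    	iRet = doRealWork(arg);               /* all CHKiRet gotos stay inside the callee */
    	pthread_cleanup_pop(0);               /* closes it; 0 = do not run the handler */
    	RETiRet;
    }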
+ 
+ 
+ /* create input instance, set default parameters, and
+@@ -722,6 +1467,7 @@ createInstance(instanceConf_t **pinst)
+ 	inst->addMetadata = ADD_METADATA_UNSPECIFIED;
+ 	inst->addCeeTag = 0;
+ 	inst->freshStartTail = 0;
++	inst->fileNotFoundError = 1;
+ 	inst->readTimeout = loadModConf->readTimeout;
+ 
+ 	/* node created, let's add to config */
+@@ -767,19 +1513,11 @@ getBasename(uchar *const __restrict__ basen, uchar *const __restrict__ path)
+ }
+ 
+ /* this function checks instance parameters and does some required pre-processing
+- * (e.g. split filename in path and actual name)
+- * Note: we do NOT use dirname()/basename() as they have portability problems.
+  */
+ static rsRetVal
+-checkInstance(instanceConf_t *inst)
++checkInstance(instanceConf_t *const inst)
+ {
+-	char dirn[MAXFNAME];
+-	uchar basen[MAXFNAME];
+-	int i;
+-	struct stat sb;
+-	int r;
+-	int eno;
+-	char errStr[512];
++	uchar curr_wd[MAXFNAME];
+ 	DEFiRet;
+ 
+ 	/* this is primarily for the clang static analyzer, but also
+@@ -788,36 +1526,37 @@ checkInstance(instanceConf_t *inst)
+ 	if(inst->pszFileName == NULL)
+ 		ABORT_FINALIZE(RS_RET_INTERNAL_ERROR);
+ 
+-	i = getBasename(basen, inst->pszFileName);
+-	if (i == -1) {
+-		errmsg.LogError(0, RS_RET_CONFIG_ERROR, "imfile: file path '%s' does not include a basename component",
+-			inst->pszFileName);
+-		ABORT_FINALIZE(RS_RET_CONFIG_ERROR);
+-	}
+-	
+-	memcpy(dirn, inst->pszFileName, i); /* do not copy slash */
+-	dirn[i] = '\0';
+-	CHKmalloc(inst->pszFileBaseName = (uchar*) strdup((char*)basen));
+-	CHKmalloc(inst->pszDirName = (uchar*) strdup(dirn));
+-
+-	if(dirn[0] == '\0') {
+-		dirn[0] = '/';
+-		dirn[1] = '\0';
+-	}
+-	r = stat(dirn, &sb);
+-	if(r != 0)  {
+-		eno = errno;
+-		rs_strerror_r(eno, errStr, sizeof(errStr));
+-		errmsg.LogError(0, RS_RET_CONFIG_ERROR, "imfile warning: directory '%s': %s",
+-				dirn, errStr);
+-		ABORT_FINALIZE(RS_RET_CONFIG_ERROR);
+-	}
+-	if(!S_ISDIR(sb.st_mode)) {
+-		errmsg.LogError(0, RS_RET_CONFIG_ERROR, "imfile warning: configured directory "
+-				"'%s' is NOT a directory", dirn);
+-		ABORT_FINALIZE(RS_RET_CONFIG_ERROR);
++	CHKmalloc(inst->pszFileName_forOldStateFile = ustrdup(inst->pszFileName));
++	if(loadModConf->normalizePath) {
++		if(inst->pszFileName[0] == '.' && inst->pszFileName[1] == '/') {
++			DBGPRINTF("imfile: removing leading './' from name '%s'\n", inst->pszFileName);
++			memmove(inst->pszFileName, inst->pszFileName+2, ustrlen(inst->pszFileName) - 1);
++		}
++
++		if(inst->pszFileName[0] != '/') {
++			if(getcwd((char*)curr_wd, MAXFNAME) == NULL || curr_wd[0] != '/') {
++				LogError(errno, RS_RET_ERR, "imfile: error querying current working "
++					"directory - can not continue with %s", inst->pszFileName);
++				ABORT_FINALIZE(RS_RET_ERR);
++			}
++			const size_t len_curr_wd = ustrlen(curr_wd);
++			if(len_curr_wd + ustrlen(inst->pszFileName) + 1 >= MAXFNAME) {
++				LogError(0, RS_RET_ERR, "imfile: length of configured file and current "
++					"working directory exceeds permitted size - ignoring %s",
++					inst->pszFileName);
++				ABORT_FINALIZE(RS_RET_ERR);
++			}
++			curr_wd[len_curr_wd] = '/';
++			strcpy((char*)curr_wd+len_curr_wd+1, (char*)inst->pszFileName);
++			free(inst->pszFileName);
++			CHKmalloc(inst->pszFileName = ustrdup(curr_wd));
++		}
+ 	}
++	dbgprintf("imfile: adding file monitor for '%s'\n", inst->pszFileName);
+ 
++	if(inst->pszTag != NULL) {
++		inst->lenTag = ustrlen(inst->pszTag);
++	}
+ finalize_it:
+ 	RETiRet;
+ }
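
The normalization above reduces every configured name to one canonical
absolute spelling, so state files and the config tree key on a single path.
A stand-alone model of the same rules (a sketch assuming PATH_MAX-sized
buffers; normalize_path is a hypothetical helper, not part of the module):

    #include <limits.h>
    #include <stdio.h>
    #include <unistd.h>

    static int
    normalize_path(const char *in, char *out, size_t outlen)
    {
    	char cwd[PATH_MAX];
    	if(in[0] == '.' && in[1] == '/')
    		in += 2;                     /* strip leading "./" */
    	if(in[0] == '/')                     /* already absolute: keep as-is */
    		return (size_t)snprintf(out, outlen, "%s", in) < outlen ? 0 : -1;
    	if(getcwd(cwd, sizeof(cwd)) == NULL) /* mirrors the error path above */
    		return -1;
    	return (size_t)snprintf(out, outlen, "%s/%s", cwd, in) < outlen ? 0 : -1;
    }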
+@@ -869,140 +1608,14 @@ addInstance(void __attribute__((unused)) *pVal, uchar *pNewVal)
+ 	inst->bRMStateOnDel = 0;
+ 	inst->readTimeout = loadModConf->readTimeout;
+ 
+-	CHKiRet(checkInstance(inst));
+-
+-	/* reset legacy system */
+-	cs.iPersistStateInterval = 0;
+-	resetConfigVariables(NULL, NULL); /* values are both dummies */
+-
+-finalize_it:
+-	free(pNewVal); /* we do not need it, but we must free it! */
+-	RETiRet;
+-}
+-
+-
+-/* This adds a new listener object to the bottom of the list, but
+- * it does NOT initialize any data members except for the list
+- * pointers themselves.
+- */
+-static rsRetVal
+-lstnAdd(lstn_t **newLstn)
+-{
+-	lstn_t *pLstn;
+-	DEFiRet;
+-
+-	CHKmalloc(pLstn = (lstn_t*) MALLOC(sizeof(lstn_t)));
+-	if(runModConf->pRootLstn == NULL) {
+-		runModConf->pRootLstn = pLstn;
+-		pLstn->prev = NULL;
+-	} else {
+-		runModConf->pTailLstn->next = pLstn;
+-		pLstn->prev = runModConf->pTailLstn;
+-	}
+-	runModConf->pTailLstn = pLstn;
+-	pLstn->next = NULL;
+-	*newLstn = pLstn;
+-
+-finalize_it:
+-	RETiRet;
+-}
+-
+-/* delete a listener object */
+-static void
+-lstnDel(lstn_t *pLstn)
+-{
+-	DBGPRINTF("imfile: lstnDel called for %s\n", pLstn->pszFileName);
+-	if(pLstn->pStrm != NULL) { /* stream open? */
+-		persistStrmState(pLstn);
+-		strm.Destruct(&(pLstn->pStrm));
+-	}
+-	ratelimitDestruct(pLstn->ratelimiter);
+-	free(pLstn->multiSub.ppMsgs);
+-	free(pLstn->pszFileName);
+-	free(pLstn->pszTag);
+-	free(pLstn->pszStateFile);
+-	free(pLstn->pszBaseName);
+-	if(pLstn->startRegex != NULL)
+-		regfree(&pLstn->end_preg);
+-
+-	if(pLstn == runModConf->pRootLstn)
+-		runModConf->pRootLstn = pLstn->next;
+-	if(pLstn == runModConf->pTailLstn)
+-		runModConf->pTailLstn = pLstn->prev;
+-	if(pLstn->next != NULL)
+-		pLstn->next->prev = pLstn->prev;
+-	if(pLstn->prev != NULL)
+-		pLstn->prev->next = pLstn->next;
+-	free(pLstn);
+-}
+-
+-/* This function is called when a new listener shall be added.
+- * It also does some late stage error checking on the config
+- * and reports issues it finds.
+- */
+-static rsRetVal
+-addListner(instanceConf_t *inst)
+-{
+-	DEFiRet;
+-	lstn_t *pThis;
+-	sbool hasWildcard;
+-
+-	hasWildcard = containsGlobWildcard((char*)inst->pszFileBaseName);
+-	if(hasWildcard) {
+-		if(runModConf->opMode == OPMODE_POLLING) {
+-			errmsg.LogError(0, RS_RET_IMFILE_WILDCARD,
+-				"imfile: The to-be-monitored file \"%s\" contains "
+-				"wildcards. This is not supported in "
+-				"polling mode.", inst->pszFileName);
+-			ABORT_FINALIZE(RS_RET_IMFILE_WILDCARD);
+-		} else if(inst->pszStateFile != NULL) {
+-			errmsg.LogError(0, RS_RET_IMFILE_WILDCARD,
+-				"imfile: warning: it looks like to-be-monitored "
+-				"file \"%s\" contains wildcards. This usually "
+-				"does not work well with specifying a state file.",
+-				inst->pszFileName);
+-		}
+-	}
++	CHKiRet(checkInstance(inst));
++
++	/* reset legacy system */
++	cs.iPersistStateInterval = 0;
++	resetConfigVariables(NULL, NULL); /* values are both dummies */
+ 
+-	CHKiRet(lstnAdd(&pThis));
+-	pThis->hasWildcard = hasWildcard;
+-	pThis->pszFileName = (uchar*) strdup((char*) inst->pszFileName);
+-	pThis->pszDirName = inst->pszDirName; /* use memory from inst! */
+-	pThis->pszBaseName = (uchar*)strdup((char*)inst->pszFileBaseName); /* be consistent with expanded wildcards! */
+-	pThis->pszTag = (uchar*) strdup((char*) inst->pszTag);
+-	pThis->lenTag = ustrlen(pThis->pszTag);
+-	pThis->pszStateFile = inst->pszStateFile == NULL ? NULL : (uchar*) strdup((char*) inst->pszStateFile);
+-
+-	CHKiRet(ratelimitNew(&pThis->ratelimiter, "imfile", (char*)inst->pszFileName));
+-	CHKmalloc(pThis->multiSub.ppMsgs = MALLOC(inst->nMultiSub * sizeof(smsg_t *)));
+-	pThis->multiSub.maxElem = inst->nMultiSub;
+-	pThis->multiSub.nElem = 0;
+-	pThis->iSeverity = inst->iSeverity;
+-	pThis->iFacility = inst->iFacility;
+-	pThis->maxLinesAtOnce = inst->maxLinesAtOnce;
+-	pThis->trimLineOverBytes = inst->trimLineOverBytes;
+-	pThis->iPersistStateInterval = inst->iPersistStateInterval;
+-	pThis->readMode = inst->readMode;
+-	pThis->startRegex = inst->startRegex; /* no strdup, as it is read-only */
+-	if(pThis->startRegex != NULL)
+-		if(regcomp(&pThis->end_preg, (char*)pThis->startRegex, REG_EXTENDED)) {
+-			DBGPRINTF("imfile: error regex compile\n");
+-			ABORT_FINALIZE(RS_RET_ERR);
+-		}
+-	pThis->bRMStateOnDel = inst->bRMStateOnDel;
+-	pThis->escapeLF = inst->escapeLF;
+-	pThis->reopenOnTruncate = inst->reopenOnTruncate;
+-	pThis->addMetadata = (inst->addMetadata == ADD_METADATA_UNSPECIFIED) ?
+-			       hasWildcard : inst->addMetadata;
+-	pThis->addCeeTag = inst->addCeeTag;
+-	pThis->readTimeout = inst->readTimeout;
+-	pThis->freshStartTail = inst->freshStartTail;
+-	pThis->pRuleset = inst->pBindRuleset;
+-	pThis->nRecords = 0;
+-	pThis->pStrm = NULL;
+-	pThis->prevLineSegment = NULL;
+-	pThis->masterLstn = NULL; /* we *are* a master! */
+ finalize_it:
++	free(pNewVal); /* we do not need it, but we must free it! */
+ 	RETiRet;
+ }
+ 
+@@ -1055,6 +1668,8 @@ CODESTARTnewInpInst
+ 			inst->addCeeTag = (sbool) pvals[i].val.d.n;
+ 		} else if(!strcmp(inppblk.descr[i].name, "freshstarttail")) {
+ 			inst->freshStartTail = (sbool) pvals[i].val.d.n;
++		} else if(!strcmp(inppblk.descr[i].name, "filenotfounderror")) {
++			inst->fileNotFoundError = (sbool) pvals[i].val.d.n;
+ 		} else if(!strcmp(inppblk.descr[i].name, "escapelf")) {
+ 			inst->escapeLF = (sbool) pvals[i].val.d.n;
+ 		} else if(!strcmp(inppblk.descr[i].name, "reopenontruncate")) {
+@@ -1087,6 +1702,16 @@ CODESTARTnewInpInst
+ 			"at the same time --- remove one of them");
+ 			ABORT_FINALIZE(RS_RET_PARAM_NOT_PERMITTED);
+ 	}
++
++	if(inst->startRegex != NULL) {
++		const int errcode = regcomp(&inst->end_preg, (char*)inst->startRegex, REG_EXTENDED);
++		if(errcode != 0) {
++			char errbuff[512];
++			regerror(errcode, &inst->end_preg, errbuff, sizeof(errbuff));
++			parser_errmsg("imfile: error compiling regex: %s", errbuff);
++			ABORT_FINALIZE(RS_RET_ERR);
++		}
++	}
+ 	if(inst->readTimeout != 0)
+ 		loadModConf->haveReadTimeouts = 1;
+ 	CHKiRet(checkInstance(inst));
+@@ -1106,6 +1731,10 @@ CODESTARTbeginCnfLoad
+ 	loadModConf->readTimeout = 0; /* default: no timeout */
+ 	loadModConf->timeoutGranularity = 1000; /* default: 1 second */
+ 	loadModConf->haveReadTimeouts = 0; /* default: no timeout */
++	loadModConf->normalizePath = 1;
++	loadModConf->sortFiles = GLOB_NOSORT;
++	loadModConf->conf_tree = calloc(1, sizeof(fs_node_t));
++	loadModConf->conf_tree->edges = NULL;
+ 	bLegacyCnfModGlobalsPermitted = 1;
+ 	/* init legacy config vars */
+ 	cs.pszFileName = NULL;
+@@ -1148,6 +1777,10 @@ CODESTARTsetModCnf
+ 		} else if(!strcmp(modpblk.descr[i].name, "timeoutgranularity")) {
+ 			/* note: we need ms, thus "* 1000" */
+ 			loadModConf->timeoutGranularity = (int) pvals[i].val.d.n * 1000;
++		} else if(!strcmp(modpblk.descr[i].name, "sortfiles")) {
++			loadModConf->sortFiles = ((sbool) pvals[i].val.d.n) ? 0 : GLOB_NOSORT;
++		} else if(!strcmp(modpblk.descr[i].name, "normalizepath")) {
++			loadModConf->normalizePath = (sbool) pvals[i].val.d.n;
+ 		} else if(!strcmp(modpblk.descr[i].name, "mode")) {
+ 			if(!es_strconstcmp(pvals[i].val.d.estr, "polling"))
+ 				loadModConf->opMode = OPMODE_POLLING;
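
The new knobs wired up in the hunks above surface in rsyslog.conf roughly as
follows (an illustrative sketch, not a complete configuration):

    module(load="imfile" mode="inotify" sortFiles="on" normalizePath="off")
    input(type="imfile"
          file="/var/log/app/*.log"
          tag="app:"
          fileNotFoundError="off")  # stay quiet about optional, absent files

sortFiles trades glob() speed for a deterministic processing order, and
fileNotFoundError="off" downgrades the open error to a debug message for
files that may legitimately not exist yet.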
+@@ -1217,19 +1850,31 @@ BEGINactivateCnf
+ 	instanceConf_t *inst;
+ CODESTARTactivateCnf
+ 	runModConf = pModConf;
+-	runModConf->pRootLstn = NULL,
+-	runModConf->pTailLstn = NULL;
++	if(runModConf->root == NULL) {
++		LogError(0, NO_ERRCODE, "imfile: no file monitors configured, "
++				"input not activated.\n");
++		ABORT_FINALIZE(RS_RET_NO_RUN);
++	}
+ 
+ 	for(inst = runModConf->root ; inst != NULL ; inst = inst->next) {
+-		addListner(inst);
++		// TODO: provide switch to turn off this warning?
++		if(!containsGlobWildcard((char*)inst->pszFileName)) {
++			if(access((char*)inst->pszFileName, R_OK) != 0) {
++				LogError(errno, RS_RET_ERR,
++					"imfile: on startup file '%s' does not exist "
++					"but is configured in static file monitor - this "
++					"may indicate a misconfiguration. If the file "
++					"appears at a later time, it will automatically "
++					"be processed. Reason", inst->pszFileName);
++			}
++		}
++		fs_node_add(runModConf->conf_tree, NULL, inst->pszFileName, 0, inst);
+ 	}
+ 
+-	/* if we could not set up any listeners, there is no point in running... */
+-	if(runModConf->pRootLstn == 0) {
+-		errmsg.LogError(0, NO_ERRCODE, "imfile: no file monitors could be started, "
+-				"input not activated.\n");
+-		ABORT_FINALIZE(RS_RET_NO_RUN);
++	if(Debug) {
++		fs_node_print(runModConf->conf_tree, 0);
+ 	}
++
+ finalize_it:
+ ENDactivateCnf
+ 
+@@ -1237,14 +1882,20 @@ ENDactivateCnf
+ BEGINfreeCnf
+ 	instanceConf_t *inst, *del;
+ CODESTARTfreeCnf
++	fs_node_destroy(pModConf->conf_tree);
++	//move_list_destruct(pModConf);
+ 	for(inst = pModConf->root ; inst != NULL ; ) {
+ 		free(inst->pszBindRuleset);
+ 		free(inst->pszFileName);
+-		free(inst->pszDirName);
+-		free(inst->pszFileBaseName);
+ 		free(inst->pszTag);
+ 		free(inst->pszStateFile);
+-		free(inst->startRegex);
++		free(inst->pszFileName_forOldStateFile);
++		if(inst->startRegex != NULL) {
++			regfree(&inst->end_preg);
++			free(inst->startRegex);
++		}
+ 		del = inst;
+ 		inst = inst->next;
+ 		free(del);
+@@ -1252,45 +1903,25 @@ CODESTARTfreeCnf
+ ENDfreeCnf
+ 
+ 
+-/* Monitor files in traditional polling mode.
+- *
+- * We go through all files and remember if at least one had data. If so, we do
+- * another run (until no data was present in any file). Then we sleep for
+- * PollInterval seconds and restart the whole process. This ensures that as
+- * long as there is some data present, it will be processed at the fastest
+- * possible pace - probably important for busy systmes. If we monitor just a
+- * single file, the algorithm is slightly modified. In that case, the sleep
+- * hapens immediately. The idea here is that if we have just one file, we
+- * returned from the file processer because that file had no additional data.
+- * So even if we found some lines, it is highly unlikely to find a new one
+- * just now. Trying it would result in a performance-costly additional try
+- * which in the very, very vast majority of cases will never find any new
+- * lines.
+- * On spamming the main queue: keep in mind that it will automatically rate-limit
+- * ourselfes if we begin to overrun it. So we really do not need to care here.
+- */
++/* Monitor files in polling mode. */
+ static rsRetVal
+ doPolling(void)
+ {
+-	int bHadFileData; /* were there at least one file with data during this run? */
+ 	DEFiRet;
+ 	while(glbl.GetGlobalInputTermState() == 0) {
++		DBGPRINTF("doPolling: new poll run\n");
+ 		do {
+-			lstn_t *pLstn;
+-			bHadFileData = 0;
+-			for(pLstn = runModConf->pRootLstn ; pLstn != NULL ; pLstn = pLstn->next) {
+-				if(glbl.GetGlobalInputTermState() == 1)
+-					break; /* terminate input! */
+-				pollFile(pLstn, &bHadFileData);
+-			}
+-		} while(bHadFileData == 1 && glbl.GetGlobalInputTermState() == 0);
+-		  /* warning: do...while()! */
++			runModConf->bHadFileData = 0;
++			fs_node_walk(runModConf->conf_tree, poll_tree);
++			DBGPRINTF("doPolling: end poll walk, hadData %d\n", runModConf->bHadFileData);
++		} while(runModConf->bHadFileData); /* warning: do...while()! */
+ 
+ 		/* Note: the additional 10ns wait is vitally important. It guards rsyslog
+ 		 * against totally hogging the CPU if the user selects a polling interval
+ 		 * of 0 seconds. It doesn't hurt any other valid scenario. So do not remove.
+ 		 * rgerhards, 2008-02-14
+ 		 */
++		DBGPRINTF("doPolling: poll going to sleep\n");
+ 		if(glbl.GetGlobalInputTermState() == 0)
+ 			srSleep(runModConf->iPollInterval, 10);
+ 	}
+@@ -1298,631 +1929,122 @@ doPolling(void)
+ 	RETiRet;
+ }
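
Reduced to its skeleton, the loop above behaves as follows (pseudo-C;
termRequested, walkAllFiles and sleepPollInterval are placeholder names for
the calls visible in the hunk):

    while(!termRequested()) {
    	int hadData;
    	do {                        /* drain: keep polling while any file yields data */
    		hadData = 0;
    		walkAllFiles(&hadData); /* fs_node_walk(conf_tree, poll_tree) above */
    	} while(hadData);
    	sleepPollInterval();        /* srSleep(iPollInterval, 10ns anti-spin guard) */
    }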
+ 
++#if defined(HAVE_INOTIFY_INIT)
+ 
+-#ifdef HAVE_INOTIFY_INIT
+-static rsRetVal
+-fileTableInit(fileTable_t *const __restrict__ tab, const int nelem)
+-{
+-	DEFiRet;
+-	CHKmalloc(tab->listeners = malloc(sizeof(dirInfoFiles_t) * nelem));
+-	tab->allocMax = nelem;
+-	tab->currMax = 0;
+-finalize_it:
+-	RETiRet;
+-}
+-/* uncomment if needed
+ static void
+-fileTableDisplay(fileTable_t *tab)
++in_dbg_showEv(const struct inotify_event *ev)
+ {
+-	int f;
+-	uchar *baseName;
+-	DBGPRINTF("imfile: dirs.currMaxfiles %d\n", tab->currMax);
+-	for(f = 0 ; f < tab->currMax ; ++f) {
+-		baseName = tab->listeners[f].pLstn->pszBaseName;
+-		DBGPRINTF("imfile: TABLE %p CONTENTS, %d->%p:'%s'\n", tab, f, tab->listeners[f].pLstn, (char*)baseName);
+-	}
+-}
+-*/
+-
+-static int
+-fileTableSearch(fileTable_t *const __restrict__ tab, uchar *const __restrict__ fn)
+-{
+-	int f;
+-	uchar *baseName = NULL;
+-	/* UNCOMMENT FOR DEBUG fileTableDisplay(tab); */
+-	for(f = 0 ; f < tab->currMax ; ++f) {
+-		baseName = tab->listeners[f].pLstn->pszBaseName;
+-		if(!fnmatch((char*)baseName, (char*)fn, FNM_PATHNAME | FNM_PERIOD))
+-			break; /* found */
+-	}
+-	if(f == tab->currMax)
+-		f = -1;
+-	DBGPRINTF("imfile: fileTableSearch file '%s' - '%s', found:%d\n", fn, baseName, f);
+-	return f;
+-}
+-
+-static int
+-fileTableSearchNoWildcard(fileTable_t *const __restrict__ tab, uchar *const __restrict__ fn)
+-{
+-	int f;
+-	uchar *baseName = NULL;
+-	/* UNCOMMENT FOR DEBUG fileTableDisplay(tab); */
+-	for(f = 0 ; f < tab->currMax ; ++f) {
+-		baseName = tab->listeners[f].pLstn->pszBaseName;
+-		if (strcmp((const char*)baseName, (const char*)fn) == 0)
+-			break; /* found */
+-	}
+-	if(f == tab->currMax)
+-		f = -1;
+-	DBGPRINTF("imfile: fileTableSearchNoWildcard file '%s' - '%s', found:%d\n", fn, baseName, f);
+-	return f;
+-}
+-
+-/* add file to file table */
+-static rsRetVal
+-fileTableAddFile(fileTable_t *const __restrict__ tab, lstn_t *const __restrict__ pLstn)
+-{
+-	int j;
+-	DEFiRet;
+-	/* UNCOMMENT FOR DEBUG fileTableDisplay(tab); */
+-	for(j = 0 ; j < tab->currMax && tab->listeners[j].pLstn != pLstn ; ++j)
+-		; /* just scan */
+-	if(j < tab->currMax) {
+-		++tab->listeners[j].refcnt;
+-		DBGPRINTF("imfile: file '%s' already registered, refcnt now %d\n",
+-			pLstn->pszFileName, tab->listeners[j].refcnt);
+-		FINALIZE;
++	if(ev->mask & IN_IGNORED) {
++		dbgprintf("INOTIFY event: watch was REMOVED\n");
+ 	}
+-
+-	if(tab->currMax == tab->allocMax) {
+-		const int newMax = 2 * tab->allocMax;
+-		dirInfoFiles_t *newListenerTab = realloc(tab->listeners, newMax * sizeof(dirInfoFiles_t));
+-		if(newListenerTab == NULL) {
+-			errmsg.LogError(0, RS_RET_OUT_OF_MEMORY,
+-					"cannot alloc memory to map directory/file relationship "
+-					"for '%s' - ignoring", pLstn->pszFileName);
+-			ABORT_FINALIZE(RS_RET_OUT_OF_MEMORY);
+-		}
+-		tab->listeners = newListenerTab;
+-		tab->allocMax = newMax;
+-		DBGPRINTF("imfile: increased dir table to %d entries\n", allocMaxDirs);
++	if(ev->mask & IN_MODIFY) {
++		dbgprintf("INOTIFY event: watch was MODIFIED\n");
+ 	}
+-
+-	tab->listeners[tab->currMax].pLstn = pLstn;
+-	tab->listeners[tab->currMax].refcnt = 1;
+-	tab->currMax++;
+-finalize_it:
+-	RETiRet;
+-}
+-
+-/* delete a file from file table */
+-static rsRetVal
+-fileTableDelFile(fileTable_t *const __restrict__ tab, lstn_t *const __restrict__ pLstn)
+-{
+-	int j;
+-	DEFiRet;
+-
+-	for(j = 0 ; j < tab->currMax && tab->listeners[j].pLstn != pLstn ; ++j)
+-		; /* just scan */
+-	if(j == tab->currMax) {
+-		DBGPRINTF("imfile: no association for file '%s'\n", pLstn->pszFileName);
+-		FINALIZE;
++	if(ev->mask & IN_ACCESS) {
++		dbgprintf("INOTIFY event: watch IN_ACCESS\n");
+ 	}
+-	tab->listeners[j].refcnt--;
+-	if(tab->listeners[j].refcnt == 0) {
+-		/* we remove that entry (but we never shrink the table) */
+-		if(j < tab->currMax - 1) {
+-			/* entry in middle - need to move others */
+-			memmove(tab->listeners+j, tab->listeners+j+1,
+-				(tab->currMax -j-1) * sizeof(dirInfoFiles_t));
+-		}
+-		--tab->currMax;
++	if(ev->mask & IN_ATTRIB) {
++		dbgprintf("INOTIFY event: watch IN_ATTRIB\n");
+ 	}
+-finalize_it:
+-	RETiRet;
+-}
+-/* add entry to dirs array */
+-static rsRetVal
+-dirsAdd(uchar *dirName)
+-{
+-	int newMax;
+-	dirInfo_t *newDirTab;
+-	DEFiRet;
+-
+-	if(currMaxDirs == allocMaxDirs) {
+-		newMax = 2 * allocMaxDirs;
+-		newDirTab = realloc(dirs, newMax * sizeof(dirInfo_t));
+-		if(newDirTab == NULL) {
+-			errmsg.LogError(0, RS_RET_OUT_OF_MEMORY,
+-					"cannot alloc memory to monitor directory '%s' - ignoring",
+-					dirName);
+-			ABORT_FINALIZE(RS_RET_OUT_OF_MEMORY);
+-		}
+-		dirs = newDirTab;
+-		allocMaxDirs = newMax;
+-		DBGPRINTF("imfile: increased dir table to %d entries\n", allocMaxDirs);
++	if(ev->mask & IN_CLOSE_WRITE) {
++		dbgprintf("INOTIFY event: watch IN_CLOSE_WRITE\n");
+ 	}
+-
+-	/* if we reach this point, there is space in the file table for the new entry */
+-	dirs[currMaxDirs].dirName = dirName;
+-	CHKiRet(fileTableInit(&dirs[currMaxDirs].active, INIT_FILE_IN_DIR_TAB_SIZE));
+-	CHKiRet(fileTableInit(&dirs[currMaxDirs].configured, INIT_FILE_IN_DIR_TAB_SIZE));
+-
+-	++currMaxDirs;
+-	DBGPRINTF("imfile: added to dirs table: '%s'\n", dirName);
+-finalize_it:
+-	RETiRet;
+-}
+-
+-
+-/* checks if a dir name is already inside the dirs array. If so, returns
+- * its index. If not present, -1 is returned.
+- */
+-static int
+-dirsFindDir(uchar *dir)
+-{
+-	int i;
+-
+-	for(i = 0 ; i < currMaxDirs && ustrcmp(dir, dirs[i].dirName) ; ++i)
+-		; /* just scan, all done in for() */
+-	if(i == currMaxDirs)
+-		i = -1;
+-	return i;
+-}
+-
+-static rsRetVal
+-dirsInit(void)
+-{
+-	instanceConf_t *inst;
+-	DEFiRet;
+-
+-	free(dirs);
+-	CHKmalloc(dirs = malloc(sizeof(dirInfo_t) * INIT_FILE_TAB_SIZE));
+-	allocMaxDirs = INIT_FILE_TAB_SIZE;
+-	currMaxDirs = 0;
+-
+-	for(inst = runModConf->root ; inst != NULL ; inst = inst->next) {
+-		if(dirsFindDir(inst->pszDirName) == -1)
+-			dirsAdd(inst->pszDirName);
++	if(ev->mask & IN_CLOSE_NOWRITE) {
++		dbgprintf("INOTIFY event: watch IN_CLOSE_NOWRITE\n");
+ 	}
+-
+-finalize_it:
+-	RETiRet;
+-}
+-
+-/* add file to directory (create association)
+- * fIdx is index into file table, all other information is pulled from that table.
+- * bActive is 1 if the file is to be added to active set, else zero
+- */
+-static rsRetVal
+-dirsAddFile(lstn_t *__restrict__ pLstn, const int bActive)
+-{
+-	int dirIdx;
+-	dirInfo_t *dir;
+-	DEFiRet;
+-
+-	dirIdx = dirsFindDir(pLstn->pszDirName);
+-	if(dirIdx == -1) {
+-		errmsg.LogError(0, RS_RET_INTERNAL_ERROR, "imfile: could not find "
+-			"directory '%s' in dirs array - ignoring",
+-			pLstn->pszDirName);
+-		FINALIZE;
++	if(ev->mask & IN_CREATE) {
++		dbgprintf("INOTIFY event: file was CREATED: %s\n", ev->name);
+ 	}
+-
+-	dir = dirs + dirIdx;
+-	CHKiRet(fileTableAddFile((bActive ? &dir->active : &dir->configured), pLstn));
+-	DBGPRINTF("imfile: associated file [%s] to directory %d[%s], Active = %d\n",
+-		pLstn->pszFileName, dirIdx, dir->dirName, bActive);
+-	/* UNCOMMENT FOR DEBUG fileTableDisplay(bActive ? &dir->active : &dir->configured); */
+-finalize_it:
+-	RETiRet;
+-}
+-
+-
+-static void
+-in_setupDirWatch(const int dirIdx)
+-{
+-	int wd;
+-	wd = inotify_add_watch(ino_fd, (char*)dirs[dirIdx].dirName, IN_CREATE|IN_DELETE|IN_MOVED_FROM);
+-	if(wd < 0) {
+-		DBGPRINTF("imfile: could not create dir watch for '%s'\n",
+-			dirs[dirIdx].dirName);
+-		goto done;
++	if(ev->mask & IN_DELETE) {
++		dbgprintf("INOTIFY event: watch IN_DELETE\n");
+ 	}
+-	wdmapAdd(wd, dirIdx, NULL);
+-	DBGPRINTF("imfile: watch %d added for dir %s\n", wd, dirs[dirIdx].dirName);
+-done:	return;
+-}
+-
+-/* Setup a new file watch for a known active file. It must already have
+- * been entered into the correct tables.
+- * Note: we need to try to read this file, as it may already contain data this
+- * needs to be processed, and we won't get an event for that as notifications
+- * happen only for things after the watch has been activated.
+- * Note: newFileName is NULL for configured files, and non-NULL for dynamically
+- * detected files (e.g. wildcards!)
+- */
+-static void
+-startLstnFile(lstn_t *const __restrict__ pLstn)
+-{
+-	rsRetVal localRet;
+-	const int wd = inotify_add_watch(ino_fd, (char*)pLstn->pszFileName, IN_MODIFY);
+-	if(wd < 0) {
+-		char errStr[512];
+-		rs_strerror_r(errno, errStr, sizeof(errStr));
+-		DBGPRINTF("imfile: could not create file table entry for '%s' - "
+-			  "not processing it now: %s\n",
+-			  pLstn->pszFileName, errStr);
+-		goto done;
++	if(ev->mask & IN_DELETE_SELF) {
++		dbgprintf("INOTIFY event: watch IN_DELETE_SELF\n");
+ 	}
+-	if((localRet = wdmapAdd(wd, -1, pLstn)) != RS_RET_OK) {
+-		DBGPRINTF("imfile: error %d adding file to wdmap, ignoring\n", localRet);
+-		goto done;
++	if(ev->mask & IN_MOVE_SELF) {
++		dbgprintf("INOTIFY event: watch IN_MOVE_SELF\n");
+ 	}
+-	DBGPRINTF("imfile: watch %d added for file %s\n", wd, pLstn->pszFileName);
+-	dirsAddFile(pLstn, ACTIVE_FILE);
+-	pollFile(pLstn, NULL);
+-done:	return;
+-}
+-
+-/* Duplicate an existing listener. This is called when a new file is to
+- * be monitored due to wildcard detection. Returns the new pLstn in
+- * the ppExisting parameter.
+- */
+-static rsRetVal
+-lstnDup(lstn_t **ppExisting, uchar *const __restrict__ newname)
+-{
+-	DEFiRet;
+-	lstn_t *const existing = *ppExisting;
+-	lstn_t *pThis;
+-
+-	CHKiRet(lstnAdd(&pThis));
+-	pThis->pszDirName = existing->pszDirName; /* read-only */
+-	pThis->pszBaseName = (uchar*)strdup((char*)newname);
+-	if(asprintf((char**)&pThis->pszFileName, "%s/%s", (char*)pThis->pszDirName, (char*)newname) == -1) {
+-		DBGPRINTF("imfile/lstnDup: asprintf failed, malfunction can happen\n");
+-		ABORT_FINALIZE(RS_RET_OUT_OF_MEMORY);
+-	}
+-	pThis->pszTag = (uchar*) strdup((char*) existing->pszTag);
+-	pThis->lenTag = ustrlen(pThis->pszTag);
+-	pThis->pszStateFile = existing->pszStateFile == NULL ? NULL : (uchar*) strdup((char*) existing->pszStateFile);
+-
+-	CHKiRet(ratelimitNew(&pThis->ratelimiter, "imfile", (char*)pThis->pszFileName));
+-	pThis->multiSub.maxElem = existing->multiSub.maxElem;
+-	pThis->multiSub.nElem = 0;
+-	CHKmalloc(pThis->multiSub.ppMsgs = MALLOC(pThis->multiSub.maxElem * sizeof(smsg_t*)));
+-	pThis->iSeverity = existing->iSeverity;
+-	pThis->iFacility = existing->iFacility;
+-	pThis->maxLinesAtOnce = existing->maxLinesAtOnce;
+-	pThis->trimLineOverBytes = existing->trimLineOverBytes;
+-	pThis->iPersistStateInterval = existing->iPersistStateInterval;
+-	pThis->readMode = existing->readMode;
+-	pThis->startRegex = existing->startRegex; /* no strdup, as it is read-only */
+-	if(pThis->startRegex != NULL) // TODO: make this a single function with better error handling
+-		if(regcomp(&pThis->end_preg, (char*)pThis->startRegex, REG_EXTENDED)) {
+-			DBGPRINTF("imfile: error regex compile\n");
+-			ABORT_FINALIZE(RS_RET_ERR);
+-		}
+-	pThis->bRMStateOnDel = existing->bRMStateOnDel;
+-	pThis->hasWildcard = existing->hasWildcard;
+-	pThis->escapeLF = existing->escapeLF;
+-	pThis->reopenOnTruncate = existing->reopenOnTruncate;
+-	pThis->addMetadata = existing->addMetadata;
+-	pThis->addCeeTag = existing->addCeeTag;
+-	pThis->readTimeout = existing->readTimeout;
+-	pThis->freshStartTail = existing->freshStartTail;
+-	pThis->pRuleset = existing->pRuleset;
+-	pThis->nRecords = 0;
+-	pThis->pStrm = NULL;
+-	pThis->prevLineSegment = NULL;
+-	pThis->masterLstn = existing;
+-	*ppExisting = pThis;
+-finalize_it:
+-	RETiRet;
+-}
+-
+-/* Setup a new file watch for dynamically discovered files (via wildcards).
+- * Note: we need to try to read this file, as it may already contain data this
+- * needs to be processed, and we won't get an event for that as notifications
+- * happen only for things after the watch has been activated.
+- */
+-static void
+-in_setupFileWatchDynamic(lstn_t *pLstn, uchar *const __restrict__ newBaseName)
+-{
+-	char fullfn[MAXFNAME];
+-	struct stat fileInfo;
+-	snprintf(fullfn, MAXFNAME, "%s/%s", pLstn->pszDirName, newBaseName);
+-	if(stat(fullfn, &fileInfo) != 0) {
+-		char errStr[1024];
+-		rs_strerror_r(errno, errStr, sizeof(errStr));
+-		DBGPRINTF("imfile: ignoring file '%s' cannot stat(): %s\n",
+-			fullfn, errStr);
+-		goto done;
++	if(ev->mask & IN_MOVED_FROM) {
++		dbgprintf("INOTIFY event: watch IN_MOVED_FROM, cookie %u, name '%s'\n", ev->cookie, ev->name);
+ 	}
+-
+-	if(S_ISDIR(fileInfo.st_mode)) {
+-		DBGPRINTF("imfile: ignoring directory '%s'\n", fullfn);
+-		goto done;
++	if(ev->mask & IN_MOVED_TO) {
++		dbgprintf("INOTIFY event: watch IN_MOVED_TO, cookie %u, name '%s'\n", ev->cookie, ev->name);
+ 	}
+-
+-	if(lstnDup(&pLstn, newBaseName) != RS_RET_OK)
+-		goto done;
+-
+-	startLstnFile(pLstn);
+-done:	return;
+-}
+-
+-/* Setup a new file watch for static (configured) files.
+- * Note: we need to try to read this file, as it may already contain data this
+- * needs to be processed, and we won't get an event for that as notifications
+- * happen only for things after the watch has been activated.
+- */
+-static void
+-in_setupFileWatchStatic(lstn_t *pLstn)
+-{
+-	DBGPRINTF("imfile: adding file '%s' to configured table\n",
+-		  pLstn->pszFileName);
+-	dirsAddFile(pLstn, CONFIGURED_FILE);
+-
+-	if(pLstn->hasWildcard) {
+-		DBGPRINTF("imfile: file '%s' has wildcard, doing initial "
+-			  "expansion\n", pLstn->pszFileName);
+-		glob_t files;
+-		const int ret = glob((char*)pLstn->pszFileName,
+-					GLOB_MARK|GLOB_NOSORT|GLOB_BRACE, NULL, &files);
+-		if(ret == 0) {
+-			for(unsigned i = 0 ; i < files.gl_pathc ; i++) {
+-				uchar basen[MAXFNAME];
+-				uchar *const file = (uchar*)files.gl_pathv[i];
+-				if(file[strlen((char*)file)-1] == '/')
+-					continue;/* we cannot process subdirs! */
+-				getBasename(basen, file);
+-				in_setupFileWatchDynamic(pLstn, basen);
+-			}
+-			globfree(&files);
+-		}
+-	} else {
+-		/* Duplicate static object as well, otherwise the configobject could be deleted later! */
+-		if(lstnDup(&pLstn, pLstn->pszBaseName) != RS_RET_OK) {
+-			DBGPRINTF("imfile: in_setupFileWatchStatic failed to duplicate listener for '%s'\n", pLstn->pszFileName);
+-			goto done;
+-		}
+-		startLstnFile(pLstn);
++	if(ev->mask & IN_OPEN) {
++		dbgprintf("INOTIFY event: watch IN_OPEN\n");
+ 	}
+-done:	return;
+-}
+-
+-/* setup our initial set of watches, based on user config */
+-static void
+-in_setupInitialWatches(void)
+-{
+-	int i;
+-	for(i = 0 ; i < currMaxDirs ; ++i) {
+-		in_setupDirWatch(i);
+-	}
+-	lstn_t *pLstn;
+-	for(pLstn = runModConf->pRootLstn ; pLstn != NULL ; pLstn = pLstn->next) {
+-		if(pLstn->masterLstn == NULL) {
+-			/* we process only static (master) entries */
+-			in_setupFileWatchStatic(pLstn);
+-		}
++	if(ev->mask & IN_ISDIR) {
++		dbgprintf("INOTIFY event: watch IN_ISDIR\n");
+ 	}
+ }
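
Unlike the else-if chain it replaces, the helper now tests every bit: an
inotify event mask may carry several flags at once, e.g. a directory moved
into a watched tree raises IN_MOVED_TO together with IN_ISDIR, and the old
chain reported only the first match. For instance:

    /* a directory moved into a watched directory sets both bits in one event */
    if((ev->mask & IN_MOVED_TO) && (ev->mask & IN_ISDIR)) {
    	dbgprintf("INOTIFY event: directory moved in: %s\n", ev->name);
    }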
+ 
+-static void
+-in_dbg_showEv(struct inotify_event *ev)
+-{
+-	if(ev->mask & IN_IGNORED) {
+-		DBGPRINTF("INOTIFY event: watch was REMOVED\n");
+-	} else if(ev->mask & IN_MODIFY) {
+-		DBGPRINTF("INOTIFY event: watch was MODIFID\n");
+-	} else if(ev->mask & IN_ACCESS) {
+-		DBGPRINTF("INOTIFY event: watch IN_ACCESS\n");
+-	} else if(ev->mask & IN_ATTRIB) {
+-		DBGPRINTF("INOTIFY event: watch IN_ATTRIB\n");
+-	} else if(ev->mask & IN_CLOSE_WRITE) {
+-		DBGPRINTF("INOTIFY event: watch IN_CLOSE_WRITE\n");
+-	} else if(ev->mask & IN_CLOSE_NOWRITE) {
+-		DBGPRINTF("INOTIFY event: watch IN_CLOSE_NOWRITE\n");
+-	} else if(ev->mask & IN_CREATE) {
+-		DBGPRINTF("INOTIFY event: file was CREATED: %s\n", ev->name);
+-	} else if(ev->mask & IN_DELETE) {
+-		DBGPRINTF("INOTIFY event: watch IN_DELETE\n");
+-	} else if(ev->mask & IN_DELETE_SELF) {
+-		DBGPRINTF("INOTIFY event: watch IN_DELETE_SELF\n");
+-	} else if(ev->mask & IN_MOVE_SELF) {
+-		DBGPRINTF("INOTIFY event: watch IN_MOVE_SELF\n");
+-	} else if(ev->mask & IN_MOVED_FROM) {
+-		DBGPRINTF("INOTIFY event: watch IN_MOVED_FROM\n");
+-	} else if(ev->mask & IN_MOVED_TO) {
+-		DBGPRINTF("INOTIFY event: watch IN_MOVED_TO\n");
+-	} else if(ev->mask & IN_OPEN) {
+-		DBGPRINTF("INOTIFY event: watch IN_OPEN\n");
+-	} else if(ev->mask & IN_ISDIR) {
+-		DBGPRINTF("INOTIFY event: watch IN_ISDIR\n");
+-	} else {
+-		DBGPRINTF("INOTIFY event: unknown mask code %8.8x\n", ev->mask);
+-	 }
+-}
+-
+ 
+-/* inotify told us that a file's wd was closed. We now need to remove
+- * the file from our internal structures. Remember that a different inode
+- * with the same name may already be in processing.
+- */
+ static void
+-in_removeFile(const int dirIdx,
+-	      lstn_t *const __restrict__ pLstn)
++in_handleFileEvent(struct inotify_event *ev, const wd_map_t *const etry)
+ {
+-	uchar statefile[MAXFNAME];
+-	uchar toDel[MAXFNAME];
+-	int bDoRMState;
+-        int wd;
+-	uchar *statefn;
+-	DBGPRINTF("imfile: remove listener '%s', dirIdx %d\n",
+-	          pLstn->pszFileName, dirIdx);
+-	if(pLstn->bRMStateOnDel) {
+-		statefn = getStateFileName(pLstn, statefile, sizeof(statefile));
+-		snprintf((char*)toDel, sizeof(toDel), "%s/%s",
+-				     glbl.GetWorkDir(), (char*)statefn);
+-		bDoRMState = 1;
++	if(ev->mask & IN_MODIFY) {
++		DBGPRINTF("fs_node_notify_file_update: act->name '%s'\n", etry->act->name);
++		pollFile(etry->act);
+ 	} else {
+-		bDoRMState = 0;
+-	}
+-	pollFile(pLstn, NULL); /* one final try to gather data */
+-	/*	delete listener data */
+-	DBGPRINTF("imfile: DELETING listener data for '%s' - '%s'\n", pLstn->pszBaseName, pLstn->pszFileName);
+-	lstnDel(pLstn);
+-	fileTableDelFile(&dirs[dirIdx].active, pLstn);
+-	if(bDoRMState) {
+-		DBGPRINTF("imfile: unlinking '%s'\n", toDel);
+-		if(unlink((char*)toDel) != 0) {
+-			char errStr[1024];
+-			rs_strerror_r(errno, errStr, sizeof(errStr));
+-			errmsg.LogError(0, RS_RET_ERR, "imfile: could not remove state "
+-				"file \"%s\": %s", toDel, errStr);
+-		}
++		DBGPRINTF("got unexpected inotify event:\n");
++		in_dbg_showEv(ev);
+ 	}
+-        wd = wdmapLookupListner(pLstn);
+-        wdmapDel(wd);
+ }
+ 
+-static void
+-in_handleDirEventCREATE(struct inotify_event *ev, const int dirIdx)
+-{
+-	lstn_t *pLstn;
+-	int ftIdx;
+-	ftIdx = fileTableSearch(&dirs[dirIdx].active, (uchar*)ev->name);
+-	if(ftIdx >= 0) {
+-		pLstn = dirs[dirIdx].active.listeners[ftIdx].pLstn;
+-	} else {
+-		DBGPRINTF("imfile: file '%s' not active in dir '%s'\n",
+-			ev->name, dirs[dirIdx].dirName);
+-		ftIdx = fileTableSearch(&dirs[dirIdx].configured, (uchar*)ev->name);
+-		if(ftIdx == -1) {
+-			DBGPRINTF("imfile: file '%s' not associated with dir '%s'\n",
+-				ev->name, dirs[dirIdx].dirName);
+-			goto done;
+-		}
+-		pLstn = dirs[dirIdx].configured.listeners[ftIdx].pLstn;
+-	}
+-	DBGPRINTF("imfile: file '%s' associated with dir '%s'\n", ev->name, dirs[dirIdx].dirName);
+-	in_setupFileWatchDynamic(pLstn, (uchar*)ev->name);
+-done:	return;
+-}
+ 
+-/* note: we need to care only for active files in the DELETE case.
+- * Two reasons: a) if this is a configured file, it should be active
+- * b) if not for some reason, there still is nothing we can do against
+- * it, and trying to process a *deleted* file really makes no sense
+- * (remeber we don't have it open, so it actually *is gone*).
++/* workaround for IN_MOVED: walk the active list and prevent state file deletion
++ * for the active object that was moved in (IN_MOVED_TO).
++ * TODO: replace by a more generic solution.
+  */
+ static void
+-in_handleDirEventDELETE(struct inotify_event *const ev, const int dirIdx)
+-{
+-	const int ftIdx = fileTableSearch(&dirs[dirIdx].active, (uchar*)ev->name);
+-	if(ftIdx == -1) {
+-		DBGPRINTF("imfile: deleted file '%s' not active in dir '%s'\n",
+-			ev->name, dirs[dirIdx].dirName);
+-		goto done;
+-	}
+-	DBGPRINTF("imfile: imfile delete processing for '%s'\n",
+-	          dirs[dirIdx].active.listeners[ftIdx].pLstn->pszFileName);
+-	in_removeFile(dirIdx, dirs[dirIdx].active.listeners[ftIdx].pLstn);
+-done:	return;
+-}
+-
+-static void
+-in_handleDirEvent(struct inotify_event *const ev, const int dirIdx)
++flag_in_move(fs_edge_t *const edge, const char *name_moved)
+ {
+-	DBGPRINTF("imfile: handle dir event for %s\n", dirs[dirIdx].dirName);
+-	if((ev->mask & IN_CREATE)) {
+-		in_handleDirEventCREATE(ev, dirIdx);
+-	} else if((ev->mask & IN_DELETE)) {
+-		in_handleDirEventDELETE(ev, dirIdx);
+-	} else {
+-		DBGPRINTF("imfile: got non-expected inotify event:\n");
+-		in_dbg_showEv(ev);
+-	}
+-}
++	act_obj_t *act;
+ 
+-
+-static void
+-in_handleFileEvent(struct inotify_event *ev, const wd_map_t *const etry)
+-{
+-	if(ev->mask & IN_MODIFY) {
+-		pollFile(etry->pLstn, NULL);
+-	} else {
+-		DBGPRINTF("imfile: got non-expected inotify event:\n");
+-		in_dbg_showEv(ev);
++	for(act = edge->active ; act != NULL ; act = act->next) {
++		DBGPRINTF("checking active object %s\n", act->basename);
++		if(!strcmp(act->basename, name_moved)){
++			DBGPRINTF("found file\n");
++			act->in_move = 1;
++			break;
++		} else {
++			DBGPRINTF("name check fails, '%s' != '%s'\n", act->basename, name_moved);
++		}
+ 	}
+ }
+ 
+ static void
+ in_processEvent(struct inotify_event *ev)
+ {
+-	wd_map_t *etry;
+-	lstn_t *pLstn;
+-	int iRet;
+-	int ftIdx;
+-	int wd;
+-
+ 	if(ev->mask & IN_IGNORED) {
+-		goto done;
+-	} else if(ev->mask & IN_MOVED_FROM) {
+-		/* Find wd entry and remove it */
+-		etry =  wdmapLookup(ev->wd);
+-		if(etry != NULL) {
+-			ftIdx = fileTableSearchNoWildcard(&dirs[etry->dirIdx].active, (uchar*)ev->name);
+-			DBGPRINTF("imfile: IN_MOVED_FROM Event (ftIdx=%d, name=%s)\n", ftIdx, ev->name);
+-			if(ftIdx >= 0) {
+-				/* Find listener and wd table index*/
+-				pLstn = dirs[etry->dirIdx].active.listeners[ftIdx].pLstn;
+-				wd = wdmapLookupListner(pLstn);
+-
+-				/* Remove file from inotify watch */
+-				iRet = inotify_rm_watch(ino_fd, wd); /* Note this will TRIGGER IN_IGNORED Event! */
+-				if (iRet != 0) {
+-					DBGPRINTF("imfile: inotify_rm_watch error %d (ftIdx=%d, wd=%d, name=%s)\n", errno, ftIdx, wd, ev->name);
+-				} else {
+-					DBGPRINTF("imfile: inotify_rm_watch successfully removed file from watch (ftIdx=%d, wd=%d, name=%s)\n", ftIdx, wd, ev->name);
+-				}
+-				in_removeFile(etry->dirIdx, pLstn);
+-				DBGPRINTF("imfile: IN_MOVED_FROM Event file removed file (wd=%d, name=%s)\n", wd, ev->name);
+-			}
+-		}
++		DBGPRINTF("imfile: got IN_IGNORED event\n");
+ 		goto done;
+ 	}
+-	etry =  wdmapLookup(ev->wd);
++
++	DBGPRINTF("in_processEvent process Event %x for %s\n", ev->mask, ev->name);
++	const wd_map_t *const etry =  wdmapLookup(ev->wd);
+ 	if(etry == NULL) {
+-		DBGPRINTF("imfile: could not lookup wd %d\n", ev->wd);
++		LogMsg(0, RS_RET_INTERNAL_ERROR, LOG_WARNING, "imfile: internal error? "
++			"inotify provided watch descriptor %d which we could not find "
++			"in our tables - ignored", ev->wd);
+ 		goto done;
+ 	}
+-	if(etry->pLstn == NULL) { /* directory? */
+-		in_handleDirEvent(ev, etry->dirIdx);
++	DBGPRINTF("in_processEvent process Event %x is_file %d, act->name '%s'\n",
++		ev->mask, etry->act->edge->is_file, etry->act->name);
++
++	if((ev->mask & IN_MOVED_FROM)) {
++		flag_in_move(etry->act->edge->node->edges, ev->name);
++	}
++	if(ev->mask & (IN_MOVED_FROM | IN_MOVED_TO))  {
++		fs_node_walk(etry->act->edge->node, poll_tree);
++	} else if(etry->act->edge->is_file && !(etry->act->is_symlink)) {
++		in_handleFileEvent(ev, etry); // essentially pollFile()!
+ 	} else {
+-		in_handleFileEvent(ev, etry);
++		fs_node_walk(etry->act->edge->node, poll_tree);
+ 	}
+ done:	return;
+ }
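
Taken together, the rewritten dispatcher reduces to this decision table
(summary only, not code from the patch):

    /*
     *  IN_IGNORED                      -> drop event
     *  unknown watch descriptor        -> log internal warning, drop
     *  IN_MOVED_FROM                   -> flag_in_move(), then re-walk subtree
     *  IN_MOVED_TO                     -> re-walk subtree (picks up the new name)
     *  IN_MODIFY on plain file         -> in_handleFileEvent() -> pollFile()
     *  directory or symlink, anything  -> re-walk subtree via fs_node_walk()
     */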
+ 
+-static void
+-in_do_timeout_processing(void)
+-{
+-	int i;
+-	DBGPRINTF("imfile: readTimeouts are configured, checking if some apply\n");
+-
+-	for(i = 0 ; i < nWdmap ; ++i) {
+-		dbgprintf("imfile: wdmap %d, plstn %p\n", i, wdmap[i].pLstn);
+-		lstn_t *const pLstn = wdmap[i].pLstn;
+-		if(pLstn != NULL && strmReadMultiLine_isTimedOut(pLstn->pStrm)) {
+-			dbgprintf("imfile: wdmap %d, timeout occured\n", i);
+-			pollFile(pLstn, NULL);
+-		}
+-	}
+-
+-}
+-
+ 
+ /* Monitor files in inotify mode */
+ #if !defined(_AIX)
+@@ -1940,14 +2062,16 @@ do_inotify(void)
+ 	DEFiRet;
+ 
+ 	CHKiRet(wdmapInit());
+-	CHKiRet(dirsInit());
+ 	ino_fd = inotify_init();
+-        if(ino_fd < 0) {
+-            errmsg.LogError(1, RS_RET_INOTIFY_INIT_FAILED, "imfile: Init inotify instance failed ");
+-            return RS_RET_INOTIFY_INIT_FAILED;
+-        }
+-	DBGPRINTF("imfile: inotify fd %d\n", ino_fd);
+-	in_setupInitialWatches();
++	if(ino_fd < 0) {
++		LogError(errno, RS_RET_INOTIFY_INIT_FAILED, "imfile: Init inotify "
++			"instance failed ");
++		return RS_RET_INOTIFY_INIT_FAILED;
++	}
++	DBGPRINTF("inotify fd %d\n", ino_fd);
++
++	/* do watch initialization */
++	fs_node_walk(runModConf->conf_tree, poll_tree);
+ 
+ 	while(glbl.GetGlobalInputTermState() == 0) {
+ 		if(runModConf->haveReadTimeouts) {
+@@ -1959,7 +2083,8 @@ do_inotify(void)
+ 				r = poll(&pollfd, 1, runModConf->timeoutGranularity);
+ 			} while(r  == -1 && errno == EINTR);
+ 			if(r == 0) {
+-				in_do_timeout_processing();
++				DBGPRINTF("readTimeouts are configured, checking if some apply\n");
++				fs_node_walk(runModConf->conf_tree, poll_timeouts);
+ 				continue;
+ 			} else if (r == -1) {
+ 				char errStr[1024];
+@@ -2035,49 +2160,96 @@ CODESTARTwillRun
+ 	CHKiRet(prop.Construct(&pInputName));
+ 	CHKiRet(prop.SetString(pInputName, UCHAR_CONSTANT("imfile"), sizeof("imfile") - 1));
+ 	CHKiRet(prop.ConstructFinalize(pInputName));
+-
+ finalize_it:
+ ENDwillRun
+ 
++// TODO: refactor this into a generically-usable "atomic file creation" utility for
++// all kinds of "state files"
++static rsRetVal
++atomicWriteStateFile(const char *fn, const char *content)
++{
++	DEFiRet;
++	const int fd = open(fn, O_CLOEXEC | O_NOCTTY | O_WRONLY | O_CREAT | O_TRUNC, 0600);
++	if(fd < 0) {
++		LogError(errno, RS_RET_IO_ERROR, "imfile: cannot open state file '%s' for "
++			"persisting file state - some data will probably be duplicated "
++			"on next startup", fn);
++		ABORT_FINALIZE(RS_RET_IO_ERROR);
++	}
++
++	const size_t toWrite = strlen(content);
++	const ssize_t w = write(fd, content, toWrite);
++	if(w != (ssize_t) toWrite) {
++		LogError(errno, RS_RET_IO_ERROR, "imfile: partial write to state file '%s' - "
++			"this may cause trouble in the future. We will try to delete the "
++			"state file, as this provides most consistent state", fn);
++		unlink(fn);
++		ABORT_FINALIZE(RS_RET_IO_ERROR);
++	}
++
++finalize_it:
++	if(fd >= 0) {
++		close(fd);
++	}
++	RETiRet;
++}
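
As the TODO above notes, the helper is not yet truly atomic: O_TRUNC zeroes
the old state before the new content lands, so a crash in between loses both
copies. A rename-based variant would close that window (a sketch relying on
POSIX rename semantics; atomicWriteStateFileTmp is a hypothetical name):

    static rsRetVal
    atomicWriteStateFileTmp(const char *fn, const char *content)
    {
    	DEFiRet;
    	char tmp[MAXFNAME];
    	snprintf(tmp, sizeof(tmp), "%s.tmp", fn);
    	const int fd = open(tmp, O_CLOEXEC | O_NOCTTY | O_WRONLY | O_CREAT | O_TRUNC, 0600);
    	if(fd < 0)
    		ABORT_FINALIZE(RS_RET_IO_ERROR);
    	const size_t toWrite = strlen(content);
    	if(write(fd, content, toWrite) != (ssize_t) toWrite) {
    		close(fd);
    		unlink(tmp);
    		ABORT_FINALIZE(RS_RET_IO_ERROR);
    	}
    	fsync(fd);               /* make the new content durable before the rename */
    	close(fd);
    	if(rename(tmp, fn) != 0) /* atomically replaces the old state file */
    		ABORT_FINALIZE(RS_RET_IO_ERROR);
    finalize_it:
    	RETiRet;
    }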
++
++
+ /* This function persists information for a specific file being monitored.
+  * To do so, it simply persists the stream object. We do NOT abort on error
+  * iRet as that makes matters worse (at least we can try persisting the others...).
+  * rgerhards, 2008-02-13
+  */
+ static rsRetVal
+-persistStrmState(lstn_t *pLstn)
++persistStrmState(act_obj_t *const act)
+ {
+ 	DEFiRet;
+-	strm_t *psSF = NULL; /* state file (stream) */
+-	size_t lenDir;
+ 	uchar statefile[MAXFNAME];
++	uchar statefname[MAXFNAME];
++
++	uchar *const statefn = getStateFileName(act, statefile, sizeof(statefile));
++	getFullStateFileName(statefn, statefname, sizeof(statefname));
++	DBGPRINTF("persisting state for '%s', state file '%s'\n", act->name, statefname);
++
++	struct json_object *jval = NULL;
++	struct json_object *json = NULL;
++	CHKmalloc(json = json_object_new_object());
++	jval = json_object_new_string((char*) act->name);
++	json_object_object_add(json, "filename", jval);
++	jval = json_object_new_int(strmGetPrevWasNL(act->pStrm));
++	json_object_object_add(json, "prev_was_nl", jval);
++
++	/* we access some data items a bit dirty, as we need to refactor the whole
++	 * thing in any case - TODO
++	 */
++	jval = json_object_new_int64(act->pStrm->iCurrOffs);
++	json_object_object_add(json, "curr_offs", jval);
++	jval = json_object_new_int64(act->pStrm->strtOffs);
++	json_object_object_add(json, "strt_offs", jval);
+ 
+-	uchar *const statefn = getStateFileName(pLstn, statefile, sizeof(statefile));
+-	DBGPRINTF("imfile: persisting state for '%s' to file '%s'\n",
+-		  pLstn->pszFileName, statefn);
+-	CHKiRet(strm.Construct(&psSF));
+-	lenDir = ustrlen(glbl.GetWorkDir());
+-	if(lenDir > 0)
+-		CHKiRet(strm.SetDir(psSF, glbl.GetWorkDir(), lenDir));
+-	CHKiRet(strm.SettOperationsMode(psSF, STREAMMODE_WRITE_TRUNC));
+-	CHKiRet(strm.SetsType(psSF, STREAMTYPE_FILE_SINGLE));
+-	CHKiRet(strm.SetFName(psSF, statefn, strlen((char*) statefn)));
+-	CHKiRet(strm.ConstructFinalize(psSF));
++	const uchar *const prevLineSegment = strmGetPrevLineSegment(act->pStrm);
++	if(prevLineSegment != NULL) {
++		jval = json_object_new_string((const char*) prevLineSegment);
++		json_object_object_add(json, "prev_line_segment", jval);
++	}
+ 
+-	CHKiRet(strm.Serialize(pLstn->pStrm, psSF));
+-	CHKiRet(strm.Flush(psSF));
++	const uchar *const prevMsgSegment = strmGetPrevMsgSegment(act->pStrm);
++	if(prevMsgSegment != NULL) {
++		jval = json_object_new_string((const char*) prevMsgSegment);
++		json_object_object_add(json, "prev_msg_segment", jval);
++	}
+ 
+-	CHKiRet(strm.Destruct(&psSF));
++	const char *jstr =  json_object_to_json_string_ext(json, JSON_C_TO_STRING_SPACED);
+ 
+-finalize_it:
+-	if(psSF != NULL)
+-		strm.Destruct(&psSF);
++	CHKiRet(atomicWriteStateFile((const char*)statefname, jstr));
++	json_object_put(json);
+ 
++finalize_it:
+ 	if(iRet != RS_RET_OK) {
+ 		errmsg.LogError(0, iRet, "imfile: could not persist state "
+ 				"file %s - data may be repeated on next "
+ 				"startup. Is WorkDirectory set?",
+-				statefn);
++				statefname);
+ 	}
+ 
+ 	RETiRet;
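
For reference, a state file written by this function holds one JSON object
along these lines (values illustrative; prev_line_segment and
prev_msg_segment appear only while a partial line or message is pending):

    { "filename": "/var/log/app.log", "prev_was_nl": 0,
      "curr_offs": 10493, "strt_offs": 10410,
      "prev_line_segment": "partial line read so far" }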
+@@ -2089,11 +2261,6 @@ finalize_it:
+  */
+ BEGINafterRun
+ CODESTARTafterRun
+-	while(runModConf->pRootLstn != NULL) {
+-		/* Note: lstnDel() reasociates root! */
+-		lstnDel(runModConf->pRootLstn);
+-	}
+-
+ 	if(pInputName != NULL)
+ 		prop.Destruct(&pInputName);
+ ENDafterRun
+@@ -2118,12 +2285,6 @@ CODESTARTmodExit
+ 	objRelease(prop, CORE_COMPONENT);
+ 	objRelease(ruleset, CORE_COMPONENT);
+ #ifdef HAVE_INOTIFY_INIT
+-	/* we use these vars only in inotify mode */
+-	if(dirs != NULL) {
+-		free(dirs->active.listeners);
+-		free(dirs->configured.listeners);
+-		free(dirs);
+-	}
+ 	free(wdmap);
+ #endif
+ ENDmodExit
+diff --git a/runtime/msg.c b/runtime/msg.c
+index a885d2368bbaeea90a6e92dc0d569d169b1dd2e5..f45d6175283097974023905fc072508a18a8270a 100644
+--- a/runtime/msg.c
++++ b/runtime/msg.c
+@@ -4890,6 +4890,28 @@ finalize_it:
+ 	RETiRet;
+ }
+ 
++rsRetVal
++msgAddMultiMetadata(smsg_t *const __restrict__ pMsg,
++	       const uchar ** __restrict__ metaname,
++	       const uchar ** __restrict__ metaval,
++	       const int count)
++{
++	DEFiRet;
++	int i = 0 ;
++	struct json_object *const json = json_object_new_object();
++	CHKmalloc(json);
++	for ( i = 0 ; i < count ; i++ ) {
++		struct json_object *const jval = json_object_new_string((char*)metaval[i]);
++		if(jval == NULL) {
++			json_object_put(json);
++			ABORT_FINALIZE(RS_RET_OUT_OF_MEMORY);
++ 		}
++		json_object_object_add(json, (const char *const)metaname[i], jval);
++	}
++	iRet = msgAddJSON(pMsg, (uchar*)"!metadata", json, 0, 0);
++finalize_it:
++	RETiRet;
++}
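
A minimal usage sketch for the new helper (illustrative names and values;
assumes the usual DEFiRet scaffolding of the caller):

    const uchar *names[] = { (const uchar*)"filename", (const uchar*)"fileoffset" };
    const uchar *vals[]  = { (const uchar*)"/var/log/app.log", (const uchar*)"10410" };
    CHKiRet(msgAddMultiMetadata(pMsg, names, vals, 2));
    /* both pairs are now available under $!metadata on the message */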
+ 
+ static struct json_object *
+ jsonDeepCopy(struct json_object *src)
+diff --git a/runtime/msg.h b/runtime/msg.h
+index 6521e19b28b013f0d06e357bdb0f33a94dab638b..0e92da43398156f4871b2e567a242cb089f67a08 100644
+--- a/runtime/msg.h
++++ b/runtime/msg.h
+@@ -195,6 +195,7 @@ int getPRIi(const smsg_t * const pM);
+ void getRawMsg(smsg_t *pM, uchar **pBuf, int *piLen);
+ rsRetVal msgAddJSON(smsg_t *pM, uchar *name, struct json_object *json, int force_reset, int sharedReference);
+ rsRetVal msgAddMetadata(smsg_t *msg, uchar *metaname, uchar *metaval);
++rsRetVal msgAddMultiMetadata(smsg_t *msg, const uchar **metaname, const uchar **metaval, const int count);
+ rsRetVal MsgGetSeverity(smsg_t *pThis, int *piSeverity);
+ rsRetVal MsgDeserialize(smsg_t *pMsg, strm_t *pStrm);
+ rsRetVal MsgSetPropsViaJSON(smsg_t *__restrict__ const pMsg, const uchar *__restrict__ const json);
+diff --git a/runtime/stream.c b/runtime/stream.c
+index 701144c0e39d6fbcf9dd63fe60421e1dcd6f01c6..fb1ff11d1890bbaee107658dd3568c2bc67c223d 100644
+--- a/runtime/stream.c
++++ b/runtime/stream.c
+@@ -91,6 +91,41 @@ static rsRetVal strmSeekCurrOffs(strm_t *pThis);
+ 
+ /* methods */
+ 
++/* note: this may return NULL if no line segment is currently set */
++// TODO: due to the cstrFinalize() this is not totally clean, though for our
++// current use case it does not hurt -- refactor! rgerhards, 2018-03-27
++const uchar *
++strmGetPrevLineSegment(strm_t *const pThis)
++{
++	const uchar *ret = NULL;
++	if(pThis->prevLineSegment != NULL) {
++		cstrFinalize(pThis->prevLineSegment);
++		ret = rsCStrGetSzStrNoNULL(pThis->prevLineSegment);
++	}
++	return ret;
++}
++/* note: this may return NULL if no msg segment is currently set */
++// TODO: due to the cstrFinalize() this is not totally clean, though for our
++// current use case it does not hurt -- refactor! rgerhards, 2018-03-27
++const uchar *
++strmGetPrevMsgSegment(strm_t *const pThis)
++{
++	const uchar *ret = NULL;
++	if(pThis->prevMsgSegment != NULL) {
++		cstrFinalize(pThis->prevMsgSegment);
++		ret = rsCStrGetSzStrNoNULL(pThis->prevMsgSegment);
++	}
++	return ret;
++}
++
++
++int
++strmGetPrevWasNL(const strm_t *const pThis)
++{
++	return pThis->bPrevWasNL;
++}
++
++
+ /* output (current) file name for debug log purposes. Falls back to various
+  * levels of impreciseness if more precise name is not known.
+  */
+@@ -242,17 +277,18 @@ doPhysOpen(strm_t *pThis)
+ 	}
+ 
+ 	pThis->fd = open((char*)pThis->pszCurrFName, iFlags | O_LARGEFILE, pThis->tOpenMode);
++	const int errno_save = errno; /* dbgprintf can mangle it! */
+ 	DBGPRINTF("file '%s' opened as #%d with mode %d\n", pThis->pszCurrFName,
+ 		  pThis->fd, (int) pThis->tOpenMode);
+ 	if(pThis->fd == -1) {
+-		char errStr[1024];
+-		int err = errno;
+-		rs_strerror_r(err, errStr, sizeof(errStr));
+-		DBGOPRINT((obj_t*) pThis, "open error %d, file '%s': %s\n", errno, pThis->pszCurrFName, errStr);
+-		if(err == ENOENT)
+-			ABORT_FINALIZE(RS_RET_FILE_NOT_FOUND);
+-		else
+-			ABORT_FINALIZE(RS_RET_FILE_OPEN_ERROR);
++		const rsRetVal errcode = (errno_save == ENOENT)
++			? RS_RET_FILE_NOT_FOUND : RS_RET_FILE_OPEN_ERROR;
++		if(pThis->fileNotFoundError) {
++			LogError(errno_save, errcode, "file '%s': open error", pThis->pszCurrFName);
++		} else {
++			DBGPRINTF("file '%s': open error\n", pThis->pszCurrFName);
++		}
++		ABORT_FINALIZE(errcode);
+ 	}
+ 
+ 	if(pThis->tOperationsMode == STREAMMODE_READ) {
+@@ -344,6 +380,8 @@ static rsRetVal strmOpenFile(strm_t *pThis)
+ 
+ 	if(pThis->fd != -1)
+ 		ABORT_FINALIZE(RS_RET_OK);
++
++	free(pThis->pszCurrFName);
+ 	pThis->pszCurrFName = NULL; /* used to prevent mem leak in case of error */
+ 
+ 	if(pThis->pszFName == NULL)
+@@ -733,11 +771,11 @@ static rsRetVal strmUnreadChar(strm_t *pThis, uchar c)
+  * a line, but following lines that are indented are part of the same log entry
+  */
+ static rsRetVal
+-strmReadLine(strm_t *pThis, cstr_t **ppCStr, uint8_t mode, sbool bEscapeLF, uint32_t trimLineOverBytes)
++strmReadLine(strm_t *pThis, cstr_t **ppCStr, uint8_t mode, sbool bEscapeLF,
++	uint32_t trimLineOverBytes, int64 *const strtOffs)
+ {
+         uchar c;
+ 	uchar finished;
+-	rsRetVal readCharRet;
+         DEFiRet;
+ 
+         ASSERT(pThis != NULL);
+@@ -756,12 +794,7 @@ strmReadLine(strm_t *pThis, cstr_t **ppCStr, uint8_t mode, sbool bEscapeLF, uint
+         if(mode == 0) {
+ 		while(c != '\n') {
+ 			CHKiRet(cstrAppendChar(*ppCStr, c));
+-			readCharRet = strmReadChar(pThis, &c);
+-			if((readCharRet == RS_RET_TIMED_OUT) ||
+-			   (readCharRet == RS_RET_EOF) ) { /* end reached without \n? */
+-				CHKiRet(rsCStrConstructFromCStr(&pThis->prevLineSegment, *ppCStr));
+-                	}
+-                	CHKiRet(readCharRet);
++			CHKiRet(strmReadChar(pThis, &c));
+         	}
+ 		if (trimLineOverBytes > 0 && (uint32_t) cstrLen(*ppCStr) > trimLineOverBytes) {
+ 			/* Truncate long line at trimLineOverBytes position */
+@@ -850,12 +883,19 @@ strmReadLine(strm_t *pThis, cstr_t **ppCStr, uint8_t mode, sbool bEscapeLF, uint
+ 	}
+ 
+ finalize_it:
+-        if(iRet != RS_RET_OK && *ppCStr != NULL) {
+-		if(cstrLen(*ppCStr) > 0) {
+-		/* we may have an empty string in an unsuccsfull poll or after restart! */
+-			rsCStrConstructFromCStr(&pThis->prevLineSegment, *ppCStr);
++        if(iRet == RS_RET_OK) {
++		if(strtOffs != NULL) {
++			*strtOffs = pThis->strtOffs;
++		}
++		pThis->strtOffs = pThis->iCurrOffs; /* we are at begin of next line */
++	} else {
++		if(*ppCStr != NULL) {
++			if(cstrLen(*ppCStr) > 0) {
++			/* we may have an empty string in an unsuccessful poll or after restart! */
++				rsCStrConstructFromCStr(&pThis->prevLineSegment, *ppCStr);
++			}
++			cstrDestruct(ppCStr);
+ 		}
+-                cstrDestruct(ppCStr);
+ 	}
+ 
+         RETiRet;
+@@ -882,7 +922,8 @@ strmReadMultiLine_isTimedOut(const strm_t *const __restrict__ pThis)
+  * added 2015-05-12 rgerhards
+  */
+ rsRetVal
+-strmReadMultiLine(strm_t *pThis, cstr_t **ppCStr, regex_t *preg, const sbool bEscapeLF)
++strmReadMultiLine(strm_t *pThis, cstr_t **ppCStr, regex_t *preg, const sbool bEscapeLF,
++	int64 *const strtOffs)
+ {
+         uchar c;
+ 	uchar finished = 0;
+@@ -946,16 +987,24 @@ strmReadMultiLine(strm_t *pThis, cstr_t **ppCStr, regex_t *preg, const sbool bEs
+ 	} while(finished == 0);
+ 
+ finalize_it:
+-	if(   pThis->readTimeout
+-	   && (iRet != RS_RET_OK)
+-	   && (pThis->prevMsgSegment != NULL)
+-	   && (tCurr > pThis->lastRead + pThis->readTimeout)) {
+-		CHKiRet(rsCStrConstructFromCStr(ppCStr, pThis->prevMsgSegment));
+-		cstrDestruct(&pThis->prevMsgSegment);
+-		pThis->lastRead = tCurr;
+-		dbgprintf("stream: generated msg based on timeout: %s\n", cstrGetSzStrNoNULL(*ppCStr));
+-			FINALIZE;
+-		iRet = RS_RET_OK;
++	*strtOffs = pThis->strtOffs;
++	if(thisLine != NULL) {
++		cstrDestruct(&thisLine);
++	}
++	if(iRet == RS_RET_OK) {
++		pThis->strtOffs = pThis->iCurrOffs; /* we are at begin of next line */
++	} else {
++		if(   pThis->readTimeout
++		   && (pThis->prevMsgSegment != NULL)
++		   && (tCurr > pThis->lastRead + pThis->readTimeout)) {
++			CHKiRet(rsCStrConstructFromCStr(ppCStr, pThis->prevMsgSegment));
++			cstrDestruct(&pThis->prevMsgSegment);
++			pThis->lastRead = tCurr;
++			pThis->strtOffs = pThis->iCurrOffs; /* we are at begin of next line */
++			dbgprintf("stream: generated msg based on timeout: %s\n", cstrGetSzStrNoNULL(*ppCStr));
++			iRet = RS_RET_OK;
++			FINALIZE;
++		}
+ 	}
+         RETiRet;
+ }
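
The strtOffs bookkeeping threaded through this hunk records where the
current line or (multi-line) message begins, while iCurrOffs tracks the read
position; persisting both lets a restart resume at a record boundary instead
of mid-message. Schematically:

    /*
     *      strtOffs                      iCurrOffs
     *         v                              v
     *  ...\n<first line of pending msg>\n<secon
     *
     * on RS_RET_OK, strtOffs is advanced to iCurrOffs (begin of next record);
     * on failure it keeps pointing at the incomplete record for re-reading.
     */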
+@@ -974,7 +1023,10 @@ BEGINobjConstruct(strm) /* be sure to specify the object type also in END macro!
+ 	pThis->pszSizeLimitCmd = NULL;
+ 	pThis->prevLineSegment = NULL;
+ 	pThis->prevMsgSegment = NULL;
++	pThis->strtOffs = 0;
++	pThis->ignoringMsg = 0;
+ 	pThis->bPrevWasNL = 0;
++	pThis->fileNotFoundError = 1;
+ ENDobjConstruct(strm)
+ 
+ 
+@@ -1686,7 +1738,7 @@ static rsRetVal strmSeek(strm_t *pThis, off64_t offs)
+ 		DBGPRINTF("strmSeek: error %lld seeking to offset %lld\n", i, (long long) offs);
+ 		ABORT_FINALIZE(RS_RET_IO_ERROR);
+ 	}
+-	pThis->iCurrOffs = offs; /* we are now at *this* offset */
++	pThis->strtOffs = pThis->iCurrOffs = offs; /* we are now at *this* offset */
+ 	pThis->iBufPtr = 0; /* buffer invalidated */
+ 
+ finalize_it:
+@@ -1738,7 +1790,7 @@ strmMultiFileSeek(strm_t *pThis, unsigned int FNum, off64_t offs, off64_t *bytes
+ 	} else {
+ 		*bytesDel = 0;
+ 	}
+-	pThis->iCurrOffs = offs;
++	pThis->strtOffs = pThis->iCurrOffs = offs;
+ 
+ finalize_it:
+ 	RETiRet;
+@@ -1763,7 +1815,7 @@ static rsRetVal strmSeekCurrOffs(strm_t *pThis)
+ 
+ 	/* As the cryprov may use CBC or similiar things, we need to read skip data */
+ 	targetOffs = pThis->iCurrOffs;
+-	pThis->iCurrOffs = 0;
++	pThis->strtOffs = pThis->iCurrOffs = 0;
+ 	DBGOPRINT((obj_t*) pThis, "encrypted, doing skip read of %lld bytes\n",
+ 		(long long) targetOffs);
+ 	while(targetOffs != pThis->iCurrOffs) {
+@@ -1935,6 +1987,12 @@ static rsRetVal strmSetiMaxFiles(strm_t *pThis, int iNewVal)
+ 	return RS_RET_OK;
+ }
+ 
++static rsRetVal strmSetFileNotFoundError(strm_t *pThis, int pFileNotFoundError)
++{
++	pThis->fileNotFoundError = pFileNotFoundError;
++	return RS_RET_OK;
++}
++
+ 
+ /* set the stream's file prefix
+  * The passed-in string is duplicated. So if the caller does not need
+@@ -2076,6 +2134,9 @@ static rsRetVal strmSerialize(strm_t *pThis, strm_t *pStrm)
+ 	l = pThis->inode;
+ 	objSerializeSCALAR_VAR(pStrm, inode, INT64, l);
+ 
++	l = pThis->strtOffs;
++	objSerializeSCALAR_VAR(pStrm, strtOffs, INT64, l);
++
+ 	if(pThis->prevLineSegment != NULL) {
+ 		cstrFinalize(pThis->prevLineSegment);
+ 		objSerializePTR(pStrm, prevLineSegment, CSTR);
+@@ -2188,8 +2249,12 @@ static rsRetVal strmSetProperty(strm_t *pThis, var_t *pProp)
+ 		pThis->iCurrOffs = pProp->val.num;
+  	} else if(isProp("inode")) {
+ 		pThis->inode = (ino_t) pProp->val.num;
++ 	} else if(isProp("strtOffs")) {
++		pThis->strtOffs = pProp->val.num;
+  	} else if(isProp("iMaxFileSize")) {
+ 		CHKiRet(strmSetiMaxFileSize(pThis, pProp->val.num));
++ 	} else if(isProp("fileNotFoundError")) {
++		CHKiRet(strmSetFileNotFoundError(pThis, pProp->val.num));
+  	} else if(isProp("iMaxFiles")) {
+ 		CHKiRet(strmSetiMaxFiles(pThis, pProp->val.num));
+  	} else if(isProp("iFileNumDigits")) {
+@@ -2253,6 +2318,7 @@ CODESTARTobjQueryInterface(strm)
+ 	pIf->WriteChar = strmWriteChar;
+ 	pIf->WriteLong = strmWriteLong;
+ 	pIf->SetFName = strmSetFName;
++	pIf->SetFileNotFoundError = strmSetFileNotFoundError;
+ 	pIf->SetDir = strmSetDir;
+ 	pIf->Flush = strmFlush;
+ 	pIf->RecordBegin = strmRecordBegin;
+diff --git a/runtime/stream.h b/runtime/stream.h
+index 1eee34979db34620b82e6351111864645187b035..bcb81a14f60f9effa52fffa42d18d66c484ae86d 100644
+--- a/runtime/stream.h
++++ b/runtime/stream.h
+@@ -159,6 +159,10 @@ typedef struct strm_s {
+ 	sbool	bIsTTY;		/* is this a tty file? */
+ 	cstr_t *prevLineSegment; /* for ReadLine, previous, unprocessed part of file */
+ 	cstr_t *prevMsgSegment; /* for ReadMultiLine, previous, yet unprocessed part of msg */
++	int64 strtOffs;		/* start offset in file for current line/msg */
++	int fileNotFoundError;	/* should a missing file be reported as an error? */
++	int noRepeatedErrorOutput; /* if a file is missing, emit the error only once */
++	int ignoringMsg;
+ } strm_t;
+ 
+ 
+@@ -174,6 +178,7 @@ BEGINinterface(strm) /* name must also be changed in ENDinterface macro! */
+ 	rsRetVal (*Write)(strm_t *const pThis, const uchar *const pBuf, size_t lenBuf);
+ 	rsRetVal (*WriteChar)(strm_t *pThis, uchar c);
+ 	rsRetVal (*WriteLong)(strm_t *pThis, long i);
++	rsRetVal (*SetFileNotFoundError)(strm_t *pThis, int pFileNotFoundError);
+ 	rsRetVal (*SetFName)(strm_t *pThis, uchar *pszPrefix, size_t iLenPrefix);
+ 	rsRetVal (*SetDir)(strm_t *pThis, uchar *pszDir, size_t iLenDir);
+ 	rsRetVal (*Flush)(strm_t *pThis);
+@@ -198,7 +203,8 @@ BEGINinterface(strm) /* name must also be changed in ENDinterface macro! */
+ 	INTERFACEpropSetMeth(strm, iFlushInterval, int);
+ 	INTERFACEpropSetMeth(strm, pszSizeLimitCmd, uchar*);
+ 	/* v6 added */
+-	rsRetVal (*ReadLine)(strm_t *pThis, cstr_t **ppCStr, uint8_t mode, sbool bEscapeLF, uint32_t trimLineOverBytes);
++	rsRetVal (*ReadLine)(strm_t *pThis, cstr_t **ppCStr, uint8_t mode, sbool bEscapeLF,
++		uint32_t trimLineOverBytes, int64 *const strtOffs);
+ 	/* v7 added  2012-09-14 */
+ 	INTERFACEpropSetMeth(strm, bVeryReliableZip, int);
+ 	/* v8 added  2013-03-21 */
+@@ -207,19 +213,24 @@ BEGINinterface(strm) /* name must also be changed in ENDinterface macro! */
+ 	INTERFACEpropSetMeth(strm, cryprov, cryprov_if_t*);
+ 	INTERFACEpropSetMeth(strm, cryprovData, void*);
+ ENDinterface(strm)
+-#define strmCURR_IF_VERSION 12 /* increment whenever you change the interface structure! */
++#define strmCURR_IF_VERSION 13 /* increment whenever you change the interface structure! */
+ /* V10, 2013-09-10: added new parameter bEscapeLF, changed mode to uint8_t (rgerhards) */
+ /* V11, 2015-12-03: added new parameter bReopenOnTruncate */
+ /* V12, 2015-12-11: added new parameter trimLineOverBytes, changed mode to uint32_t */
++/* V13, 2017-09-06: added new parameter strtOffs to ReadLine() */
+ 
+ #define strmGetCurrFileNum(pStrm) ((pStrm)->iCurrFNum)
+ 
+ /* prototypes */
+ PROTOTYPEObjClassInit(strm);
+ rsRetVal strmMultiFileSeek(strm_t *pThis, unsigned int fileNum, off64_t offs, off64_t *bytesDel);
+-rsRetVal strmReadMultiLine(strm_t *pThis, cstr_t **ppCStr, regex_t *preg, sbool bEscapeLF);
++rsRetVal strmReadMultiLine(strm_t *pThis, cstr_t **ppCStr, regex_t *preg,
++	sbool bEscapeLF, int64 *const strtOffs);
+ int strmReadMultiLine_isTimedOut(const strm_t *const __restrict__ pThis);
+ void strmDebugOutBuf(const strm_t *const pThis);
+ void strmSetReadTimeout(strm_t *const __restrict__ pThis, const int val);
++const uchar * strmGetPrevLineSegment(strm_t *const pThis);
++const uchar * strmGetPrevMsgSegment(strm_t *const pThis);
++int strmGetPrevWasNL(const strm_t *const pThis);
+ 
+ #endif /* #ifndef STREAM_H_INCLUDED */
+
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1538372-imjournal-duplicates.patch b/SOURCES/rsyslog-8.24.0-rhbz1538372-imjournal-duplicates.patch
new file mode 100644
index 0000000..1788d6c
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1538372-imjournal-duplicates.patch
@@ -0,0 +1,370 @@
+From: Jiri Vymazal <jvymazal@redhat.com>
+Date: Wed, 25 Jul 2018 15:05:01 -0500
+
+Modification and merge of the patches below for RHEL consumers;
+also modified the journal invalidate/rotation handling to keep the
+possibility of continuing after a switch of the persistent journal.
+original:
+%From 3bede5ba768975c8b6fe3d1f3e11075910f52fdd Mon Sep 17 00:00:00 2001
+%From: Jiri Vymazal <jvymazal@redhat.com>
+%Date: Wed, 7 Mar 2018 11:57:29 +0100
+%Subject: [PATCH] Fetching cursor on readJournal() and simplified pollJournal()
+%
+%Fetching the journal cursor in persistJournalState() could cause us to
+%save an invalid cursor, leading to duplicated messages further on; now
+%we save it on each readJournal(), where we know the state is good.
+%This results in simplifying persistJournalState() a bit as well.
+%
+%pollJournal() is now cleaner and faster, correctly handles INVALIDATE
+%status from journald and is able to continue polling after journal
+%flush. Also reduced POLL_TIMEOUT a bit, as it caused rsyslog to exit
+%with an error in corner cases on some ppc machines when left at a full
+%second.
+plus
+%
+%From a99f9b4b42d261c384aee09306fc421df2cca7a5 Mon Sep 17 00:00:00 2001
+%From: Peter Portante <peter.a.portante@gmail.com>
+%Date: Wed, 24 Jan 2018 19:34:41 -0500
+%Subject: [PATCH] Proposed fix for handling journal correctly
+%
+%The fix is to immediately setup the inotify file descriptor via
+%`sd_journal_get_fd()` right after a journal open, and then
+%periodically call `sd_journal_process()` to give the client API
+%library a chance to detect deleted journal files on disk that need to
+%be closed so they can be properly erased by the file system.
+%
+%We remove the open/close dance and simplify that code as a result.
+%
+%Fixes issue #2436.
+and also:
+%From 27f96c84d34ee000fbb5d45b00233f2ec3cf2d8a Mon Sep 17 00:00:00 2001
+%From: Rainer Gerhards <rgerhards@adiscon.com>
+%Date: Tue, 24 Oct 2017 16:14:13 +0200
+%Subject: [PATCH] imjournal bugfix: do not disable itself on error
+%
+%If some function calls inside the main loop failed, imjournal exited
+%with an error code, actually disabling all logging from the journal.
+%This was probably never intended.
+%
+%This patch makes imjournal recover the situation instead.
+%
+%closes https://github.com/rsyslog/rsyslog/issues/1895
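+
+A rough sketch of the resulting main loop (illustrative only; the
+sd_journal_*() calls are the real systemd APIs, the rsyslog scaffolding
+is simplified here):
+
+    for (;;) {
+        int r = sd_journal_next(j);
+        if (r < 0) { tryRecover(); continue; }       /* journal error */
+        if (r == 0) {                                /* no new entries */
+            if (pollJournal() != RS_RET_OK)
+                tryRecover();
+            continue;
+        }
+        if (readjournal() != RS_RET_OK) { tryRecover(); continue; }
+        if ((++count % J_PROCESS_PERIOD) == 0)
+            sd_journal_process(j); /* let libsystemd reap rotated files */
+    }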
+---
+ plugins/imjournal/imjournal.c | 211 ++++++++++++++++++++++--------------------
+ 1 file changed, 110 insertions(+), 102 deletions(-)
+
+--- a/plugins/imjournal/imjournal.c
++++ b/plugins/imjournal/imjournal.c
+@@ -80,6 +80,7 @@ static struct configSettings_s {
+ 	int iDfltFacility;
+ 	int bUseJnlPID;
+ 	char *dfltTag;
++	int bWorkAroundJournalBug;
+ } cs;
+ 
+ static rsRetVal facilityHdlr(uchar **pp, void *pVal);
+@@ -95,6 +96,7 @@ static struct cnfparamdescr modpdescr[] = {
+ 	{ "defaultfacility", eCmdHdlrString, 0 },
+ 	{ "usepidfromsystem", eCmdHdlrBinary, 0 },
+ 	{ "defaulttag", eCmdHdlrGetWord, 0 },
++	{ "workaroundjournalbug", eCmdHdlrBinary, 0 }
+ };
+ static struct cnfparamblk modpblk =
+ 	{ CNFPARAMBLK_VERSION,
+@@ -114,6 +114,10 @@ /* module-global parameters */
+ static const char *pid_field_name;	/* read-only after startup */
+ static ratelimit_t *ratelimiter = NULL;
+ static sd_journal *j;
++static int j_inotify_fd;
++static char *last_cursor = NULL;
++
++#define J_PROCESS_PERIOD 1024  /* Call sd_journal_process() every 1,024 records */
+ 
+ static rsRetVal persistJournalState(void);
+ static rsRetVal loadJournalState(void);
+@@ -123,6 +127,14 @@ openJournal(sd_journal** jj)
+ 
+ 	if (sd_journal_open(jj, SD_JOURNAL_LOCAL_ONLY) < 0)
+ 		iRet = RS_RET_IO_ERROR;
++	int r;
++
++	if ((r = sd_journal_get_fd(*jj)) < 0) {
++		errmsg.LogError(-r, RS_RET_IO_ERROR, "imjournal: sd_journal_get_fd() failed");
++		iRet = RS_RET_IO_ERROR;
++	} else {
++		j_inotify_fd = r;
++	}
+ 	RETiRet;
+ }
+ 
+@@ -132,6 +144,7 @@ closeJournal(sd_journal** jj)
+ 		persistJournalState();
+ 	}
+ 	sd_journal_close(*jj);
++	j_inotify_fd = 0;
+ }
+ 
+ 
+@@ -262,6 +275,7 @@ readjournal(void)
+ 	char *message = NULL;
+ 	char *sys_iden = NULL;
+ 	char *sys_iden_help = NULL;
++	char *c = NULL;
+ 
+ 	const void *get;
+ 	const void *pidget;
+@@ -433,6 +437,15 @@ readjournal(void)
+ 		tv.tv_usec = timestamp % 1000000;
+ 	}
+ 
++	if (cs.bWorkAroundJournalBug) {
++		/* save journal cursor (at this point we can be sure it is valid) */
++		sd_journal_get_cursor(j, &c);
++		if (c) {
++			free(last_cursor);
++			last_cursor = c;
++		}
++	}
++
+ 	/* submit message */
+ 	enqMsg((uchar *)message, (uchar *) sys_iden_help, facility, severity, &tv, json, 0);
+ 
+@@ -413,44 +433,49 @@ persistJournalState (void)
+ 	DEFiRet;
+ 	FILE *sf; /* state file */
+ 	char tmp_sf[MAXFNAME];
+-	char *cursor;
+ 	int ret = 0;
+ 
+-	/* On success, sd_journal_get_cursor()  returns 1 in systemd
+-	   197 or older and 0 in systemd 198 or newer */
+-	if ((ret = sd_journal_get_cursor(j, &cursor)) >= 0) {
+-               /* we create a temporary name by adding a ".tmp"
+-                * suffix to the end of our state file's name
+-                */
+-               snprintf(tmp_sf, sizeof(tmp_sf), "%s.tmp", cs.stateFile);
+-               if ((sf = fopen(tmp_sf, "wb")) != NULL) {
+-			if (fprintf(sf, "%s", cursor) < 0) {
+-				iRet = RS_RET_IO_ERROR;
+-			}
+-			fclose(sf);
+-			free(cursor);
+-                       /* change the name of the file to the configured one */
+-                       if (iRet == RS_RET_OK && rename(tmp_sf, cs.stateFile) == -1) {
+-                               char errStr[256];
+-                               rs_strerror_r(errno, errStr, sizeof(errStr));
+-                               iRet = RS_RET_IO_ERROR;
+-                               errmsg.LogError(0, iRet, "rename() failed: "
+-                                       "'%s', new path: '%s'\n", errStr, cs.stateFile);
+-                       }
++	if (cs.bWorkAroundJournalBug) {
++		if (!last_cursor)
++			ABORT_FINALIZE(RS_RET_OK);
+ 
+-		} else {
+-			char errStr[256];
+-			rs_strerror_r(errno, errStr, sizeof(errStr));
+-			errmsg.LogError(0, RS_RET_FOPEN_FAILURE, "fopen() failed: "
+-				"'%s', path: '%s'\n", errStr, tmp_sf);
+-			iRet = RS_RET_FOPEN_FAILURE;
+-		}
+-	} else {
++	} else if ((ret = sd_journal_get_cursor(j, &last_cursor)) < 0) {
+ 		char errStr[256];
+ 		rs_strerror_r(-(ret), errStr, sizeof(errStr));
+ 		errmsg.LogError(0, RS_RET_ERR, "sd_journal_get_cursor() failed: '%s'\n", errStr);
+-		iRet = RS_RET_ERR;
++		ABORT_FINALIZE(RS_RET_ERR);
+ 	}
++	/* we create a temporary name by adding a ".tmp"
++	 * suffix to the end of our state file's name
++	 */
++	snprintf(tmp_sf, sizeof(tmp_sf), "%s.tmp", cs.stateFile);
++
++	sf = fopen(tmp_sf, "wb");
++	if (!sf) {
++		errmsg.LogError(errno, RS_RET_FOPEN_FAILURE, "imjournal: fopen() failed for path: '%s'", tmp_sf);
++		ABORT_FINALIZE(RS_RET_FOPEN_FAILURE);
++	}
++
++	ret = fputs(last_cursor, sf);
++	if (ret < 0) {
++		errmsg.LogError(errno, RS_RET_IO_ERROR, "imjournal: failed to save cursor to: '%s'", tmp_sf);
++		ret = fclose(sf);
++		ABORT_FINALIZE(RS_RET_IO_ERROR);
++	}
++
++	ret = fclose(sf);
++	if (ret < 0) {
++		errmsg.LogError(errno, RS_RET_IO_ERROR, "imjournal: fclose() failed for path: '%s'", tmp_sf);
++		ABORT_FINALIZE(RS_RET_IO_ERROR);
++	}
++
++	ret = rename(tmp_sf, cs.stateFile);
++	if (ret < 0) {
++		errmsg.LogError(errno, RS_RET_IO_ERROR, "imjournal: rename() failed for new path: '%s'", cs.stateFile);
++		ABORT_FINALIZE(RS_RET_IO_ERROR);
++	}
++
++finalize_it:
+ 	RETiRet;
+ }
+ 
+@@ -473,64 +473,26 @@
+  * except for the special handling of EINTR.
+  */
+ 
+-#define POLL_TIMEOUT 1000 /* timeout for poll is 1s */
++#define POLL_TIMEOUT 900000 /* timeout for poll is 900ms */
+ 
+ static rsRetVal
+ pollJournal(void)
+ {
+ 	DEFiRet;
+-	struct pollfd pollfd;
+-	int pr = 0;
+-	int jr = 0;
+-
+-	pollfd.fd = sd_journal_get_fd(j);
+-	pollfd.events = sd_journal_get_events(j);
+-	pr = poll(&pollfd, 1, POLL_TIMEOUT);
+-	if (pr == -1) {
+-		if (errno == EINTR) {
+-			/* EINTR is also received during termination
+-			 * so return now to check the term state.
+-			 */
+-			ABORT_FINALIZE(RS_RET_OK);
+-		} else {
+-			char errStr[256];
+-
+-			rs_strerror_r(errno, errStr, sizeof(errStr));
+-			errmsg.LogError(0, RS_RET_ERR,
+-				"poll() failed: '%s'", errStr);
+-			ABORT_FINALIZE(RS_RET_ERR);
+-		}
+-	}
++	int r;
+ 
++	r = sd_journal_wait(j, POLL_TIMEOUT);
+ 
+-	jr = sd_journal_process(j);
+-	
+-	if (pr == 1 && jr == SD_JOURNAL_INVALIDATE) {
+-		/* do not persist stateFile sd_journal_get_cursor will fail! */
+-		char* tmp = cs.stateFile;
+-		cs.stateFile = NULL;
++	if (r == SD_JOURNAL_INVALIDATE) {
+ 		closeJournal(&j);
+-		cs.stateFile = tmp;
+ 
+ 		iRet = openJournal(&j);
+-		if (iRet != RS_RET_OK) {
+-			char errStr[256];
+-			rs_strerror_r(errno, errStr, sizeof(errStr));
+-			errmsg.LogError(0, RS_RET_IO_ERROR,
+-				"sd_journal_open() failed: '%s'", errStr);
++		if (iRet != RS_RET_OK)
+ 			ABORT_FINALIZE(RS_RET_ERR);
+-		}
+ 
+-		if(cs.stateFile != NULL){
++		if (cs.stateFile)
+ 			iRet = loadJournalState();
+-		}
+-		LogMsg(0, RS_RET_OK, LOG_NOTICE, "imjournal: journal reloaded...");
+-	} else if (jr < 0) {
+-		char errStr[256];
+-		rs_strerror_r(errno, errStr, sizeof(errStr));
+-		errmsg.LogError(0, RS_RET_ERR,
+-			"sd_journal_process() failed: '%s'", errStr);
+-		ABORT_FINALIZE(RS_RET_ERR);
++		errmsg.LogMsg(0, RS_RET_OK, LOG_NOTICE, "imjournal: journal reloaded...");
+ 	}
+ 
+ finalize_it:
+@@ -631,8 +612,17 @@ loadJournalState(void)
+ 	RETiRet;
+ }
+ 
++static void
++tryRecover(void) {
++	errmsg.LogMsg(0, RS_RET_OK, LOG_INFO, "imjournal: trying to recover from unexpected "
++		"journal error");
++	closeJournal(&j);
++	srSleep(10, 0);	// do not hammer machine with too-frequent retries
++	openJournal(&j);
++}
++
+ BEGINrunInput
+-	int count = 0;
++	uint64_t count = 0;
+ CODESTARTrunInput
+ 	CHKiRet(ratelimitNew(&ratelimiter, "imjournal", NULL));
+ 	dbgprintf("imjournal: ratelimiting burst %d, interval %d\n", cs.ratelimitBurst,
+@@ -665,26 +655,38 @@ CODESTARTrunInput
+ 
+ 		r = sd_journal_next(j);
+ 		if (r < 0) {
+-			char errStr[256];
+-
+-			rs_strerror_r(errno, errStr, sizeof(errStr));
+-			errmsg.LogError(0, RS_RET_ERR,
+-				"sd_journal_next() failed: '%s'", errStr);
+-			ABORT_FINALIZE(RS_RET_ERR);
++			tryRecover();
++			continue;
+ 		}
+ 
+ 		if (r == 0) {
+ 			/* No new messages, wait for activity. */
+-			CHKiRet(pollJournal());
++			if (pollJournal() != RS_RET_OK) {
++				tryRecover();
++			}
+ 			continue;
+ 		}
+ 
+-		CHKiRet(readjournal());
++		if (readjournal() != RS_RET_OK) {
++			tryRecover();
++			continue;
++		}
++
++		count++;
++
++		if ((count % J_PROCESS_PERIOD) == 0) {
++			/* Give the journal a periodic chance to detect rotated journal files to be cleaned up. */
++			r = sd_journal_process(j);
++			if (r < 0) {
++				errmsg.LogError(-r, RS_RET_ERR, "imjournal: sd_journal_process() failed");
++				tryRecover();
++				continue;
++			}
++		}
++
+ 		if (cs.stateFile) { /* can't persist without a state file */
+ 			/* TODO: This could use some finer metric. */
+-			count++;
+-			if (count == cs.iPersistStateInterval) {
+-				count = 0;
++			if ((count % cs.iPersistStateInterval) == 0) {
+ 				persistJournalState();
+ 			}
+ 		}
+@@ -901,6 +909,8 @@ CODESTARTsetModCnf
+ 			cs.bUseJnlPID = (int) pvals[i].val.d.n;
+ 		} else if (!strcmp(modpblk.descr[i].name, "defaulttag")) {
+ 			cs.dfltTag = (char *)es_str2cstr(pvals[i].val.d.estr, NULL);
++		} else if (!strcmp(modpblk.descr[i].name, "workaroundjournalbug")) {
++			cs.bWorkAroundJournalBug = (int) pvals[i].val.d.n;
+ 		} else {
+ 			dbgprintf("imjournal: program error, non-handled "
+ 				"param '%s' in beginCnfLoad\n", modpblk.descr[i].name);
+@@ -961,6 +971,8 @@ CODEmodInit_QueryRegCFSLineHdlr
+ 		NULL, &cs.bUseJnlPID, STD_LOADABLE_MODULE_ID));
+ 	CHKiRet(omsdRegCFSLineHdlr((uchar *)"imjournaldefaulttag", 0, eCmdHdlrGetWord,
+ 		NULL, &cs.dfltTag, STD_LOADABLE_MODULE_ID));
++	CHKiRet(omsdRegCFSLineHdlr((uchar *)"workaroundjournalbug", 0, eCmdHdlrBinary,
++		NULL, &cs.bWorkAroundJournalBug, STD_LOADABLE_MODULE_ID));
+ ENDmodInit
+ /* vim:set ai:
+  */
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1539193-mmkubernetes-new-plugin.patch b/SOURCES/rsyslog-8.24.0-rhbz1539193-mmkubernetes-new-plugin.patch
new file mode 100644
index 0000000..f37557c
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1539193-mmkubernetes-new-plugin.patch
@@ -0,0 +1,1659 @@
+From: Jiri Vymazal <jvymazal@redhat.com>
+Date: Mon, 28 Jun 2018 12:07:55 +0100
+Subject: Kubernetes Metadata plugin - mmkubernetes
+
+This plugin is used to annotate records logged by Kubernetes containers.
+It will add the namespace UUID, pod UUID, pod and namespace labels and
+annotations, and other metadata associated with the pod and namespace.
+It will work with either log files in `/var/log/containers/*.log` or
+with journald entries with `CONTAINER_NAME` and `CONTAINER_ID_FULL`.
+
+For usage and configuration, see syslog-doc.
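+
+A minimal, illustrative configuration sketch (the parameter names are
+the module parameters defined in this patch; the URL shown is the
+module default, and the certificate/token file paths are assumptions):
+
+    module(load="mmkubernetes"
+           kubernetesurl="https://kubernetes.default.svc.cluster.local:443"
+           tls.cacert="/etc/rsyslog.d/k8s-ca.crt"
+           tokenfile="/etc/rsyslog.d/k8s-token")
+    action(type="mmkubernetes")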
+
+*Credits*
+
+This work is based on https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter
+and has many of the same features.
+
+(cherry picked from commit a6264bf8f91975c9bc0fc602dcdc6881486f1579)
+(cherry picked from commit b8e68366422052dca9e0a9409baa410f20ae88c8)
+
+(cherry picked from commit 77886e21292d8220f93b3404236da0e8f7159255)
+(cherry picked from commit e4d1c7b3832eedc8a1545c2ee6bf022f545d0c76)
+(cherry picked from commit 3d9f820642b0edc78da0b5bed818590dcd31fa9c)
+(cherry picked from commit 1d49aac5cb101704486bfb065fac362ca69f06bc)
+(cherry picked from commit fc2ad45f78dd666b8c9e706ad88c17aaff146d2d)
+(cherry picked from commit 8cf87f64f6c74a4544112ec7fddc5bf4d43319a7)
+---
+ Makefile.am                                      |    5 +
+ configure.ac                                     |   35 +
+ contrib/mmkubernetes/Makefile.am                 |    6 +
+ contrib/mmkubernetes/k8s_container_name.rulebase |    3 +
+ contrib/mmkubernetes/k8s_filename.rulebase       |    2 +
+ contrib/mmkubernetes/mmkubernetes.c              | 1491 +++++++++++++++++++++++
+ contrib/mmkubernetes/sample.conf                 |    7 +
+ 7 files changed, 1549 insertions(+)
+ create mode 100644 contrib/mmkubernetes/Makefile.am
+ create mode 100644 contrib/mmkubernetes/k8s_container_name.rulebase
+ create mode 100644 contrib/mmkubernetes/k8s_filename.rulebase
+ create mode 100644 contrib/mmkubernetes/mmkubernetes.c
+ create mode 100644 contrib/mmkubernetes/sample.conf
+
+diff --git a/Makefile.am b/Makefile.am
+index a276ef9ea..b58ebaf93 100644
+--- a/Makefile.am
++++ b/Makefile.am
+@@ -275,6 +275,11 @@ if ENABLE_OMTCL
+ SUBDIRS += contrib/omtcl
+ endif
+ 
++# mmkubernetes
++if ENABLE_MMKUBERNETES
++SUBDIRS += contrib/mmkubernetes
++endif
++
+ # tests are added as last element, because tests may need different
+ # modules that need to be generated first
+ SUBDIRS += tests
+diff --git a/configure.ac b/configure.ac
+index a9411f4be..c664222b9 100644
+--- a/configure.ac
++++ b/configure.ac
+@@ -1889,6 +1889,39 @@ AM_CONDITIONAL(ENABLE_OMTCL, test x$enable_omtcl = xyes)
+ 
+ # END TCL SUPPORT
+ 
++# mmkubernetes - Kubernetes metadata support
++
++AC_ARG_ENABLE(mmkubernetes,
++        [AS_HELP_STRING([--enable-mmkubernetes],
++            [Enable compilation of the mmkubernetes module @<:@default=no@:>@])],
++        [case "${enableval}" in
++         yes) enable_mmkubernetes="yes" ;;
++          no) enable_mmkubernetes="no" ;;
++           *) AC_MSG_ERROR(bad value ${enableval} for --enable-mmkubernetes) ;;
++         esac],
++        [enable_mmkubernetes=no]
++)
++if test "x$enable_mmkubernetes" = "xyes"; then
++        PKG_CHECK_MODULES([CURL], [libcurl])
++        PKG_CHECK_MODULES(LIBLOGNORM, lognorm >= 2.0.3)
++
++        save_CFLAGS="$CFLAGS"
++        save_LIBS="$LIBS"
++
++        CFLAGS="$CFLAGS $LIBLOGNORM_CFLAGS"
++        LIBS="$LIBS $LIBLOGNORM_LIBS"
++
++        AC_CHECK_FUNC([ln_loadSamplesFromString],
++                      [AC_DEFINE([HAVE_LOADSAMPLESFROMSTRING], [1], [Define if ln_loadSamplesFromString exists.])],
++                      [AC_DEFINE([NO_LOADSAMPLESFROMSTRING], [1], [Define if ln_loadSamplesFromString does not exist.])])
++
++        CFLAGS="$save_CFLAGS"
++        LIBS="$save_LIBS"
++fi
++AM_CONDITIONAL(ENABLE_MMKUBERNETES, test x$enable_mmkubernetes = xyes)
++
++# END Kubernetes metadata support
++
+ # man pages
+ AC_CHECKING([if required man pages already exist])
+ have_to_generate_man_pages="no"
+@@ -2016,6 +2035,7 @@ AC_CONFIG_FILES([Makefile \
+ 		contrib/omhttpfs/Makefile \
+ 		contrib/omamqp1/Makefile \
+ 		contrib/omtcl/Makefile \
++		contrib/mmkubernetes/Makefile \
+ 		tests/Makefile])
+ AC_OUTPUT
+ 
+@@ -2090,6 +2110,7 @@ echo "    mmrfc5424addhmac enabled:                 $enable_mmrfc5424addhmac"
+ echo "    mmpstrucdata enabled:                     $enable_mmpstrucdata"
+ echo "    mmsequence enabled:                       $enable_mmsequence"
+ echo "    mmdblookup enabled:                       $enable_mmdblookup"
++echo "    mmkubernetes enabled:                     $enable_mmkubernetes"
+ echo
+ echo "---{ database support }---"
+ echo "    MySql support enabled:                    $enable_mysql"
+diff --git a/contrib/mmkubernetes/Makefile.am b/contrib/mmkubernetes/Makefile.am
+new file mode 100644
+index 000000000..3dcc235a6
+--- /dev/null
++++ b/contrib/mmkubernetes/Makefile.am
+@@ -0,0 +1,6 @@
++pkglib_LTLIBRARIES = mmkubernetes.la
++
++mmkubernetes_la_SOURCES = mmkubernetes.c
++mmkubernetes_la_CPPFLAGS = $(RSRT_CFLAGS) $(PTHREADS_CFLAGS) $(CURL_CFLAGS) $(LIBLOGNORM_CFLAGS)
++mmkubernetes_la_LDFLAGS = -module -avoid-version
++mmkubernetes_la_LIBADD = $(CURL_LIBS) $(LIBLOGNORM_LIBS)
+diff --git a/contrib/mmkubernetes/k8s_container_name.rulebase b/contrib/mmkubernetes/k8s_container_name.rulebase
+new file mode 100644
+index 000000000..35fbb317c
+--- /dev/null
++++ b/contrib/mmkubernetes/k8s_container_name.rulebase
+@@ -0,0 +1,3 @@
++version=2
++rule=:%k8s_prefix:char-to:_%_%container_name:char-to:.%.%container_hash:char-to:_%_%pod_name:char-to:_%_%namespace_name:char-to:_%_%not_used_1:char-to:_%_%not_used_2:rest%
++rule=:%k8s_prefix:char-to:_%_%container_name:char-to:_%_%pod_name:char-to:_%_%namespace_name:char-to:_%_%not_used_1:char-to:_%_%not_used_2:rest%
+diff --git a/contrib/mmkubernetes/k8s_filename.rulebase b/contrib/mmkubernetes/k8s_filename.rulebase
+new file mode 100644
+index 000000000..24c0d9138
+--- /dev/null
++++ b/contrib/mmkubernetes/k8s_filename.rulebase
+@@ -0,0 +1,2 @@
++version=2
++rule=:/var/log/containers/%pod_name:char-to:_%_%namespace_name:char-to:_%_%container_name_and_id:char-to:.%.log
+diff --git a/contrib/mmkubernetes/mmkubernetes.c b/contrib/mmkubernetes/mmkubernetes.c
+new file mode 100644
+index 000000000..5012c54f6
+--- /dev/null
++++ b/contrib/mmkubernetes/mmkubernetes.c
+@@ -0,0 +1,1491 @@
++/* mmkubernetes.c
++ * This is a message modification module. It uses metadata obtained
++ * from the message to query Kubernetes and obtain additional metadata
++ * relating to the container instance.
++ *
++ * Inspired by:
++ * https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter
++ *
++ * NOTE: read comments in module-template.h for details on the calling interface!
++ *
++ * Copyright 2016 Red Hat Inc.
++ *
++ * This file is part of rsyslog.
++ *
++ * Licensed under the Apache License, Version 2.0 (the "License");
++ * you may not use this file except in compliance with the License.
++ * You may obtain a copy of the License at
++ *
++ *       http://www.apache.org/licenses/LICENSE-2.0
++ *       -or-
++ *       see COPYING.ASL20 in the source distribution
++ *
++ * Unless required by applicable law or agreed to in writing, software
++ * distributed under the License is distributed on an "AS IS" BASIS,
++ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
++ * See the License for the specific language governing permissions and
++ * limitations under the License.
++ */
++
++/* needed for asprintf */
++#ifndef _GNU_SOURCE
++#  define _GNU_SOURCE
++#endif
++
++#include "config.h"
++#include "rsyslog.h"
++#include <stdio.h>
++#include <stdarg.h>
++#include <stdlib.h>
++#include <string.h>
++#include <assert.h>
++#include <errno.h>
++#include <unistd.h>
++#include <sys/stat.h>
++#include <libestr.h>
++#include <liblognorm.h>
++#include <json.h>
++#include <curl/curl.h>
++#include <curl/easy.h>
++#include <pthread.h>
++#include "conf.h"
++#include "syslogd-types.h"
++#include "module-template.h"
++#include "errmsg.h"
++#include "regexp.h"
++#include "hashtable.h"
++#include "srUtils.h"
++
++/* static data */
++MODULE_TYPE_OUTPUT /* this is technically an output plugin */
++MODULE_TYPE_KEEP /* releasing the module would cause a leak through libcurl */
++MODULE_CNFNAME("mmkubernetes")
++DEF_OMOD_STATIC_DATA
++DEFobjCurrIf(errmsg)
++DEFobjCurrIf(regexp)
++
++#define HAVE_LOADSAMPLESFROMSTRING 1
++#if defined(NO_LOADSAMPLESFROMSTRING)
++#undef HAVE_LOADSAMPLESFROMSTRING
++#endif
++/* original from fluentd plugin:
++ * 'var\.log\.containers\.(?<pod_name>[a-z0-9]([-a-z0-9]*[a-z0-9])?\
++ *   (\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*)_(?<namespace>[^_]+)_\
++ *   (?<container_name>.+)-(?<docker_id>[a-z0-9]{64})\.log$'
++ * this is for _tag_ match, not actual filename match - in_tail turns filename
++ * into a fluentd tag
++ */
++#define DFLT_FILENAME_LNRULES "rule=:/var/log/containers/%pod_name:char-to:_%_"\
++	"%namespace_name:char-to:_%_%container_name:char-to:-%-%container_id:char-to:.%.log"
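++/* Illustrative example (hypothetical pod): the filename
++ *   /var/log/containers/mypod_default_app-0123abcdef.log
++ * parses into pod_name="mypod", namespace_name="default",
++ * container_name="app" and container_id="0123abcdef".
++ */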
++#define DFLT_FILENAME_RULEBASE "/etc/rsyslog.d/k8s_filename.rulebase"
++/* original from fluentd plugin:
++ *   '^(?<name_prefix>[^_]+)_(?<container_name>[^\._]+)\
++ *     (\.(?<container_hash>[^_]+))?_(?<pod_name>[^_]+)_\
++ *     (?<namespace>[^_]+)_[^_]+_[^_]+$'
++ */
++#define DFLT_CONTAINER_LNRULES "rule=:%k8s_prefix:char-to:_%_%container_name:char-to:.%."\
++	"%container_hash:char-to:_%_"\
++	"%pod_name:char-to:_%_%namespace_name:char-to:_%_%not_used_1:char-to:_%_%not_used_2:rest%\n"\
++	"rule=:%k8s_prefix:char-to:_%_%container_name:char-to:_%_"\
++	"%pod_name:char-to:_%_%namespace_name:char-to:_%_%not_used_1:char-to:_%_%not_used_2:rest%"
++#define DFLT_CONTAINER_RULEBASE "/etc/rsyslog.d/k8s_container_name.rulebase"
++#define DFLT_SRCMD_PATH "$!metadata!filename"
++#define DFLT_DSTMD_PATH "$!"
++#define DFLT_DE_DOT 1 /* true */
++#define DFLT_DE_DOT_SEPARATOR "_"
++#define DFLT_CONTAINER_NAME "$!CONTAINER_NAME" /* name of variable holding CONTAINER_NAME value */
++#define DFLT_CONTAINER_ID_FULL "$!CONTAINER_ID_FULL" /* name of variable holding CONTAINER_ID_FULL value */
++#define DFLT_KUBERNETES_URL "https://kubernetes.default.svc.cluster.local:443"
++
++static struct cache_s {
++	const uchar *kbUrl;
++	struct hashtable *mdHt;
++	struct hashtable *nsHt;
++	pthread_mutex_t *cacheMtx;
++} **caches;
++
++typedef struct {
++	int nmemb;
++	uchar **patterns;
++	regex_t *regexps;
++} annotation_match_t;
++
++/* module configuration data */
++struct modConfData_s {
++	rsconf_t *pConf;	/* our overall config object */
++	uchar *kubernetesUrl;	/* scheme, host, port, and optional path prefix for Kubernetes API lookups */
++	uchar *srcMetadataPath;	/* where to get data for kubernetes queries */
++	uchar *dstMetadataPath;	/* where to put metadata obtained from kubernetes */
++	uchar *caCertFile; /* File holding the CA cert (+optional chain) of CA that issued the Kubernetes server cert */
++	sbool allowUnsignedCerts; /* For testing/debugging - do not check for CA certs (CURLOPT_SSL_VERIFYPEER FALSE) */
++	uchar *token; /* The token value to use to authenticate to Kubernetes - takes precedence over tokenFile */
++	uchar *tokenFile; /* The file whose contents is the token value to use to authenticate to Kubernetes */
++	sbool de_dot; /* If true (default), convert '.' characters in labels & annotations to de_dot_separator */
++	uchar *de_dot_separator; /* separator character (default '_') to use for de_dotting */
++	size_t de_dot_separator_len; /* length of separator character */
++	annotation_match_t annotation_match; /* annotation keys must match these to be included in record */
++	char *fnRules; /* lognorm rules for container log filename match */
++	uchar *fnRulebase; /* lognorm rulebase filename for container log filename match */
++	char *contRules; /* lognorm rules for CONTAINER_NAME value match */
++	uchar *contRulebase; /* lognorm rulebase filename for CONTAINER_NAME value match */
++};
++
++/* action (instance) configuration data */
++typedef struct _instanceData {
++	uchar *kubernetesUrl;	/* scheme, host, port, and optional path prefix for Kubernetes API lookups */
++	msgPropDescr_t *srcMetadataDescr;	/* where to get data for kubernetes queries */
++	uchar *dstMetadataPath;	/* where to put metadata obtained from kubernetes */
++	uchar *caCertFile; /* File holding the CA cert (+optional chain) of CA that issued the Kubernetes server cert */
++	sbool allowUnsignedCerts; /* For testing/debugging - do not check for CA certs (CURLOPT_SSL_VERIFYPEER FALSE) */
++	uchar *token; /* The token value to use to authenticate to Kubernetes - takes precedence over tokenFile */
++	uchar *tokenFile; /* The file whose contents is the token value to use to authenticate to Kubernetes */
++	sbool de_dot; /* If true (default), convert '.' characters in labels & annotations to de_dot_separator */
++	uchar *de_dot_separator; /* separator character (default '_') to use for de_dotting */
++	size_t de_dot_separator_len; /* length of separator character */
++	annotation_match_t annotation_match; /* annotation keys must match these to be included in record */
++	char *fnRules; /* lognorm rules for container log filename match */
++	uchar *fnRulebase; /* lognorm rulebase filename for container log filename match */
++	ln_ctx fnCtxln;	/**< context to be used for liblognorm */
++	char *contRules; /* lognorm rules for CONTAINER_NAME value match */
++	uchar *contRulebase; /* lognorm rulebase filename for CONTAINER_NAME value match */
++	ln_ctx contCtxln;	/**< context to be used for liblognorm */
++	msgPropDescr_t *contNameDescr; /* CONTAINER_NAME field */
++	msgPropDescr_t *contIdFullDescr; /* CONTAINER_ID_FULL field */
++	struct cache_s *cache;
++} instanceData;
++
++typedef struct wrkrInstanceData {
++	instanceData *pData;
++	CURL *curlCtx;
++	struct curl_slist *curlHdr;
++	char *curlRply;
++	size_t curlRplyLen;
++} wrkrInstanceData_t;
++
++/* module parameters (v6 config format) */
++static struct cnfparamdescr modpdescr[] = {
++	{ "kubernetesurl", eCmdHdlrString, 0 },
++	{ "srcmetadatapath", eCmdHdlrString, 0 },
++	{ "dstmetadatapath", eCmdHdlrString, 0 },
++	{ "tls.cacert", eCmdHdlrString, 0 },
++	{ "allowunsignedcerts", eCmdHdlrBinary, 0 },
++	{ "token", eCmdHdlrString, 0 },
++	{ "tokenfile", eCmdHdlrString, 0 },
++	{ "annotation_match", eCmdHdlrArray, 0 },
++	{ "de_dot", eCmdHdlrBinary, 0 },
++	{ "de_dot_separator", eCmdHdlrString, 0 },
++	{ "filenamerulebase", eCmdHdlrString, 0 },
++	{ "containerrulebase", eCmdHdlrString, 0 }
++#if HAVE_LOADSAMPLESFROMSTRING == 1
++	,
++	{ "filenamerules", eCmdHdlrArray, 0 },
++	{ "containerrules", eCmdHdlrArray, 0 }
++#endif
++};
++static struct cnfparamblk modpblk = {
++	CNFPARAMBLK_VERSION,
++	sizeof(modpdescr)/sizeof(struct cnfparamdescr),
++	modpdescr
++};
++
++/* action (instance) parameters (v6 config format) */
++static struct cnfparamdescr actpdescr[] = {
++	{ "kubernetesurl", eCmdHdlrString, 0 },
++	{ "srcmetadatapath", eCmdHdlrString, 0 },
++	{ "dstmetadatapath", eCmdHdlrString, 0 },
++	{ "tls.cacert", eCmdHdlrString, 0 },
++	{ "allowunsignedcerts", eCmdHdlrBinary, 0 },
++	{ "token", eCmdHdlrString, 0 },
++	{ "tokenfile", eCmdHdlrString, 0 },
++	{ "annotation_match", eCmdHdlrArray, 0 },
++	{ "de_dot", eCmdHdlrBinary, 0 },
++	{ "de_dot_separator", eCmdHdlrString, 0 },
++	{ "filenamerulebase", eCmdHdlrString, 0 },
++	{ "containerrulebase", eCmdHdlrString, 0 }
++#if HAVE_LOADSAMPLESFROMSTRING == 1
++	,
++	{ "filenamerules", eCmdHdlrArray, 0 },
++	{ "containerrules", eCmdHdlrArray, 0 }
++#endif
++};
++static struct cnfparamblk actpblk =
++	{ CNFPARAMBLK_VERSION,
++	  sizeof(actpdescr)/sizeof(struct cnfparamdescr),
++	  actpdescr
++	};
++
++static modConfData_t *loadModConf = NULL;	/* modConf ptr to use for the current load process */
++static modConfData_t *runModConf = NULL;	/* modConf ptr to use for the current exec process */
++
++static void free_annotationmatch(annotation_match_t *match) {
++	if (match) {
++		for(int ii = 0 ; ii < match->nmemb; ++ii) {
++			if (match->patterns)
++				free(match->patterns[ii]);
++			if (match->regexps)
++				regexp.regfree(&match->regexps[ii]);
++		}
++		free(match->patterns);
++		match->patterns = NULL;
++		free(match->regexps);
++		match->regexps = NULL;
++		match->nmemb = 0;
++	}
++}
++
++static int init_annotationmatch(annotation_match_t *match, struct cnfarray *ar) {
++	DEFiRet;
++
++	match->nmemb = ar->nmemb;
++	CHKmalloc(match->patterns = calloc(sizeof(uchar*), match->nmemb));
++	CHKmalloc(match->regexps = calloc(sizeof(regex_t), match->nmemb));
++	for(int jj = 0; jj < ar->nmemb; ++jj) {
++		int rexret = 0;
++		match->patterns[jj] = (uchar*)es_str2cstr(ar->arr[jj], NULL);
++		rexret = regexp.regcomp(&match->regexps[jj],
++				(char *)match->patterns[jj], REG_EXTENDED|REG_NOSUB);
++		if (0 != rexret) {
++			char errMsg[512];
++			regexp.regerror(rexret, &match->regexps[jj], errMsg, sizeof(errMsg));
++			iRet = RS_RET_CONFIG_ERROR;
++			errmsg.LogError(0, iRet,
++					"error: could not compile annotation_match string [%s]"
++					" into an extended regexp - %d: %s\n",
++					match->patterns[jj], rexret, errMsg);
++			break;
++		}
++	}
++finalize_it:
++	if (iRet)
++		free_annotationmatch(match);
++	RETiRet;
++}
++
++static int copy_annotationmatch(annotation_match_t *src, annotation_match_t *dest) {
++	DEFiRet;
++
++	dest->nmemb = src->nmemb;
++	CHKmalloc(dest->patterns = malloc(sizeof(uchar*) * dest->nmemb));
++	CHKmalloc(dest->regexps = calloc(sizeof(regex_t), dest->nmemb));
++	for(int jj = 0 ; jj < src->nmemb ; ++jj) {
++		CHKmalloc(dest->patterns[jj] = (uchar*)strdup((char *)src->patterns[jj]));
++		/* assumes was already successfully compiled */
++		regexp.regcomp(&dest->regexps[jj], (char *)dest->patterns[jj], REG_EXTENDED|REG_NOSUB);
++	}
++finalize_it:
++	if (iRet)
++		free_annotationmatch(dest);
++	RETiRet;
++}
++
++/* takes a hash of annotations and returns another json object hash containing only the
++ * keys that match - this logic is taken directly from fluent-plugin-kubernetes_metadata_filter
++ * except that we do not add the key multiple times to the object to be returned
++ */
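++/* Illustrative example (hypothetical values): with
++ * annotation_match=["^kubernetes\\.io/"], the annotations hash
++ *   {"kubernetes.io/created-by": "...", "builder": "..."}
++ * is reduced to {"kubernetes.io/created-by": "..."}.
++ */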
++static struct json_object *match_annotations(annotation_match_t *match,
++		struct json_object *annotations) {
++	struct json_object *ret = NULL;
++
++	for (int jj = 0; jj < match->nmemb; ++jj) {
++		struct json_object_iterator it = json_object_iter_begin(annotations);
++		struct json_object_iterator itEnd = json_object_iter_end(annotations);
++		for (;!json_object_iter_equal(&it, &itEnd); json_object_iter_next(&it)) {
++			const char *const key = json_object_iter_peek_name(&it);
++			if (!ret || !fjson_object_object_get_ex(ret, key, NULL)) {
++				if (!regexp.regexec(&match->regexps[jj], key, 0, NULL, 0)) {
++					if (!ret) {
++						ret = json_object_new_object();
++					}
++					json_object_object_add(ret, key,
++						json_object_get(json_object_iter_peek_value(&it)));
++				}
++			}
++		}
++	}
++	return ret;
++}
++
++/* This will take a hash of labels or annotations and will de_dot the keys.
++ * It will return a brand new hash.  AFAICT, there is no safe way to
++ * iterate over the hash while modifying it in place.
++ */
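++/* Illustrative example: with the default separator "_", the key
++ * "app.kubernetes.io/name" becomes "app_kubernetes_io/name".
++ */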
++static struct json_object *de_dot_json_object(struct json_object *jobj,
++		const char *delim, size_t delim_len) {
++	struct json_object *ret = NULL;
++	struct json_object_iterator it = json_object_iter_begin(jobj);
++	struct json_object_iterator itEnd = json_object_iter_end(jobj);
++	es_str_t *new_es_key = NULL;
++	DEFiRet;
++
++	ret = json_object_new_object();
++	while (!json_object_iter_equal(&it, &itEnd)) {
++		const char *const key = json_object_iter_peek_name(&it);
++		const char *cc = strstr(key, ".");
++		if (NULL == cc) {
++			json_object_object_add(ret, key,
++					json_object_get(json_object_iter_peek_value(&it)));
++		} else {
++			char *new_key = NULL;
++			const char *prevcc = key;
++			new_es_key = es_newStrFromCStr(key, (es_size_t)(cc-prevcc));
++			while (cc) {
++				if (es_addBuf(&new_es_key, (char *)delim, (es_size_t)delim_len))
++					ABORT_FINALIZE(RS_RET_OUT_OF_MEMORY);
++				cc += 1; /* one past . */
++				prevcc = cc; /* beginning of next substring */
++				if ((cc = strstr(prevcc, ".")) || (cc = strchr(prevcc, '\0'))) {
++					if (es_addBuf(&new_es_key, (char *)prevcc, (es_size_t)(cc-prevcc)))
++						ABORT_FINALIZE(RS_RET_OUT_OF_MEMORY);
++					if (!*cc)
++						cc = NULL; /* EOS - done */
++				}
++			}
++			new_key = es_str2cstr(new_es_key, NULL);
++			es_deleteStr(new_es_key);
++			new_es_key = NULL;
++			json_object_object_add(ret, new_key,
++					json_object_get(json_object_iter_peek_value(&it)));
++			free(new_key);
++		}
++		json_object_iter_next(&it);
++	}
++finalize_it:
++	if (iRet != RS_RET_OK) {
++		json_object_put(ret);
++		ret = NULL;
++	}
++	if (new_es_key)
++		es_deleteStr(new_es_key);
++	return ret;
++}
++
++/* given a "metadata" object field, do
++ * - make sure "annotations" field has only the matching keys
++ * - de_dot the "labels" and "annotations" fields keys
++ * This modifies the jMetadata object in place
++ */
++static void parse_labels_annotations(struct json_object *jMetadata,
++		annotation_match_t *match, sbool de_dot,
++		const char *delim, size_t delim_len) {
++	struct json_object *jo = NULL;
++
++	if (fjson_object_object_get_ex(jMetadata, "annotations", &jo)) {
++		if ((jo = match_annotations(match, jo)))
++			json_object_object_add(jMetadata, "annotations", jo);
++		else
++			json_object_object_del(jMetadata, "annotations");
++	}
++	/* dedot labels and annotations */
++	if (de_dot) {
++		struct json_object *jo2 = NULL;
++		if (fjson_object_object_get_ex(jMetadata, "annotations", &jo)) {
++			if ((jo2 = de_dot_json_object(jo, delim, delim_len))) {
++				json_object_object_add(jMetadata, "annotations", jo2);
++			}
++		}
++		if (fjson_object_object_get_ex(jMetadata, "labels", &jo)) {
++			if ((jo2 = de_dot_json_object(jo, delim, delim_len))) {
++				json_object_object_add(jMetadata, "labels", jo2);
++			}
++		}
++	}
++}
++
++#if HAVE_LOADSAMPLESFROMSTRING == 1
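++/* Joins the members of a config array into one newline-separated rule
++ * string suitable for ln_loadSamplesFromString(); leaves *rules NULL
++ * for an empty array.
++ */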
++static int array_to_rules(struct cnfarray *ar, char **rules) {
++	DEFiRet;
++	es_str_t *tmpstr = NULL;
++	es_size_t size = 0;
++
++	if (rules == NULL)
++		FINALIZE;
++	*rules = NULL;
++	if (!ar->nmemb)
++		FINALIZE;
++	for (int jj = 0; jj < ar->nmemb; jj++)
++		size += es_strlen(ar->arr[jj]);
++	if (!size)
++		FINALIZE;
++	CHKmalloc(tmpstr = es_newStr(size));
++	CHKiRet((es_addStr(&tmpstr, ar->arr[0])));
++	CHKiRet((es_addBufConstcstr(&tmpstr, "\n")));
++	for(int jj=1; jj < ar->nmemb; ++jj) {
++		CHKiRet((es_addStr(&tmpstr, ar->arr[jj])));
++		CHKiRet((es_addBufConstcstr(&tmpstr, "\n")));
++	}
++	CHKiRet((es_addBufConstcstr(&tmpstr, "\0")));
++	CHKmalloc(*rules = es_str2cstr(tmpstr, NULL));
++finalize_it:
++	if (tmpstr) {
++		es_deleteStr(tmpstr);
++	}
++	if (iRet != RS_RET_OK) {
++		free(*rules);
++		*rules = NULL;
++	}
++	RETiRet;
++}
++#endif
++
++/* callback for liblognorm error messages */
++static void
++errCallBack(void __attribute__((unused)) *cookie, const char *msg,
++	    size_t __attribute__((unused)) lenMsg)
++{
++	errmsg.LogError(0, RS_RET_ERR_LIBLOGNORM, "liblognorm error: %s", msg);
++}
++
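++/* Initialize a liblognorm context, preferring instance-level rules over
++ * module-level ones, and inline rule strings over rulebase files.
++ */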
++static rsRetVal
++set_lnctx(ln_ctx *ctxln, char *instRules, uchar *instRulebase, char *modRules, uchar *modRulebase)
++{
++	DEFiRet;
++	if (ctxln == NULL)
++		FINALIZE;
++	CHKmalloc(*ctxln = ln_initCtx());
++	ln_setErrMsgCB(*ctxln, errCallBack, NULL);
++	if(instRules) {
++#if HAVE_LOADSAMPLESFROMSTRING == 1
++		if(ln_loadSamplesFromString(*ctxln, instRules) !=0) {
++			errmsg.LogError(0, RS_RET_NO_RULEBASE, "error: normalization rules '%s' "
++					"could not be loaded", instRules);
++			ABORT_FINALIZE(RS_RET_ERR_LIBLOGNORM_SAMPDB_LOAD);
++		}
++#else
++		(void)instRules;
++#endif
++	} else if(instRulebase) {
++		if(ln_loadSamples(*ctxln, (char*) instRulebase) != 0) {
++			errmsg.LogError(0, RS_RET_NO_RULEBASE, "error: normalization rulebase '%s' "
++					"could not be loaded", instRulebase);
++			ABORT_FINALIZE(RS_RET_ERR_LIBLOGNORM_SAMPDB_LOAD);
++		}
++	} else if(modRules) {
++#if HAVE_LOADSAMPLESFROMSTRING == 1
++		if(ln_loadSamplesFromString(*ctxln, modRules) !=0) {
++			errmsg.LogError(0, RS_RET_NO_RULEBASE, "error: normalization rules '%s' "
++					"could not be loaded", modRules);
++			ABORT_FINALIZE(RS_RET_ERR_LIBLOGNORM_SAMPDB_LOAD);
++		}
++#else
++		(void)modRules;
++#endif
++	} else if(modRulebase) {
++		if(ln_loadSamples(*ctxln, (char*) modRulebase) != 0) {
++			errmsg.LogError(0, RS_RET_NO_RULEBASE, "error: normalization rulebase '%s' "
++					"could not be loaded", modRulebase);
++			ABORT_FINALIZE(RS_RET_ERR_LIBLOGNORM_SAMPDB_LOAD);
++		}
++	}
++finalize_it:
++	if (iRet != RS_RET_OK){
++		ln_exitCtx(*ctxln);
++		*ctxln = NULL;
++	}
++	RETiRet;
++}
++
++BEGINbeginCnfLoad
++CODESTARTbeginCnfLoad
++	loadModConf = pModConf;
++	pModConf->pConf = pConf;
++ENDbeginCnfLoad
++
++
++BEGINsetModCnf
++	struct cnfparamvals *pvals = NULL;
++	int i;
++	FILE *fp;
++	int ret;
++CODESTARTsetModCnf
++	pvals = nvlstGetParams(lst, &modpblk, NULL);
++	if(pvals == NULL) {
++		errmsg.LogError(0, RS_RET_MISSING_CNFPARAMS, "mmkubernetes: "
++			"error processing module config parameters [module(...)]");
++		ABORT_FINALIZE(RS_RET_MISSING_CNFPARAMS);
++	}
++
++	if(Debug) {
++		dbgprintf("module (global) param blk for mmkubernetes:\n");
++		cnfparamsPrint(&modpblk, pvals);
++	}
++
++	loadModConf->de_dot = DFLT_DE_DOT;
++	for(i = 0 ; i < modpblk.nParams ; ++i) {
++		if(!pvals[i].bUsed) {
++			continue;
++		} else if(!strcmp(modpblk.descr[i].name, "kubernetesurl")) {
++			free(loadModConf->kubernetesUrl);
++			loadModConf->kubernetesUrl = (uchar *) es_str2cstr(pvals[i].val.d.estr, NULL);
++		} else if(!strcmp(modpblk.descr[i].name, "srcmetadatapath")) {
++			free(loadModConf->srcMetadataPath);
++			loadModConf->srcMetadataPath = (uchar *) es_str2cstr(pvals[i].val.d.estr, NULL);
++			/* todo: sanitize the path */
++		} else if(!strcmp(modpblk.descr[i].name, "dstmetadatapath")) {
++			free(loadModConf->dstMetadataPath);
++			loadModConf->dstMetadataPath = (uchar *) es_str2cstr(pvals[i].val.d.estr, NULL);
++			/* todo: sanitize the path */
++		} else if(!strcmp(modpblk.descr[i].name, "tls.cacert")) {
++			free(loadModConf->caCertFile);
++			loadModConf->caCertFile = (uchar *) es_str2cstr(pvals[i].val.d.estr, NULL);
++			fp = fopen((const char*)loadModConf->caCertFile, "r");
++			if(fp == NULL) {
++				char errStr[1024];
++				rs_strerror_r(errno, errStr, sizeof(errStr));
++				iRet = RS_RET_NO_FILE_ACCESS;
++				errmsg.LogError(0, iRet,
++						"error: certificate file %s couldn't be accessed: %s\n",
++						loadModConf->caCertFile, errStr);
++				ABORT_FINALIZE(iRet);
++			} else {
++				fclose(fp);
++			}
++		} else if(!strcmp(modpblk.descr[i].name, "allowunsignedcerts")) {
++			loadModConf->allowUnsignedCerts = pvals[i].val.d.n;
++		} else if(!strcmp(modpblk.descr[i].name, "token")) {
++			free(loadModConf->token);
++			loadModConf->token = (uchar *) es_str2cstr(pvals[i].val.d.estr, NULL);
++		} else if(!strcmp(modpblk.descr[i].name, "tokenfile")) {
++			free(loadModConf->tokenFile);
++			loadModConf->tokenFile = (uchar *) es_str2cstr(pvals[i].val.d.estr, NULL);
++			fp = fopen((const char*)loadModConf->tokenFile, "r");
++			if(fp == NULL) {
++				char errStr[1024];
++				rs_strerror_r(errno, errStr, sizeof(errStr));
++				iRet = RS_RET_NO_FILE_ACCESS;
++				errmsg.LogError(0, iRet,
++						"error: token file %s couldn't be accessed: %s\n",
++						loadModConf->tokenFile, errStr);
++				ABORT_FINALIZE(iRet);
++			} else {
++				fclose(fp);
++			}
++		} else if(!strcmp(modpblk.descr[i].name, "annotation_match")) {
++			free_annotationmatch(&loadModConf->annotation_match);
++			if ((ret = init_annotationmatch(&loadModConf->annotation_match, pvals[i].val.d.ar)))
++				ABORT_FINALIZE(ret);
++		} else if(!strcmp(modpblk.descr[i].name, "de_dot")) {
++			loadModConf->de_dot = pvals[i].val.d.n;
++		} else if(!strcmp(modpblk.descr[i].name, "de_dot_separator")) {
++			free(loadModConf->de_dot_separator);
++			loadModConf->de_dot_separator = (uchar *) es_str2cstr(pvals[i].val.d.estr, NULL);
++#if HAVE_LOADSAMPLESFROMSTRING == 1
++		} else if(!strcmp(modpblk.descr[i].name, "filenamerules")) {
++			free(loadModConf->fnRules);
++			CHKiRet((array_to_rules(pvals[i].val.d.ar, &loadModConf->fnRules)));
++#endif
++		} else if(!strcmp(modpblk.descr[i].name, "filenamerulebase")) {
++			free(loadModConf->fnRulebase);
++			loadModConf->fnRulebase = (uchar *) es_str2cstr(pvals[i].val.d.estr, NULL);
++			fp = fopen((const char*)loadModConf->fnRulebase, "r");
++			if(fp == NULL) {
++				char errStr[1024];
++				rs_strerror_r(errno, errStr, sizeof(errStr));
++				iRet = RS_RET_NO_FILE_ACCESS;
++				errmsg.LogError(0, iRet,
++						"error: filenamerulebase file %s couldn't be accessed: %s\n",
++						loadModConf->fnRulebase, errStr);
++				ABORT_FINALIZE(iRet);
++			} else {
++				fclose(fp);
++			}
++#if HAVE_LOADSAMPLESFROMSTRING == 1
++		} else if(!strcmp(modpblk.descr[i].name, "containerrules")) {
++			free(loadModConf->contRules);
++			CHKiRet((array_to_rules(pvals[i].val.d.ar, &loadModConf->contRules)));
++#endif
++		} else if(!strcmp(modpblk.descr[i].name, "containerrulebase")) {
++			free(loadModConf->contRulebase);
++			loadModConf->contRulebase = (uchar *) es_str2cstr(pvals[i].val.d.estr, NULL);
++			fp = fopen((const char*)loadModConf->contRulebase, "r");
++			if(fp == NULL) {
++				char errStr[1024];
++				rs_strerror_r(errno, errStr, sizeof(errStr));
++				iRet = RS_RET_NO_FILE_ACCESS;
++				errmsg.LogError(0, iRet,
++						"error: containerrulebase file %s couldn't be accessed: %s\n",
++						loadModConf->contRulebase, errStr);
++				ABORT_FINALIZE(iRet);
++			} else {
++				fclose(fp);
++			}
++		} else {
++			dbgprintf("mmkubernetes: program error, non-handled "
++				"param '%s' in module() block\n", modpblk.descr[i].name);
++			/* todo: error message? */
++		}
++	}
++
++#if HAVE_LOADSAMPLESFROMSTRING == 1
++	if (loadModConf->fnRules && loadModConf->fnRulebase) {
++		errmsg.LogError(0, RS_RET_CONFIG_ERROR,
++				"mmkubernetes: only 1 of filenamerules or filenamerulebase may be used");
++		ABORT_FINALIZE(RS_RET_CONFIG_ERROR);
++	}
++	if (loadModConf->contRules && loadModConf->contRulebase) {
++		errmsg.LogError(0, RS_RET_CONFIG_ERROR,
++				"mmkubernetes: only 1 of containerrules or containerrulebase may be used");
++		ABORT_FINALIZE(RS_RET_CONFIG_ERROR);
++	}
++#endif
++
++	/* set defaults */
++	if(loadModConf->srcMetadataPath == NULL)
++		loadModConf->srcMetadataPath = (uchar *) strdup(DFLT_SRCMD_PATH);
++	if(loadModConf->dstMetadataPath == NULL)
++		loadModConf->dstMetadataPath = (uchar *) strdup(DFLT_DSTMD_PATH);
++	if(loadModConf->de_dot_separator == NULL)
++		loadModConf->de_dot_separator = (uchar *) strdup(DFLT_DE_DOT_SEPARATOR);
++	if(loadModConf->de_dot_separator)
++		loadModConf->de_dot_separator_len = strlen((const char *)loadModConf->de_dot_separator);
++#if HAVE_LOADSAMPLESFROMSTRING == 1
++	if (loadModConf->fnRules == NULL && loadModConf->fnRulebase == NULL)
++		loadModConf->fnRules = strdup(DFLT_FILENAME_LNRULES);
++	if (loadModConf->contRules == NULL && loadModConf->contRulebase == NULL)
++		loadModConf->contRules = strdup(DFLT_CONTAINER_LNRULES);
++#else
++	if (loadModConf->fnRulebase == NULL)
++		loadModConf->fnRulebase = (uchar *)strdup(DFLT_FILENAME_RULEBASE);
++	if (loadModConf->contRulebase == NULL)
++		loadModConf->contRulebase = (uchar *)strdup(DFLT_CONTAINER_RULEBASE);
++#endif
++	caches = calloc(1, sizeof(struct cache_s *));
++
++finalize_it:
++	if(pvals != NULL)
++		cnfparamvalsDestruct(pvals, &modpblk);
++ENDsetModCnf
++
++
++BEGINcreateInstance
++CODESTARTcreateInstance
++ENDcreateInstance
++
++
++BEGINfreeInstance
++CODESTARTfreeInstance
++	free(pData->kubernetesUrl);
++	msgPropDescrDestruct(pData->srcMetadataDescr);
++	free(pData->srcMetadataDescr);
++	free(pData->dstMetadataPath);
++	free(pData->caCertFile);
++	free(pData->token);
++	free(pData->tokenFile);
++	free(pData->fnRules);
++	free(pData->fnRulebase);
++	ln_exitCtx(pData->fnCtxln);
++	free(pData->contRules);
++	free(pData->contRulebase);
++	ln_exitCtx(pData->contCtxln);
++	free_annotationmatch(&pData->annotation_match);
++	free(pData->de_dot_separator);
++	msgPropDescrDestruct(pData->contNameDescr);
++	free(pData->contNameDescr);
++	msgPropDescrDestruct(pData->contIdFullDescr);
++	free(pData->contIdFullDescr);
++ENDfreeInstance
++
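++/* libcurl write callback: appends each received chunk to the worker's
++ * reply buffer (pWrkrData->curlRply), growing it as needed.
++ */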
++static size_t curlCB(char *data, size_t size, size_t nmemb, void *usrptr)
++{
++	DEFiRet;
++	wrkrInstanceData_t *pWrkrData = (wrkrInstanceData_t *) usrptr;
++	char * buf;
++	size_t newlen;
++
++	newlen = pWrkrData->curlRplyLen + size * nmemb;
++	CHKmalloc(buf = realloc(pWrkrData->curlRply, newlen));
++	memcpy(buf + pWrkrData->curlRplyLen, data, size * nmemb);
++	pWrkrData->curlRply = buf;
++	pWrkrData->curlRplyLen = newlen;
++
++finalize_it:
++	if (iRet != RS_RET_OK) {
++		return 0;
++	}
++	return size * nmemb;
++}
++
++BEGINcreateWrkrInstance
++CODESTARTcreateWrkrInstance
++	CURL *ctx;
++	struct curl_slist *hdr = NULL;
++	char *tokenHdr = NULL;
++	FILE *fp = NULL;
++	char *token = NULL;
++
++	hdr = curl_slist_append(hdr, "Content-Type: text/json; charset=utf-8");
++	if (pWrkrData->pData->token) {
++		if ((-1 == asprintf(&tokenHdr, "Authorization: Bearer %s", pWrkrData->pData->token)) ||
++			(!tokenHdr)) {
++			ABORT_FINALIZE(RS_RET_OUT_OF_MEMORY);
++		}
++	} else if (pWrkrData->pData->tokenFile) {
++		struct stat statbuf;
++		fp = fopen((const char*)pWrkrData->pData->tokenFile, "r");
++		if (fp && !fstat(fileno(fp), &statbuf)) {
++			size_t bytesread;
++			CHKmalloc(token = malloc((statbuf.st_size+1)*sizeof(char)));
++			if (0 < (bytesread = fread(token, sizeof(char), statbuf.st_size, fp))) {
++				token[bytesread] = '\0';
++				if ((-1 == asprintf(&tokenHdr, "Authorization: Bearer %s", token)) ||
++					(!tokenHdr)) {
++					ABORT_FINALIZE(RS_RET_OUT_OF_MEMORY);
++				}
++			}
++			free(token);
++			token = NULL;
++		}
++		if (fp) {
++			fclose(fp);
++			fp = NULL;
++		}
++	}
++	if (tokenHdr) {
++		hdr = curl_slist_append(hdr, tokenHdr);
++		free(tokenHdr);
++	}
++	pWrkrData->curlHdr = hdr;
++	ctx = curl_easy_init();
++	curl_easy_setopt(ctx, CURLOPT_HTTPHEADER, hdr);
++	curl_easy_setopt(ctx, CURLOPT_WRITEFUNCTION, curlCB);
++	curl_easy_setopt(ctx, CURLOPT_WRITEDATA, pWrkrData);
++	if(pWrkrData->pData->caCertFile)
++		curl_easy_setopt(ctx, CURLOPT_CAINFO, pWrkrData->pData->caCertFile);
++	if(pWrkrData->pData->allowUnsignedCerts)
++		curl_easy_setopt(ctx, CURLOPT_SSL_VERIFYPEER, 0);
++
++	pWrkrData->curlCtx = ctx;
++finalize_it:
++	free(token);
++	if (fp) {
++		fclose(fp);
++	}
++ENDcreateWrkrInstance
++
++
++BEGINfreeWrkrInstance
++CODESTARTfreeWrkrInstance
++	curl_easy_cleanup(pWrkrData->curlCtx);
++	curl_slist_free_all(pWrkrData->curlHdr);
++ENDfreeWrkrInstance
++
++
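++/* Allocate a per-URL metadata cache: hashtables for pod and namespace
++ * metadata plus a mutex, since the cache may be shared by workers.
++ */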
++static struct cache_s *cacheNew(const uchar *url)
++{
++	struct cache_s *cache;
++
++	if (NULL == (cache = calloc(1, sizeof(struct cache_s)))) {
++		FINALIZE;
++	}
++	cache->kbUrl = url;
++	cache->mdHt = create_hashtable(100, hash_from_string,
++		key_equals_string, (void (*)(void *)) json_object_put);
++	cache->nsHt = create_hashtable(100, hash_from_string,
++		key_equals_string, (void (*)(void *)) json_object_put);
++	cache->cacheMtx = malloc(sizeof(pthread_mutex_t));
++	if (!cache->mdHt || !cache->nsHt || !cache->cacheMtx) {
++		free (cache);
++		cache = NULL;
++		FINALIZE;
++	}
++	pthread_mutex_init(cache->cacheMtx, NULL);
++
++finalize_it:
++	return cache;
++}
++
++
++static void cacheFree(struct cache_s *cache)
++{
++	hashtable_destroy(cache->mdHt, 1);
++	hashtable_destroy(cache->nsHt, 1);
++	pthread_mutex_destroy(cache->cacheMtx);
++	free(cache->cacheMtx);
++	free(cache);
++}
++
++
++BEGINnewActInst
++	struct cnfparamvals *pvals = NULL;
++	int i;
++	FILE *fp;
++	char *rxstr = NULL;
++	char *srcMetadataPath = NULL;
++CODESTARTnewActInst
++	DBGPRINTF("newActInst (mmkubernetes)\n");
++
++	pvals = nvlstGetParams(lst, &actpblk, NULL);
++	if(pvals == NULL) {
++		errmsg.LogError(0, RS_RET_MISSING_CNFPARAMS, "mmkubernetes: "
++			"error processing config parameters [action(...)]");
++		ABORT_FINALIZE(RS_RET_MISSING_CNFPARAMS);
++	}
++
++	if(Debug) {
++		dbgprintf("action param blk in mmkubernetes:\n");
++		cnfparamsPrint(&actpblk, pvals);
++	}
++
++	CODE_STD_STRING_REQUESTnewActInst(1)
++	CHKiRet(OMSRsetEntry(*ppOMSR, 0, NULL, OMSR_TPL_AS_MSG));
++	CHKiRet(createInstance(&pData));
++
++	pData->de_dot = loadModConf->de_dot;
++	pData->allowUnsignedCerts = loadModConf->allowUnsignedCerts;
++	for(i = 0 ; i < actpblk.nParams ; ++i) {
++		if(!pvals[i].bUsed) {
++			continue;
++		} else if(!strcmp(actpblk.descr[i].name, "kubernetesurl")) {
++			free(pData->kubernetesUrl);
++			pData->kubernetesUrl = (uchar *) es_str2cstr(pvals[i].val.d.estr, NULL);
++		} else if(!strcmp(actpblk.descr[i].name, "srcmetadatapath")) {
++			msgPropDescrDestruct(pData->srcMetadataDescr);
++			free(pData->srcMetadataDescr);
++			CHKmalloc(pData->srcMetadataDescr = MALLOC(sizeof(msgPropDescr_t)));
++			srcMetadataPath = es_str2cstr(pvals[i].val.d.estr, NULL);
++			CHKiRet(msgPropDescrFill(pData->srcMetadataDescr, (uchar *)srcMetadataPath,
++				strlen(srcMetadataPath)));
++			/* todo: sanitize the path */
++		} else if(!strcmp(actpblk.descr[i].name, "dstmetadatapath")) {
++			free(pData->dstMetadataPath);
++			pData->dstMetadataPath = (uchar *) es_str2cstr(pvals[i].val.d.estr, NULL);
++			/* todo: sanitize the path */
++		} else if(!strcmp(actpblk.descr[i].name, "tls.cacert")) {
++			free(pData->caCertFile);
++			pData->caCertFile = (uchar *) es_str2cstr(pvals[i].val.d.estr, NULL);
++			fp = fopen((const char*)pData->caCertFile, "r");
++			if(fp == NULL) {
++				char errStr[1024];
++				rs_strerror_r(errno, errStr, sizeof(errStr));
++				iRet = RS_RET_NO_FILE_ACCESS;
++				errmsg.LogError(0, iRet,
++						"error: certificate file %s couldn't be accessed: %s\n",
++						pData->caCertFile, errStr);
++				ABORT_FINALIZE(iRet);
++			} else {
++				fclose(fp);
++			}
++		} else if(!strcmp(actpblk.descr[i].name, "allowunsignedcerts")) {
++			pData->allowUnsignedCerts = pvals[i].val.d.n;
++		} else if(!strcmp(actpblk.descr[i].name, "token")) {
++			free(pData->token);
++			pData->token = (uchar *) es_str2cstr(pvals[i].val.d.estr, NULL);
++		} else if(!strcmp(actpblk.descr[i].name, "tokenfile")) {
++			free(pData->tokenFile);
++			pData->tokenFile = (uchar *) es_str2cstr(pvals[i].val.d.estr, NULL);
++			fp = fopen((const char*)pData->tokenFile, "r");
++			if(fp == NULL) {
++				char errStr[1024];
++				rs_strerror_r(errno, errStr, sizeof(errStr));
++				iRet = RS_RET_NO_FILE_ACCESS;
++				errmsg.LogError(0, iRet,
++						"error: token file %s couldn't be accessed: %s\n",
++						pData->tokenFile, errStr);
++				ABORT_FINALIZE(iRet);
++			} else {
++				fclose(fp);
++			}
++		} else if(!strcmp(actpblk.descr[i].name, "annotation_match")) {
++			free_annotationmatch(&pData->annotation_match);
++			if (RS_RET_OK != (iRet = init_annotationmatch(&pData->annotation_match, pvals[i].val.d.ar)))
++				ABORT_FINALIZE(iRet);
++		} else if(!strcmp(actpblk.descr[i].name, "de_dot")) {
++			pData->de_dot = pvals[i].val.d.n;
++		} else if(!strcmp(actpblk.descr[i].name, "de_dot_separator")) {
++			free(pData->de_dot_separator);
++			pData->de_dot_separator = (uchar *) es_str2cstr(pvals[i].val.d.estr, NULL);
++#if HAVE_LOADSAMPLESFROMSTRING == 1
++		} else if(!strcmp(actpblk.descr[i].name, "filenamerules")) {
++			free(pData->fnRules);
++			CHKiRet((array_to_rules(pvals[i].val.d.ar, &pData->fnRules)));
++#endif
++		} else if(!strcmp(actpblk.descr[i].name, "filenamerulebase")) {
++			free(pData->fnRulebase);
++			pData->fnRulebase = (uchar *) es_str2cstr(pvals[i].val.d.estr, NULL);
++			fp = fopen((const char*)pData->fnRulebase, "r");
++			if(fp == NULL) {
++				char errStr[1024];
++				rs_strerror_r(errno, errStr, sizeof(errStr));
++				iRet = RS_RET_NO_FILE_ACCESS;
++				errmsg.LogError(0, iRet,
++						"error: filenamerulebase file %s couldn't be accessed: %s\n",
++						pData->fnRulebase, errStr);
++				ABORT_FINALIZE(iRet);
++			} else {
++				fclose(fp);
++			}
++#if HAVE_LOADSAMPLESFROMSTRING == 1
++		} else if(!strcmp(actpblk.descr[i].name, "containerrules")) {
++			free(pData->contRules);
++			CHKiRet((array_to_rules(pvals[i].val.d.ar, &pData->contRules)));
++#endif
++		} else if(!strcmp(actpblk.descr[i].name, "containerrulebase")) {
++			free(pData->contRulebase);
++			pData->contRulebase = (uchar *) es_str2cstr(pvals[i].val.d.estr, NULL);
++			fp = fopen((const char*)pData->contRulebase, "r");
++			if(fp == NULL) {
++				char errStr[1024];
++				rs_strerror_r(errno, errStr, sizeof(errStr));
++				iRet = RS_RET_NO_FILE_ACCESS;
++				errmsg.LogError(0, iRet,
++						"error: containerrulebase file %s couldn't be accessed: %s\n",
++						pData->contRulebase, errStr);
++				ABORT_FINALIZE(iRet);
++			} else {
++				fclose(fp);
++			}
++		} else {
++			dbgprintf("mmkubernetes: program error, non-handled "
++				"param '%s' in action() block\n", actpblk.descr[i].name);
++			/* todo: error message? */
++		}
++	}
++
++#if HAVE_LOADSAMPLESFROMSTRING == 1
++	if (pData->fnRules && pData->fnRulebase) {
++		errmsg.LogError(0, RS_RET_CONFIG_ERROR,
++		    "mmkubernetes: only 1 of filenamerules or filenamerulebase may be used");
++		ABORT_FINALIZE(RS_RET_CONFIG_ERROR);
++	}
++	if (pData->contRules && pData->contRulebase) {
++		errmsg.LogError(0, RS_RET_CONFIG_ERROR,
++			"mmkubernetes: only 1 of containerrules or containerrulebase may be used");
++		ABORT_FINALIZE(RS_RET_CONFIG_ERROR);
++	}
++#endif
++	CHKiRet(set_lnctx(&pData->fnCtxln, pData->fnRules, pData->fnRulebase,
++			loadModConf->fnRules, loadModConf->fnRulebase));
++	CHKiRet(set_lnctx(&pData->contCtxln, pData->contRules, pData->contRulebase,
++			loadModConf->contRules, loadModConf->contRulebase));
++
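++	/* anything not set on this action() is inherited from the module()
++	 * scope; kubernetesurl additionally falls back to DFLT_KUBERNETES_URL
++	 * when neither scope sets it */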
++	if(pData->kubernetesUrl == NULL) {
++		if(loadModConf->kubernetesUrl == NULL) {
++			CHKmalloc(pData->kubernetesUrl = (uchar *) strdup(DFLT_KUBERNETES_URL));
++		} else {
++			CHKmalloc(pData->kubernetesUrl = (uchar *) strdup((char *) loadModConf->kubernetesUrl));
++		}
++	}
++	if(pData->srcMetadataDescr == NULL) {
++		CHKmalloc(pData->srcMetadataDescr = MALLOC(sizeof(msgPropDescr_t)));
++		CHKiRet(msgPropDescrFill(pData->srcMetadataDescr, loadModConf->srcMetadataPath,
++			strlen((char *)loadModConf->srcMetadataPath)));
++	}
++	if(pData->dstMetadataPath == NULL)
++		pData->dstMetadataPath = (uchar *) strdup((char *) loadModConf->dstMetadataPath);
++	if(pData->caCertFile == NULL && loadModConf->caCertFile)
++		pData->caCertFile = (uchar *) strdup((char *) loadModConf->caCertFile);
++	if(pData->token == NULL && loadModConf->token)
++		pData->token = (uchar *) strdup((char *) loadModConf->token);
++	if(pData->tokenFile == NULL && loadModConf->tokenFile)
++		pData->tokenFile = (uchar *) strdup((char *) loadModConf->tokenFile);
++	if(pData->de_dot_separator == NULL && loadModConf->de_dot_separator)
++		pData->de_dot_separator = (uchar *) strdup((char *) loadModConf->de_dot_separator);
++	if((pData->annotation_match.nmemb == 0) && (loadModConf->annotation_match.nmemb > 0))
++		copy_annotationmatch(&loadModConf->annotation_match, &pData->annotation_match);
++
++	if(pData->de_dot_separator)
++		pData->de_dot_separator_len = strlen((const char *)pData->de_dot_separator);
++
++	CHKmalloc(pData->contNameDescr = MALLOC(sizeof(msgPropDescr_t)));
++	CHKiRet(msgPropDescrFill(pData->contNameDescr, (uchar*) DFLT_CONTAINER_NAME,
++			strlen(DFLT_CONTAINER_NAME)));
++	CHKmalloc(pData->contIdFullDescr = MALLOC(sizeof(msgPropDescr_t)));
++	CHKiRet(msgPropDescrFill(pData->contIdFullDescr, (uchar*) DFLT_CONTAINER_ID_FULL,
++			strlen(DFLT_CONTAINER_ID_FULL)));
++
++	/* get the cache for this url */
++	for(i = 0; caches[i] != NULL; i++) {
++		if(!strcmp((char *) pData->kubernetesUrl, (char *) caches[i]->kbUrl))
++			break;
++	}
++	if(caches[i] != NULL) {
++		pData->cache = caches[i];
++	} else {
++		CHKmalloc(pData->cache = cacheNew(pData->kubernetesUrl));
++
++		CHKmalloc(caches = realloc(caches, (i + 2) * sizeof(struct cache_s *)));
++		caches[i] = pData->cache;
++		caches[i + 1] = NULL;
++	}
++CODE_STD_FINALIZERnewActInst
++	if(pvals != NULL)
++		cnfparamvalsDestruct(pvals, &actpblk);
++	free(rxstr);
++	free(srcMetadataPath);
++ENDnewActInst
++
++
++/* legacy config format is not supported */
++BEGINparseSelectorAct
++CODESTARTparseSelectorAct
++CODE_STD_STRING_REQUESTparseSelectorAct(1)
++	if(!strncmp((char *) p, ":mmkubernetes:", sizeof(":mmkubernetes:") - 1)) {
++		errmsg.LogError(0, RS_RET_LEGA_ACT_NOT_SUPPORTED,
++			"mmkubernetes supports only v6+ config format, use: "
++			"action(type=\"mmkubernetes\" ...)");
++	}
++	ABORT_FINALIZE(RS_RET_CONFLINE_UNPROCESSED);
++CODE_STD_FINALIZERparseSelectorAct
++ENDparseSelectorAct
++
++
++BEGINendCnfLoad
++CODESTARTendCnfLoad
++ENDendCnfLoad
++
++
++BEGINcheckCnf
++CODESTARTcheckCnf
++ENDcheckCnf
++
++
++BEGINactivateCnf
++CODESTARTactivateCnf
++	runModConf = pModConf;
++ENDactivateCnf
++
++
++BEGINfreeCnf
++CODESTARTfreeCnf
++	int i;
++
++	free(pModConf->kubernetesUrl);
++	free(pModConf->srcMetadataPath);
++	free(pModConf->dstMetadataPath);
++	free(pModConf->caCertFile);
++	free(pModConf->token);
++	free(pModConf->tokenFile);
++	free(pModConf->de_dot_separator);
++	free(pModConf->fnRules);
++	free(pModConf->fnRulebase);
++	free(pModConf->contRules);
++	free(pModConf->contRulebase);
++	free_annotationmatch(&pModConf->annotation_match);
++	for(i = 0; caches[i] != NULL; i++)
++		cacheFree(caches[i]);
++	free(caches);
++ENDfreeCnf
++
++
++BEGINdbgPrintInstInfo
++CODESTARTdbgPrintInstInfo
++	dbgprintf("mmkubernetes\n");
++	dbgprintf("\tkubernetesUrl='%s'\n", pData->kubernetesUrl);
++	dbgprintf("\tsrcMetadataPath='%s'\n", pData->srcMetadataDescr->name);
++	dbgprintf("\tdstMetadataPath='%s'\n", pData->dstMetadataPath);
++	dbgprintf("\ttls.cacert='%s'\n", pData->caCertFile);
++	dbgprintf("\tallowUnsignedCerts='%d'\n", pData->allowUnsignedCerts);
++	dbgprintf("\ttoken='%s'\n", pData->token);
++	dbgprintf("\ttokenFile='%s'\n", pData->tokenFile);
++	dbgprintf("\tde_dot='%d'\n", pData->de_dot);
++	dbgprintf("\tde_dot_separator='%s'\n", pData->de_dot_separator);
++	dbgprintf("\tfilenamerulebase='%s'\n", pData->fnRulebase);
++	dbgprintf("\tcontainerrulebase='%s'\n", pData->contRulebase);
++#if HAVE_LOADSAMPLESFROMSTRING == 1
++	dbgprintf("\tfilenamerules='%s'\n", pData->fnRules);
++	dbgprintf("\tcontainerrules='%s'\n", pData->contRules);
++#endif
++ENDdbgPrintInstInfo
++
++
++BEGINtryResume
++CODESTARTtryResume
++ENDtryResume
++
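++/* Pull the pod/namespace/container identity out of the message itself.
++ * Two sources are tried in order: the journald CONTAINER_NAME and
++ * CONTAINER_ID_FULL fields, parsed with the container rulebase, and,
++ * failing that, the imfile-provided file name, parsed with the filename
++ * rulebase. On success *json carries pod_name, namespace_name,
++ * container_name and container_id; RS_RET_NOT_FOUND means the message
++ * could not be attributed to a container.
++ */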
++static rsRetVal
++extractMsgMetadata(smsg_t *pMsg, instanceData *pData, struct json_object **json)
++{
++	DEFiRet;
++	uchar *filename = NULL, *container_name = NULL, *container_id_full = NULL;
++	rs_size_t fnLen, container_name_len, container_id_full_len;
++	unsigned short freeFn = 0, free_container_name = 0, free_container_id_full = 0;
++	int lnret;
++	struct json_object *cnid = NULL;
++
++	if (!json)
++		FINALIZE;
++	*json = NULL;
++	/* extract metadata from the CONTAINER_NAME field and see if CONTAINER_ID_FULL is present */
++	container_name = MsgGetProp(pMsg, NULL, pData->contNameDescr,
++				    &container_name_len, &free_container_name, NULL);
++	container_id_full = MsgGetProp(
++		pMsg, NULL, pData->contIdFullDescr, &container_id_full_len, &free_container_id_full, NULL);
++
++	if (container_name && container_id_full && container_name_len && container_id_full_len) {
++		dbgprintf("mmkubernetes: CONTAINER_NAME: '%s'  CONTAINER_ID_FULL: '%s'.\n",
++			  container_name, container_id_full);
++		if ((lnret = ln_normalize(pData->contCtxln, (char*)container_name,
++					  container_name_len, json))) {
++			if (LN_WRONGPARSER != lnret) {
++				LogMsg(0, RS_RET_ERR, LOG_ERR,
++					"mmkubernetes: error parsing container_name [%s]: [%d]",
++					container_name, lnret);
++
++				ABORT_FINALIZE(RS_RET_ERR);
++			}
++			/* else assume parser didn't find a match and fall through */
++		} else if (fjson_object_object_get_ex(*json, "pod_name", NULL) &&
++			fjson_object_object_get_ex(*json, "namespace_name", NULL) &&
++			fjson_object_object_get_ex(*json, "container_name", NULL)) {
++			/* if we have fields for pod name, namespace name, container name,
++			 * and container id, we are good to go */
++			/* add field for container id */
++			json_object_object_add(*json, "container_id",
++				json_object_new_string_len((const char *)container_id_full,
++							   container_id_full_len));
++			ABORT_FINALIZE(RS_RET_OK);
++		}
++	}
++
++	/* extract metadata from the file name; first discard any partial
++	 * parse left over from the journald fields, which ln_normalize()
++	 * below would otherwise overwrite and leak */
++	if (*json) {
++		json_object_put(*json);
++		*json = NULL;
++	}
++	filename = MsgGetProp(pMsg, NULL, pData->srcMetadataDescr, &fnLen, &freeFn, NULL);
++	if((filename == NULL) || (fnLen == 0))
++		ABORT_FINALIZE(RS_RET_NOT_FOUND);
++
++	dbgprintf("mmkubernetes: filename: '%s' len %d.\n", filename, fnLen);
++	if ((lnret = ln_normalize(pData->fnCtxln, (char*)filename, fnLen, json))) {
++		if (LN_WRONGPARSER != lnret) {
++			LogMsg(0, RS_RET_ERR, LOG_ERR,
++				"mmkubernetes: error parsing container_name [%s]: [%d]",
++				filename, lnret);
++
++			ABORT_FINALIZE(RS_RET_ERR);
++		} else {
++			/* no match */
++			ABORT_FINALIZE(RS_RET_NOT_FOUND);
++		}
++	}
++	/* if we have fields for pod name, namespace name, container name,
++	 * and container id, we are good to go */
++	if (fjson_object_object_get_ex(*json, "pod_name", NULL) &&
++		fjson_object_object_get_ex(*json, "namespace_name", NULL) &&
++		fjson_object_object_get_ex(*json, "container_name_and_id", &cnid)) {
++		/* parse container_name_and_id into container_name and container_id */
++		const char *container_name_and_id = json_object_get_string(cnid);
++		const char *last_dash = NULL;
++		if (container_name_and_id && (last_dash = strrchr(container_name_and_id, '-')) &&
++			*(last_dash + 1) && (last_dash != container_name_and_id)) {
++			json_object_object_add(*json, "container_name",
++				json_object_new_string_len(container_name_and_id,
++							   (int)(last_dash-container_name_and_id)));
++			json_object_object_add(*json, "container_id",
++					json_object_new_string(last_dash + 1));
++			ABORT_FINALIZE(RS_RET_OK);
++		}
++	}
++	ABORT_FINALIZE(RS_RET_NOT_FOUND);
++finalize_it:
++	if(freeFn)
++		free(filename);
++	if (free_container_name)
++		free(container_name);
++	if (free_container_id_full)
++		free(container_id_full);
++	if (iRet != RS_RET_OK) {
++		json_object_put(*json);
++		*json = NULL;
++	}
++	RETiRet;
++}
++
++
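++/* Perform a synchronous GET of `url` against the API server via the
++ * worker's curl handle, map the common failure codes (401, 403, 404,
++ * 429) to log messages, and on HTTP 200 return the reply parsed into
++ * a JSON object via *rply.
++ */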
++static rsRetVal
++queryKB(wrkrInstanceData_t *pWrkrData, char *url, struct json_object **rply)
++{
++	DEFiRet;
++	CURLcode ccode;
++	struct json_tokener *jt = NULL;
++	struct json_object *jo;
++	long resp_code = 400;
++
++	/* query kubernetes for pod info */
++	ccode = curl_easy_setopt(pWrkrData->curlCtx, CURLOPT_URL, url);
++	if(ccode != CURLE_OK)
++		ABORT_FINALIZE(RS_RET_ERR);
++	if(CURLE_OK != (ccode = curl_easy_perform(pWrkrData->curlCtx))) {
++		errmsg.LogMsg(0, RS_RET_ERR, LOG_ERR,
++			      "mmkubernetes: failed to connect to [%s] - %d:%s\n",
++			      url, ccode, curl_easy_strerror(ccode));
++		ABORT_FINALIZE(RS_RET_ERR);
++	}
++	if(CURLE_OK != (ccode = curl_easy_getinfo(pWrkrData->curlCtx,
++					CURLINFO_RESPONSE_CODE, &resp_code))) {
++		errmsg.LogMsg(0, RS_RET_ERR, LOG_ERR,
++			      "mmkubernetes: could not get response code from query to [%s] - %d:%s\n",
++			      url, ccode, curl_easy_strerror(ccode));
++		ABORT_FINALIZE(RS_RET_ERR);
++	}
++	if(resp_code == 401) {
++		errmsg.LogMsg(0, RS_RET_ERR, LOG_ERR,
++			      "mmkubernetes: Unauthorized: not allowed to view url - "
++			      "check token/auth credentials [%s]\n",
++			      url);
++		ABORT_FINALIZE(RS_RET_ERR);
++	}
++	if(resp_code == 403) {
++		errmsg.LogMsg(0, RS_RET_ERR, LOG_ERR,
++			      "mmkubernetes: Forbidden: no access - "
++			      "check permissions to view url [%s]\n",
++			      url);
++		ABORT_FINALIZE(RS_RET_ERR);
++	}
++	if(resp_code == 404) {
++		errmsg.LogMsg(0, RS_RET_ERR, LOG_ERR,
++			      "mmkubernetes: Not Found: the resource does not exist at url [%s]\n",
++			      url);
++		ABORT_FINALIZE(RS_RET_ERR);
++	}
++	if(resp_code == 429) {
++		errmsg.LogMsg(0, RS_RET_ERR, LOG_ERR,
++			      "mmkubernetes: Too Many Requests: the server is too heavily loaded "
++			      "to provide the data for the requested url [%s]\n",
++			      url);
++		ABORT_FINALIZE(RS_RET_ERR);
++	}
++	if(resp_code != 200) {
++		errmsg.LogMsg(0, RS_RET_ERR, LOG_ERR,
++			      "mmkubernetes: server returned unexpected code [%ld] for url [%s]\n",
++			      resp_code, url);
++		ABORT_FINALIZE(RS_RET_ERR);
++	}
++	/* parse retrieved data */
++	jt = json_tokener_new();
++	json_tokener_reset(jt);
++	jo = json_tokener_parse_ex(jt, pWrkrData->curlRply, pWrkrData->curlRplyLen);
++	json_tokener_free(jt);
++	if(!json_object_is_type(jo, json_type_object)) {
++		json_object_put(jo);
++		jo = NULL;
++		errmsg.LogMsg(0, RS_RET_JSON_PARSE_ERR, LOG_INFO,
++			      "mmkubernetes: unable to parse string as JSON:[%.*s]\n",
++			      (int)pWrkrData->curlRplyLen, pWrkrData->curlRply);
++		ABORT_FINALIZE(RS_RET_JSON_PARSE_ERR);
++	}
++
++	dbgprintf("mmkubernetes: queryKB reply:\n%s\n",
++		json_object_to_json_string_ext(jo, JSON_C_TO_STRING_PRETTY));
++
++	*rply = jo;
++
++finalize_it:
++	if(pWrkrData->curlRply != NULL) {
++		free(pWrkrData->curlRply);
++		pWrkrData->curlRply = NULL;
++		pWrkrData->curlRplyLen = 0;
++	}
++	RETiRet;
++}
++
++
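++/* Per-message flow: extract the container identity, look the metadata up
++ * in the per-URL cache, on a miss query the API server for the namespace
++ * and pod objects and assemble the "kubernetes"/"docker" metadata object,
++ * then attach a private copy of it to the message at dstmetadatapath.
++ */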
++/* versions < 8.16.0 don't support BEGINdoAction_NoStrings */
++#if defined(BEGINdoAction_NoStrings)
++BEGINdoAction_NoStrings
++	smsg_t **ppMsg = (smsg_t **) pMsgData;
++	smsg_t *pMsg = ppMsg[0];
++#else
++BEGINdoAction
++	smsg_t *pMsg = (smsg_t*) ppString[0];
++#endif
++	const char *podName = NULL, *ns = NULL, *containerName = NULL,
++		*containerID = NULL;
++	char *mdKey = NULL;
++	struct json_object *jMetadata = NULL, *jMetadataCopy = NULL, *jMsgMeta = NULL,
++			*jo = NULL;
++	int add_ns_metadata = 0;
++CODESTARTdoAction
++	CHKiRet_Hdlr(extractMsgMetadata(pMsg, pWrkrData->pData, &jMsgMeta)) {
++		ABORT_FINALIZE((iRet == RS_RET_NOT_FOUND) ? RS_RET_OK : iRet);
++	}
++
++	if (fjson_object_object_get_ex(jMsgMeta, "pod_name", &jo))
++		podName = json_object_get_string(jo);
++	if (fjson_object_object_get_ex(jMsgMeta, "namespace_name", &jo))
++		ns = json_object_get_string(jo);
++	if (fjson_object_object_get_ex(jMsgMeta, "container_name", &jo))
++		containerName = json_object_get_string(jo);
++	if (fjson_object_object_get_ex(jMsgMeta, "container_id", &jo))
++		containerID = json_object_get_string(jo);
++	assert(podName != NULL);
++	assert(ns != NULL);
++	assert(containerName != NULL);
++	assert(containerID != NULL);
++
++	dbgprintf("mmkubernetes:\n  podName: '%s'\n  namespace: '%s'\n  containerName: '%s'\n"
++		"  containerID: '%s'\n", podName, ns, containerName, containerID);
++
++	/* check cache for metadata */
++	if ((-1 == asprintf(&mdKey, "%s_%s_%s", ns, podName, containerName)) ||
++		(!mdKey)) {
++		ABORT_FINALIZE(RS_RET_OUT_OF_MEMORY);
++	}
++	pthread_mutex_lock(pWrkrData->pData->cache->cacheMtx);
++	jMetadata = hashtable_search(pWrkrData->pData->cache->mdHt, mdKey);
++
++	if(jMetadata == NULL) {
++		char *url = NULL;
++		struct json_object *jReply = NULL, *jo2 = NULL, *jNsMeta = NULL, *jPodData = NULL;
++
++		/* check cache for namespace metadata */
++		jNsMeta = hashtable_search(pWrkrData->pData->cache->nsHt, (char *)ns);
++
++		if(jNsMeta == NULL) {
++			/* query kubernetes for namespace info */
++			/* todo: move url definitions elsewhere */
++			if ((-1 == asprintf(&url, "%s/api/v1/namespaces/%s",
++				 (char *) pWrkrData->pData->kubernetesUrl, ns)) ||
++				(!url)) {
++				pthread_mutex_unlock(pWrkrData->pData->cache->cacheMtx);
++				ABORT_FINALIZE(RS_RET_OUT_OF_MEMORY);
++			}
++			iRet = queryKB(pWrkrData, url, &jReply);
++			free(url);
++			/* todo: implement support for the .orphaned namespace */
++			if (iRet != RS_RET_OK) {
++				json_object_put(jReply);
++				jReply = NULL;
++				pthread_mutex_unlock(pWrkrData->pData->cache->cacheMtx);
++				FINALIZE;
++			}
++
++			if(fjson_object_object_get_ex(jReply, "metadata", &jNsMeta)) {
++				jNsMeta = json_object_get(jNsMeta);
++				parse_labels_annotations(jNsMeta, &pWrkrData->pData->annotation_match,
++					pWrkrData->pData->de_dot,
++					(const char *)pWrkrData->pData->de_dot_separator,
++					pWrkrData->pData->de_dot_separator_len);
++				add_ns_metadata = 1;
++			} else {
++				/* namespace with no metadata??? */
++				errmsg.LogMsg(0, RS_RET_ERR, LOG_INFO,
++					      "mmkubernetes: namespace [%s] has no metadata!\n", ns);
++				jNsMeta = NULL;
++			}
++
++			json_object_put(jReply);
++			jReply = NULL;
++		}
++
++		if ((-1 == asprintf(&url, "%s/api/v1/namespaces/%s/pods/%s",
++			 (char *) pWrkrData->pData->kubernetesUrl, ns, podName)) ||
++			(!url)) {
++			pthread_mutex_unlock(pWrkrData->pData->cache->cacheMtx);
++			ABORT_FINALIZE(RS_RET_OUT_OF_MEMORY);
++		}
++		iRet = queryKB(pWrkrData, url, &jReply);
++		free(url);
++		if(iRet != RS_RET_OK) {
++			if(jNsMeta && add_ns_metadata) {
++				hashtable_insert(pWrkrData->pData->cache->nsHt, strdup(ns), jNsMeta);
++			}
++			json_object_put(jReply);
++			jReply = NULL;
++			pthread_mutex_unlock(pWrkrData->pData->cache->cacheMtx);
++			FINALIZE;
++		}
++
++		jo = json_object_new_object();
++		if(jNsMeta && fjson_object_object_get_ex(jNsMeta, "uid", &jo2))
++			json_object_object_add(jo, "namespace_id", json_object_get(jo2));
++		if(jNsMeta && fjson_object_object_get_ex(jNsMeta, "labels", &jo2))
++			json_object_object_add(jo, "namespace_labels", json_object_get(jo2));
++		if(jNsMeta && fjson_object_object_get_ex(jNsMeta, "annotations", &jo2))
++			json_object_object_add(jo, "namespace_annotations", json_object_get(jo2));
++		if(jNsMeta && fjson_object_object_get_ex(jNsMeta, "creationTimestamp", &jo2))
++			json_object_object_add(jo, "creation_timestamp", json_object_get(jo2));
++		if(fjson_object_object_get_ex(jReply, "metadata", &jPodData)) {
++			if(fjson_object_object_get_ex(jPodData, "uid", &jo2))
++				json_object_object_add(jo, "pod_id", json_object_get(jo2));
++			parse_labels_annotations(jPodData, &pWrkrData->pData->annotation_match,
++				pWrkrData->pData->de_dot,
++				(const char *)pWrkrData->pData->de_dot_separator,
++				pWrkrData->pData->de_dot_separator_len);
++			if(fjson_object_object_get_ex(jPodData, "annotations", &jo2))
++				json_object_object_add(jo, "annotations", json_object_get(jo2));
++			if(fjson_object_object_get_ex(jPodData, "labels", &jo2))
++				json_object_object_add(jo, "labels", json_object_get(jo2));
++		}
++		if(fjson_object_object_get_ex(jReply, "spec", &jPodData)) {
++			if(fjson_object_object_get_ex(jPodData, "nodeName", &jo2)) {
++				json_object_object_add(jo, "host", json_object_get(jo2));
++			}
++		}
++		json_object_put(jReply);
++		jReply = NULL;
++
++		if (fjson_object_object_get_ex(jMsgMeta, "pod_name", &jo2))
++			json_object_object_add(jo, "pod_name", json_object_get(jo2));
++		if (fjson_object_object_get_ex(jMsgMeta, "namespace_name", &jo2))
++			json_object_object_add(jo, "namespace_name", json_object_get(jo2));
++		if (fjson_object_object_get_ex(jMsgMeta, "container_name", &jo2))
++			json_object_object_add(jo, "container_name", json_object_get(jo2));
++		json_object_object_add(jo, "master_url",
++			json_object_new_string((const char *)pWrkrData->pData->kubernetesUrl));
++		jMetadata = json_object_new_object();
++		json_object_object_add(jMetadata, "kubernetes", jo);
++		jo = json_object_new_object();
++		if (fjson_object_object_get_ex(jMsgMeta, "container_id", &jo2))
++			json_object_object_add(jo, "container_id", json_object_get(jo2));
++		json_object_object_add(jMetadata, "docker", jo);
++
++		hashtable_insert(pWrkrData->pData->cache->mdHt, mdKey, jMetadata);
++		mdKey = NULL;
++		if(jNsMeta && add_ns_metadata) {
++			hashtable_insert(pWrkrData->pData->cache->nsHt, strdup(ns), jNsMeta);
++			ns = NULL;
++		}
++	}
++
++	/* make a copy of the metadata for the msg to own */
++	/* todo: use json_object_deep_copy when implementation available in libfastjson */
++	/* yes, this is expensive - but there is no other way to make this thread safe - we
++	 * can't allow the msg to have a shared pointer to an element inside the cache,
++	 * outside of the cache lock
++	 */
++	jMetadataCopy = json_tokener_parse(json_object_get_string(jMetadata));
++	pthread_mutex_unlock(pWrkrData->pData->cache->cacheMtx);
++	/* the +1 is there to skip the leading '$' */
++	msgAddJSON(pMsg, (uchar *) pWrkrData->pData->dstMetadataPath + 1, jMetadataCopy, 0, 0);
++
++finalize_it:
++	json_object_put(jMsgMeta);
++	free(mdKey);
++ENDdoAction
++
++
++BEGINisCompatibleWithFeature
++CODESTARTisCompatibleWithFeature
++ENDisCompatibleWithFeature
++
++
++/* all the macros below have to be in a specific order */
++BEGINmodExit
++CODESTARTmodExit
++	curl_global_cleanup();
++
++	objRelease(regexp, LM_REGEXP_FILENAME);
++	objRelease(errmsg, CORE_COMPONENT);
++ENDmodExit
++
++
++BEGINqueryEtryPt
++CODESTARTqueryEtryPt
++CODEqueryEtryPt_STD_OMOD_QUERIES
++CODEqueryEtryPt_STD_OMOD8_QUERIES
++CODEqueryEtryPt_STD_CONF2_QUERIES
++CODEqueryEtryPt_STD_CONF2_setModCnf_QUERIES
++CODEqueryEtryPt_STD_CONF2_OMOD_QUERIES
++ENDqueryEtryPt
++
++
++BEGINmodInit()
++CODESTARTmodInit
++	*ipIFVersProvided = CURR_MOD_IF_VERSION; /* we only support the current interface specification */
++CODEmodInit_QueryRegCFSLineHdlr
++	DBGPRINTF("mmkubernetes: module compiled with rsyslog version %s.\n", VERSION);
++	CHKiRet(objUse(errmsg, CORE_COMPONENT));
++	CHKiRet(objUse(regexp, LM_REGEXP_FILENAME));
++
++	/* CURL_GLOBAL_ALL initializes more than is needed but the
++	 * libcurl documentation discourages use of other values
++	 */
++	curl_global_init(CURL_GLOBAL_ALL);
++ENDmodInit
+diff --git a/contrib/mmkubernetes/sample.conf b/contrib/mmkubernetes/sample.conf
+new file mode 100644
+index 000000000..4c400ed51
+--- /dev/null
++++ b/contrib/mmkubernetes/sample.conf
+@@ -0,0 +1,12 @@
++module(load="mmkubernetes") # see docs for all module and action parameters
++
++# $!metadata!filename added by imfile using addmetadata="on"
++# e.g. input(type="imfile" file="/var/log/containers/*.log" tag="kubernetes" addmetadata="on")
++# $!CONTAINER_NAME and $!CONTAINER_ID_FULL added by imjournal
++
++action(type="mmkubernetes")
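++
++# a fuller example (hypothetical values; parameter names as implemented by the module):
++# action(type="mmkubernetes" kubernetesurl="https://k8s.example.com:8443"
++#        tls.cacert="/etc/rsyslog.d/k8s-ca.crt"
++#        tokenfile="/etc/rsyslog.d/k8s-token")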
+-- 
+2.14.4
+
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1545582-imjournal-duplicates.patch b/SOURCES/rsyslog-8.24.0-rhbz1545582-imjournal-duplicates.patch
deleted file mode 100644
index 1b392c4..0000000
--- a/SOURCES/rsyslog-8.24.0-rhbz1545582-imjournal-duplicates.patch
+++ /dev/null
@@ -1,318 +0,0 @@
-From: Jiri Vymazal <jvymazal@redhat.com>
-Date: Wed, 14 Mar 2018 90:05:01 -0500
-
-modification and merge of below patches for RHEL consumers, 
-also modified journal invalidate/rotation handling to keep possibility
-to continue after switch of persistent journal
-original:
-%
-%From a99f9b4b42d261c384aee09306fc421df2cca7a5 Mon Sep 17 00:00:00 2001
-%From: Peter Portante <peter.a.portante@gmail.com>
-%Date: Wed, 24 Jan 2018 19:34:41 -0500
-%Subject: [PATCH] Proposed fix for handling journal correctly
-%
-%The fix is to immediately setup the inotify file descriptor via
-%`sd_journal_get_fd()` right after a journal open, and then
-%periodically call `sd_journal_process()` to give the client API
-%library a chance to detect deleted journal files on disk that need to
-%be closed so they can be properly erased by the file system.
-%
-%We remove the open/close dance and simplify that code as a result.
-%
-%Fixes issue #2436.
-and also:
-%From 27f96c84d34ee000fbb5d45b00233f2ec3cf2d8a Mon Sep 17 00:00:00 2001
-%From: Rainer Gerhards <rgerhards@adiscon.com>
-%Date: Tue, 24 Oct 2017 16:14:13 +0200
-%Subject: [PATCH] imjournal bugfix: do not disable itself on error
-%
-%If some functions calls inside the main loop failed, imjournal exited
-%with an error code, actually disabling all logging from the journal.
-%This was probably never intended.
-%
-%This patch makes imjournal recover the situation instead.
-%
-%closes https://github.com/rsyslog/rsyslog/issues/1895
----
- plugins/imjournal/imjournal.c | 206 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++------------------------------------------------------------------------------------------------------
- 1 file changed, 104 insertions(+), 102 deletions(-)
-
---- a/plugins/imjournal/imjournal.c
-+++ b/plugins/imjournal/imjournal.c
-@@ -114,6 +114,10 @@ /* module-global parameters */
- static const char *pid_field_name;	/* read-only after startup */
- static ratelimit_t *ratelimiter = NULL;
- static sd_journal *j;
-+static int j_inotify_fd;
-+static char *last_cursor = NULL;
-+
-+#define J_PROCESS_PERIOD 1024  /* Call sd_journal_process() every 1,024 records */
- 
- static rsRetVal persistJournalState(void);
- static rsRetVal loadJournalState(void);
-@@ -123,6 +127,14 @@ openJournal(sd_journal** jj)
- 
- 	if (sd_journal_open(jj, SD_JOURNAL_LOCAL_ONLY) < 0)
- 		iRet = RS_RET_IO_ERROR;
-+	int r;
-+
-+	if ((r = sd_journal_get_fd(j)) < 0) {
-+		errmsg.LogError(-r, RS_RET_IO_ERROR, "imjournal: sd_journal_get_fd() failed");
-+		iRet = RS_RET_IO_ERROR;
-+	} else {
-+		j_inotify_fd = r;
-+	}	
- 	RETiRet;
- }
- 
-@@ -132,6 +144,7 @@ closeJournal(sd_journal** jj)
- 		persistJournalState();
- 	}
- 	sd_journal_close(*jj);
-+	j_inotify_fd = 0;
- }
- 
- 
-@@ -262,6 +275,7 @@ readjournal(void)
- 	char *message = NULL;
- 	char *sys_iden = NULL;
- 	char *sys_iden_help = NULL;
-+	char *c = NULL;
- 
- 	const void *get;
- 	const void *pidget;
-@@ -393,6 +407,12 @@ readjournal(void)
- 		tv.tv_usec = timestamp % 1000000;
- 	}
- 
-+        sd_journal_get_cursor(j, &c);
-+        if (c) {
-+                free(last_cursor);
-+                last_cursor = c;
-+        }
-+
- 	/* submit message */
- 	enqMsg((uchar *)message, (uchar *) sys_iden_help, facility, severity, &tv, json, 0);
- 
-@@ -413,44 +433,41 @@ persistJournalState (void)
- 	DEFiRet;
- 	FILE *sf; /* state file */
- 	char tmp_sf[MAXFNAME];
--	char *cursor;
--	int ret = 0;
-+	int r = 0;
- 
--	/* On success, sd_journal_get_cursor()  returns 1 in systemd
--	   197 or older and 0 in systemd 198 or newer */
--	if ((ret = sd_journal_get_cursor(j, &cursor)) >= 0) {
--               /* we create a temporary name by adding a ".tmp"
--                * suffix to the end of our state file's name
--                */
--               snprintf(tmp_sf, sizeof(tmp_sf), "%s.tmp", cs.stateFile);
--               if ((sf = fopen(tmp_sf, "wb")) != NULL) {
--			if (fprintf(sf, "%s", cursor) < 0) {
--				iRet = RS_RET_IO_ERROR;
--			}
--			fclose(sf);
--			free(cursor);
--                       /* change the name of the file to the configured one */
--                       if (iRet == RS_RET_OK && rename(tmp_sf, cs.stateFile) == -1) {
--                               char errStr[256];
--                               rs_strerror_r(errno, errStr, sizeof(errStr));
--                               iRet = RS_RET_IO_ERROR;
--                               errmsg.LogError(0, iRet, "rename() failed: "
--                                       "'%s', new path: '%s'\n", errStr, cs.stateFile);
--                       }
-+	if (!last_cursor)
-+		ABORT_FINALIZE(RS_RET_OK);
- 
--		} else {
--			char errStr[256];
--			rs_strerror_r(errno, errStr, sizeof(errStr));
--			errmsg.LogError(0, RS_RET_FOPEN_FAILURE, "fopen() failed: "
--				"'%s', path: '%s'\n", errStr, tmp_sf);
--			iRet = RS_RET_FOPEN_FAILURE;
--		}
--	} else {
--		char errStr[256];
--		rs_strerror_r(-(ret), errStr, sizeof(errStr));
--		errmsg.LogError(0, RS_RET_ERR, "sd_journal_get_cursor() failed: '%s'\n", errStr);
--		iRet = RS_RET_ERR;
--	}
-+	/* we create a temporary name by adding a ".tmp"
-+	 * suffix to the end of our state file's name
-+	 */
-+	snprintf(tmp_sf, sizeof(tmp_sf), "%s.tmp", cs.stateFile);
-+
-+	sf = fopen(tmp_sf, "wb");
-+	if (!sf) {
-+		errmsg.LogError(errno, RS_RET_FOPEN_FAILURE, "imjournal: fopen() failed for path: '%s'", tmp_sf);
-+		ABORT_FINALIZE(RS_RET_FOPEN_FAILURE);
-+	}
-+
-+	r = fprintf(sf, "%s", last_cursor);
-+	if (r < 0) {
-+		errmsg.LogError(errno, RS_RET_FOPEN_FAILURE, "imjournal: failed to save cursor to: '%s'", tmp_sf);
-+		ABORT_FINALIZE(RS_RET_IO_ERROR);
-+	}
-+
-+	r = fclose(sf);
-+	if (r < 0) {
-+		errmsg.LogError(errno, iRet, "imjournal: fclose() failed for path: '%s'", tmp_sf);
-+		ABORT_FINALIZE(RS_RET_IO_ERROR);
-+	}
-+
-+	r = rename(tmp_sf, cs.stateFile);
-+	if (r < 0) {
-+		errmsg.LogError(errno, iRet, "imjournal: rename() failed for new path: '%s'", cs.stateFile);
-+		ABORT_FINALIZE(RS_RET_IO_ERROR);
-+	}
-+
-+finalize_it:
- 	RETiRet;
- }
- 
-@@ -473,64 +473,29 @@
-  * except for the special handling of EINTR.
-  */
- 
--#define POLL_TIMEOUT 1000 /* timeout for poll is 1s */
-+#define POLL_TIMEOUT 900000 /* timeout for poll is 900ms */
- 
- static rsRetVal
- pollJournal(void)
- {
- 	DEFiRet;
--	struct pollfd pollfd;
--	int pr = 0;
--	int jr = 0;
--
--	pollfd.fd = sd_journal_get_fd(j);
--	pollfd.events = sd_journal_get_events(j);
--	pr = poll(&pollfd, 1, POLL_TIMEOUT);
--	if (pr == -1) {
--		if (errno == EINTR) {
--			/* EINTR is also received during termination
--			 * so return now to check the term state.
--			 */
--			ABORT_FINALIZE(RS_RET_OK);
--		} else {
--			char errStr[256];
--
--			rs_strerror_r(errno, errStr, sizeof(errStr));
--			errmsg.LogError(0, RS_RET_ERR,
--				"poll() failed: '%s'", errStr);
--			ABORT_FINALIZE(RS_RET_ERR);
--		}
--	}
-+	int r;
- 
-+	for (;;) {
-+		r = sd_journal_wait(j, POLL_TIMEOUT);
-+		break;
-+	}
- 
--	jr = sd_journal_process(j);
--	
--	if (pr == 1 && jr == SD_JOURNAL_INVALIDATE) {
--		/* do not persist stateFile sd_journal_get_cursor will fail! */
--		char* tmp = cs.stateFile;
--		cs.stateFile = NULL;
-+	if (r == SD_JOURNAL_INVALIDATE) {
- 		closeJournal(&j);
--		cs.stateFile = tmp;
- 
- 		iRet = openJournal(&j);
--		if (iRet != RS_RET_OK) {
--			char errStr[256];
--			rs_strerror_r(errno, errStr, sizeof(errStr));
--			errmsg.LogError(0, RS_RET_IO_ERROR,
--				"sd_journal_open() failed: '%s'", errStr);
-+		if (iRet != RS_RET_OK)
- 			ABORT_FINALIZE(RS_RET_ERR);
--		}
- 
--		if(cs.stateFile != NULL){
-+		if (cs.stateFile)
- 			iRet = loadJournalState();
--		}
--		LogMsg(0, RS_RET_OK, LOG_NOTICE, "imjournal: journal reloaded...");
--	} else if (jr < 0) {
--		char errStr[256];
--		rs_strerror_r(errno, errStr, sizeof(errStr));
--		errmsg.LogError(0, RS_RET_ERR,
--			"sd_journal_process() failed: '%s'", errStr);
--		ABORT_FINALIZE(RS_RET_ERR);
-+		errmsg.LogMsg(0, RS_RET_OK, LOG_NOTICE, "imjournal: journal reloaded...");
- 	}
- 
- finalize_it:
-@@ -631,8 +612,17 @@ loadJournalState(void)
- 	RETiRet;
- }
- 
-+static void
-+tryRecover(void) {
-+	errmsg.LogMsg(0, RS_RET_OK, LOG_INFO, "imjournal: trying to recover from unexpected "
-+		"journal error");
-+	closeJournal(&j);
-+	srSleep(10, 0);	// do not hammer machine with too-frequent retries
-+	openJournal(&j);
-+}
-+
- BEGINrunInput
--	int count = 0;
-+	uint64_t count = 0;
- CODESTARTrunInput
- 	CHKiRet(ratelimitNew(&ratelimiter, "imjournal", NULL));
- 	dbgprintf("imjournal: ratelimiting burst %d, interval %d\n", cs.ratelimitBurst,
-@@ -665,26 +655,38 @@ CODESTARTrunInput
- 
- 		r = sd_journal_next(j);
- 		if (r < 0) {
--			char errStr[256];
--
--			rs_strerror_r(errno, errStr, sizeof(errStr));
--			errmsg.LogError(0, RS_RET_ERR,
--				"sd_journal_next() failed: '%s'", errStr);
--			ABORT_FINALIZE(RS_RET_ERR);
-+			tryRecover();
-+			continue;
- 		}
- 
- 		if (r == 0) {
- 			/* No new messages, wait for activity. */
--			CHKiRet(pollJournal());
-+			if (pollJournal() != RS_RET_OK) {
-+ 				tryRecover();
-+ 			}
- 			continue;
- 		}
- 
--		CHKiRet(readjournal());
-+		if (readjournal() != RS_RET_OK) {
-+ 			tryRecover();
-+ 			continue;
-+ 		}
-+
-+		count++;
-+
-+		if ((count % J_PROCESS_PERIOD) == 0) {
-+			/* Give the journal a periodic chance to detect rotated journal files to be cleaned up. */
-+			r = sd_journal_process(j);
-+			if (r < 0) {
-+				errmsg.LogError(-r, RS_RET_ERR, "imjournal: sd_journal_process() failed");
-+				tryRecover();
-+				continue;
-+			}
-+		}
-+
- 		if (cs.stateFile) { /* can't persist without a state file */
- 			/* TODO: This could use some finer metric. */
--			count++;
--			if (count == cs.iPersistStateInterval) {
--				count = 0;
-+			if ((count % cs.iPersistStateInterval) == 0) {
- 				persistJournalState();
- 			}
- 		}
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1559408-async-writer.patch b/SOURCES/rsyslog-8.24.0-rhbz1559408-async-writer.patch
new file mode 100644
index 0000000..53563de
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1559408-async-writer.patch
@@ -0,0 +1,18 @@
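+Keep the stream mutex held while flushing timed-out buffered data in
+asyncWriterThread(). Releasing the lock around strmFlushInternal() left a
+window in which another thread could manipulate the stream mid-flush;
+presumably the race behind rhbz#1559408.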
+diff -up ./runtime/stream.c.fix ./runtime/stream.c
+--- ./runtime/stream.c.fix	2018-06-25 17:39:39.223082288 +0200
++++ ./runtime/stream.c	2018-06-25 17:40:26.549846798 +0200
+@@ -1427,10 +1427,8 @@ asyncWriterThread(void *pPtr)
+ 			}
+ 			if(bTimedOut && pThis->iBufPtr > 0) {
+ 				/* if we timed out, we need to flush pending data */
+-				d_pthread_mutex_unlock(&pThis->mut);
+ 				strmFlushInternal(pThis, 1);
+ 				bTimedOut = 0;
+-				d_pthread_mutex_lock(&pThis->mut); 
+ 				continue;
+ 			}
+ 			bTimedOut = 0;
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1565214-omelasticsearch-replace-cJSON-with-libfastjson.patch b/SOURCES/rsyslog-8.24.0-rhbz1565214-omelasticsearch-replace-cJSON-with-libfastjson.patch
new file mode 100644
index 0000000..37c7a95
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1565214-omelasticsearch-replace-cJSON-with-libfastjson.patch
@@ -0,0 +1,1105 @@
+From 6267b5a57c432a3be68f362c571beb062d47b3a7 Mon Sep 17 00:00:00 2001
+From: PascalWithopf <pwithopf@adiscon.com>
+Date: Tue, 23 May 2017 15:32:34 +0200
+Subject: [PATCH 10/11] omelasticsearch: replace cJSON with libfastjson
+
+(cherry picked from commit 7982f50675471220c5ba035371a8f7537a50442b)
+(cherry picked from commit 0b09c29db0cec5a215a95d03cfc37a27e486811c)
+---
+ plugins/omelasticsearch/Makefile.am       |   3 +-
+ plugins/omelasticsearch/cJSON/cjson.c     | 525 ------------------------------
+ plugins/omelasticsearch/cJSON/cjson.h     | 130 --------
+ plugins/omelasticsearch/omelasticsearch.c | 171 +++++-----
+ 12 files changed, 84 insertions(+), 1323 deletions(-)
+ delete mode 100644 plugins/omelasticsearch/cJSON/cjson.c
+ delete mode 100644 plugins/omelasticsearch/cJSON/cjson.h
+
+diff --git a/plugins/omelasticsearch/Makefile.am b/plugins/omelasticsearch/Makefile.am
+index ba85a896d..2fadb74dc 100644
+--- a/plugins/omelasticsearch/Makefile.am
++++ b/plugins/omelasticsearch/Makefile.am
+@@ -1,7 +1,6 @@
+ pkglib_LTLIBRARIES = omelasticsearch.la
+ 
+-# TODO: replace cJSON
+-omelasticsearch_la_SOURCES = omelasticsearch.c cJSON/cjson.c  cJSON/cjson.h
++omelasticsearch_la_SOURCES = omelasticsearch.c
+ omelasticsearch_la_CPPFLAGS =  $(RSRT_CFLAGS) $(PTHREADS_CFLAGS)
+ omelasticsearch_la_LDFLAGS = -module -avoid-version
+ omelasticsearch_la_LIBADD =  $(CURL_LIBS) $(LIBM)
+diff --git a/plugins/omelasticsearch/cJSON/cjson.c b/plugins/omelasticsearch/cJSON/cjson.c
+deleted file mode 100644
+index 6f7d43a23..000000000
+--- a/plugins/omelasticsearch/cJSON/cjson.c
++++ /dev/null
+@@ -1,525 +0,0 @@
+-/*
+-  Copyright (c) 2009 Dave Gamble
+-
+-  Permission is hereby granted, free of charge, to any person obtaining a copy
+-  of this software and associated documentation files (the "Software"), to deal
+-  in the Software without restriction, including without limitation the rights
+-  to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+-  copies of the Software, and to permit persons to whom the Software is
+-  furnished to do so, subject to the following conditions:
+-
+-  The above copyright notice and this permission notice shall be included in
+-  all copies or substantial portions of the Software.
+-
+-  THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+-  IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+-  FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+-  AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+-  LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+-  OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+-  THE SOFTWARE.
+-*/
+-
+-/* this code has several warnings, but we ignore them because
+- * this seems to work and we do not want to engage in that code body. If
+- * we really run into troubles, it is better to change to libfastjson, which
+- * we should do in the medium to long term anyhow...
+- */
+-#pragma GCC diagnostic ignored "-Wmissing-prototypes"
+-#pragma GCC diagnostic ignored "-Wredundant-decls"
+-#pragma GCC diagnostic ignored "-Wstrict-prototypes"
+-#pragma GCC diagnostic ignored "-Wswitch-default"
+-#pragma GCC diagnostic ignored "-Wold-style-definition"
+-
+-/* cJSON */
+-/* JSON parser in C. */
+-
+-#include <string.h>
+-#include <stdio.h>
+-#include <math.h>
+-#include <stdlib.h>
+-#include <float.h>
+-#include <limits.h>
+-#include <ctype.h>
+-#include "cjson.h"
+-
+-static const char *ep;
+-
+-const char *cJSON_GetErrorPtr() {return ep;}
+-
+-static int cJSON_strcasecmp(const char *s1,const char *s2)
+-{
+-	if (!s1) return (s1==s2)?0:1;if (!s2) return 1;
+-	for(; tolower(*s1) == tolower(*s2); ++s1, ++s2)	if(*s1 == 0)	return 0;
+-	return tolower(*(const unsigned char *)s1) - tolower(*(const unsigned char *)s2);
+-}
+-
+-static void *(*cJSON_malloc)(size_t sz) = malloc;
+-static void (*cJSON_free)(void *ptr) = free;
+-
+-static char* cJSON_strdup(const char* str)
+-{
+-      size_t len;
+-      char* copy;
+-
+-      len = strlen(str) + 1;
+-      if (!(copy = (char*)cJSON_malloc(len))) return 0;
+-      memcpy(copy,str,len);
+-      return copy;
+-}
+-
+-void cJSON_InitHooks(cJSON_Hooks* hooks)
+-{
+-    if (!hooks) { /* Reset hooks */
+-        cJSON_malloc = malloc;
+-        cJSON_free = free;
+-        return;
+-    }
+-
+-	cJSON_malloc = (hooks->malloc_fn)?hooks->malloc_fn:malloc;
+-	cJSON_free	 = (hooks->free_fn)?hooks->free_fn:free;
+-}
+-
+-/* Internal constructor. */
+-static cJSON *cJSON_New_Item()
+-{
+-	cJSON* node = (cJSON*)cJSON_malloc(sizeof(cJSON));
+-	if (node) memset(node,0,sizeof(cJSON));
+-	return node;
+-}
+-
+-/* Delete a cJSON structure. */
+-void cJSON_Delete(cJSON *c)
+-{
+-	cJSON *next;
+-	while (c)
+-	{
+-		next=c->next;
+-		if (!(c->type&cJSON_IsReference) && c->child) cJSON_Delete(c->child);
+-		if (!(c->type&cJSON_IsReference) && c->valuestring) cJSON_free(c->valuestring);
+-		if (c->string) cJSON_free(c->string);
+-		cJSON_free(c);
+-		c=next;
+-	}
+-}
+-
+-/* Parse the input text to generate a number, and populate the result into item. */
+-static const char *parse_number(cJSON *item,const char *num)
+-{
+-	double n=0,sign=1,scale=0;int subscale=0,signsubscale=1;
+-
+-	/* Could use sscanf for this? */
+-	if (*num=='-') sign=-1,num++;	/* Has sign? */
+-	if (*num=='0') num++;			/* is zero */
+-	if (*num>='1' && *num<='9')	do	n=(n*10.0)+(*num++ -'0');	while (*num>='0' && *num<='9');	/* Number? */
+-	if (*num=='.' && num[1]>='0' && num[1]<='9') {num++;		do	n=(n*10.0)+(*num++ -'0'),scale--; while (*num>='0' && *num<='9');}	/* Fractional part? */
+-	if (*num=='e' || *num=='E')		/* Exponent? */
+-	{	num++;if (*num=='+') num++;	else if (*num=='-') signsubscale=-1,num++;		/* With sign? */
+-		while (*num>='0' && *num<='9') subscale=(subscale*10)+(*num++ - '0');	/* Number? */
+-	}
+-
+-	n=sign*n*pow(10.0,(scale+subscale*signsubscale));	/* number = +/- number.fraction * 10^+/- exponent */
+-	
+-	item->valuedouble=n;
+-	item->valueint=(int)n;
+-	item->type=cJSON_Number;
+-	return num;
+-}
+-
+-/* Render the number nicely from the given item into a string. */
+-char *cJSON_print_number(cJSON *item)
+-{
+-	char *str;
+-	double d=item->valuedouble;
+-	if (fabs(((double)item->valueint)-d)<=DBL_EPSILON && d<=INT_MAX && d>=INT_MIN)
+-	{
+-		str=(char*)cJSON_malloc(21);	/* 2^64+1 can be represented in 21 chars. */
+-		if (str) sprintf(str,"%d",item->valueint);
+-	}
+-	else
+-	{
+-		str=(char*)cJSON_malloc(64);	/* This is a nice tradeoff. */
+-		if (str)
+-		{
+-			if (fabs(floor(d)-d)<=DBL_EPSILON)			sprintf(str,"%.0f",d);
+-			else if (fabs(d)<1.0e-6 || fabs(d)>1.0e9)	sprintf(str,"%e",d);
+-			else										sprintf(str,"%f",d);
+-		}
+-	}
+-	return str;
+-}
+-
+-/* Parse the input text into an unescaped cstring, and populate item. */
+-static const unsigned char firstByteMark[7] = { 0x00, 0x00, 0xC0, 0xE0, 0xF0, 0xF8, 0xFC };
+-static const char *parse_string(cJSON *item,const char *str)
+-{
+-	const char *ptr=str+1;char *ptr2;char *out;int len=0;unsigned uc,uc2;
+-	if (*str!='\"') {ep=str;return 0;}	/* not a string! */
+-	
+-	while (*ptr!='\"' && *ptr && ++len) if (*ptr++ == '\\') ptr++;	/* Skip escaped quotes. */
+-	
+-	out=(char*)cJSON_malloc(len+1);	/* This is how long we need for the string, roughly. */
+-	if (!out) return 0;
+-	
+-	ptr=str+1;ptr2=out;
+-	while (*ptr!='\"' && *ptr)
+-	{
+-		if (*ptr!='\\') *ptr2++=*ptr++;
+-		else
+-		{
+-			ptr++;
+-			switch (*ptr)
+-			{
+-				case 'b': *ptr2++='\b';	break;
+-				case 'f': *ptr2++='\f';	break;
+-				case 'n': *ptr2++='\n';	break;
+-				case 'r': *ptr2++='\r';	break;
+-				case 't': *ptr2++='\t';	break;
+-				case 'u':	 /* transcode utf16 to utf8. */
+-					sscanf(ptr+1,"%4x",&uc);ptr+=4;	/* get the unicode char. */
+-
+-					if ((uc>=0xDC00 && uc<=0xDFFF) || uc==0)	break;	// check for invalid.
+-
+-					if (uc>=0xD800 && uc<=0xDBFF)	// UTF16 surrogate pairs.
+-					{
+-						if (ptr[1]!='\\' || ptr[2]!='u')	break;	// missing second-half of surrogate.
+-						sscanf(ptr+3,"%4x",&uc2);ptr+=6;
+-						if (uc2<0xDC00 || uc2>0xDFFF)		break;	// invalid second-half of surrogate.
+-						uc=0x10000 | ((uc&0x3FF)<<10) | (uc2&0x3FF);
+-					}
+-
+-					len=4;if (uc<0x80) len=1;else if (uc<0x800) len=2;else if (uc<0x10000) len=3; ptr2+=len;
+-					
+-					switch (len) {
+-						case 4: *--ptr2 =((uc | 0x80) & 0xBF); uc >>= 6;
+-						case 3: *--ptr2 =((uc | 0x80) & 0xBF); uc >>= 6;
+-						case 2: *--ptr2 =((uc | 0x80) & 0xBF); uc >>= 6;
+-						case 1: *--ptr2 =(uc | firstByteMark[len]);
+-					}
+-					ptr2+=len;
+-					break;
+-				default:  *ptr2++=*ptr; break;
+-			}
+-			ptr++;
+-		}
+-	}
+-	*ptr2=0;
+-	if (*ptr=='\"') ptr++;
+-	item->valuestring=out;
+-	item->type=cJSON_String;
+-	return ptr;
+-}
+-
+-/* Render the cstring provided to an escaped version that can be printed. */
+-static char *print_string_ptr(const char *str)
+-{
+-	const char *ptr;char *ptr2,*out;int len=0;unsigned char token;
+-	
+-	if (!str) return cJSON_strdup("");
+-	ptr=str;while ((token=*ptr) && ++len) {if (strchr("\"\\\b\f\n\r\t",token)) len++; else if (token<32) len+=5;ptr++;}
+-	
+-	out=(char*)cJSON_malloc(len+3);
+-	if (!out) return 0;
+-
+-	ptr2=out;ptr=str;
+-	*ptr2++='\"';
+-	while (*ptr)
+-	{
+-		if ((unsigned char)*ptr>31 && *ptr!='\"' && *ptr!='\\') *ptr2++=*ptr++;
+-		else
+-		{
+-			*ptr2++='\\';
+-			switch (token=*ptr++)
+-			{
+-				case '\\':	*ptr2++='\\';	break;
+-				case '\"':	*ptr2++='\"';	break;
+-				case '\b':	*ptr2++='b';	break;
+-				case '\f':	*ptr2++='f';	break;
+-				case '\n':	*ptr2++='n';	break;
+-				case '\r':	*ptr2++='r';	break;
+-				case '\t':	*ptr2++='t';	break;
+-				default: sprintf(ptr2,"u%04x",token);ptr2+=5;	break;	/* escape and print */
+-			}
+-		}
+-	}
+-	*ptr2++='\"';*ptr2++=0;
+-	return out;
+-}
+-/* Invote print_string_ptr (which is useful) on an item. */
+-static char *print_string(cJSON *item)	{return print_string_ptr(item->valuestring);}
+-
+-/* Predeclare these prototypes. */
+-static const char *parse_value(cJSON *item,const char *value);
+-static char *print_value(cJSON *item,int depth,int fmt);
+-static const char *parse_array(cJSON *item,const char *value);
+-static char *print_array(cJSON *item,int depth,int fmt);
+-static const char *parse_object(cJSON *item,const char *value);
+-static char *print_object(cJSON *item,int depth,int fmt);
+-
+-/* Utility to jump whitespace and cr/lf */
+-static const char *skip(const char *in) {while (in && *in && (unsigned char)*in<=32) in++; return in;}
+-
+-/* Parse an object - create a new root, and populate. */
+-cJSON *cJSON_Parse(const char *value)
+-{
+-	cJSON *c=cJSON_New_Item();
+-	ep=0;
+-	if (!c) return 0;       /* memory fail */
+-
+-	if (!parse_value(c,skip(value))) {cJSON_Delete(c);return 0;}
+-	return c;
+-}
+-
+-/* Render a cJSON item/entity/structure to text. */
+-char *cJSON_Print(cJSON *item)				{return print_value(item,0,1);}
+-char *cJSON_PrintUnformatted(cJSON *item)	{return print_value(item,0,0);}
+-
+-/* Parser core - when encountering text, process appropriately. */
+-static const char *parse_value(cJSON *item,const char *value)
+-{
+-	if (!value)						return 0;	/* Fail on null. */
+-	if (!strncmp(value,"null",4))	{ item->type=cJSON_NULL;  return value+4; }
+-	if (!strncmp(value,"false",5))	{ item->type=cJSON_False; return value+5; }
+-	if (!strncmp(value,"true",4))	{ item->type=cJSON_True; item->valueint=1;	return value+4; }
+-	if (*value=='\"')				{ return parse_string(item,value); }
+-	if (*value=='-' || (*value>='0' && *value<='9'))	{ return parse_number(item,value); }
+-	if (*value=='[')				{ return parse_array(item,value); }
+-	if (*value=='{')				{ return parse_object(item,value); }
+-
+-	ep=value;return 0;	/* failure. */
+-}
+-
+-/* Render a value to text. */
+-static char *print_value(cJSON *item,int depth,int fmt)
+-{
+-	char *out=0;
+-	if (!item) return 0;
+-	switch ((item->type)&255)
+-	{
+-		case cJSON_NULL:	out=cJSON_strdup("null");	break;
+-		case cJSON_False:	out=cJSON_strdup("false");break;
+-		case cJSON_True:	out=cJSON_strdup("true"); break;
+-		case cJSON_Number:	out=cJSON_print_number(item);break;
+-		case cJSON_String:	out=print_string(item);break;
+-		case cJSON_Array:	out=print_array(item,depth,fmt);break;
+-		case cJSON_Object:	out=print_object(item,depth,fmt);break;
+-	}
+-	return out;
+-}
+-
+-/* Build an array from input text. */
+-static const char *parse_array(cJSON *item,const char *value)
+-{
+-	cJSON *child;
+-	if (*value!='[')	{ep=value;return 0;}	/* not an array! */
+-
+-	item->type=cJSON_Array;
+-	value=skip(value+1);
+-	if (*value==']') return value+1;	/* empty array. */
+-
+-	item->child=child=cJSON_New_Item();
+-	if (!item->child) return 0;		 /* memory fail */
+-	value=skip(parse_value(child,skip(value)));	/* skip any spacing, get the value. */
+-	if (!value) return 0;
+-
+-	while (*value==',')
+-	{
+-		cJSON *new_item;
+-		if (!(new_item=cJSON_New_Item())) return 0; 	/* memory fail */
+-		child->next=new_item;new_item->prev=child;child=new_item;
+-		value=skip(parse_value(child,skip(value+1)));
+-		if (!value) return 0;	/* memory fail */
+-	}
+-
+-	if (*value==']') return value+1;	/* end of array */
+-	ep=value;return 0;	/* malformed. */
+-}
+-
+-/* Render an array to text */
+-static char *print_array(cJSON *item,int depth,int fmt)
+-{
+-	char **entries;
+-	char *out=0,*ptr,*ret;int len=5;
+-	cJSON *child=item->child;
+-	int numentries=0,i=0,fail=0;
+-	
+-	/* How many entries in the array? */
+-	while (child) numentries++,child=child->next;
+-	/* Allocate an array to hold the values for each */
+-	entries=(char**)cJSON_malloc(numentries*sizeof(char*));
+-	if (!entries) return 0;
+-	memset(entries,0,numentries*sizeof(char*));
+-	/* Retrieve all the results: */
+-	child=item->child;
+-	while (child && !fail)
+-	{
+-		ret=print_value(child,depth+1,fmt);
+-		entries[i++]=ret;
+-		if (ret) len+=strlen(ret)+2+(fmt?1:0); else fail=1;
+-		child=child->next;
+-	}
+-	
+-	/* If we didn't fail, try to malloc the output string */
+-	if (!fail) out=(char*)cJSON_malloc(len);
+-	/* If that fails, we fail. */
+-	if (!out) fail=1;
+-
+-	/* Handle failure. */
+-	if (fail)
+-	{
+-		for (i=0;i<numentries;i++) if (entries[i]) cJSON_free(entries[i]);
+-		cJSON_free(entries);
+-		return 0;
+-	}
+-	
+-	/* Compose the output array. */
+-	*out='[';
+-	ptr=out+1;*ptr=0;
+-	for (i=0;i<numentries;i++)
+-	{
+-		strcpy(ptr,entries[i]);ptr+=strlen(entries[i]);
+-		if (i!=numentries-1) {*ptr++=',';if(fmt)*ptr++=' ';*ptr=0;}
+-		cJSON_free(entries[i]);
+-	}
+-	cJSON_free(entries);
+-	*ptr++=']';*ptr++=0;
+-	return out;	
+-}
+-
+-/* Build an object from the text. */
+-static const char *parse_object(cJSON *item,const char *value)
+-{
+-	cJSON *child;
+-	if (*value!='{')	{ep=value;return 0;}	/* not an object! */
+-	
+-	item->type=cJSON_Object;
+-	value=skip(value+1);
+-	if (*value=='}') return value+1;	/* empty array. */
+-	
+-	item->child=child=cJSON_New_Item();
+-	if (!item->child) return 0;
+-	value=skip(parse_string(child,skip(value)));
+-	if (!value) return 0;
+-	child->string=child->valuestring;child->valuestring=0;
+-	if (*value!=':') {ep=value;return 0;}	/* fail! */
+-	value=skip(parse_value(child,skip(value+1)));	/* skip any spacing, get the value. */
+-	if (!value) return 0;
+-	
+-	while (*value==',')
+-	{
+-		cJSON *new_item;
+-		if (!(new_item=cJSON_New_Item()))	return 0; /* memory fail */
+-		child->next=new_item;new_item->prev=child;child=new_item;
+-		value=skip(parse_string(child,skip(value+1)));
+-		if (!value) return 0;
+-		child->string=child->valuestring;child->valuestring=0;
+-		if (*value!=':') {ep=value;return 0;}	/* fail! */
+-		value=skip(parse_value(child,skip(value+1)));	/* skip any spacing, get the value. */
+-		if (!value) return 0;
+-	}
+-	
+-	if (*value=='}') return value+1;	/* end of array */
+-	ep=value;return 0;	/* malformed. */
+-}
+-
+-/* Render an object to text. */
+-static char *print_object(cJSON *item,int depth,int fmt)
+-{
+-	char **entries=0,**names=0;
+-	char *out=0,*ptr,*ret,*str;int len=7,i=0,j;
+-	cJSON *child=item->child;
+-	int numentries=0,fail=0;
+-	/* Count the number of entries. */
+-	while (child) numentries++,child=child->next;
+-	/* Allocate space for the names and the objects */
+-	entries=(char**)cJSON_malloc(numentries*sizeof(char*));
+-	if (!entries) return 0;
+-	names=(char**)cJSON_malloc(numentries*sizeof(char*));
+-	if (!names) {cJSON_free(entries);return 0;}
+-	memset(entries,0,sizeof(char*)*numentries);
+-	memset(names,0,sizeof(char*)*numentries);
+-
+-	/* Collect all the results into our arrays: */
+-	child=item->child;depth++;if (fmt) len+=depth;
+-	while (child)
+-	{
+-		names[i]=str=print_string_ptr(child->string);
+-		entries[i++]=ret=print_value(child,depth,fmt);
+-		if (str && ret) len+=strlen(ret)+strlen(str)+2+(fmt?2+depth:0); else fail=1;
+-		child=child->next;
+-	}
+-	
+-	/* Try to allocate the output string */
+-	if (!fail) out=(char*)cJSON_malloc(len);
+-	if (!out) fail=1;
+-
+-	/* Handle failure */
+-	if (fail)
+-	{
+-		for (i=0;i<numentries;i++) {if (names[i]) cJSON_free(names[i]);if (entries[i]) cJSON_free(entries[i]);}
+-		cJSON_free(names);cJSON_free(entries);
+-		return 0;
+-	}
+-	
+-	/* Compose the output: */
+-	*out='{';ptr=out+1;if (fmt)*ptr++='\n';*ptr=0;
+-	for (i=0;i<numentries;i++)
+-	{
+-		if (fmt) for (j=0;j<depth;j++) *ptr++='\t';
+-		strcpy(ptr,names[i]);ptr+=strlen(names[i]);
+-		*ptr++=':';if (fmt) *ptr++='\t';
+-		strcpy(ptr,entries[i]);ptr+=strlen(entries[i]);
+-		if (i!=numentries-1) *ptr++=',';
+-		if (fmt) *ptr++='\n';*ptr=0;
+-		cJSON_free(names[i]);cJSON_free(entries[i]);
+-	}
+-	
+-	cJSON_free(names);cJSON_free(entries);
+-	if (fmt) for (i=0;i<depth-1;i++) *ptr++='\t';
+-	*ptr++='}';*ptr++=0;
+-	return out;	
+-}
+-
+-/* Get Array size/item / object item. */
+-int    cJSON_GetArraySize(cJSON *array)							{cJSON *c=array->child;int i=0;while(c)i++,c=c->next;return i;}
+-cJSON *cJSON_GetArrayItem(cJSON *array,int item)				{cJSON *c=array->child;  while (c && item>0) item--,c=c->next; return c;}
+-cJSON *cJSON_GetObjectItem(cJSON *object,const char *string)	{cJSON *c=object->child; while (c && cJSON_strcasecmp(c->string,string)) c=c->next; return c;}
+-
+-/* Utility for array list handling. */
+-static void suffix_object(cJSON *prev,cJSON *item) {prev->next=item;item->prev=prev;}
+-/* Utility for handling references. */
+-static cJSON *create_reference(cJSON *item) {cJSON *ref=cJSON_New_Item();if (!ref) return 0;memcpy(ref,item,sizeof(cJSON));ref->string=0;ref->type|=cJSON_IsReference;ref->next=ref->prev=0;return ref;}
+-
+-/* Add item to array/object. */
+-void   cJSON_AddItemToArray(cJSON *array, cJSON *item)						{cJSON *c=array->child;if (!item) return; if (!c) {array->child=item;} else {while (c && c->next) c=c->next; suffix_object(c,item);}}
+-void   cJSON_AddItemToObject(cJSON *object,const char *string,cJSON *item)	{if (!item) return; if (item->string) cJSON_free(item->string);item->string=cJSON_strdup(string);cJSON_AddItemToArray(object,item);}
+-void	cJSON_AddItemReferenceToArray(cJSON *array, cJSON *item)						{cJSON_AddItemToArray(array,create_reference(item));}
+-void	cJSON_AddItemReferenceToObject(cJSON *object,const char *string,cJSON *item)	{cJSON_AddItemToObject(object,string,create_reference(item));}
+-
+-cJSON *cJSON_DetachItemFromArray(cJSON *array,int which)			{cJSON *c=array->child;while (c && which>0) c=c->next,which--;if (!c) return 0;
+-	if (c->prev) c->prev->next=c->next;if (c->next) c->next->prev=c->prev;if (c==array->child) array->child=c->next;c->prev=c->next=0;return c;}
+-void   cJSON_DeleteItemFromArray(cJSON *array,int which)			{cJSON_Delete(cJSON_DetachItemFromArray(array,which));}
+-cJSON *cJSON_DetachItemFromObject(cJSON *object,const char *string) {int i=0;cJSON *c=object->child;while (c && cJSON_strcasecmp(c->string,string)) i++,c=c->next;if (c) return cJSON_DetachItemFromArray(object,i);return 0;}
+-void   cJSON_DeleteItemFromObject(cJSON *object,const char *string) {cJSON_Delete(cJSON_DetachItemFromObject(object,string));}
+-
+-/* Replace array/object items with new ones. */
+-void   cJSON_ReplaceItemInArray(cJSON *array,int which,cJSON *newitem)		{cJSON *c=array->child;while (c && which>0) c=c->next,which--;if (!c) return;
+-	newitem->next=c->next;newitem->prev=c->prev;if (newitem->next) newitem->next->prev=newitem;
+-	if (c==array->child) array->child=newitem; else newitem->prev->next=newitem;c->next=c->prev=0;cJSON_Delete(c);}
+-void   cJSON_ReplaceItemInObject(cJSON *object,const char *string,cJSON *newitem){int i=0;cJSON *c=object->child;while(c && cJSON_strcasecmp(c->string,string))i++,c=c->next;if(c){newitem->string=cJSON_strdup(string);cJSON_ReplaceItemInArray(object,i,newitem);}}
+-
+-/* Create basic types: */
+-cJSON *cJSON_CreateNull()						{cJSON *item=cJSON_New_Item();if(item)item->type=cJSON_NULL;return item;}
+-cJSON *cJSON_CreateTrue()						{cJSON *item=cJSON_New_Item();if(item)item->type=cJSON_True;return item;}
+-cJSON *cJSON_CreateFalse()						{cJSON *item=cJSON_New_Item();if(item)item->type=cJSON_False;return item;}
+-cJSON *cJSON_CreateBool(int b)					{cJSON *item=cJSON_New_Item();if(item)item->type=b?cJSON_True:cJSON_False;return item;}
+-cJSON *cJSON_CreateNumber(double num)			{cJSON *item=cJSON_New_Item();if(item){item->type=cJSON_Number;item->valuedouble=num;item->valueint=(int)num;}return item;}
+-cJSON *cJSON_CreateString(const char *string)	{cJSON *item=cJSON_New_Item();if(item){item->type=cJSON_String;item->valuestring=cJSON_strdup(string);}return item;}
+-cJSON *cJSON_CreateArray()						{cJSON *item=cJSON_New_Item();if(item)item->type=cJSON_Array;return item;}
+-cJSON *cJSON_CreateObject()						{cJSON *item=cJSON_New_Item();if(item)item->type=cJSON_Object;return item;}
+-
+-/* Create Arrays: */
+-cJSON *cJSON_CreateIntArray(int *numbers,int count)				{int i;cJSON *n=0,*p=0,*a=cJSON_CreateArray();for(i=0;a && i<count;i++){n=cJSON_CreateNumber(numbers[i]);if(!i)a->child=n;else suffix_object(p,n);p=n;}return a;}
+-cJSON *cJSON_CreateFloatArray(float *numbers,int count)			{int i;cJSON *n=0,*p=0,*a=cJSON_CreateArray();for(i=0;a && i<count;i++){n=cJSON_CreateNumber(numbers[i]);if(!i)a->child=n;else suffix_object(p,n);p=n;}return a;}
+-cJSON *cJSON_CreateDoubleArray(double *numbers,int count)		{int i;cJSON *n=0,*p=0,*a=cJSON_CreateArray();for(i=0;a && i<count;i++){n=cJSON_CreateNumber(numbers[i]);if(!i)a->child=n;else suffix_object(p,n);p=n;}return a;}
+-cJSON *cJSON_CreateStringArray(const char **strings,int count)	{int i;cJSON *n=0,*p=0,*a=cJSON_CreateArray();for(i=0;a && i<count;i++){n=cJSON_CreateString(strings[i]);if(!i)a->child=n;else suffix_object(p,n);p=n;}return a;}
+diff --git a/plugins/omelasticsearch/cJSON/cjson.h b/plugins/omelasticsearch/cJSON/cjson.h
+deleted file mode 100644
+index a621720ce..000000000
+--- a/plugins/omelasticsearch/cJSON/cjson.h
++++ /dev/null
+@@ -1,130 +0,0 @@
+-/*
+-  Copyright (c) 2009 Dave Gamble
+- 
+-  Permission is hereby granted, free of charge, to any person obtaining a copy
+-  of this software and associated documentation files (the "Software"), to deal
+-  in the Software without restriction, including without limitation the rights
+-  to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+-  copies of the Software, and to permit persons to whom the Software is
+-  furnished to do so, subject to the following conditions:
+- 
+-  The above copyright notice and this permission notice shall be included in
+-  all copies or substantial portions of the Software.
+- 
+-  THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+-  IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+-  FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+-  AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+-  LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+-  OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+-  THE SOFTWARE.
+-*/
+-
+-#ifndef cJSON__h
+-#define cJSON__h
+-
+-#ifdef __cplusplus
+-extern "C"
+-{
+-#endif
+-
+-/* cJSON Types: */
+-#define cJSON_False 0
+-#define cJSON_True 1
+-#define cJSON_NULL 2
+-#define cJSON_Number 3
+-#define cJSON_String 4
+-#define cJSON_Array 5
+-#define cJSON_Object 6
+-	
+-#define cJSON_IsReference 256
+-
+-/* The cJSON structure: */
+-typedef struct cJSON {
+-	struct cJSON *next,*prev;	/* next/prev allow you to walk array/object chains. Alternatively, use GetArraySize/GetArrayItem/GetObjectItem */
+-	struct cJSON *child;		/* An array or object item will have a child pointer pointing to a chain of the items in the array/object. */
+-
+-	int type;					/* The type of the item, as above. */
+-
+-	char *valuestring;			/* The item's string, if type==cJSON_String */
+-	int valueint;				/* The item's number, if type==cJSON_Number */
+-	double valuedouble;			/* The item's number, if type==cJSON_Number */
+-
+-	char *string;				/* The item's name string, if this item is the child of, or is in the list of subitems of an object. */
+-} cJSON;
+-
+-typedef struct cJSON_Hooks {
+-      void *(*malloc_fn)(size_t sz);
+-      void (*free_fn)(void *ptr);
+-} cJSON_Hooks;
+-
+-/* Supply malloc, realloc and free functions to cJSON */
+-extern void cJSON_InitHooks(cJSON_Hooks* hooks);
+-
+-
+-/* Supply a block of JSON, and this returns a cJSON object you can interrogate. Call cJSON_Delete when finished. */
+-extern cJSON *cJSON_Parse(const char *value);
+-/* Render a cJSON entity to text for transfer/storage. Free the char* when finished. */
+-extern char  *cJSON_Print(cJSON *item);
+-/* Render a cJSON entity to text for transfer/storage without any formatting. Free the char* when finished. */
+-extern char  *cJSON_PrintUnformatted(cJSON *item);
+-/* Delete a cJSON entity and all subentities. */
+-extern void   cJSON_Delete(cJSON *c);
+-
+-/* Returns the number of items in an array (or object). */
+-extern int	  cJSON_GetArraySize(cJSON *array);
+-/* Retrieve item number "item" from array "array". Returns NULL if unsuccessful. */
+-extern cJSON *cJSON_GetArrayItem(cJSON *array,int item);
+-/* Get item "string" from object. Case insensitive. */
+-extern cJSON *cJSON_GetObjectItem(cJSON *object,const char *string);
+-
+-/* For analysing failed parses. This returns a pointer to the parse error. You'll probably need to look a few chars back to make sense of it. Defined when cJSON_Parse() returns 0. 0 when cJSON_Parse() succeeds. */
+-extern const char *cJSON_GetErrorPtr();
+-	
+-/* These calls create a cJSON item of the appropriate type. */
+-extern cJSON *cJSON_CreateNull();
+-extern cJSON *cJSON_CreateTrue();
+-extern cJSON *cJSON_CreateFalse();
+-extern cJSON *cJSON_CreateBool(int b);
+-extern cJSON *cJSON_CreateNumber(double num);
+-extern cJSON *cJSON_CreateString(const char *string);
+-extern cJSON *cJSON_CreateArray();
+-extern cJSON *cJSON_CreateObject();
+-
+-/* These utilities create an Array of count items. */
+-extern cJSON *cJSON_CreateIntArray(int *numbers,int count);
+-extern cJSON *cJSON_CreateFloatArray(float *numbers,int count);
+-extern cJSON *cJSON_CreateDoubleArray(double *numbers,int count);
+-extern cJSON *cJSON_CreateStringArray(const char **strings,int count);
+-
+-/* Append item to the specified array/object. */
+-extern void cJSON_AddItemToArray(cJSON *array, cJSON *item);
+-extern void	cJSON_AddItemToObject(cJSON *object,const char *string,cJSON *item);
+-/* Append reference to item to the specified array/object. Use this when you want to add an existing cJSON to a new cJSON, but don't want to corrupt your existing cJSON. */
+-extern void cJSON_AddItemReferenceToArray(cJSON *array, cJSON *item);
+-extern void	cJSON_AddItemReferenceToObject(cJSON *object,const char *string,cJSON *item);
+-
+-/* Remove/Detatch items from Arrays/Objects. */
+-extern cJSON *cJSON_DetachItemFromArray(cJSON *array,int which);
+-extern void   cJSON_DeleteItemFromArray(cJSON *array,int which);
+-extern cJSON *cJSON_DetachItemFromObject(cJSON *object,const char *string);
+-extern void   cJSON_DeleteItemFromObject(cJSON *object,const char *string);
+-	
+-/* Update array items. */
+-extern void cJSON_ReplaceItemInArray(cJSON *array,int which,cJSON *newitem);
+-extern void cJSON_ReplaceItemInObject(cJSON *object,const char *string,cJSON *newitem);
+-
+-/* rger: added helpers */
+-
+-char *cJSON_print_number(cJSON *item);
+-#define cJSON_AddNullToObject(object,name)	cJSON_AddItemToObject(object, name, cJSON_CreateNull())
+-#define cJSON_AddTrueToObject(object,name)	cJSON_AddItemToObject(object, name, cJSON_CreateTrue())
+-#define cJSON_AddFalseToObject(object,name)		cJSON_AddItemToObject(object, name, cJSON_CreateFalse())
+-#define cJSON_AddNumberToObject(object,name,n)	cJSON_AddItemToObject(object, name, cJSON_CreateNumber(n))
+-#define cJSON_AddStringToObject(object,name,s)	cJSON_AddItemToObject(object, name, cJSON_CreateString(s))
+-
+-#ifdef __cplusplus
+-}
+-#endif
+-
+-#endif
+diff --git a/plugins/omelasticsearch/omelasticsearch.c b/plugins/omelasticsearch/omelasticsearch.c
+index 88bd5e16c..ed2b47535 100644
+--- a/plugins/omelasticsearch/omelasticsearch.c
++++ b/plugins/omelasticsearch/omelasticsearch.c
+@@ -41,7 +41,7 @@
+ #if defined(__FreeBSD__)
+ #include <unistd.h>
+ #endif
+-#include "cJSON/cjson.h"
++#include <json.h>
+ #include "conf.h"
+ #include "syslogd-types.h"
+ #include "srUtils.h"
+@@ -626,29 +626,29 @@ finalize_it:
+  * Dumps entire bulk request and response in error log
+  */
+ static rsRetVal
+-getDataErrorDefault(wrkrInstanceData_t *pWrkrData,cJSON **pReplyRoot,uchar *reqmsg,char **rendered)
++getDataErrorDefault(wrkrInstanceData_t *pWrkrData,fjson_object **pReplyRoot,uchar *reqmsg,char **rendered)
+ {
+ 	DEFiRet;
+-	cJSON *req=0;
+-	cJSON *errRoot=0;
+-	cJSON *replyRoot = *pReplyRoot;
++	fjson_object *req=NULL;
++	fjson_object *errRoot=NULL;
++	fjson_object *replyRoot = *pReplyRoot;
+ 
+-	if((req=cJSON_CreateObject()) == NULL) ABORT_FINALIZE(RS_RET_ERR);
+-	cJSON_AddItemToObject(req, "url", cJSON_CreateString((char*)pWrkrData->restURL));
+-	cJSON_AddItemToObject(req, "postdata", cJSON_CreateString((char*)reqmsg));
++	if((req=fjson_object_new_object()) == NULL) ABORT_FINALIZE(RS_RET_ERR);
++	fjson_object_object_add(req, "url", fjson_object_new_string((char*)pWrkrData->restURL));
++	fjson_object_object_add(req, "postdata", fjson_object_new_string((char*)reqmsg));
+ 
+-	if((errRoot=cJSON_CreateObject()) == NULL) ABORT_FINALIZE(RS_RET_ERR);
+-	cJSON_AddItemToObject(errRoot, "request", req);
+-	cJSON_AddItemToObject(errRoot, "reply", replyRoot);
+-	*rendered = cJSON_Print(errRoot);
++	if((errRoot=fjson_object_new_object()) == NULL) ABORT_FINALIZE(RS_RET_ERR);
++	fjson_object_object_add(errRoot, "request", req);
++	fjson_object_object_add(errRoot, "reply", replyRoot);
++	*rendered = strdup((char*)fjson_object_to_json_string(errRoot));
+ 
+-	req=0;
+-	cJSON_Delete(errRoot);
++	req=NULL;
++	fjson_object_put(errRoot);
+ 
+ 	*pReplyRoot = NULL; /* tell caller not to delete once again! */
+ 
+ 	finalize_it:
+-		cJSON_Delete(req);
++		fjson_object_put(req);
+ 		RETiRet;
+ }
+ 
+@@ -703,8 +703,8 @@ finalize_it:
+ /*
+  * check the status of response from ES
+  */
+-static int checkReplyStatus(cJSON* ok) {
+-	return (ok == NULL || ok->type != cJSON_Number || ok->valueint < 0 || ok->valueint > 299);
++static int checkReplyStatus(fjson_object* ok) {
++	return (ok == NULL || !fjson_object_is_type(ok, fjson_type_int) || fjson_object_get_int(ok) < 0 || fjson_object_get_int(ok) > 299);
+ }
+ 
+ /*
+@@ -712,7 +712,7 @@ static int checkReplyStatus(cJSON* ok) {
+  */
+ typedef struct exeContext{
+ 	int statusCheckOnly;
+-	cJSON *errRoot;
++	fjson_object *errRoot;
+ 	rsRetVal (*prepareErrorFileContent)(struct exeContext *ctx,int itemStatus,char *request,char *response);
+ 
+ 
+@@ -722,25 +722,24 @@ typedef struct exeContext{
+  * get content to be written in error file using context passed
+  */
+ static rsRetVal
+-parseRequestAndResponseForContext(wrkrInstanceData_t *pWrkrData,cJSON **pReplyRoot,uchar *reqmsg,context *ctx)
++parseRequestAndResponseForContext(wrkrInstanceData_t *pWrkrData,fjson_object **pReplyRoot,uchar *reqmsg,context *ctx)
+ {
+ 	DEFiRet;
+-	cJSON *replyRoot = *pReplyRoot;
++	fjson_object *replyRoot = *pReplyRoot;
+ 	int i;
+ 	int numitems;
+-	cJSON *items=0;
++	fjson_object *items=NULL;
+ 
+ 
+ 	/*iterate over items*/
+-	items = cJSON_GetObjectItem(replyRoot, "items");
+-	if(items == NULL || items->type != cJSON_Array) {
++	if(!fjson_object_object_get_ex(replyRoot, "items", &items)) {
+ 		DBGPRINTF("omelasticsearch: error in elasticsearch reply: "
+ 			  "bulkmode insert does not return array, reply is: %s\n",
+ 			  pWrkrData->reply);
+ 		ABORT_FINALIZE(RS_RET_DATAFAIL);
+ 	}
+ 
+-	numitems = cJSON_GetArraySize(items);
++	numitems = fjson_object_array_length(items);
+ 
+ 	DBGPRINTF("omelasticsearch: Entire request %s\n",reqmsg);
+ 	const char *lastReqRead= (char*)reqmsg;
+@@ -748,32 +747,32 @@ parseRequestAndResponseForContext(wrkrInstanceData_t *pWrkrData,cJSON **pReplyRo
+ 	DBGPRINTF("omelasticsearch: %d items in reply\n", numitems);
+ 	for(i = 0 ; i < numitems ; ++i) {
+ 
+-		cJSON *item=0;
+-		cJSON *result=0;
+-		cJSON *ok=0;
++		fjson_object *item=NULL;
++		fjson_object *result=NULL;
++		fjson_object *ok=NULL;
+ 		int itemStatus=0;
+-		item = cJSON_GetArrayItem(items, i);
++		item = fjson_object_array_get_idx(items, i);
+ 		if(item == NULL)  {
+ 			DBGPRINTF("omelasticsearch: error in elasticsearch reply: "
+ 				  "cannot obtain reply array item %d\n", i);
+ 			ABORT_FINALIZE(RS_RET_DATAFAIL);
+ 		}
+-		result = item->child;
+-		if(result == NULL || result->type != cJSON_Object) {
++		fjson_object_object_get_ex(item, "create", &result);
++		if(result == NULL || !fjson_object_is_type(result, fjson_type_object)) {
+ 			DBGPRINTF("omelasticsearch: error in elasticsearch reply: "
+ 				  "cannot obtain 'result' item for #%d\n", i);
+ 			ABORT_FINALIZE(RS_RET_DATAFAIL);
+ 		}
+ 
+-		ok = cJSON_GetObjectItem(result, "status");
++		fjson_object_object_get_ex(result, "status", &ok);
+ 		itemStatus = checkReplyStatus(ok);
+-
++		
+ 		char *request =0;
+ 		char *response =0;
+ 		if(ctx->statusCheckOnly)
+ 		{
+ 			if(itemStatus) {
+-				DBGPRINTF("omelasticsearch: error in elasticsearch reply: item %d, status is %d\n", i, ok->valueint);
++				DBGPRINTF("omelasticsearch: error in elasticsearch reply: item %d, status is %d\n", i, fjson_object_get_int(ok));
+ 				DBGPRINTF("omelasticsearch: status check found error.\n");
+ 				ABORT_FINALIZE(RS_RET_DATAFAIL);
+ 			}
+@@ -786,13 +785,12 @@ parseRequestAndResponseForContext(wrkrInstanceData_t *pWrkrData,cJSON **pReplyRo
+ 				DBGPRINTF("omelasticsearch: Couldn't get post request\n");
+ 				ABORT_FINALIZE(RS_RET_ERR);
+ 			}
+-
+-			response = cJSON_PrintUnformatted(result);
++			response = (char*)fjson_object_to_json_string_ext(result, FJSON_TO_STRING_PLAIN);
+ 
+ 			if(response==NULL)
+ 			{
+ 				free(request);/*as its has been assigned.*/
+-				DBGPRINTF("omelasticsearch: Error getting cJSON_PrintUnformatted. Cannot continue\n");
++				DBGPRINTF("omelasticsearch: Error getting fjson_object_to_json_string_ext. Cannot continue\n");
+ 				ABORT_FINALIZE(RS_RET_ERR);
+ 			}
+ 
+@@ -801,7 +799,6 @@ parseRequestAndResponseForContext(wrkrInstanceData_t *pWrkrData,cJSON **pReplyRo
+ 
+ 			/*free memory in any case*/
+ 			free(request);
+-			free(response);
+ 
+ 			if(ret != RS_RET_OK)
+ 			{
+@@ -826,23 +823,23 @@ getDataErrorOnly(context *ctx,int itemStatus,char *request,char *response)
+ 	DEFiRet;
+ 	if(itemStatus)
+ 	{
+-		cJSON *onlyErrorResponses =0;
+-		cJSON *onlyErrorRequests=0;
++		fjson_object *onlyErrorResponses =NULL;
++		fjson_object *onlyErrorRequests=NULL;
+ 
+-		if((onlyErrorResponses=cJSON_GetObjectItem(ctx->errRoot, "reply")) == NULL)
++		if(!fjson_object_object_get_ex(ctx->errRoot, "reply", &onlyErrorResponses))
+ 		{
+ 			DBGPRINTF("omelasticsearch: Failed to get reply json array. Invalid context. Cannot continue\n");
+ 			ABORT_FINALIZE(RS_RET_ERR);
+ 		}
+-		cJSON_AddItemToArray(onlyErrorResponses, cJSON_CreateString(response));
++		fjson_object_array_add(onlyErrorResponses, fjson_object_new_string(response));
+ 
+-		if((onlyErrorRequests=cJSON_GetObjectItem(ctx->errRoot, "request")) == NULL)
++		if(!fjson_object_object_get_ex(ctx->errRoot, "request", &onlyErrorRequests))
+ 		{
+ 			DBGPRINTF("omelasticsearch: Failed to get request json array. Invalid context. Cannot continue\n");
+ 			ABORT_FINALIZE(RS_RET_ERR);
+ 		}
+ 
+-		cJSON_AddItemToArray(onlyErrorRequests, cJSON_CreateString(request));
++		fjson_object_array_add(onlyErrorRequests, fjson_object_new_string(request));
+ 
+ 	}
+ 
+@@ -861,24 +858,24 @@ getDataInterleaved(context *ctx,
+ 	char *response)
+ {
+ 	DEFiRet;
+-	cJSON *interleaved =0;
+-	if((interleaved=cJSON_GetObjectItem(ctx->errRoot, "response")) == NULL)
++	fjson_object *interleaved =NULL;
++	if(!fjson_object_object_get_ex(ctx->errRoot, "response", &interleaved))
+ 	{
+ 		DBGPRINTF("omelasticsearch: Failed to get response json array. Invalid context. Cannot continue\n");
+ 		ABORT_FINALIZE(RS_RET_ERR);
+ 	}
+ 
+-	cJSON *interleavedNode=0;
++	fjson_object *interleavedNode=NULL;
+ 	/*create interleaved node that has req and response json data*/
+-	if((interleavedNode=cJSON_CreateObject()) == NULL)
++	if((interleavedNode=fjson_object_new_object()) == NULL)
+ 	{
+ 		DBGPRINTF("omelasticsearch: Failed to create interleaved node. Cann't continue\n");
+ 		ABORT_FINALIZE(RS_RET_ERR);
+ 	}
+-	cJSON_AddItemToObject(interleavedNode,"request", cJSON_CreateString(request));
+-	cJSON_AddItemToObject(interleavedNode,"reply", cJSON_CreateString(response));
++	fjson_object_object_add(interleavedNode,"request", fjson_object_new_string(request));
++	fjson_object_object_add(interleavedNode,"reply", fjson_object_new_string(response));
+ 
+-	cJSON_AddItemToArray(interleaved, interleavedNode);
++	fjson_object_array_add(interleaved, interleavedNode);
+ 
+ 
+ 
+@@ -912,24 +909,24 @@ static rsRetVal
+ initializeErrorOnlyConext(wrkrInstanceData_t *pWrkrData,context *ctx){
+ 	DEFiRet;
+ 	ctx->statusCheckOnly=0;
+-	cJSON *errRoot=0;
+-	cJSON *onlyErrorResponses =0;
+-	cJSON *onlyErrorRequests=0;
+-	if((errRoot=cJSON_CreateObject()) == NULL) ABORT_FINALIZE(RS_RET_ERR);
++	fjson_object *errRoot=NULL;
++	fjson_object *onlyErrorResponses =NULL;
++	fjson_object *onlyErrorRequests=NULL;
++	if((errRoot=fjson_object_new_object()) == NULL) ABORT_FINALIZE(RS_RET_ERR);
+ 
+-	if((onlyErrorResponses=cJSON_CreateArray()) == NULL) {
+-		cJSON_Delete(errRoot);
++	if((onlyErrorResponses=fjson_object_new_array()) == NULL) {
++		fjson_object_put(errRoot);
+ 		ABORT_FINALIZE(RS_RET_ERR);
+ 	}
+-	if((onlyErrorRequests=cJSON_CreateArray()) == NULL) {
+-		cJSON_Delete(errRoot);
+-		cJSON_Delete(onlyErrorResponses);
++	if((onlyErrorRequests=fjson_object_new_array()) == NULL) {
++		fjson_object_put(errRoot);
++		fjson_object_put(onlyErrorResponses);
+ 		ABORT_FINALIZE(RS_RET_ERR);
+ 	}
+ 
+-	cJSON_AddItemToObject(errRoot, "url", cJSON_CreateString((char*)pWrkrData->restURL));
+-	cJSON_AddItemToObject(errRoot,"request",onlyErrorRequests);
+-	cJSON_AddItemToObject(errRoot, "reply", onlyErrorResponses);
++	fjson_object_object_add(errRoot, "url", fjson_object_new_string((char*)pWrkrData->restURL));
++	fjson_object_object_add(errRoot,"request",onlyErrorRequests);
++	fjson_object_object_add(errRoot, "reply", onlyErrorResponses);
+ 	ctx->errRoot = errRoot;
+ 	ctx->prepareErrorFileContent= &getDataErrorOnly;
+ 	finalize_it:
+@@ -943,17 +940,17 @@ static rsRetVal
+ initializeInterleavedConext(wrkrInstanceData_t *pWrkrData,context *ctx){
+ 	DEFiRet;
+ 	ctx->statusCheckOnly=0;
+-	cJSON *errRoot=0;
+-	cJSON *interleaved =0;
+-	if((errRoot=cJSON_CreateObject()) == NULL) ABORT_FINALIZE(RS_RET_ERR);
+-	if((interleaved=cJSON_CreateArray()) == NULL) {
+-		cJSON_Delete(errRoot);
++	fjson_object *errRoot=NULL;
++	fjson_object *interleaved =NULL;
++	if((errRoot=fjson_object_new_object()) == NULL) ABORT_FINALIZE(RS_RET_ERR);
++	if((interleaved=fjson_object_new_array()) == NULL) {
++		fjson_object_put(errRoot);
+ 		ABORT_FINALIZE(RS_RET_ERR);
+ 	}
+ 
+ 
+-	cJSON_AddItemToObject(errRoot, "url", cJSON_CreateString((char*)pWrkrData->restURL));
+-	cJSON_AddItemToObject(errRoot,"response",interleaved);
++	fjson_object_object_add(errRoot, "url", fjson_object_new_string((char*)pWrkrData->restURL));
++	fjson_object_object_add(errRoot,"response",interleaved);
+ 	ctx->errRoot = errRoot;
+ 	ctx->prepareErrorFileContent= &getDataInterleaved;
+ 	finalize_it:
+@@ -965,17 +962,17 @@ static rsRetVal
+ initializeErrorInterleavedConext(wrkrInstanceData_t *pWrkrData,context *ctx){
+ 	DEFiRet;
+ 	ctx->statusCheckOnly=0;
+-	cJSON *errRoot=0;
+-	cJSON *interleaved =0;
+-	if((errRoot=cJSON_CreateObject()) == NULL) ABORT_FINALIZE(RS_RET_ERR);
+-	if((interleaved=cJSON_CreateArray()) == NULL) {
+-		cJSON_Delete(errRoot);
++	fjson_object *errRoot=NULL;
++	fjson_object *interleaved =NULL;
++	if((errRoot=fjson_object_new_object()) == NULL) ABORT_FINALIZE(RS_RET_ERR);
++	if((interleaved=fjson_object_new_array()) == NULL) {
++		fjson_object_put(errRoot);
+ 		ABORT_FINALIZE(RS_RET_ERR);
+ 	}
+ 
+ 
+-	cJSON_AddItemToObject(errRoot, "url", cJSON_CreateString((char*)pWrkrData->restURL));
+-	cJSON_AddItemToObject(errRoot,"response",interleaved);
++	fjson_object_object_add(errRoot, "url", fjson_object_new_string((char*)pWrkrData->restURL));
++	fjson_object_object_add(errRoot,"response",interleaved);
+ 	ctx->errRoot = errRoot;
+ 	ctx->prepareErrorFileContent= &getDataErrorOnlyInterleaved;
+ 	finalize_it:
+@@ -988,7 +985,7 @@ initializeErrorInterleavedConext(wrkrInstanceData_t *pWrkrData,context *ctx){
+  * needs to be closed, HUP must be sent.
+  */
+ static rsRetVal
+-writeDataError(wrkrInstanceData_t *pWrkrData, instanceData *pData, cJSON **pReplyRoot, uchar *reqmsg)
++writeDataError(wrkrInstanceData_t *pWrkrData, instanceData *pData, fjson_object **pReplyRoot, uchar *reqmsg)
+ {
+ 	char *rendered = NULL;
+ 	size_t toWrite;
+@@ -1054,7 +1051,7 @@ writeDataError(wrkrInstanceData_t *pWrkrData, instanceData *pData, cJSON **pRepl
+ 			DBGPRINTF("omelasticsearch: error creating file content.\n");
+ 			ABORT_FINALIZE(RS_RET_ERR);
+ 		}
+-		rendered = cJSON_Print(ctx.errRoot);
++		rendered = (char*)fjson_object_to_json_string(ctx.errRoot);
+ 	}
+ 
+ 
+@@ -1084,14 +1081,13 @@ writeDataError(wrkrInstanceData_t *pWrkrData, instanceData *pData, cJSON **pRepl
+ finalize_it:
+ 	if(bMutLocked)
+ 		pthread_mutex_unlock(&pData->mutErrFile);
+-	cJSON_Delete(ctx.errRoot);
+-	free(rendered);
++	fjson_object_put(ctx.errRoot);
+ 	RETiRet;
+ }
+ 
+ 
+ static rsRetVal
+-checkResultBulkmode(wrkrInstanceData_t *pWrkrData, cJSON *root)
++checkResultBulkmode(wrkrInstanceData_t *pWrkrData, fjson_object *root)
+ {
+ 	DEFiRet;
+ 	context ctx;
+@@ -1111,11 +1107,11 @@ checkResultBulkmode(wrkrInstanceData_t *pWrkrData, cJSON *root)
+ static rsRetVal
+ checkResult(wrkrInstanceData_t *pWrkrData, uchar *reqmsg)
+ {
+-	cJSON *root;
+-	cJSON *status;
++	fjson_object *root;
++	fjson_object *status;
+ 	DEFiRet;
+ 
+-	root = cJSON_Parse(pWrkrData->reply);
++	root = fjson_tokener_parse(pWrkrData->reply);
+ 	if(root == NULL) {
+ 		DBGPRINTF("omelasticsearch: could not parse JSON result \n");
+ 		ABORT_FINALIZE(RS_RET_ERR);
+@@ -1124,10 +1120,7 @@ checkResult(wrkrInstanceData_t *pWrkrData, uchar *reqmsg)
+ 	if(pWrkrData->pData->bulkmode) {
+ 		iRet = checkResultBulkmode(pWrkrData, root);
+ 	} else {
+-		status = cJSON_GetObjectItem(root, "status");
+-		/* as far as we know, no "status" means all went well */
+-		if(status != NULL &&
+-		   (status->type == cJSON_Number || status->valueint >= 0 || status->valueint <= 299)) {
++		if(fjson_object_object_get_ex(root, "status", &status)) {
+ 			iRet = RS_RET_DATAFAIL;
+ 		}
+ 	}
+@@ -1143,7 +1136,7 @@ checkResult(wrkrInstanceData_t *pWrkrData, uchar *reqmsg)
+ 
+ finalize_it:
+ 	if(root != NULL)
+-		cJSON_Delete(root);
++		fjson_object_put(root);
+ 	if(iRet != RS_RET_OK) {
+ 		STATSCOUNTER_INC(indexESFail, mutIndexESFail);
+ 	}
+diff --git a/tests/es-bulk-errfile-empty.sh b/tests/es-bulk-errfile-empty.sh
+index 1f27f62fe..95883cb3d 100755
+--- a/tests/es-bulk-errfile-empty.sh
++++ b/tests/es-bulk-errfile-empty.sh
+@@ -12,6 +12,7 @@ echo \[es-bulk-errfile-empty\]: basic test for elasticsearch functionality
+ if [ -f rsyslog.errorfile ]
+ then
+     echo "error: error file exists!"
++    cat rsyslog.errorfile
+     exit 1
+ fi
+ . $srcdir/diag.sh seq-check  0 9999
+-- 
+2.14.4
+
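The migration above replaces cJSON's explicit-free model with libfastjson's
reference counting, and several hunks only make sense against those ownership
rules. A minimal sketch of the rules, assuming libfastjson's json-c-compatible
<json.h> API (render_error is a hypothetical helper, not code from the patch):

    #include <json.h>   /* libfastjson compatibility header */
    #include <stdlib.h>
    #include <string.h>

    static char *render_error(const char *url, const char *postdata)
    {
        fjson_object *root = fjson_object_new_object();
        /* _object_add() takes ownership of the value's reference... */
        fjson_object_object_add(root, "url", fjson_object_new_string(url));
        fjson_object_object_add(root, "postdata", fjson_object_new_string(postdata));

        /* ...and to_json_string() returns memory owned by the object,
         * so it must be duplicated before the object is released */
        char *rendered = strdup(fjson_object_to_json_string(root));

        fjson_object *node = NULL;
        /* _get_ex() hands back a borrowed reference: never put() it */
        fjson_object_object_get_ex(root, "url", &node);
        (void)node;

        fjson_object_put(root);  /* drops root and everything it owns */
        return rendered;         /* caller frees with free() */
    }

This is why the hunks above wrap the rendered string in strdup() and drop the
free(response) call: with libfastjson, the serialized buffer lives and dies
with its fjson_object.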
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1565214-omelasticsearch-write-op-types-bulk-rejection-retries.patch b/SOURCES/rsyslog-8.24.0-rhbz1565214-omelasticsearch-write-op-types-bulk-rejection-retries.patch
new file mode 100644
index 0000000..14c7bf5
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1565214-omelasticsearch-write-op-types-bulk-rejection-retries.patch
@@ -0,0 +1,738 @@
+From 989be897340eb458b00efedfd5e082bb362db79a Mon Sep 17 00:00:00 2001
+From: Rich Megginson <rmeggins@redhat.com>
+Date: Tue, 15 May 2018 16:03:25 -0600
+Subject: [PATCH 11/11] omelasticsearch: write op types; bulk rejection retries
+
+Add support for a 'create' write operation type in addition to
+the default 'index'.  Using create allows specifying a unique id
+for each record, and allows duplicate document detection.
+
+Add support for checking each record returned in a bulk index
+request response.  Allow specifying a ruleset to send each failed
+record to.  Add a local variable `omes` which contains the
+information in the error response, so that users can control how
+to handle responses e.g. retry, or send to an error file.
+
+Add support for response stats - count successes, duplicates, and
+different types of failures.
+
+Add testing for bulk index rejections.
+
+(cherry picked from commit 57dd368a2a915d79c94a8dc0de30c93a0bbdc8fe)
+(cherry picked from commit 30a15621e1e7e393b2153e9fe5c13f724dea25b5)
+---
+ plugins/omelasticsearch/omelasticsearch.c | 441 ++++++++++++++++++++++++++++--
+ 1 file changed, 415 insertions(+), 26 deletions(-)
+
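For orientation before the diff: the only on-the-wire difference between the
two write operations is the per-record metadata line of the bulk request,
built from META_STRT or META_STRT_CREATE below. A toy emitter, assuming the
caller supplies index, type and id (emit_bulk_meta is illustrative, not part
of the module):

    #include <stdio.h>

    static void emit_bulk_meta(FILE *out, int create_op,
            const char *index, const char *type, const char *id)
    {
        fprintf(out, "{\"%s\":{\"_index\":\"%s\",\"_type\":\"%s\"",
            create_op ? "create" : "index", index, type);
        if (id != NULL)  /* 'create' requires a caller-supplied id */
            fprintf(out, ",\"_id\":\"%s\"", id);
        fputs("}}\n", out);
    }

With create_op set, this prints the same kind of header shown in the
createMsgFromRequest() comment further down; the unique _id is what lets
Elasticsearch report a duplicate (HTTP 409) instead of silently indexing the
record again.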
+diff --git a/plugins/omelasticsearch/omelasticsearch.c b/plugins/omelasticsearch/omelasticsearch.c
+index ed2b47535..ca61ae28f 100644
+--- a/plugins/omelasticsearch/omelasticsearch.c
++++ b/plugins/omelasticsearch/omelasticsearch.c
+@@ -51,6 +51,8 @@
+ #include "statsobj.h"
+ #include "cfsysline.h"
+ #include "unicode-helper.h"
++#include "ratelimit.h"
++#include "ruleset.h"
+ 
+ #ifndef O_LARGEFILE
+ #  define O_LARGEFILE 0
+@@ -64,6 +66,8 @@ MODULE_CNFNAME("omelasticsearch")
+ DEF_OMOD_STATIC_DATA
+ DEFobjCurrIf(errmsg)
+ DEFobjCurrIf(statsobj)
++DEFobjCurrIf(prop)
++DEFobjCurrIf(ruleset)
+ 
+ statsobj_t *indexStats;
+ STATSCOUNTER_DEF(indexSubmit, mutIndexSubmit)
+@@ -71,19 +75,35 @@ STATSCOUNTER_DEF(indexHTTPFail, mutIndexHTTPFail)
+ STATSCOUNTER_DEF(indexHTTPReqFail, mutIndexHTTPReqFail)
+ STATSCOUNTER_DEF(checkConnFail, mutCheckConnFail)
+ STATSCOUNTER_DEF(indexESFail, mutIndexESFail)
++STATSCOUNTER_DEF(indexSuccess, mutIndexSuccess)
++STATSCOUNTER_DEF(indexBadResponse, mutIndexBadResponse)
++STATSCOUNTER_DEF(indexDuplicate, mutIndexDuplicate)
++STATSCOUNTER_DEF(indexBadArgument, mutIndexBadArgument)
++STATSCOUNTER_DEF(indexBulkRejection, mutIndexBulkRejection)
++STATSCOUNTER_DEF(indexOtherResponse, mutIndexOtherResponse)
+ 
++static prop_t *pInputName = NULL;
+ 
+ #	define META_STRT "{\"index\":{\"_index\": \""
++#	define META_STRT_CREATE "{\"create\":{\"_index\": \""
+ #	define META_TYPE "\",\"_type\":\""
+ #	define META_PARENT "\",\"_parent\":\""
+ #	define META_ID "\", \"_id\":\""
+ #	define META_END  "\"}}\n"
+ 
++typedef enum {
++	ES_WRITE_INDEX,
++	ES_WRITE_CREATE,
++	ES_WRITE_UPDATE, /* not supported */
++	ES_WRITE_UPSERT /* not supported */
++} es_write_ops_t;
++
+ /* REST API for elasticsearch hits this URL:
+  * http://<hostName>:<restPort>/<searchIndex>/<searchType>
+  */
++/* bulk API uses /_bulk */
+ typedef struct curl_slist HEADER;
+-typedef struct _instanceData {
++typedef struct instanceConf_s {
+ 	int defaultPort;
+ 	int fdErrFile;		/* error file fd or -1 if not open */
+ 	pthread_mutex_t mutErrFile;
+@@ -113,8 +133,25 @@ typedef struct _instanceData {
+ 	uchar *caCertFile;
+ 	uchar *myCertFile;
+ 	uchar *myPrivKeyFile;
++	es_write_ops_t writeOperation;
++	sbool retryFailures;
++	int ratelimitInterval;
++	int ratelimitBurst;
++	/* for retries */
++	ratelimit_t *ratelimiter;
++	uchar *retryRulesetName;
++	ruleset_t *retryRuleset;
++	struct instanceConf_s *next;
+ } instanceData;
+ 
++typedef instanceConf_t instanceData;
++
++struct modConfData_s {
++	rsconf_t *pConf;		/* our overall config object */
++	instanceConf_t *root, *tail;
++};
++static modConfData_t *loadModConf = NULL;	/* modConf ptr to use for the current load process */
++
+ typedef struct wrkrInstanceData {
+ 	instanceData *pData;
+ 	int serverIndex;
+@@ -160,7 +197,12 @@ static struct cnfparamdescr actpdescr[] = {
+ 	{ "allowunsignedcerts", eCmdHdlrBinary, 0 },
+ 	{ "tls.cacert", eCmdHdlrString, 0 },
+ 	{ "tls.mycert", eCmdHdlrString, 0 },
+-	{ "tls.myprivkey", eCmdHdlrString, 0 }
++	{ "tls.myprivkey", eCmdHdlrString, 0 },
++	{ "writeoperation", eCmdHdlrGetWord, 0 },
++	{ "retryfailures", eCmdHdlrBinary, 0 },
++	{ "ratelimit.interval", eCmdHdlrInt, 0 },
++	{ "ratelimit.burst", eCmdHdlrInt, 0 },
++	{ "retryruleset", eCmdHdlrString, 0 }
+ };
+ static struct cnfparamblk actpblk =
+ 	{ CNFPARAMBLK_VERSION,
+@@ -177,6 +219,9 @@ CODESTARTcreateInstance
+ 	pData->caCertFile = NULL;
+ 	pData->myCertFile = NULL;
+ 	pData->myPrivKeyFile = NULL;
++	pData->ratelimiter = NULL;
++	pData->retryRulesetName = NULL;
++	pData->retryRuleset = NULL;
+ ENDcreateInstance
+ 
+ BEGINcreateWrkrInstance
+@@ -228,6 +273,9 @@ CODESTARTfreeInstance
+ 	free(pData->caCertFile);
+ 	free(pData->myCertFile);
+ 	free(pData->myPrivKeyFile);
++	free(pData->retryRulesetName);
++	if (pData->ratelimiter != NULL)
++		ratelimitDestruct(pData->ratelimiter);
+ ENDfreeInstance
+ 
+ BEGINfreeWrkrInstance
+@@ -285,6 +333,10 @@ CODESTARTdbgPrintInstInfo
+ 	dbgprintf("\ttls.cacert='%s'\n", pData->caCertFile);
+ 	dbgprintf("\ttls.mycert='%s'\n", pData->myCertFile);
+ 	dbgprintf("\ttls.myprivkey='%s'\n", pData->myPrivKeyFile);
++	dbgprintf("\twriteoperation='%d'\n", pData->writeOperation);
++	dbgprintf("\tretryfailures='%d'\n", pData->retryFailures);
++	dbgprintf("\tratelimit.interval='%d'\n", pData->ratelimitInterval);
++	dbgprintf("\tratelimit.burst='%d'\n", pData->ratelimitBurst);
+ ENDdbgPrintInstInfo
+ 
+ 
+@@ -557,7 +609,11 @@ finalize_it:
+ static size_t
+ computeMessageSize(wrkrInstanceData_t *pWrkrData, uchar *message, uchar **tpls)
+ {
+-	size_t r = sizeof(META_STRT)-1 + sizeof(META_TYPE)-1 + sizeof(META_END)-1 + sizeof("\n")-1;
++	size_t r = sizeof(META_TYPE)-1 + sizeof(META_END)-1 + sizeof("\n")-1;
++	if (pWrkrData->pData->writeOperation == ES_WRITE_CREATE)
++		r += sizeof(META_STRT_CREATE)-1;
++	else
++		r += sizeof(META_STRT)-1;
+ 
+ 	uchar *searchIndex = 0;
+ 	uchar *searchType;
+@@ -594,7 +650,10 @@ buildBatch(wrkrInstanceData_t *pWrkrData, uchar *message, uchar **tpls)
+ 	DEFiRet;
+ 
+ 	getIndexTypeAndParent(pWrkrData->pData, tpls, &searchIndex, &searchType, &parent, &bulkId);
+-	r = es_addBuf(&pWrkrData->batch.data, META_STRT, sizeof(META_STRT)-1);
++	if (pWrkrData->pData->writeOperation == ES_WRITE_CREATE)
++		r = es_addBuf(&pWrkrData->batch.data, META_STRT_CREATE, sizeof(META_STRT_CREATE)-1);
++	else
++		r = es_addBuf(&pWrkrData->batch.data, META_STRT, sizeof(META_STRT)-1);
+ 	if(r == 0) r = es_addBuf(&pWrkrData->batch.data, (char*)searchIndex,
+ 				 ustrlen(searchIndex));
+ 	if(r == 0) r = es_addBuf(&pWrkrData->batch.data, META_TYPE, sizeof(META_TYPE)-1);
+@@ -709,13 +768,20 @@ static int checkReplyStatus(fjson_object* ok) {
+ 
+ /*
+  * Context object for error file content creation or status check
++ * response_item - the full {"create":{"_index":"idxname",.....}}
++ * response_body - the inner hash of the response_item - {"_index":"idxname",...}
++ * status - the "status" field from the inner hash - "status":500
++ *          should be able to use fjson_object_get_int(status) to get the http result code
+  */
+ typedef struct exeContext{
+ 	int statusCheckOnly;
+ 	fjson_object *errRoot;
+-	rsRetVal (*prepareErrorFileContent)(struct exeContext *ctx,int itemStatus,char *request,char *response);
+-
+-
++	rsRetVal (*prepareErrorFileContent)(struct exeContext *ctx,int itemStatus,char *request,char *response,
++			fjson_object *response_item, fjson_object *response_body, fjson_object *status);
++	es_write_ops_t writeOperation;
++	ratelimit_t *ratelimiter;
++	ruleset_t *retryRuleset;
++	struct json_tokener *jTokener;
+ } context;
+ 
+ /*
+@@ -728,8 +794,15 @@ parseRequestAndResponseForContext(wrkrInstanceData_t *pWrkrData,fjson_object **p
+ 	fjson_object *replyRoot = *pReplyRoot;
+ 	int i;
+ 	int numitems;
+-	fjson_object *items=NULL;
++	fjson_object *items=NULL, *jo_errors = NULL;
++	int errors = 0;
+ 
++	if(fjson_object_object_get_ex(replyRoot, "errors", &jo_errors)) {
++		errors = fjson_object_get_boolean(jo_errors);
++		if (!errors && pWrkrData->pData->retryFailures) {
++			return RS_RET_OK;
++		}
++	}
+ 
+ 	/*iterate over items*/
+ 	if(!fjson_object_object_get_ex(replyRoot, "items", &items)) {
+@@ -741,7 +814,11 @@ parseRequestAndResponseForContext(wrkrInstanceData_t *pWrkrData,fjson_object **p
+ 
+ 	numitems = fjson_object_array_length(items);
+ 
+-	DBGPRINTF("omelasticsearch: Entire request %s\n",reqmsg);
++	if (reqmsg) {
++		DBGPRINTF("omelasticsearch: Entire request %s\n", reqmsg);
++	} else {
++		DBGPRINTF("omelasticsearch: Empty request\n");
++	}
+ 	const char *lastReqRead= (char*)reqmsg;
+ 
+ 	DBGPRINTF("omelasticsearch: %d items in reply\n", numitems);
+@@ -769,8 +846,7 @@ parseRequestAndResponseForContext(wrkrInstanceData_t *pWrkrData,fjson_object **p
+ 		
+ 		char *request =0;
+ 		char *response =0;
+-		if(ctx->statusCheckOnly)
+-		{
++		if(ctx->statusCheckOnly || (NULL == lastReqRead)) {
+ 			if(itemStatus) {
+ 				DBGPRINTF("omelasticsearch: error in elasticsearch reply: item %d, status is %d\n", i, fjson_object_get_int(ok));
+ 				DBGPRINTF("omelasticsearch: status check found error.\n");
+@@ -795,7 +871,8 @@ parseRequestAndResponseForContext(wrkrInstanceData_t *pWrkrData,fjson_object **p
+ 			}
+ 
+ 			/*call the context*/
+-			rsRetVal ret = ctx->prepareErrorFileContent(ctx, itemStatus, request,response);
++			rsRetVal ret = ctx->prepareErrorFileContent(ctx, itemStatus, request,
++					response, item, result, ok);
+ 
+ 			/*free memory in any case*/
+ 			free(request);
+@@ -818,11 +895,14 @@ parseRequestAndResponseForContext(wrkrInstanceData_t *pWrkrData,fjson_object **p
+  * Dumps only failed requests of bulk insert
+  */
+ static rsRetVal
+-getDataErrorOnly(context *ctx,int itemStatus,char *request,char *response)
++getDataErrorOnly(context *ctx,int itemStatus,char *request,char *response,
++		fjson_object *response_item, fjson_object *response_body, fjson_object *status)
+ {
+ 	DEFiRet;
+-	if(itemStatus)
+-	{
++	(void)response_item; /* unused */
++	(void)response_body; /* unused */
++	(void)status; /* unused */
++	if(itemStatus) {
+ 		fjson_object *onlyErrorResponses =NULL;
+ 		fjson_object *onlyErrorRequests=NULL;
+ 
+@@ -855,9 +935,16 @@ static rsRetVal
+ getDataInterleaved(context *ctx,
+ 	int __attribute__((unused)) itemStatus,
+ 	char *request,
+-	char *response)
++	char *response,
++	fjson_object *response_item,
++	fjson_object *response_body,
++	fjson_object *status
++)
+ {
+ 	DEFiRet;
++	(void)response_item; /* unused */
++	(void)response_body; /* unused */
++	(void)status; /* unused */
+ 	fjson_object *interleaved =NULL;
+ 	if(!fjson_object_object_get_ex(ctx->errRoot, "response", &interleaved))
+ 	{
+@@ -889,11 +976,13 @@ getDataInterleaved(context *ctx,
+  */
+ 
+ static rsRetVal
+-getDataErrorOnlyInterleaved(context *ctx,int itemStatus,char *request,char *response)
++getDataErrorOnlyInterleaved(context *ctx,int itemStatus,char *request,char *response,
++		fjson_object *response_item, fjson_object *response_body, fjson_object *status)
+ {
+ 	DEFiRet;
+ 	if (itemStatus) {
+-		if(getDataInterleaved(ctx, itemStatus,request,response)!= RS_RET_OK) {
++		if(getDataInterleaved(ctx, itemStatus,request,response,
++				response_item, response_body, status)!= RS_RET_OK) {
+ 			ABORT_FINALIZE(RS_RET_ERR);
+ 		}
+ 	}
+@@ -902,6 +991,141 @@ getDataErrorOnlyInterleaved(context *ctx,int itemStatus,char *request,char *resp
+ 		RETiRet;
+ }
+ 
++/* request string looks like this:
++ * "{\"create\":{\"_index\": \"rsyslog_testbench\",\"_type\":\"test-type\",
++ *   \"_id\":\"FAEAFC0D17C847DA8BD6F47BC5B3800A\"}}\n
++ * {\"msgnum\":\"x00000000\",\"viaq_msg_id\":\"FAEAFC0D17C847DA8BD6F47BC5B3800A\"}\n"
++ * we don't want the meta header, only the data part
++ * start = first \n + 1
++ * end = last \n
++ */
++static rsRetVal
++createMsgFromRequest(const char *request, context *ctx, smsg_t **msg)
++{
++	DEFiRet;
++	fjson_object *jo_msg = NULL;
++	const char *datastart, *dataend;
++	size_t datalen;
++	enum json_tokener_error json_error;
++
++	*msg = NULL;
++	if (!(datastart = strchr(request, '\n')) || (datastart[1] != '{')) {
++		LogError(0, RS_RET_ERR,
++			"omelasticsearch: malformed original request - "
++			"could not find start of original data [%s]",
++			request);
++		ABORT_FINALIZE(RS_RET_ERR);
++	}
++	datastart++; /* advance to { */
++	if (!(dataend = strchr(datastart, '\n')) || (dataend[1] != '\0')) {
++		LogError(0, RS_RET_ERR,
++			"omelasticsearch: malformed original request - "
++			"could not find end of original data [%s]",
++			request);
++		ABORT_FINALIZE(RS_RET_ERR);
++	}
++	datalen = dataend - datastart;
++	json_tokener_reset(ctx->jTokener);
++	fjson_object *jo_request = json_tokener_parse_ex(ctx->jTokener, datastart, datalen);
++	json_error = fjson_tokener_get_error(ctx->jTokener);
++	if (!jo_request || (json_error != fjson_tokener_success)) {
++		LogError(0, RS_RET_ERR,
++			"omelasticsearch: parse error [%s] - could not convert original "
++			"request JSON back into JSON object [%s]",
++			fjson_tokener_error_desc(json_error), request);
++		ABORT_FINALIZE(RS_RET_ERR);
++	}
++
++	CHKiRet(msgConstruct(msg));
++	MsgSetFlowControlType(*msg, eFLOWCTL_FULL_DELAY);
++	MsgSetInputName(*msg, pInputName);
++	if (fjson_object_object_get_ex(jo_request, "message", &jo_msg)) {
++		const char *rawmsg = json_object_get_string(jo_msg);
++		const size_t msgLen = (size_t)json_object_get_string_len(jo_msg);
++		MsgSetRawMsg(*msg, rawmsg, msgLen);
++	} else {
++		MsgSetRawMsg(*msg, request, strlen(request));
++	}
++	MsgSetMSGoffs(*msg, 0);	/* we do not have a header... */
++	CHKiRet(msgAddJSON(*msg, (uchar*)"!", jo_request, 0, 0));
++
++	finalize_it:
++		RETiRet;
++
++}
++
++
++static rsRetVal
++getDataRetryFailures(context *ctx,int itemStatus,char *request,char *response,
++		fjson_object *response_item, fjson_object *response_body, fjson_object *status)
++{
++	DEFiRet;
++	fjson_object *omes = NULL, *jo = NULL;
++	int istatus = fjson_object_get_int(status);
++	int iscreateop = 0;
++	struct json_object_iterator it = json_object_iter_begin(response_item);
++	struct json_object_iterator itEnd = json_object_iter_end(response_item);
++	const char *optype = NULL;
++	smsg_t *msg = NULL;
++
++	(void)response;
++	(void)itemStatus;
++	CHKiRet(createMsgFromRequest(request, ctx, &msg));
++	CHKmalloc(msg);
++	/* add status as local variables */
++	omes = json_object_new_object();
++	if (!json_object_iter_equal(&it, &itEnd))
++		optype = json_object_iter_peek_name(&it);
++	if (optype && !strcmp("create", optype))
++		iscreateop = 1;
++	if (optype && !strcmp("index", optype) && (ctx->writeOperation == ES_WRITE_INDEX))
++		iscreateop = 1;
++	if (optype) {
++		jo = json_object_new_string(optype);
++	} else {
++		jo = json_object_new_string("unknown");
++	}
++	json_object_object_add(omes, "writeoperation", jo);
++
++	if (!optype) {
++		STATSCOUNTER_INC(indexBadResponse, mutIndexBadResponse);
++	} else if ((istatus == 200) || (istatus == 201)) {
++		STATSCOUNTER_INC(indexSuccess, mutIndexSuccess);
++	} else if ((istatus == 409) && iscreateop) {
++		STATSCOUNTER_INC(indexDuplicate, mutIndexDuplicate);
++	} else if (istatus == 400 || (istatus < 200)) {
++		STATSCOUNTER_INC(indexBadArgument, mutIndexBadArgument);
++	} else {
++		fjson_object *error = NULL, *errtype = NULL;
++		if(fjson_object_object_get_ex(response_body, "error", &error) &&
++		   fjson_object_object_get_ex(error, "type", &errtype)) {
++			if (istatus == 429) {
++				STATSCOUNTER_INC(indexBulkRejection, mutIndexBulkRejection);
++			} else {
++				STATSCOUNTER_INC(indexOtherResponse, mutIndexOtherResponse);
++			}
++		} else {
++			STATSCOUNTER_INC(indexBadResponse, mutIndexBadResponse);
++		}
++	}
++	/* add response_body fields to local var omes */
++	it = json_object_iter_begin(response_body);
++	itEnd = json_object_iter_end(response_body);
++	while (!json_object_iter_equal(&it, &itEnd)) {
++		json_object_object_add(omes, json_object_iter_peek_name(&it),
++			json_object_get(json_object_iter_peek_value(&it)));
++		json_object_iter_next(&it);
++	}
++	CHKiRet(msgAddJSON(msg, (uchar*)".omes", omes, 0, 0));
++	omes = NULL;
++	MsgSetRuleset(msg, ctx->retryRuleset);
++	CHKiRet(ratelimitAddMsg(ctx->ratelimiter, NULL, msg));
++finalize_it:
++	if (omes)
++		json_object_put(omes);
++	RETiRet;
++}
++
+ /*
+  * get erroronly context
+  */
+@@ -979,6 +1203,23 @@ initializeErrorInterleavedConext(wrkrInstanceData_t *pWrkrData,context *ctx){
+ 		RETiRet;
+ }
+ 
++/*get retry failures context*/
++static rsRetVal
++initializeRetryFailuresContext(wrkrInstanceData_t *pWrkrData,context *ctx){
++	DEFiRet;
++	ctx->statusCheckOnly=0;
++	fjson_object *errRoot=NULL;
++	if((errRoot=fjson_object_new_object()) == NULL) ABORT_FINALIZE(RS_RET_ERR);
++
++
++	fjson_object_object_add(errRoot, "url", fjson_object_new_string((char*)pWrkrData->restURL));
++	ctx->errRoot = errRoot;
++	ctx->prepareErrorFileContent= &getDataRetryFailures;
++	CHKmalloc(ctx->jTokener = json_tokener_new());
++	finalize_it:
++		RETiRet;
++}
++
+ 
+ /* write data error request/replies to separate error file
+  * Note: we open the file but never close it before exit. If it
+@@ -994,6 +1235,10 @@ writeDataError(wrkrInstanceData_t *pWrkrData, instanceData *pData, fjson_object
+ 	char errStr[1024];
+ 	context ctx;
+ 	ctx.errRoot=0;
++	ctx.writeOperation = pWrkrData->pData->writeOperation;
++	ctx.ratelimiter = pWrkrData->pData->ratelimiter;
++	ctx.retryRuleset = pWrkrData->pData->retryRuleset;
++	ctx.jTokener = NULL;
+ 	DEFiRet;
+ 
+ 	if(pData->errorFile == NULL) {
+@@ -1039,9 +1284,12 @@ writeDataError(wrkrInstanceData_t *pWrkrData, instanceData *pData, fjson_object
+ 				DBGPRINTF("omelasticsearch: error initializing error interleaved context.\n");
+ 				ABORT_FINALIZE(RS_RET_ERR);
+ 			}
+-		}
+-		else
+-		{
++		} else if(pData->retryFailures) {
++			if(initializeRetryFailuresContext(pWrkrData, &ctx) != RS_RET_OK) {
++				DBGPRINTF("omelasticsearch: error initializing retry failures context.\n");
++				ABORT_FINALIZE(RS_RET_ERR);
++			}
++		} else {
+ 			DBGPRINTF("omelasticsearch: None of the modes match file write. No data to write.\n");
+ 			ABORT_FINALIZE(RS_RET_ERR);
+ 		}
+@@ -1082,25 +1330,38 @@ finalize_it:
+ 	if(bMutLocked)
+ 		pthread_mutex_unlock(&pData->mutErrFile);
+ 	fjson_object_put(ctx.errRoot);
++	if (ctx.jTokener)
++		json_tokener_free(ctx.jTokener);
++	free(rendered);
+ 	RETiRet;
+ }
+ 
+ 
+ static rsRetVal
+-checkResultBulkmode(wrkrInstanceData_t *pWrkrData, fjson_object *root)
++checkResultBulkmode(wrkrInstanceData_t *pWrkrData, fjson_object *root, uchar *reqmsg)
+ {
+ 	DEFiRet;
+ 	context ctx;
+-	ctx.statusCheckOnly=1;
+ 	ctx.errRoot = 0;
+-	if(parseRequestAndResponseForContext(pWrkrData,&root,0,&ctx)!= RS_RET_OK)
+-	{
++	ctx.writeOperation = pWrkrData->pData->writeOperation;
++	ctx.ratelimiter = pWrkrData->pData->ratelimiter;
++	ctx.retryRuleset = pWrkrData->pData->retryRuleset;
++	ctx.statusCheckOnly=1;
++	ctx.jTokener = NULL;
++	if (pWrkrData->pData->retryFailures) {
++		ctx.statusCheckOnly=0;
++		CHKiRet(initializeRetryFailuresContext(pWrkrData, &ctx));
++	}
++	if(parseRequestAndResponseForContext(pWrkrData,&root,reqmsg,&ctx)!= RS_RET_OK) {
+ 		DBGPRINTF("omelasticsearch: error found in elasticsearch reply\n");
+ 		ABORT_FINALIZE(RS_RET_DATAFAIL);
+ 	}
+ 
+-	finalize_it:
+-		RETiRet;
++finalize_it:
++	fjson_object_put(ctx.errRoot);
++	if (ctx.jTokener)
++		json_tokener_free(ctx.jTokener);
++	RETiRet;
+ }
+ 
+ 
+@@ -1118,7 +1378,7 @@ checkResult(wrkrInstanceData_t *pWrkrData, uchar *reqmsg)
+ 	}
+ 
+ 	if(pWrkrData->pData->bulkmode) {
+-		iRet = checkResultBulkmode(pWrkrData, root);
++		iRet = checkResultBulkmode(pWrkrData, root, reqmsg);
+ 	} else {
+ 		if(fjson_object_object_get_ex(root, "status", &status)) {
+ 			iRet = RS_RET_DATAFAIL;
+@@ -1397,6 +1657,13 @@ setInstParamDefaults(instanceData *pData)
+ 	pData->caCertFile = NULL;
+ 	pData->myCertFile = NULL;
+ 	pData->myPrivKeyFile = NULL;
++	pData->writeOperation = ES_WRITE_INDEX;
++	pData->retryFailures = 0;
++	pData->ratelimitBurst = 20000;
++	pData->ratelimitInterval = 600;
++	pData->ratelimiter = NULL;
++	pData->retryRulesetName = NULL;
++	pData->retryRuleset = NULL;
+ }
+ 
+ BEGINnewActInst
+@@ -1495,6 +1762,27 @@ CODESTARTnewActInst
+ 			} else {
+ 				fclose(fp);
+ 			}
++		} else if(!strcmp(actpblk.descr[i].name, "writeoperation")) {
++			char *writeop = es_str2cstr(pvals[i].val.d.estr, NULL);
++			if (writeop && !strcmp(writeop, "create")) {
++				pData->writeOperation = ES_WRITE_CREATE;
++			} else if (writeop && !strcmp(writeop, "index")) {
++				pData->writeOperation = ES_WRITE_INDEX;
++			} else if (writeop) {
++				errmsg.LogError(0, RS_RET_CONFIG_ERROR,
++					"omelasticsearch: invalid value '%s' for writeoperation: "
++					"must be one of 'index' or 'create' - using default value 'index'", writeop);
++				pData->writeOperation = ES_WRITE_INDEX;
++			}
++			free(writeop);
++		} else if(!strcmp(actpblk.descr[i].name, "retryfailures")) {
++			pData->retryFailures = pvals[i].val.d.n;
++		} else if(!strcmp(actpblk.descr[i].name, "ratelimit.burst")) {
++			pData->ratelimitBurst = (int) pvals[i].val.d.n;
++		} else if(!strcmp(actpblk.descr[i].name, "ratelimit.interval")) {
++			pData->ratelimitInterval = (int) pvals[i].val.d.n;
++		} else if(!strcmp(actpblk.descr[i].name, "retryruleset")) {
++			pData->retryRulesetName = (uchar*)es_str2cstr(pvals[i].val.d.estr, NULL);
+ 		} else {
+ 			dbgprintf("omelasticsearch: program error, non-handled "
+ 			  "param '%s'\n", actpblk.descr[i].name);
+@@ -1661,6 +1949,27 @@ CODESTARTnewActInst
+ 		pData->searchIndex = (uchar*) strdup("system");
+ 	if(pData->searchType == NULL)
+ 		pData->searchType = (uchar*) strdup("events");
++
++	if ((pData->writeOperation != ES_WRITE_INDEX) && (pData->bulkId == NULL)) {
++		errmsg.LogError(0, RS_RET_CONFIG_ERROR,
++			"omelasticsearch: writeoperation '%d' requires bulkid", pData->writeOperation);
++		ABORT_FINALIZE(RS_RET_CONFIG_ERROR);
++	}
++
++	if (pData->retryFailures) {
++		CHKiRet(ratelimitNew(&pData->ratelimiter, "omelasticsearch", NULL));
++		ratelimitSetLinuxLike(pData->ratelimiter, pData->ratelimitInterval, pData->ratelimitBurst);
++		ratelimitSetNoTimeCache(pData->ratelimiter);
++	}
++
++	/* node created, let's add to list of instance configs for the module */
++	if(loadModConf->tail == NULL) {
++		loadModConf->tail = loadModConf->root = pData;
++	} else {
++		loadModConf->tail->next = pData;
++		loadModConf->tail = pData;
++	}
++
+ CODE_STD_FINALIZERnewActInst
+ 	cnfparamvalsDestruct(pvals, &actpblk);
+ 	if (serverParam)
+@@ -1680,6 +1989,51 @@ CODE_STD_STRING_REQUESTparseSelectorAct(1)
+ CODE_STD_FINALIZERparseSelectorAct
+ ENDparseSelectorAct
+ 
++
++BEGINbeginCnfLoad
++CODESTARTbeginCnfLoad
++	loadModConf = pModConf;
++	pModConf->pConf = pConf;
++	pModConf->root = pModConf->tail = NULL;
++ENDbeginCnfLoad
++
++
++BEGINendCnfLoad
++CODESTARTendCnfLoad
++	loadModConf = NULL; /* done loading */
++ENDendCnfLoad
++
++
++BEGINcheckCnf
++	instanceConf_t *inst;
++CODESTARTcheckCnf
++	for(inst = pModConf->root ; inst != NULL ; inst = inst->next) {
++		ruleset_t *pRuleset;
++		rsRetVal localRet;
++
++		if (inst->retryRulesetName) {
++			localRet = ruleset.GetRuleset(pModConf->pConf, &pRuleset, inst->retryRulesetName);
++			if(localRet == RS_RET_NOT_FOUND) {
++				errmsg.LogError(0, localRet, "omelasticsearch: retryruleset '%s' not found - "
++						"no retry ruleset will be used", inst->retryRulesetName);
++			} else {
++				inst->retryRuleset = pRuleset;
++			}
++		}
++	}
++ENDcheckCnf
++
++
++BEGINactivateCnf
++CODESTARTactivateCnf
++ENDactivateCnf
++
++
++BEGINfreeCnf
++CODESTARTfreeCnf
++ENDfreeCnf
++
++
+ BEGINdoHUP
+ CODESTARTdoHUP
+ 	if(pData->fdErrFile != -1) {
+@@ -1691,10 +2045,14 @@ ENDdoHUP
+ 
+ BEGINmodExit
+ CODESTARTmodExit
++	if(pInputName != NULL)
++		prop.Destruct(&pInputName);
+ 	curl_global_cleanup();
+ 	statsobj.Destruct(&indexStats);
+ 	objRelease(errmsg, CORE_COMPONENT);
+-        objRelease(statsobj, CORE_COMPONENT);
++	objRelease(statsobj, CORE_COMPONENT);
++	objRelease(prop, CORE_COMPONENT);
++	objRelease(ruleset, CORE_COMPONENT);
+ ENDmodExit
+ 
+ BEGINqueryEtryPt
+@@ -1705,6 +2063,7 @@ CODEqueryEtryPt_IsCompatibleWithFeature_IF_OMOD_QUERIES
+ CODEqueryEtryPt_STD_CONF2_OMOD_QUERIES
+ CODEqueryEtryPt_doHUP
+ CODEqueryEtryPt_TXIF_OMOD_QUERIES /* we support the transactional interface! */
++CODEqueryEtryPt_STD_CONF2_QUERIES
+ ENDqueryEtryPt
+ 
+ 
+@@ -1714,6 +2073,8 @@ CODESTARTmodInit
+ CODEmodInit_QueryRegCFSLineHdlr
+ 	CHKiRet(objUse(errmsg, CORE_COMPONENT));
+ 	CHKiRet(objUse(statsobj, CORE_COMPONENT));
++	CHKiRet(objUse(prop, CORE_COMPONENT));
++	CHKiRet(objUse(ruleset, CORE_COMPONENT));
+ 
+ 	if (curl_global_init(CURL_GLOBAL_ALL) != 0) {
+ 		errmsg.LogError(0, RS_RET_OBJ_CREATION_FAILED, "CURL fail. -elasticsearch indexing disabled");
+@@ -1739,7 +2100,28 @@ CODEmodInit_QueryRegCFSLineHdlr
+ 	STATSCOUNTER_INIT(indexESFail, mutIndexESFail);
+ 	CHKiRet(statsobj.AddCounter(indexStats, (uchar *)"failed.es",
+ 		ctrType_IntCtr, CTR_FLAG_RESETTABLE, &indexESFail));
++	STATSCOUNTER_INIT(indexSuccess, mutIndexSuccess);
++	CHKiRet(statsobj.AddCounter(indexStats, (uchar *)"response.success",
++		ctrType_IntCtr, CTR_FLAG_RESETTABLE, &indexSuccess));
++	STATSCOUNTER_INIT(indexBadResponse, mutIndexBadResponse);
++	CHKiRet(statsobj.AddCounter(indexStats, (uchar *)"response.bad",
++		ctrType_IntCtr, CTR_FLAG_RESETTABLE, &indexBadResponse));
++	STATSCOUNTER_INIT(indexDuplicate, mutIndexDuplicate);
++	CHKiRet(statsobj.AddCounter(indexStats, (uchar *)"response.duplicate",
++		ctrType_IntCtr, CTR_FLAG_RESETTABLE, &indexDuplicate));
++	STATSCOUNTER_INIT(indexBadArgument, mutIndexBadArgument);
++	CHKiRet(statsobj.AddCounter(indexStats, (uchar *)"response.badargument",
++		ctrType_IntCtr, CTR_FLAG_RESETTABLE, &indexBadArgument));
++	STATSCOUNTER_INIT(indexBulkRejection, mutIndexBulkRejection);
++	CHKiRet(statsobj.AddCounter(indexStats, (uchar *)"response.bulkrejection",
++		ctrType_IntCtr, CTR_FLAG_RESETTABLE, &indexBulkRejection));
++	STATSCOUNTER_INIT(indexOtherResponse, mutIndexOtherResponse);
++	CHKiRet(statsobj.AddCounter(indexStats, (uchar *)"response.other",
++		ctrType_IntCtr, CTR_FLAG_RESETTABLE, &indexOtherResponse));
+ 	CHKiRet(statsobj.ConstructFinalize(indexStats));
++	CHKiRet(prop.Construct(&pInputName));
++	CHKiRet(prop.SetString(pInputName, UCHAR_CONSTANT("omelasticsearch"), sizeof("omelasticsearch") - 1));
++	CHKiRet(prop.ConstructFinalize(pInputName));
+ ENDmodInit
+ 
+ /* vi:set ai:
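The six new response.* counters registered in modInit map directly onto the
per-item status checks in getDataRetryFailures(). A condensed sketch of that
classification (the enum and classify() are illustrative stand-ins, not module
code; items whose reply carries no recognizable op type are counted as
response.bad before this point):

    typedef enum { RESP_SUCCESS, RESP_DUPLICATE, RESP_BADARG,
        RESP_BULK_REJECTION, RESP_OTHER, RESP_BAD } resp_class_t;

    /* status: HTTP code from the bulk reply item; is_create: the item
     * matches a create-style write; has_error_type: the reply body
     * carried a parsable error.type field */
    static resp_class_t classify(int status, int is_create, int has_error_type)
    {
        if (status == 200 || status == 201)
            return RESP_SUCCESS;            /* response.success */
        if (status == 409 && is_create)
            return RESP_DUPLICATE;          /* response.duplicate */
        if (status == 400 || status < 200)
            return RESP_BADARG;             /* response.badargument */
        if (has_error_type)
            return status == 429 ? RESP_BULK_REJECTION /* response.bulkrejection */
                                 : RESP_OTHER;         /* response.other */
        return RESP_BAD;                    /* response.bad */
    }

When retryfailures is on, items that classify as failures are rebuilt into
messages, given the .omes properties and submitted to the configured
retryruleset via the rate limiter.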
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1582517-buffer-overflow-memcpy-in-parser.patch b/SOURCES/rsyslog-8.24.0-rhbz1582517-buffer-overflow-memcpy-in-parser.patch
new file mode 100644
index 0000000..96d0695
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1582517-buffer-overflow-memcpy-in-parser.patch
@@ -0,0 +1,46 @@
+From cc3098b63174b8aa875d1f2e9c6ea94407b211b8 Mon Sep 17 00:00:00 2001
+From: Rainer Gerhards <rgerhards@adiscon.com>
+Date: Thu, 16 Feb 2017 19:02:36 +0100
+Subject: [PATCH 04/11] Bug 1582517 - rsyslog: Buffer overflow in memcpy() in parser.c
+
+core: fix potential misaddressing in parser message sanitizer
+
+Misaddressing could happen when an oversize message made it to the
+sanitizer AND contained a control character in the oversize part
+of the message. Note that it is an error in itself that such an
+oversize message enters the system, but we harden the sanitizer
+to handle this gracefully (it will truncate the message).
+
+Note that truncation may still - as previously - happen if the
+number of escape characters makes the string grow above the max
+message size.
+
+(cherry picked from commit 20f8237870eb5e971fa068e4dd4d296f1dbef329)
+---
+ runtime/parser.c | 8 +++++++-
+ 1 file changed, 7 insertions(+), 1 deletion(-)
+
+diff --git a/runtime/parser.c b/runtime/parser.c
+index 0574d982a..9645baa40 100644
+--- a/runtime/parser.c
++++ b/runtime/parser.c
+@@ -464,9 +464,15 @@ SanitizeMsg(smsg_t *pMsg)
+ 	if(maxDest < sizeof(szSanBuf))
+ 		pDst = szSanBuf;
+ 	else 
+-		CHKmalloc(pDst = MALLOC(iMaxLine + 1));
++		CHKmalloc(pDst = MALLOC(maxDest + 1));
+ 	if(iSrc > 0) {
+ 		iSrc--; /* go back to where everything is OK */
++		if(iSrc > maxDest) {
++			DBGPRINTF("parser.Sanitize: have oversize index %zd, "
++				"max %zd - corrected, but should not happen\n",
++				iSrc, maxDest);
++			iSrc = maxDest;
++		}
+ 		memcpy(pDst, pszMsg, iSrc); /* fast copy known good */
+ 	}
+ 	iDst = iSrc;
+-- 
+2.14.4
+
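The hunk is small but the failure mode is easy to miss: the destination buffer
must be sized from maxDest, and the scan index iSrc clamped to it, before the
fast-path memcpy(). A standalone illustration using the patch's variable names
(copy_known_good is a made-up wrapper, not rsyslog code):

    #include <stdlib.h>
    #include <string.h>

    static char *copy_known_good(const char *msg, size_t iSrc, size_t maxDest)
    {
        char *pDst = malloc(maxDest + 1);  /* sized by maxDest, not iMaxLine */
        if (pDst == NULL)
            return NULL;
        if (iSrc > maxDest)                /* oversize message: truncate */
            iSrc = maxDest;
        memcpy(pDst, msg, iSrc);           /* now provably in bounds */
        pDst[iSrc] = '\0';
        return pDst;
    }

Without the clamp, an oversize message containing a control character past
maxDest makes iSrc exceed the allocation, and the "fast copy known good"
memcpy() writes out of bounds.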
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1591819-msg-loss-shutdown.patch b/SOURCES/rsyslog-8.24.0-rhbz1591819-msg-loss-shutdown.patch
new file mode 100644
index 0000000..472823b
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1591819-msg-loss-shutdown.patch
@@ -0,0 +1,63 @@
+From 59627f23bee26f3acec19d491d5884bcd1fb672e Mon Sep 17 00:00:00 2001
+From: Rainer Gerhards <rgerhards@adiscon.com>
+Date: Wed, 6 Jun 2018 17:30:21 +0200
+Subject: [PATCH] core: fix message loss on target unavailability during
+ shutdown
+
+Triggering condition:
+- action queue in disk mode (or DA)
+- batch is being processed by failed action in retry mode
+- rsyslog is shut down without resuming action
+
+In these cases messages may be lost by not properly writing them
+back to the disk queue.
+
+closes https://github.com/rsyslog/rsyslog/issues/2760
+---
+ action.c        | 11 +++++++++--
+ runtime/queue.c |  3 +++
+ 2 files changed, 12 insertions(+), 2 deletions(-)
+
+diff --git a/action.c b/action.c
+index a9f886a43..39fcb1c19 100644
+--- a/action.c
++++ b/action.c
+@@ -1554,8 +1554,15 @@ processBatchMain(void *__restrict__ const pVoid,
+ 			/* we do not check error state below, because aborting would be
+ 			 * more harmful than continuing.
+ 			 */
+-			processMsgMain(pAction, pWti, pBatch->pElem[i].pMsg, &ttNow);
+-			batchSetElemState(pBatch, i, BATCH_STATE_COMM);
++			rsRetVal localRet = processMsgMain(pAction, pWti, pBatch->pElem[i].pMsg, &ttNow);
++			DBGPRINTF("processBatchMain: i %d, processMsgMain iRet %d\n", i, localRet);
++			if(   localRet == RS_RET_OK
++			   || localRet == RS_RET_DEFER_COMMIT
++			   || localRet == RS_RET_ACTION_FAILED
++			   || localRet == RS_RET_PREVIOUS_COMMITTED ) {
++				batchSetElemState(pBatch, i, BATCH_STATE_COMM);
++				DBGPRINTF("processBatchMain: i %d, COMM state set\n", i);
++			}
+ 		}
+ 	}
+ 
+diff --git a/runtime/queue.c b/runtime/queue.c
+index 74cc217d1..fd163a49f 100644
+--- a/runtime/queue.c
++++ b/runtime/queue.c
+@@ -1666,6 +1666,7 @@ DeleteProcessedBatch(qqueue_t *pThis, batch_t *pBatch)
+ 
+ 	for(i = 0 ; i < pBatch->nElem ; ++i) {
+ 		pMsg = pBatch->pElem[i].pMsg;
++		DBGPRINTF("DeleteProcessedBatch: etry %d state %d\n", i, pBatch->eltState[i]);
+ 		if(   pBatch->eltState[i] == BATCH_STATE_RDY
+ 		   || pBatch->eltState[i] == BATCH_STATE_SUB) {
+ 			localRet = doEnqSingleObj(pThis, eFLOWCTL_NO_DELAY, MsgAddRef(pMsg));
+@@ -1778,6 +1779,8 @@ DequeueConsumableElements(qqueue_t *pThis, wti_t *pWti, int *piRemainingQueueSiz
+ 	/* it is sufficient to persist only when the bulk of work is done */
+ 	qqueueChkPersist(pThis, nDequeued+nDiscarded+nDeleted);
+ 
++	DBGOPRINT((obj_t*) pThis, "dequeued %d consumable elements, szlog %d sz phys %d\n",
++		nDequeued, getLogicalQueueSize(pThis), getPhysicalQueueSize(pThis));
+ 	pWti->batch.nElem = nDequeued;
+ 	pWti->batch.nElemDeq = nDequeued + nDiscarded;
+ 	pWti->batch.deqID = getNextDeqID(pThis);
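The fix hinges on the batch state machine: DeleteProcessedBatch() re-enqueues
only RDY and SUB elements, so an element may reach COMM ("committed") only
when the action actually accepted it. A stripped-down model of the corrected
loop (process_batch and the numeric return codes are stand-ins for rsyslog's
internals):

    enum batch_state { BATCH_STATE_RDY, BATCH_STATE_SUB, BATCH_STATE_COMM };

    static void process_batch(int n, int (*process)(int idx),
            enum batch_state *state)
    {
        for (int i = 0; i < n; ++i) {
            int ret = process(i);
            /* 0/1 stand in for RS_RET_OK, RS_RET_DEFER_COMMIT etc.;
             * only accepted outcomes may mark the element committed */
            if (ret == 0 || ret == 1)
                state[i] = BATCH_STATE_COMM;
            /* otherwise the element stays RDY/SUB, so shutdown writes
             * it back to the disk queue instead of dropping it */
        }
    }

Previously every element was unconditionally marked COMM, so a batch stuck in
action retry at shutdown was considered done and its messages were lost.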
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1597264-man-page-fix.patch b/SOURCES/rsyslog-8.24.0-rhbz1597264-man-page-fix.patch
new file mode 100644
index 0000000..74d3395
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1597264-man-page-fix.patch
@@ -0,0 +1,30 @@
+diff --git a/tools/rsyslogd.8 b/tools/rsyslogd.8
+index 77d0f97..d9a2e32 100644
+--- a/tools/rsyslogd.8
++++ b/tools/rsyslogd.8
+@@ -127,14 +127,14 @@ reacts to a set of signals.  You may easily send a signal to
+ using the following:
+ .IP
+ .nf
+-kill -SIGNAL $(cat /var/run/rsyslogd.pid)
++kill -SIGNAL $(cat /var/run/syslogd.pid)
+ .fi
+ .PP
+ Note that -SIGNAL must be replaced with the actual signal
+ you are trying to send, e.g. with HUP. So it then becomes:
+ .IP
+ .nf
+-kill -HUP $(cat /var/run/rsyslogd.pid)
++kill -HUP $(cat /var/run/syslogd.pid)
+ .fi
+ .PP
+ .TP
+@@ -215,7 +215,7 @@ for exact information.
+ .I /dev/log
+ The Unix domain socket to from where local syslog messages are read.
+ .TP
+-.I /var/run/rsyslogd.pid
++.I /var/run/syslogd.pid
+ The file containing the process id of 
+ .BR rsyslogd .
+ .TP
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1600462-wrktable-realloc-null.patch b/SOURCES/rsyslog-8.24.0-rhbz1600462-wrktable-realloc-null.patch
new file mode 100644
index 0000000..81f58ee
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1600462-wrktable-realloc-null.patch
@@ -0,0 +1,64 @@
+From f2f67932a37539080b7ad3403dd073df3511a410 Mon Sep 17 00:00:00 2001
+From: Rainer Gerhards <rgerhards@adiscon.com>
+Date: Fri, 27 Oct 2017 08:36:19 +0200
+Subject: [PATCH] core/action: fix NULL pointer access under OOM condition
+
+If a new worker was started while the system ran out of memory
+a NULL pointer access could happen. The patch handles this more
+gracefully.
+
+Detected by Coverity Scan, CID 185342.
+---
+ action.c | 17 +++++++++++++----
+ 1 file changed, 13 insertions(+), 4 deletions(-)
+
+diff --git a/action.c b/action.c
+index 986501074..4e467ee8b 100644
+--- a/action.c
++++ b/action.c
+@@ -822,8 +822,9 @@ actionDoRetry(action_t * const pThis, wti_t * const pWti)
+ 
+ 
+ static rsRetVal
+-actionCheckAndCreateWrkrInstance(action_t * const pThis, wti_t * const pWti)
++actionCheckAndCreateWrkrInstance(action_t * const pThis, const wti_t *const pWti)
+ {
++	int locked = 0;
+ 	DEFiRet;
+ 	if(pWti->actWrkrInfo[pThis->iActionNbr].actWrkrData == NULL) {
+ 		DBGPRINTF("wti %p: we need to create a new action worker instance for "
+@@ -836,23 +837,31 @@ actionCheckAndCreateWrkrInstance(action_t * const pThis, wti_t * const pWti)
+ 		/* maintain worker data table -- only needed if wrkrHUP is requested! */
+ 
+ 		pthread_mutex_lock(&pThis->mutWrkrDataTable);
++		locked = 1;
+ 		int freeSpot;
+ 		for(freeSpot = 0 ; freeSpot < pThis->wrkrDataTableSize ; ++freeSpot)
+ 			if(pThis->wrkrDataTable[freeSpot] == NULL)
+ 				break;
+ 		if(pThis->nWrkr == pThis->wrkrDataTableSize) {
+-			// TODO: check realloc, fall back to old table if it fails. Better than nothing...
+-			pThis->wrkrDataTable = realloc(pThis->wrkrDataTable,
++			void *const newTable = realloc(pThis->wrkrDataTable,
+ 				(pThis->wrkrDataTableSize + 1) * sizeof(void*));
++			if(newTable == NULL) {
++				DBGPRINTF("actionCheckAndCreateWrkrInstance: out of "
++					"memory realloc wrkrDataTable\n")
++					"memory realloc wrkrDataTable\n");
++			}
++			pThis->wrkrDataTable = newTable;
+ 			pThis->wrkrDataTableSize++;
+ 		}
+ 		pThis->wrkrDataTable[freeSpot] = pWti->actWrkrInfo[pThis->iActionNbr].actWrkrData;
+ 		pThis->nWrkr++;
+-		pthread_mutex_unlock(&pThis->mutWrkrDataTable);
+ 		DBGPRINTF("wti %p: created action worker instance %d for "
+ 			  "action %d\n", pWti, pThis->nWrkr, pThis->iActionNbr);
+ 	}
+ finalize_it:
++	if(locked) {
++		pthread_mutex_unlock(&pThis->mutWrkrDataTable);
++	}
+ 	RETiRet;
+ }
+ 
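
Two generic idioms are at work in this fix. First, realloc()'s result goes
into a separate variable: on failure realloc() returns NULL but leaves the
old block allocated, so assigning it straight back to wrkrDataTable would
both leak the table and leave a NULL pointer for a later dereference.
Second, a "locked" flag ensures the mutex is released on every path through
finalize_it, including the new early-abort one. A standalone sketch of
both, using invented names (grow_table, tbl_mut) rather than rsyslog's:

    #include <pthread.h>
    #include <stdlib.h>

    static pthread_mutex_t tbl_mut = PTHREAD_MUTEX_INITIALIZER;
    static void **table;      /* grows one slot at a time */
    static int table_size;

    static int grow_table(void)
    {
        int rc = -1, locked = 0;

        pthread_mutex_lock(&tbl_mut);
        locked = 1;

        /* keep the old pointer until the new one is known to be valid */
        void *const new_table = realloc(table,
                (table_size + 1) * sizeof(void *));
        if (new_table == NULL)
            goto finalize;    /* old table is still intact and reachable */
        table = new_table;
        table[table_size++] = NULL;   /* the freshly added, empty slot */
        rc = 0;

    finalize:
        if (locked)           /* unlock exactly once on every exit path */
            pthread_mutex_unlock(&tbl_mut);
        return rc;
    }

    int main(void)
    {
        return grow_table() == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
    }
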
diff --git a/SPECS/rsyslog.spec b/SPECS/rsyslog.spec
index 9eeefb8..7f5f89e 100644
--- a/SPECS/rsyslog.spec
+++ b/SPECS/rsyslog.spec
@@ -14,10 +14,9 @@
 Summary: Enhanced system logging and kernel message trapping daemon
 Name: rsyslog
 Version: 8.24.0
-Release: 16%{?dist}.4
+Release: 34%{?dist}
 License: (GPLv3+ and ASL 2.0)
 Group: System Environment/Daemons
-ExcludeArch: i686 ppc s390
 URL: http://www.rsyslog.com/
 Source0: http://www.rsyslog.com/files/download/rsyslog/%{name}-%{version}.tar.gz
 Source1: http://www.rsyslog.com/files/download/rsyslog/%{name}-doc-%{version}.tar.gz
@@ -63,7 +62,7 @@ Patch7: rsyslog-8.24.0-rhbz1401870-watermark.patch
 
 Patch8: rsyslog-8.24.0-rhbz1403831-missing-cmd-line-switches.patch
 Patch9: rsyslog-8.24.0-rhbz1245194-imjournal-ste-file.patch
-Patch10: rsyslog-8.24.0-rhbz1507028-recover_qi-doc.patch
+Patch10: rsyslog-8.24.0-doc-rhbz1507028-recover_qi.patch
 Patch11: rsyslog-8.24.0-rhbz1088021-systemd-time-backwards.patch
 Patch12: rsyslog-8.24.0-rhbz1403907-imudp-deprecated-parameter.patch
 Patch13: rsyslog-8.24.0-rhbz1196230-ratelimit-add-source.patch
@@ -81,13 +80,29 @@ Patch21: rsyslog-8.24.0-rhbz1431616-pmrfc3164sd-backport.patch
 Patch22: rsyslog-8.24.0-rhbz1056548-getaddrinfo.patch
 
 Patch23: rsyslog-8.24.0-rhbz1401456-sd-service-network.patch
-Patch24: rsyslog-8.24.0-rhbz1459896-queues-defaults-doc.patch
+Patch24: rsyslog-8.24.0-doc-rhbz1459896-queues-defaults.patch
 Patch25: rsyslog-8.24.0-rhbz1497985-journal-reloaded-message.patch
 Patch26: rsyslog-8.24.0-rhbz1462160-set.statement-crash.patch
 Patch27: rsyslog-8.24.0-rhbz1488186-fixed-nullptr-check.patch
 Patch28: rsyslog-8.24.0-rhbz1505103-omrelp-rebindinterval.patch
 
-Patch29: rsyslog-8.24.0-rhbz1545582-imjournal-duplicates.patch
+Patch29: rsyslog-8.24.0-rhbz1538372-imjournal-duplicates.patch
+Patch30: rsyslog-8.24.0-rhbz1511485-deserialize-property-name.patch
+
+Patch31: rsyslog-8.24.0-rhbz1512551-caching-sockaddr.patch
+Patch32: rsyslog-8.24.0-rhbz1531295-imfile-rewrite-with-symlink.patch
+Patch33: rsyslog-8.24.0-rhbz1582517-buffer-overflow-memcpy-in-parser.patch
+Patch34: rsyslog-8.24.0-rhbz1591819-msg-loss-shutdown.patch
+Patch35: rsyslog-8.24.0-rhbz1539193-mmkubernetes-new-plugin.patch
+Patch36: rsyslog-8.24.0-rhbz1507145-omelastic-client-cert.patch
+Patch37: rsyslog-8.24.0-doc-rhbz1507145-omelastic-client-cert-and-config.patch
+Patch38: rsyslog-8.24.0-rhbz1565214-omelasticsearch-replace-cJSON-with-libfastjson.patch
+Patch39: rsyslog-8.24.0-rhbz1565214-omelasticsearch-write-op-types-bulk-rejection-retries.patch
+Patch40: rsyslog-8.24.0-doc-rhbz1539193-mmkubernetes-new-plugin.patch
+Patch41: rsyslog-8.24.0-doc-rhbz1538372-imjournal-duplicates.patch
+Patch42: rsyslog-8.24.0-rhbz1597264-man-page-fix.patch
+Patch43: rsyslog-8.24.0-rhbz1559408-async-writer.patch
+Patch44: rsyslog-8.24.0-rhbz1600462-wrktable-realloc-null.patch
 
 %package crypto
 Summary: Encryption support
@@ -124,7 +139,8 @@ Requires: %name = %version-%release
 Summary: Log normalization support for rsyslog
 Group: System Environment/Daemons
 Requires: %name = %version-%release
-BuildRequires: libee-devel liblognorm-devel
+BuildRequires: libee-devel
+BuildRequires: liblognorm-devel
 
 %package mmaudit
 Summary: Message modification module supporting Linux audit format
@@ -202,6 +218,18 @@ Group: System Environment/Daemons
 Requires: %name = %version-%release
 BuildRequires: libnet-devel
 
+%package kafka
+Summary: Provides Apache Kafka support for rsyslog
+Group: System Environment/Daemons
+Requires: %name = %version-%release
+BuildRequires: librdkafka-devel
+
+%package mmkubernetes
+Summary: Provides the mmkubernetes module
+Group: System Environment/Daemons
+Requires: %name = %version-%release
+BuildRequires: libcurl-devel
+
 %description
 Rsyslog is an enhanced, multi-threaded syslog daemon. It supports MySQL,
 syslog/TCP, RFC 3195, permitted sender lists, filtering on any message part,
@@ -289,12 +317,22 @@ This module is similar to the regular UDP forwarder, but permits to
 spoof the sender address. Also, it enables to circle through a number
 of source ports.
 
+%description kafka
+The rsyslog-kafka package provides a module for Apache Kafka output.
+
+%description mmkubernetes
+The rsyslog-mmkubernetes package provides a module for adding Kubernetes
+container metadata.
+
 %prep
 # set up rsyslog-doc sources
 %setup -q -a 1 -T -c
 %patch4 -p1 
 %patch10 -p1
 %patch24 -p1
+%patch37 -p1
+%patch40 -p1
+%patch41 -p1
 #regenerate the docs
 mv build/searchindex.js searchindex_backup.js
 sphinx-build -b html source build
@@ -334,13 +372,29 @@ mv build doc
 %patch22 -p1 -b .getaddrinfo
 
 %patch23 -p1 -b .sd-service-network
-#%%patch24 is applied right after doc setup
+#%patch24 is applied right after doc setup
 %patch25 -p1 -b .journal-reloaded
 %patch26 -p1 -b .set-statement-crash
 %patch27 -p1 -b .nullptr-check
 %patch28 -p1 -b .rebindinterval
 
 %patch29 -p1 -b .imjournal-duplicates
+%patch30 -p1 -b .property-deserialize
+
+%patch31 -p1 -b .caching-sockaddr
+%patch32 -p1 -b .imfile-symlink
+%patch33 -p1 -b .buffer-overflow
+%patch34 -p1 -b .msg-loss-shutdown
+%patch35 -p1 -b .kubernetes-metadata
+%patch36 -p1 -b .omelasticsearch-cert
+#%patch37 is applied right after doc setup
+%patch38 -p1 -b .omelasticsearch-libfastjson
+%patch39 -p1 -b .omelasticsearch-bulk-rejection
+#%patch40 is applied right after doc setup
+#%patch41 is applied right after doc setup
+%patch42 -p1 -b .manpage
+%patch43 -p1 -b .async-writer
+%patch44 -p1 -b .null-realloc-chk
 
 autoreconf 
 
@@ -359,6 +413,7 @@ export LDFLAGS="-pie -Wl,-z,relro -Wl,-z,now"
 export HIREDIS_CFLAGS=-I/usr/include/hiredis
 export HIREDIS_LIBS=-L%{_libdir}
 %endif
+sed -i 's/%{version}/%{version}-%{release}/g' configure.ac
 %configure \
 	--prefix=/usr \
 	--disable-static \
@@ -382,6 +437,7 @@ export HIREDIS_LIBS=-L%{_libdir}
 	--enable-mmnormalize \
 	--enable-mmsnmptrapd \
 	--enable-mmutf8fix \
+	--enable-mmkubernetes \
 	--enable-mysql \
 %if %{want_hiredis}
 	--enable-omhiredis \
@@ -398,6 +454,7 @@ export HIREDIS_LIBS=-L%{_libdir}
 	--enable-omstdout \
 	--enable-omudpspoof \
 	--enable-omuxsock \
+	--enable-omkafka \
 	--enable-pgsql \
 	--enable-pmaixforwardedfrom \
 	--enable-pmcisconames \
@@ -590,11 +647,156 @@ done
 %defattr(-,root,root)
 %{_libdir}/rsyslog/omudpspoof.so
 
+%files kafka
+%{_libdir}/rsyslog/omkafka.so
+
+%files mmkubernetes
+%{_libdir}/rsyslog/mmkubernetes.so
+
 %changelog
-* Mon Apr 16 2018 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-16.5
-RHEL 7.5.z ERRATUM
-- fixed imjournal duplicating msgs under some conditions
-  resolves: rhbz#1545582
+* Tue Aug 07 2018 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-34
+RHEL 7.6 ERRATUM
+- updated imfile rewrite patch with parent name bugfix
+  resolves: rhbz#1531295
+
+* Tue Aug 07 2018 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-33
+RHEL 7.6 ERRATUM
+- updated imfile rewrite patch with extended symlink watching
+  resolves: rhbz#1531295
+- updated mmkubernetes patch to accept dots in pod name
+  resolves: rhbz#1539193
+
+* Fri Aug 03 2018 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-32
+RHEL 7.6 ERRATUM
+- updated imfile rewrite patch with no log on EACCES
+  resolves: rhbz#1531295
+- removed now needless build-deps
+
+* Mon Jul 30 2018 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-31
+RHEL 7.6 ERRATUM
+- added new patch fixing ompipe dropping messages when pipe full
+  resolves: rhbz#1591819
+- updated mmkubernetes patch to accept non-kubernetes containers
+  resolves: rhbz#1539193
+  resolves: rhbz#1609023
+- removed json-parsing patches as the bug is now fixed in liblognorm
+
+* Wed Jul 25 2018 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-30
+RHEL 7.6 ERRATUM
+- updated imfile rewrite patch with next bugfix
+  resolves: rhbz#1531295
+- updated imjournal duplicates patch making slower code optional
+  and added corresponding doc patch
+  resolves: rhbz#1538372
+
+* Mon Jul 23 2018 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-29
+RHEL 7.6 ERRATUM
+- updated imfile rewrite patch with another bugfix
+  resolves: rhbz#1531295
+
+* Fri Jul 20 2018 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-28
+RHEL 7.6 ERRATUM
+- updated imfile rewrite patch fixing next round of regressions
+  resolves: rhbz#1531295
+  resolves: rhbz#1602156
+- updated mmkubernetes patch with NULL ret-check
+  resolves: rhbz#1539193
+
+* Tue Jul 17 2018 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-27
+RHEL 7.6 ERRATUM
+- updated imfile rewrite patch fixing last update regressions
+  resolves: rhbz#1531295
+- added patch fixing deadlock in async logging
+  resolves: rhbz#1559408
+- added patch fixing NULL access in worktable create
+  resolves: rhbz#1600462
+- now putting release number into configure to have it present
+  in error messages
+
+* Mon Jul 09 2018 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-26
+RHEL 7.6 ERRATUM
+- updated imfile rewrite patch according to early testing
+  resolves: rhbz#1531295
+- added patch fixing pid file name in manpage
+  resolves: rhbz#1597264
+- updated json-parsing patch with one more bugfix
+  resolves: rhbz#1565219
+
+* Fri Jun 29 2018 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-24
+RHEL 7.6 ERRATUM
+- updated imfile rewrite patch with fixes from covscan
+  resolves: rhbz#1531295
+- updated mmkubernetes patch with fixes from covscan
+  resolves: rhbz#1539193
+- updated imjournal duplicates patch with fixes from covscan
+  resolves: rhbz#1538372
+- updated omelastic enhancement patch with fixes from covscan
+  resolves: rhbz#1565214
+
+* Wed Jun 27 2018 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-23
+RHEL 7.6 ERRATUM
+- added backport of leading $ support to json-parsing patch
+  resolves: rhbz#1565219
+- The required info is already contained in the rsyslog-doc package,
+  so there is no patch for this one
+  resolves: rhbz#1553700
+
+* Tue Jun 26 2018 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-22
+RHEL 7.6 ERRATUM
+- edited patch for top-level json parsing with bugfix
+  resolves: rhbz#1565219
+- renamed doc patches and added/updated new ones for mmkubernetes
+  omelasticsearch and json parsing
+- renamed patch fixing buffer overflow in parser - memcpy()
+  resolves: rhbz#1582517
+
+* Mon Jun 25 2018 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-21
+RHEL 7.6 ERRATUM
+- fixed imfile rewrite backport patch, added a few more bugfixes
+  resolves: rhbz#1531295
+- added also doc patch for omelastic client certs
+  resolves: rhbz#1507145
+- cleaned and shortened patch for omelastic error handling
+  resolves: rhbz#1565214
+- enabled patch for json top-level parsing
+  resolves: rhbz#1565219
+- merged mmkubernetes patches into one and enabled the module
+  resolves: rhbz#1539193
+  resolves: rhbz#1589924
+  resolves: rhbz#1590582
+
+* Sun Jun 24 2018 Noriko Hosoi <nhosoi@redhat.com> - 8.24.0-21
+RHEL 7.6 ERRATUM
+resolves: rhbz#1582517 - Buffer overflow in memcpy() in parser.c
+resolves: rhbz#1539193 - RFE: Support for mm kubernetes plugin
+resolves: rhbz#1589924 - RFE: Several fixes for mmkubernetes
+resolves: rhbz#1590582 - mmkubernetes - use version=2 in rulebase files to avoid memory leak
+resolves: rhbz#1507145 - RFE: omelasticsearch support client cert authentication
+resolves: rhbz#1565214 - omelasticsearch needs better handling for bulk index rejections and other errors
+Disables Patch32: rsyslog-8.24.0-rhbz1531295-imfile-rewrite-with-symlink.patch
+Disables Patch34: rsyslog-8.24.0-rhbz1565219-parse-json-into-top-level-fields-in-mess.patch; It BuildRequires/Requires: libfastjson >= 0.99.4-3
+
+* Fri Jun 01 2018 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-20
+RHEL 7.6 ERRATUM
+- added a patch backporting imfile module rewrite and
+  adding symlink support
+  resolves: rhbz#1531295
+
+* Tue May 29 2018 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-19
+RHEL 7.6 ERRATUM
+- added new kafka sub-package with enabling of omkafka module
+  resolves: rhbz#1482819
+
+* Thu May 17 2018 Radovan Sroka <rsroka@redhat.com> - 8.24.0-18
+- caching the whole sockaddr structure instead of just sin_addr, which
+  was causing a memory leak
+  resolves: rhbz#1512551
+
+* Fri Apr 27 2018 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-17
+RHEL 7.6 ERRATUM
+- fixed imjournal duplicating messages on log rotation
+  resolves: rhbz#1538372
+- re-enabled 32-bit arches to not break dependent packages
+  resolves: rhbz#1571850
 
 * Thu Nov 09 2017 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-16
 RHEL 7.5 ERRATUM