diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..de3df7d
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,2 @@
+SOURCES/rsyslog-8.24.0.tar.gz
+SOURCES/rsyslog-doc-8.24.0.tar.gz
diff --git a/.rsyslog.metadata b/.rsyslog.metadata
new file mode 100644
index 0000000..289264b
--- /dev/null
+++ b/.rsyslog.metadata
@@ -0,0 +1,2 @@
+615ee5b47ca4c3a28de3c8ee4477c721c20f31aa SOURCES/rsyslog-8.24.0.tar.gz
+c0bbe5466738ac97575e0301cf26f0ec45d77b20 SOURCES/rsyslog-doc-8.24.0.tar.gz
diff --git a/SOURCES/rsyslog-8.24.0-doc-polling-by-default.patch b/SOURCES/rsyslog-8.24.0-doc-polling-by-default.patch
new file mode 100644
index 0000000..d7848c5
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-doc-polling-by-default.patch
@@ -0,0 +1,18 @@
+--- a/source/configuration/modules/imfile.rst	2017-10-31 10:28:39.582763203 +0100
++++ b/source/configuration/modules/imfile.rst	2017-10-31 10:29:52.961142905 +0100
+@@ -100,12 +100,13 @@
+    single: imfile; mode
+ .. function:: mode ["inotify"/"polling"]
+ 
+-   *Default: "inotify"*
++   *Default: "polling"*
+ 
+    *Available since: 8.1.5*
+ 
+   This specifies if imfile is shall run in inotify ("inotify") or polling
+-  ("polling") mode. Traditionally, imfile used polling mode, which is
++  ("polling") mode. Traditionally, imfile used polling mode (and this option
++  is ON by default in RHEL for backwards compatibility), which is
+   much more resource-intense (and slower) than inotify mode. It is
+   suggested that users turn on "polling" mode only if they experience
+   strange problems in inotify mode. In theory, there should never be a
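A minimal sketch (not part of the patch; the file path is hypothetical) of selecting the mode explicitly — with the RHEL default changed to "polling", inotify must be requested explicitly if desired:

```none
# Illustrative sketch only: pin imfile to polling mode (the RHEL default per this patch)
module(load="imfile" mode="polling")   # use mode="inotify" to opt back in
input(type="imfile"
      File="/var/log/myapp.log"        # hypothetical path
      Tag="myapp:")
```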
diff --git a/SOURCES/rsyslog-8.24.0-doc-rhbz1309698-imudp-case-sensitive-option.patch b/SOURCES/rsyslog-8.24.0-doc-rhbz1309698-imudp-case-sensitive-option.patch
new file mode 100644
index 0000000..58f4b57
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-doc-rhbz1309698-imudp-case-sensitive-option.patch
@@ -0,0 +1,64 @@
+From 27eda7938d678bc69b46cfcb8351e871161ba526 Mon Sep 17 00:00:00 2001
+From: Noriko Hosoi <nhosoi@momo7.localdomain>
+Date: Fri, 13 Jul 2018 10:44:13 -0700
+Subject: [PATCH] Introducing an option preservecase to imudp and imtcp module
+ for managing the case of FROMHOST value.
+
+Usage:
+  module(load="imudp" [preservecase="on"|"off"])
+  module(load="imtcp" [preservecase="on"|"off"])
+
+If preservecase="on", the FROMHOST value is handled in a case-sensitive manner.
+If preservecase="off", the FROMHOST value is handled in a case-insensitive manner.
+
+To maintain the current behaviour, the default value of preservecase is
+"on" for imtcp and "off" for imudp.
+
+References:
+  https://github.com/rsyslog/rsyslog/pull/2774
+  https://bugzilla.redhat.com/show_bug.cgi?id=1309698
+---
+ source/configuration/modules/imtcp.rst | 9 +++++++++
+ source/configuration/modules/imudp.rst | 9 +++++++++
+ 2 files changed, 18 insertions(+)
+
+diff --git a/source/configuration/modules/imtcp.rst b/source/configuration/modules/imtcp.rst
+index 2ddb7e9a..b9fe0adb 100644
+--- a/source/configuration/modules/imtcp.rst
++++ b/source/configuration/modules/imtcp.rst
+@@ -138,6 +138,15 @@
+    Array of peers:
+    PermittedPeer=["test1.example.net","10.1.2.3","test2.example.net","..."]
+ 
++.. function::  PreserveCase <on/off>
++
++   *Default: off*
++
++   This parameter controls the case of the fromhost value.  If set to "on",
++   the case in fromhost is preserved, e.g. 'Host1.Example.Org' when the
++   message was received from 'Host1.Example.Org'.  Defaults to "off" for
++   backwards compatibility.
++
+ Input Parameters
+ ^^^^^^^^^^^^^^^^
+ 
+diff --git a/source/configuration/modules/imudp.rst b/source/configuration/modules/imudp.rst
+index 487853f6..b92f0810 100644
+--- a/source/configuration/modules/imudp.rst
++++ b/source/configuration/modules/imudp.rst
+@@ -97,6 +97,15 @@
+    set to 32. It may increase in the future when massive multicore
+    processors become available.
+ 
++.. function::  PreserveCase <on/off>
++
++   *Default: off*
++
++   This parameter controls the case of the fromhost value.  If set to "on",
++   the case in fromhost is preserved, e.g. 'Host1.Example.Org' when the
++   message was received from 'Host1.Example.Org'.  Defaults to "off" for
++   backwards compatibility.
++
+ .. index:: imudp; input parameters
+ 
+ Input Parameters
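The option introduced above can be sketched as follows (not part of the patch; ports are illustrative, and the per-module default differs — see the module docs):

```none
# Illustrative sketch only: preserve the case of FROMHOST on both listeners
module(load="imudp" preservecase="on")
input(type="imudp" port="514")
module(load="imtcp" preservecase="on")
input(type="imtcp" port="10514")
```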
diff --git a/SOURCES/rsyslog-8.24.0-doc-rhbz1459896-queues-defaults.patch b/SOURCES/rsyslog-8.24.0-doc-rhbz1459896-queues-defaults.patch
new file mode 100644
index 0000000..e7bdc91
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-doc-rhbz1459896-queues-defaults.patch
@@ -0,0 +1,138 @@
+From c8be9a713a57f07311560af50c24267b30bef21b Mon Sep 17 00:00:00 2001
+From: Jiri Vymazal <jvymazal@redhat.com>
+Date: Tue, 29 Aug 2017 16:32:15 +0200
+Subject: [PATCH] fixed queue default values
+
+---
+ source/concepts/queues.rst                                       | 7 +++----
+ source/configuration/global/index.rst                            | 6 +++---
+ source/configuration/global/options/rsconf1_mainmsgqueuesize.rst | 2 +-
+ source/rainerscript/queue_parameters.rst                         | 15 ++++++++++++---
+ source/configuration/action/index.rst                            | 12 ++++++------
+ 5 files changed, 24 insertions(+), 16 deletions(-)
+
+diff --git a/source/concepts/queues.rst b/source/concepts/queues.rst
+index c71413c..9b41128 100644
+--- a/source/concepts/queues.rst
++++ b/source/concepts/queues.rst
+@@ -273,10 +273,9 @@ actually needed.
+ The water marks can be set via the "*$<object>QueueHighWatermark*\ "
+ and  "*$<object>QueueLowWatermark*\ " configuration file directives.
+ Note that these are actual numbers, not percentages. Be sure they make
+-sense (also in respect to "*$<object>QueueSize*\ "), as rsyslodg does
+-currently not perform any checks on the numbers provided. It is easy to
+-screw up the system here (yes, a feature enhancement request is filed
+-;)).
++sense (also with respect to "*$<object>QueueSize*\ "). Rsyslog does
++perform some checks on the numbers provided, and issues a warning when
++the numbers are "suspicious".
+ 
+ Limiting the Queue Size
+ -----------------------
+diff --git a/source/configuration/global/index.rst b/source/configuration/global/index.rst
+index 2738f21..a53ef23 100644
+--- a/source/configuration/global/index.rst
++++ b/source/configuration/global/index.rst
+@@ -137,13 +137,13 @@ To understand queue parameters, read
+ -  **$MainMsgQueueDequeueSlowdown** <number> [number is timeout in
+    *micro*\ seconds (1000000us is 1sec!), default 0 (no delay). Simple
+    rate-limiting!]
+--  **$MainMsgQueueDiscardMark** <number> [default 9750]
++-  **$MainMsgQueueDiscardMark** <number> [default 98000]
+ -  **$MainMsgQueueDiscardSeverity** <severity> [either a textual or
+    numerical severity! default 4 (warning)]
+ -  **$MainMsgQueueFileName** <name>
+--  **$MainMsgQueueHighWaterMark** <number> [default 8000]
++-  **$MainMsgQueueHighWaterMark** <number> [default 80000]
+ -  **$MainMsgQueueImmediateShutdown** [on/**off**]
+--  **$MainMsgQueueLowWaterMark** <number> [default 2000]
++-  **$MainMsgQueueLowWaterMark** <number> [default 20000]
+ -  **$MainMsgQueueMaxFileSize** <size\_nbr>, default 1m
+ -  **$MainMsgQueueTimeoutActionCompletion** <number> [number is timeout in
+    ms (1000ms is 1sec!), default 1000, 0 means immediate!]
+diff --git a/source/configuration/global/options/rsconf1_mainmsgqueuesize.rst b/source/configuration/global/options/rsconf1_mainmsgqueuesize.rst
+index 050407c..3e902cf 100644
+--- a/source/configuration/global/options/rsconf1_mainmsgqueuesize.rst
++++ b/source/configuration/global/options/rsconf1_mainmsgqueuesize.rst
+@@ -3,7 +3,7 @@ $MainMsgQueueSize
+ 
+ **Type:** global configuration directive
+ 
+-**Default:** 10000
++**Default:** 100000
+ 
+ **Description:**
+ 
+diff --git a/source/rainerscript/queue_parameters.rst b/source/rainerscript/queue_parameters.rst
+index 4453721..3f2b7a2 100644
+--- a/source/rainerscript/queue_parameters.rst
++++ b/source/rainerscript/queue_parameters.rst
+@@ -33,8 +33,14 @@ read the :doc:`queues <../concepts/queues>` documentation.
+    For more information on the current status of this restriction see
+    the `rsyslog FAQ: "lower bound for queue
+    sizes" <http://www.rsyslog.com/lower-bound-for-queue-sizes/>`_.
++
++   The default depends on the queue type and, if you need
++   a specific value, please specify it. Otherwise rsyslog selects what
++   it considers appropriate. For example, ruleset queues have a default
++   size of 50000 and action queues which are configured to be non-direct
++   have a size of 1000.
+ -  **queue.dequeuebatchsize** number
+-   default 16
++   default 128
+ -  **queue.maxdiskspace** number
+    The maximum size that all queue files together will use on disk. Note
+    that the actual size may be slightly larger than the configured max,
+@@ -46,8 +47,9 @@ read the :doc:`queues <../concepts/queues>` documentation.
+    processing, because disk queue mode is very considerably slower than
+    in-memory queue mode. Going to disk should be reserved for cases
+    where an output action destination is offline for some period.
++   default 90% of queue size
+ -  **queue.lowwatermark** number
+-   default 2000
++   default 70% of queue size
+ -  **queue.fulldelaymark** number 
+    Number of messages when the queue should block delayable messages. 
+    Messages are NO LONGER PROCESSED until the queue has sufficient space 
+@@ -59,9 +61,11 @@ read the :doc:`queues <../concepts/queues>` documentation.
+    out of space. Please note that if you use a DA queue, setting the 
+    fulldelaymark BELOW the highwatermark makes the queue never activate 
+    disk mode for delayable inputs. So this is probably not what you want.
++   default 97% of queue size
+ -  **queue.lightdelaymark** number
++   default 70% of queue size
+ -  **queue.discardmark** number
+-   default 9750
++   default 80% of queue size
+ -  **queue.discardseverity** number
+    \*numerical\* severity! default 8 (nothing discarded)
+ -  **queue.checkpointinterval** number
+diff --git a/source/configuration/action/index.rst b/source/configuration/action/index.rst
+index 3e7cd24..9352866 100644
+--- a/source/configuration/action/index.rst
++++ b/source/configuration/action/index.rst
+@@ -163,18 +163,18 @@ following action, only. The next and all other actions will be
+ in "direct" mode (no real queue) if not explicitely specified otherwise.
+
+ -  **$ActionQueueCheckpointInterval** <number>
+--  **$ActionQueueDequeueBatchSize** <number> [default 16]
++-  **$ActionQueueDequeueBatchSize** <number> [default 128]
+ -  **$ActionQueueDequeueSlowdown** <number> [number is timeout in
+    *micro*\ seconds (1000000us is 1sec!), default 0 (no delay). Simple
+    rate-limiting!]
+--  **$ActionQueueDiscardMark** <number> [default 9750]
+--  **$ActionQueueDiscardSeverity** <number> [\*numerical\* severity! default
+-   4 (warning)]
++-  **$ActionQueueDiscardMark** <number> [default 80% of queue size]
++-  **$ActionQueueDiscardSeverity** <number> [\*numerical\* severity! default
++   8 (nothing discarded)]
+ -  **$ActionQueueFileName** <name>
+--  **$ActionQueueHighWaterMark** <number> [default 8000]
++-  **$ActionQueueHighWaterMark** <number> [default 90% of queue size]
+ -  **$ActionQueueImmediateShutdown** [on/**off**]
+ -  **$ActionQueueSize** <number>
+--  **$ActionQueueLowWaterMark** <number> [default 2000]
++-  **$ActionQueueLowWaterMark** <number> [default 70% of queue size]
+ -  **$ActionQueueMaxFileSize** <size\_nbr>, default 1m
+ -  **$ActionQueueTimeoutActionCompletion** <number> [number is timeout in ms
+    (1000ms is 1sec!), default 1000, 0 means immediate!]
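The corrected defaults above can be restated explicitly in rainerscript form (an illustrative sketch, not part of the patch; the values simply repeat the documented defaults):

```none
# Illustrative sketch: the documented main queue defaults, made explicit
main_queue(
    queue.size="100000"            # documented default
    queue.dequeuebatchsize="128"   # documented default
    queue.highwatermark="80000"
    queue.lowwatermark="20000"
    queue.discardmark="98000"
)
```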
diff --git a/SOURCES/rsyslog-8.24.0-doc-rhbz1507028-recover_qi.patch b/SOURCES/rsyslog-8.24.0-doc-rhbz1507028-recover_qi.patch
new file mode 100644
index 0000000..42a69b1
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-doc-rhbz1507028-recover_qi.patch
@@ -0,0 +1,27 @@
+From ff07a7cfc171dc2151cc8afe44776525d34a9e01 Mon Sep 17 00:00:00 2001
+From: jvymazal <jvymazal@redhat.com>
+Date: Tue, 3 Jan 2017 10:24:26 +0100
+Subject: [PATCH] Update queues.rst
+
+Update queues.rst
+---
+ source/concepts/queues.rst | 6 ++++++
+ 1 file changed, 6 insertions(+)
+
+diff --git a/source/concepts/queues.rst b/source/concepts/queues.rst
+index eb394e8..c71413c 100644
+--- a/source/concepts/queues.rst
++++ b/source/concepts/queues.rst
+@@ -153,6 +153,12 @@ can be requested via "*<object>QueueSyncQueueFiles on/off* with the
+ default being off. Activating this option has a performance penalty, so
+ it should not be turned on without reason.
+ 
++If you happen to lose or otherwise need the housekeeping structures and
++have all your queue chunks, you can use the perl script included in the
++rsyslog package to generate them.
++Usage: recover_qi.pl -w *$WorkDirectory* -f QueueFileName -d 8 > QueueFileName.qi
++
++
+ In-Memory Queues
+ ~~~~~~~~~~~~~~~~
+ 
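The usage line above might look like this in practice (an illustrative sketch; the work directory and queue name are hypothetical):

```none
# Illustrative: rebuild the .qi housekeeping file from surviving queue chunks
recover_qi.pl -w /var/lib/rsyslog -f actionq -d 8 > /var/lib/rsyslog/actionq.qi
```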
diff --git a/SOURCES/rsyslog-8.24.0-doc-rhbz1507145-omelastic-client-cert-and-config.patch b/SOURCES/rsyslog-8.24.0-doc-rhbz1507145-omelastic-client-cert-and-config.patch
new file mode 100644
index 0000000..ae24862
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-doc-rhbz1507145-omelastic-client-cert-and-config.patch
@@ -0,0 +1,458 @@
+diff --git a/source/configuration/modules/omelasticsearch.rst b/source/configuration/modules/omelasticsearch.rst
+index 914fd67..4aee1ac 100644
+--- a/source/configuration/modules/omelasticsearch.rst
++++ b/source/configuration/modules/omelasticsearch.rst
+@@ -208,18 +208,354 @@ readability):
+   reconfiguration (e.g. dropping the mandatory attribute) a resubmit may
+   be succesful.
+ 
+-**Samples:**
++.. _tls.cacert:
++
++tls.cacert
++^^^^^^^^^^
++
++.. csv-table::
++   :header: "type", "default", "mandatory", "obsolete legacy directive"
++   :widths: auto
++   :class: parameter-table
++
++   "word", "none", "no", "none"
++
++This is the full path and file name of the file containing the CA cert for the
++CA that issued the Elasticsearch server cert.  This file is in PEM format.  For
++example: `/etc/rsyslog.d/es-ca.crt`
++
++.. _tls.mycert:
++
++tls.mycert
++^^^^^^^^^^
++
++.. csv-table::
++   :header: "type", "default", "mandatory", "obsolete legacy directive"
++   :widths: auto
++   :class: parameter-table
++
++   "word", "none", "no", "none"
++
++This is the full path and file name of the file containing the client cert for
++doing client cert auth against Elasticsearch.  This file is in PEM format.  For
++example: `/etc/rsyslog.d/es-client-cert.pem`
++
++.. _tls.myprivkey:
++
++tls.myprivkey
++^^^^^^^^^^^^^
++
++.. csv-table::
++   :header: "type", "default", "mandatory", "obsolete legacy directive"
++   :widths: auto
++   :class: parameter-table
++
++   "word", "none", "no", "none"
++
++This is the full path and file name of the file containing the private key
++corresponding to the cert `tls.mycert` used for doing client cert auth against
++Elasticsearch.  This file is in PEM format, and must be unencrypted, so take
++care to secure it properly.  For example: `/etc/rsyslog.d/es-client-key.pem`
++
++.. _omelasticsearch-bulkid:
++
++bulkid
++^^^^^^
++
++.. csv-table::
++   :header: "type", "default", "mandatory", "obsolete legacy directive"
++   :widths: auto
++   :class: parameter-table
++
++   "word", "none", "no", "none"
++
++This is the unique id to assign to the record.  The `bulk` part is misleading - this
++can be used in either bulk mode or index
++(record at a time) mode.  Although you can specify a static value for this
++parameter, you will almost always want to specify a *template* for the value of
++this parameter, and set `dynbulkid="on"` :ref:`omelasticsearch-dynbulkid`.  NOTE:
++you must use `bulkid` and `dynbulkid` in order to use `writeoperation="create"`
++:ref:`omelasticsearch-writeoperation`.
++
++.. _omelasticsearch-dynbulkid:
++
++dynbulkid
++^^^^^^^^^
++
++.. csv-table::
++   :header: "type", "default", "mandatory", "obsolete legacy directive"
++   :widths: auto
++   :class: parameter-table
++
++   "binary", "off", "no", "none"
++
++If this parameter is set to `"on"`, then the `bulkid` parameter :ref:`omelasticsearch-bulkid`
++specifies a *template* to use to generate the unique id value to assign to the record.  If
++using `bulkid` you will almost always want to set this parameter to `"on"` to assign
++a different unique id value to each record.  NOTE:
++you must use `bulkid` and `dynbulkid` in order to use `writeoperation="create"`
++:ref:`omelasticsearch-writeoperation`.
++
++.. _omelasticsearch-writeoperation:
++
++writeoperation
++^^^^^^^^^^^^^^
++
++.. csv-table::
++   :header: "type", "default", "mandatory", "obsolete legacy directive"
++   :widths: auto
++   :class: parameter-table
++
++   "word", "index", "no", "none"
++
++The value of this parameter is either `"index"` (the default) or `"create"`.  If `"create"` is
++used, this means the bulk action/operation will be `create` - create a document only if the
++document does not already exist.  The record must have a unique id in order to use `create`.
++See :ref:`omelasticsearch-bulkid` and :ref:`omelasticsearch-dynbulkid`.  See
++:ref:`omelasticsearch-writeoperation-example` for an example.
++
++.. _omelasticsearch-retryfailures:
++
++retryfailures
++^^^^^^^^^^^^^
++
++.. csv-table::
++   :header: "type", "default", "mandatory", "obsolete legacy directive"
++   :widths: auto
++   :class: parameter-table
++
++   "binary", "off", "no", "none"
++
++If this parameter is set to `"on"`, then the module will look for an
++`"errors":true` in the bulk index response.  If found, each element in the
++response will be parsed to look for errors, since a bulk request may have some
++records which are successful and some which are failures.  Failed requests will
++be converted back into records and resubmitted back to rsyslog for
++reprocessing.  Each failed request will be resubmitted with a local variable
++called `$.omes`.  This is a hash consisting of the fields from the response.
++See below :ref:`omelasticsearch-retry-example` for an example of how retry
++processing works.
++*NOTE* The retried record will be resubmitted at the "top" of your processing
++pipeline.  If your processing pipeline is not idempotent (that is, your
++processing pipeline expects "raw" records), then you can specify a ruleset to
++redirect retries to.  See :ref:`omelasticsearch-retryruleset` below.
++
++`$.omes` fields:
++
++* writeoperation - the operation used to submit the request - for rsyslog
++  omelasticsearch this currently means either `"index"` or `"create"`
++* status - the HTTP status code - typically an error will have a `4xx` or `5xx`
++  code - of particular note is `429` - this means Elasticsearch was unable to
++  process this bulk record request due to a temporary condition e.g. the bulk
++  index thread pool queue is full, and rsyslog should retry the operation.
++* _index, _type, _id - the metadata associated with the request
++* error - a hash containing one or more, possibly nested, fields containing
++  more detailed information about a failure.  Typically there will be fields
++  `$.omes!error!type` (a keyword) and `$.omes!error!reason` (a longer string)
++  with more detailed information about the rejection.  NOTE: The format is
++  apparently not described in great detail, so code must not make any
++  assumption about the availability of `error` or any specific sub-field.
++
++There may be other fields too - the code just copies everything in the
++response.  Here is an example of a detailed error response, in JSON format, from
++Elasticsearch 5.6.9:
++
++.. code-block:: json
++
++    {"omes":
++      {"writeoperation": "create",
++       "_index": "rsyslog_testbench",
++       "_type": "test-type",
++       "_id": "92BE7AF79CD44305914C7658AF846A08",
++       "status": 400,
++       "error":
++         {"type": "mapper_parsing_exception",
++          "reason": "failed to parse [msgnum]",
++          "caused_by":
++            {"type": "number_format_exception",
++             "reason": "For input string: \"x00000025\""}}}}
++
++Reference: https://www.elastic.co/guide/en/elasticsearch/guide/current/bulk.html#bulk
++
++.. _omelasticsearch-retryruleset:
++
++retryruleset
++^^^^^^^^^^^^
++
++.. csv-table::
++   :header: "type", "default", "mandatory", "obsolete legacy directive"
++   :widths: auto
++   :class: parameter-table
++
++   "word", "", "no", "none"
++
++If `retryfailures` is not `"on"` (:ref:`omelasticsearch-retryfailures`) then
++this parameter has no effect.  This parameter specifies the name of a ruleset
++to use to route retries.  This is useful if you do not want retried messages to
++be processed starting from the top of your processing pipeline, or if you have
++multiple outputs but do not want to send retried Elasticsearch failures to all
++of your outputs, and you do not want to clutter your processing pipeline with a
++lot of conditionals.  See below :ref:`omelasticsearch-retry-example` for an
++example of how retry processing works.
++
++.. _omelasticsearch-ratelimit.interval:
++
++ratelimit.interval
++^^^^^^^^^^^^^^^^^^
++
++.. csv-table::
++   :header: "type", "default", "mandatory", "obsolete legacy directive"
++   :widths: auto
++   :class: parameter-table
++
++   "integer", "600", "no", "none"
++
++If `retryfailures` is not `"on"` (:ref:`omelasticsearch-retryfailures`) then
++this parameter has no effect.  Specifies the interval in seconds onto which
++rate-limiting is to be applied. If more than ratelimit.burst messages are read
++during that interval, further messages up to the end of the interval are
++discarded. The number of messages discarded is emitted at the end of the
++interval (if there were any discards).
++Setting this to zero turns off rate-limiting.
++
++.. _omelasticsearch-ratelimit.burst:
++
++ratelimit.burst
++^^^^^^^^^^^^^^^
++
++.. csv-table::
++   :header: "type", "default", "mandatory", "obsolete legacy directive"
++   :widths: auto
++   :class: parameter-table
++
++   "integer", "20000", "no", "none"
++
++If `retryfailures` is not `"on"` (:ref:`omelasticsearch-retryfailures`) then
++this parameter has no effect.  Specifies the maximum number of messages that
++can be emitted within the ratelimit.interval interval. For further information,
++see description there.
++
++.. _omelasticsearch-statistic-counter:
++
++Statistic Counter
++=================
++
++This plugin maintains global statistics,
++which are accumulated across all action instances. The statistic is named "omelasticsearch".
++Parameters are:
++
++-  **submitted** - number of messages submitted for processing (with both
++   success and error result)
++
++-  **fail.httprequests** - the number of times a http request failed. Note
++   that a single http request may be used to submit multiple messages, so this
++   number may be (much) lower than fail.http.
++
++-  **fail.http** - number of message failures due to connection like-problems
++   (things like remote server down, broken link etc)
++
++-  **fail.es** - number of failures due to an Elasticsearch error reply. Note that
++   this counter does NOT count the number of failed messages but the number of
++   times a failure occurred (a potentially much smaller number). Counting messages
++   would be quite performance-intense and is thus not done.
++
++The following counters are available when `retryfailures="on"` is used:
++
++-  **response.success** - number of records successfully sent in bulk index
++   requests - counts the number of successful responses
++
++-  **response.bad** - number of times omelasticsearch received an item in a
++   bulk index response that was unrecognized or could not be parsed.  This may
++   indicate that omelasticsearch is attempting to communicate with a version of
++   Elasticsearch that is incompatible, or is otherwise sending back data in the
++   response that cannot be handled
++
++-  **response.duplicate** - number of records in the bulk index request that
++   were duplicates of already existing records - this will only be reported if
++   using `writeoperation="create"` and `bulkid` to assign each record a unique
++   ID
++
++-  **response.badargument** - number of times omelasticsearch received a
++   response that had a status indicating omelasticsearch sent bad data to
++   Elasticsearch.  For example, status `400` and an error message indicating
++   omelasticsearch attempted to store a non-numeric string value in a numeric
++   field.
++
++-  **response.bulkrejection** - number of times omelasticsearch received a
++   response that had a status indicating Elasticsearch was unable to process
++   the record at this time - status `429`.  The record can be retried.
++
++-  **response.other** - number of times omelasticsearch received a
++   response not recognized as one of the above responses, typically some other
++   `4xx` or `5xx` http status.
++
++**The fail.httprequests and fail.http counters reflect only failures that
++omelasticsearch detected.** Once it detects problems, it (usually, depending on
++circumstances) tells the rsyslog core that it wants to be suspended until the
++situation clears (this is a requirement for rsyslog output modules). Once it is
++suspended, it does NOT receive any further messages. Depending on the user
++configuration, messages will be lost during this period. Those lost messages will
++NOT be counted by impstats (as it does not see them).
++
++Note that some previous (pre 7.4.5) versions of this plugin had different counters.
++These were experimental and confusing. The only ones really used were "submits",
++which was the number of successfully processed messages, and "connfail", which was
++equivalent to "failed.http".
++
++How Retries Are Handled
++=======================
++
++When using `retryfailures="on"` (:ref:`omelasticsearch-retryfailures`), the
++original `Message` object (that is, the original `smsg_t *msg` object) **is not
++available**.  This means none of the metadata associated with that object, such
++as various timestamps, hosts/ip addresses, etc., is available for the retry
++operation.  The only thing available is the original JSON string sent in the
++original request, and whatever data is returned in the error response, which
++will contain the Elasticsearch metadata about the index, type, and id, and will
++be made available in the `$.omes` fields.  For the message to retry, the code
++will take the original JSON string and parse it back into an internal `Message`
++object.  This means you **may need to use a different template** to output
++messages for your retry ruleset.  For example, if you used the following
++template to format the Elasticsearch message for the initial submission:
++
++.. code-block:: none
++
++    template(name="es_output_template"
++             type="list"
++             option.json="on") {
++               constant(value="{")
++                 constant(value="\"timestamp\":\"")      property(name="timereported" dateFormat="rfc3339")
++                 constant(value="\",\"message\":\"")     property(name="msg")
++                 constant(value="\",\"host\":\"")        property(name="hostname")
++                 constant(value="\",\"severity\":\"")    property(name="syslogseverity-text")
++                 constant(value="\",\"facility\":\"")    property(name="syslogfacility-text")
++                 constant(value="\",\"syslogtag\":\"")   property(name="syslogtag")
++               constant(value="\"}")
++             }
++
++You would have to use a different template for the retry, since none of the
++`timereported`, `msg`, etc. fields will have the same values for the retry as
++for the initial try.
++
++Examples
++========
++
++Example 1
++^^^^^^^^^
+ 
+ The following sample does the following:
+ 
+ -  loads the omelasticsearch module
+ -  outputs all logs to Elasticsearch using the default settings
+ 
+-::
++.. code-block:: none
+ 
+     module(load="omelasticsearch")
+     *.*     action(type="omelasticsearch")
+ 
++Example 2
++^^^^^^^^^
++
+ The following sample does the following:
+ 
+ -  loads the omelasticsearch module
+@@ -246,7 +582,7 @@ The following sample does the following:
+    -  retry indefinitely if the HTTP request failed (eg: if the target
+       server is down)
+ 
+-::
++.. code-block:: none
+ 
+     module(load="omelasticsearch")
+     template(name="testTemplate"
+@@ -274,6 +610,87 @@ The following sample does the following:
+            queue.dequeuebatchsize="300"
+            action.resumeretrycount="-1")
+ 
++.. _omelasticsearch-writeoperation-example:
++
++Example 3
++^^^^^^^^^
++
++The following sample shows how to use :ref:`omelasticsearch-writeoperation`
++with :ref:`omelasticsearch-dynbulkid` and :ref:`omelasticsearch-bulkid`.  For
++simplicity, it assumes rsyslog has been built with `--enable-libuuid` which
++provides the `uuid` property for each record:
++
++.. code-block:: none
++
++    module(load="omelasticsearch")
++    set $!es_record_id = $uuid;
++    template(name="bulkid-template" type="list") { property(name="$!es_record_id") }
++    action(type="omelasticsearch"
++           ...
++           bulkmode="on"
++           bulkid="bulkid-template"
++           dynbulkid="on"
++           writeoperation="create")
++
++
++.. _omelasticsearch-retry-example:
++
++Example 4
++^^^^^^^^^
++
++The following sample shows how to use :ref:`omelasticsearch-retryfailures` to
++process, discard, or retry failed operations.  This uses
++`writeoperation="create"` with a unique `bulkid` so that we can check for and
++discard duplicate messages as successful.  The `try_es` ruleset is used both
++for the initial attempt and any subsequent retries.  The code in the ruleset
++assumes that if `$.omes!status` is set and is non-zero, this is a retry for a
++previously failed operation.  If the status was successful, or Elasticsearch
++said this was a duplicate, the record is already in Elasticsearch, so we can
++drop the record.  If there was some error processing the response
++e.g. Elasticsearch sent a response formatted in some way that we did not know
++how to process, then submit the record to the `error_es` ruleset.  If the
++response was a "hard" error like `400`, then submit the record to the
++`error_es` ruleset.  In any other case, such as a status `429` or `5xx`, the
++record will be resubmitted to Elasticsearch. In the example, the `error_es`
++ruleset just dumps the records to a file.
++
++.. code-block:: none
++
++    module(load="omelasticsearch")
++    module(load="omfile")
++    set $!es_record_id = $uuid;
++    template(name="bulkid-template" type="list") { property(name="$!es_record_id") }
++
++    ruleset(name="error_es") {
++	    action(type="omfile" template="RSYSLOG_DebugFormat" file="es-bulk-errors.log")
++    }
++
++    ruleset(name="try_es") {
++        if strlen($.omes!status) > 0 then {
++            # retry case
++            if ($.omes!status == 200) or ($.omes!status == 201) or (($.omes!status == 409) and ($.omes!writeoperation == "create")) then {
++                stop # successful
++            }
++            if ($.omes!writeoperation == "unknown") or (strlen($.omes!error!type) == 0) or (strlen($.omes!error!reason) == 0) then {
++                call error_es
++                stop
++            }
++            if ($.omes!status == 400) or ($.omes!status < 200) then {
++                call error_es
++                stop
++            }
++            # else fall through to retry operation
++        }
++        action(type="omelasticsearch"
++                  ...
++                  bulkmode="on"
++                  bulkid="bulkid-template"
++                  dynbulkid="on"
++                  writeoperation="create"
++                  retryfailures="on"
++                  retryruleset="try_es")
++    }
++    call try_es
+ 
+ This documentation is part of the `rsyslog <http://www.rsyslog.com/>`_
+ project.
diff --git a/SOURCES/rsyslog-8.24.0-doc-rhbz1538372-imjournal-duplicates.patch b/SOURCES/rsyslog-8.24.0-doc-rhbz1538372-imjournal-duplicates.patch
new file mode 100644
index 0000000..88e4859
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-doc-rhbz1538372-imjournal-duplicates.patch
@@ -0,0 +1,28 @@
+From 1dbb68f3dc5c7ae94bdea5ad37296cbc2224e92b Mon Sep 17 00:00:00 2001
+From: Jiri Vymazal <jvymazal@redhat.com>
+Date: Wed, 25 Jul 2018 14:24:57 +0200
+Subject: [PATCH] Added WorkAroundJournalBug parameter
+
+this is documentation for rsyslog/rsyslog#2543
+---
+ source/configuration/modules/imjournal.rst | 7 +++++++
+ 1 file changed, 7 insertions(+)
+
+diff --git a/source/configuration/modules/imjournal.rst b/source/configuration/modules/imjournal.rst
+index 2530ddfe..85ca9e7d 100644
+--- a/source/configuration/modules/imjournal.rst
++++ b/source/configuration/modules/imjournal.rst
+@@ -99,6 +99,13 @@ -  **usepidfromsystem** [**off**/on]
+    Retrieves the trusted systemd parameter, _PID, instead of the user 
+    systemd parameter, SYSLOG_PID, which is the default.
+ 
++-  **WorkAroundJournalBug** [**off**/on]
++
++    When a journald instance rotates its files, it is possible that duplicate records
++    appear in rsyslog. If you turn this option on, imjournal will keep track of the cursor
++    with each message to work around this problem. Be aware that in some cases this
++    might result in imjournal performance hit.
++
+ **Caveats/Known Bugs:**
+ 
+ - As stated above, a corrupted systemd journal database can cause major
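The effect of `WorkAroundJournalBug` - remembering the cursor of each delivered record so that records re-read after a journal rotation can be dropped - can be sketched as follows (illustrative Python, not rsyslog code; `deliver_unique` is a hypothetical helper):

```python
def deliver_unique(records, seen=None):
    """Drop journal records whose cursor was already delivered (illustrative)."""
    seen = set() if seen is None else seen
    out = []
    for cursor, message in records:
        if cursor in seen:
            continue          # duplicate produced by journal file rotation
        seen.add(cursor)
        out.append(message)
    return out
```

This is also why the option costs performance: a cursor must be tracked per message instead of only at the persist interval.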
diff --git a/SOURCES/rsyslog-8.24.0-doc-rhbz1539193-mmkubernetes-new-plugin.patch b/SOURCES/rsyslog-8.24.0-doc-rhbz1539193-mmkubernetes-new-plugin.patch
new file mode 100644
index 0000000..f2ad9f7
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-doc-rhbz1539193-mmkubernetes-new-plugin.patch
@@ -0,0 +1,384 @@
+diff --git a/source/configuration/modules/mmkubernetes.rst b/source/configuration/modules/mmkubernetes.rst
+new file mode 100644
+index 0000000..1cd3d2a
+--- /dev/null
++++ b/source/configuration/modules/mmkubernetes.rst
+@@ -0,0 +1,378 @@
++*****************************************
++Kubernetes Metadata Module (mmkubernetes)
++*****************************************
++
++===========================  ===========================================================================
++**Module Name:**             **mmkubernetes**
++**Author:**                  `Tomáš Heinrich`
++                             `Rich Megginson` <rmeggins@redhat.com>
++===========================  ===========================================================================
++
++Purpose
++=======
++
++This module is used to add `Kubernetes <https://kubernetes.io/>`_
++metadata to log messages logged by containers running in Kubernetes.
++It will add the namespace uuid, pod uuid, pod and namespace labels and
++annotations, and other metadata associated with the pod and
++namespace.
++
++.. note::
++
++   This **only** works with log files in `/var/log/containers/*.log`
++   (docker `--log-driver=json-file`), or with journald entries with
++   message properties `CONTAINER_NAME` and `CONTAINER_ID_FULL` (docker
++   `--log-driver=journald`), and when the application running inside
++   the container writes logs to `stdout`/`stderr`.  This **does not**
++   currently work with other log drivers.
++
++For json-file logs, you must use the `imfile` module with the
++`addmetadata="on"` parameter, and the filename must match the
++liblognorm rules specified by the `filenamerules`
++(:ref:`filenamerules`) or `filenamerulebase` (:ref:`filenamerulebase`)
++parameter values.
++
++For journald logs, there must be a message property `CONTAINER_NAME`
++which matches the liblognorm rules specified by the `containerrules`
++(:ref:`containerrules`) or `containerrulebase`
++(:ref:`containerrulebase`) parameter values. The record must also have
++the message property `CONTAINER_ID_FULL`.
++
++This module is implemented via the output module interface. This means
++that mmkubernetes should be called just like an action. After it has
++been called, there will be two new message properties: `kubernetes`
++and `docker`.  There will be subfields of each one for the various
++metadata items: `$!kubernetes!namespace_name`,
++`$!kubernetes!labels!this-is-my-label`, etc.  There is currently only
++one docker subfield: `$!docker!container_id`.  See
++https://github.com/ViaQ/elasticsearch-templates/blob/master/namespaces/kubernetes.yml
++and
++https://github.com/ViaQ/elasticsearch-templates/blob/master/namespaces/docker.yml
++for more details.
++
++Configuration Parameters
++========================
++
++.. note::
++
++   Parameter names are case-insensitive.
++
++Module Parameters and Action Parameters
++---------------------------------------
++
++.. _kubernetesurl:
++
++KubernetesURL
++^^^^^^^^^^^^^
++
++.. csv-table::
++   :header: "type", "default", "mandatory", "obsolete legacy directive"
++   :widths: auto
++   :class: parameter-table
++
++   "word", "https://kubernetes.default.svc.cluster.local:443", "yes", "none"
++
++The URL of the Kubernetes API server.  Example: `https://localhost:8443`.
++
++.. _mmkubernetes-tls.cacert:
++
++tls.cacert
++^^^^^^^^^^
++
++.. csv-table::
++   :header: "type", "default", "mandatory", "obsolete legacy directive"
++   :widths: auto
++   :class: parameter-table
++
++   "word", "none", "no", "none"
++
++Full path and file name of file containing the CA cert of the
++Kubernetes API server cert issuer.  Example: `/etc/rsyslog.d/mmk8s-ca.crt`.
++This parameter is not mandatory if using an `http` scheme instead of `https` in
++`kubernetesurl`, or if using `allowunsignedcerts="yes"`.
++
++.. _tokenfile:
++
++tokenfile
++^^^^^^^^^
++
++.. csv-table::
++   :header: "type", "default", "mandatory", "obsolete legacy directive"
++   :widths: auto
++   :class: parameter-table
++
++   "word", "none", "no", "none"
++
++The file containing the token to use to authenticate to the Kubernetes API
++server.  One of `tokenfile` or `token` is required if Kubernetes is configured
++with access control.  Example: `/etc/rsyslog.d/mmk8s.token`
++
++.. _token:
++
++token
++^^^^^
++
++.. csv-table::
++   :header: "type", "default", "mandatory", "obsolete legacy directive"
++   :widths: auto
++   :class: parameter-table
++
++   "word", "none", "no", "none"
++
++The token to use to authenticate to the Kubernetes API server.  One of `token`
++or `tokenfile` is required if Kubernetes is configured with access control.
++Example: `UxMU46ptoEWOSqLNa1bFmH`
++
++.. _annotation_match:
++
++annotation_match
++^^^^^^^^^^^^^^^^
++
++.. csv-table::
++   :header: "type", "default", "mandatory", "obsolete legacy directive"
++   :widths: auto
++   :class: parameter-table
++
++   "array", "none", "no", "none"
++
++By default, no pod or namespace annotations are added to the
++messages.  This parameter is an array of patterns matched against the keys
++of the `annotations` field in the pod and namespace metadata; matching
++annotations are included in the `$!kubernetes!annotations` (pod annotations)
++or `$!kubernetes!namespace_annotations` (namespace annotations)
++message properties.  Example: `["k8s.*master","k8s.*node"]`
++
++.. _srcmetadatapath:
++
++srcmetadatapath
++^^^^^^^^^^^^^^^
++
++.. csv-table::
++   :header: "type", "default", "mandatory", "obsolete legacy directive"
++   :widths: auto
++   :class: parameter-table
++
++   "word", "$!metadata!filename", "no", "none"
++
++When reading json-file logs, with `imfile` and `addmetadata="on"`,
++this is the property where the filename is stored.
++
++.. _dstmetadatapath:
++
++dstmetadatapath
++^^^^^^^^^^^^^^^
++
++.. csv-table::
++   :header: "type", "default", "mandatory", "obsolete legacy directive"
++   :widths: auto
++   :class: parameter-table
++
++   "word", "$!", "no", "none"
++
++This is where the `kubernetes` and `docker` properties will be
++written.  By default, the module will add `$!kubernetes` and
++`$!docker`.
++
++.. _allowunsignedcerts:
++
++allowunsignedcerts
++^^^^^^^^^^^^^^^^^^
++
++.. csv-table::
++   :header: "type", "default", "mandatory", "obsolete legacy directive"
++   :widths: auto
++   :class: parameter-table
++
++   "boolean", "off", "no", "none"
++
++If `"on"`, this will set the curl `CURLOPT_SSL_VERIFYPEER` option to
++`0`.  You are strongly discouraged from setting this to `"on"`.  It is
++useful primarily for debugging or testing.
++
++.. _de_dot:
++
++de_dot
++^^^^^^
++
++.. csv-table::
++   :header: "type", "default", "mandatory", "obsolete legacy directive"
++   :widths: auto
++   :class: parameter-table
++
++   "boolean", "on", "no", "none"
++
++When processing labels and annotations, if this parameter is set to
++`"on"`, the key strings will have their `.` characters replaced with
++the string specified by the `de_dot_separator` parameter.
++
++.. _de_dot_separator:
++
++de_dot_separator
++^^^^^^^^^^^^^^^^
++
++.. csv-table::
++   :header: "type", "default", "mandatory", "obsolete legacy directive"
++   :widths: auto
++   :class: parameter-table
++
++   "word", "_", "no", "none"
++
++When processing labels and annotations, if the `de_dot` parameter is
++set to `"on"`, the key strings will have their `.` characters replaced
++with the string specified by the string value of this parameter.
++
++.. _filenamerules:
++
++filenamerules
++^^^^^^^^^^^^^
++
++.. csv-table::
++   :header: "type", "default", "mandatory", "obsolete legacy directive"
++   :widths: auto
++   :class: parameter-table
++
++   "word", "SEE BELOW", "no", "none"
++
++.. note::
++
++    This directive is not supported with liblognorm 2.0.2 and earlier.
++
++When processing json-file logs, these are the lognorm rules to use to
++match the filename and extract metadata.  The default value is::
++
++    rule=:/var/log/containers/%pod_name:char-to:_%_%namespace_name:char-to:_%_%conta\
++    iner_name:char-to:-%-%container_id:char-to:.%.log
++
++.. note::
++
++    In the above rules, the slashes ``\`` ending each line indicate
++    line wrapping - they are not part of the rule.
++
++There are two rules because the `container_hash` is optional.
++
++.. _filenamerulebase:
++
++filenamerulebase
++^^^^^^^^^^^^^^^^
++
++.. csv-table::
++   :header: "type", "default", "mandatory", "obsolete legacy directive"
++   :widths: auto
++   :class: parameter-table
++
++   "word", "/etc/rsyslog.d/k8s_filename.rulebase", "no", "none"
++
++When processing json-file logs, this is the rulebase used to
++match the filename and extract metadata.  For the actual rules, see
++`filenamerules` above.
++
++.. _containerrules:
++
++containerrules
++^^^^^^^^^^^^^^
++
++.. csv-table::
++   :header: "type", "default", "mandatory", "obsolete legacy directive"
++   :widths: auto
++   :class: parameter-table
++
++   "word", "SEE BELOW", "no", "none"
++
++.. note::
++
++    This directive is not supported with liblognorm 2.0.2 and earlier.
++
++For journald logs, there must be a message property `CONTAINER_NAME`
++whose value matches the rules specified by this parameter.
++The default value is::
++
++    rule=:%k8s_prefix:char-to:_%_%container_name:char-to:.%.%container_hash:char-to:\
++    _%_%pod_name:char-to:_%_%namespace_name:char-to:_%_%not_used_1:char-to:_%_%not_u\
++    sed_2:rest%
++    rule=:%k8s_prefix:char-to:_%_%container_name:char-to:_%_%pod_name:char-to:_%_%na\
++    mespace_name:char-to:_%_%not_used_1:char-to:_%_%not_used_2:rest%
++
++.. note::
++
++    In the above rules, the slashes ``\`` ending each line indicate
++    line wrapping - they are not part of the rule.
++
++There are two rules because the `container_hash` is optional.
++
++.. _containerrulebase:
++
++containerrulebase
++^^^^^^^^^^^^^^^^^
++
++.. csv-table::
++   :header: "type", "default", "mandatory", "obsolete legacy directive"
++   :widths: auto
++   :class: parameter-table
++
++   "word", "/etc/rsyslog.d/k8s_container_name.rulebase", "no", "none"
++
++When processing journald logs, this is the rulebase used to
++match the CONTAINER_NAME property value and extract metadata.  For the
++actual rules, see `containerrules`.
++
++Fields
++------
++
++These are the fields added from the metadata in the json-file filename, or from
++the `CONTAINER_NAME` and `CONTAINER_ID_FULL` fields from the `imjournal` input:
++
++`$!kubernetes!namespace_name`, `$!kubernetes!pod_name`,
++`$!kubernetes!container_name`, `$!docker!id`, `$!kubernetes!master_url`.
++
++If mmkubernetes can extract the above fields from the input, the following
++fields will always be present.  If they are not present, mmkubernetes
++failed to look up the namespace or pod in Kubernetes:
++
++`$!kubernetes!namespace_id`, `$!kubernetes!pod_id`,
++`$!kubernetes!creation_timestamp`, `$!kubernetes!host`
++
++The following fields may be present, depending on how the namespace and pod are
++defined in Kubernetes, and depending on the value of the directive
++`annotation_match`:
++
++`$!kubernetes!labels`, `$!kubernetes!annotations`, `$!kubernetes!namespace_labels`,
++`$!kubernetes!namespace_annotations`
++
++More fields may be added in the future.
++
++Example
++-------
++
++Assuming you have an `imfile` input reading from docker json-file container
++logs managed by Kubernetes, with `addmetadata="on"` so that mmkubernetes can
++get the basic necessary Kubernetes metadata from the filename:
++
++.. code-block:: none
++
++    input(type="imfile" file="/var/log/containers/*.log"
++          tag="kubernetes" addmetadata="on")
++
++and/or an `imjournal` input for docker journald container logs annotated by
++Kubernetes:
++
++.. code-block:: none
++
++    input(type="imjournal")
++
++Then mmkubernetes can be used to annotate log records like this:
++
++.. code-block:: none
++
++    module(load="mmkubernetes")
++
++    action(type="mmkubernetes")
++
++After this, you should have log records with fields described in the `Fields`
++section above.
++
++Credits
++-------
++
++This work is based on
++https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter
++and has many of the same features.
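The `filenamerules` pattern documented above can be approximated with a regular expression (a simplified sketch, not the liblognorm implementation; `parse_container_filename` is a hypothetical helper, and container names containing `-` would need the second, hash-aware rule):

```python
import re

# Approximates: rule=:/var/log/containers/%pod_name:char-to:_%_%namespace_name
#               :char-to:_%_%container_name:char-to:-%-%container_id:char-to:.%.log
FILENAME_RE = re.compile(
    r"^/var/log/containers/"
    r"(?P<pod_name>[^_]+)_(?P<namespace_name>[^_]+)_"
    r"(?P<container_name>[^-]+)-(?P<container_id>[^.]+)\.log$"
)

def parse_container_filename(path):
    """Extract pod/namespace/container metadata from a json-file log path."""
    m = FILENAME_RE.match(path)
    return m.groupdict() if m else None
```

This is the metadata mmkubernetes needs before it can query the API server for the rest (labels, annotations, uids).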
diff --git a/SOURCES/rsyslog-8.24.0-doc-rhbz1625935-mmkubernetes-CRI-O.patch b/SOURCES/rsyslog-8.24.0-doc-rhbz1625935-mmkubernetes-CRI-O.patch
new file mode 100644
index 0000000..ac5d8bc
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-doc-rhbz1625935-mmkubernetes-CRI-O.patch
@@ -0,0 +1,65 @@
+From 7b09e9782c4e6892a8d16fd4e3aa2cca440a41e6 Mon Sep 17 00:00:00 2001
+From: Rich Megginson <rmeggins@redhat.com>
+Date: Wed, 22 Aug 2018 08:37:03 -0600
+Subject: [PATCH] url not required - add information about CRI-O
+
+The KubernetesURL parameter is not mandatory since it has
+a useful default value.
+Add information about CRI-O.
+Minor cleanup.
+---
+ source/configuration/modules/mmkubernetes.rst | 18 +++++++++---------
+ 1 file changed, 9 insertions(+), 9 deletions(-)
+
+diff --git a/source/configuration/modules/mmkubernetes.rst b/source/configuration/modules/mmkubernetes.rst
+index be9e710f..a3cd4d49 100644
+--- a/source/configuration/modules/mmkubernetes.rst
++++ b/source/configuration/modules/mmkubernetes.rst
+@@ -19,14 +19,14 @@ namespace.
+ 
+ .. note::
+ 
+-   This **only** works with log files in `/var/log/containers/*.log`
+-   (docker `--log-driver=json-file`), or with journald entries with
++   This **only** works with log files in `/var/log/containers/*.log` (docker
++   `--log-driver=json-file`, or CRI-O log files), or with journald entries with
+    message properties `CONTAINER_NAME` and `CONTAINER_ID_FULL` (docker
+-   `--log-driver=journald`), and when the application running inside
+-   the container writes logs to `stdout`/`stderr`.  This **does not**
+-   currently work with other log drivers.
++   `--log-driver=journald`), and when the application running inside the
++   container writes logs to `stdout`/`stderr`.  This **does not** currently
++   work with other log drivers.
+ 
+-For json-file logs, you must use the `imfile` module with the
++For json-file and CRI-O logs, you must use the `imfile` module with the
+ `addmetadata="on"` parameter, and the filename must match the
+ liblognorm rules specified by the `filenamerules`
+ (:ref:`filenamerules`) or `filenamerulebase` (:ref:`filenamerulebase`)
+@@ -70,7 +70,7 @@ KubernetesURL
+    :widths: auto
+    :class: parameter-table
+ 
+-   "word", "https://kubernetes.default.svc.cluster.local:443", "yes", "none"
++   "word", "https://kubernetes.default.svc.cluster.local:443", "no", "none"
+ 
+ The URL of the Kubernetes API server.  Example: `https://localhost:8443`.
+ 
+@@ -248,8 +248,6 @@ match the filename and extract metadata.  The default value is::
+     In the above rules, the slashes ``\`` ending each line indicate
+     line wrapping - they are not part of the rule.
+ 
+-There are two rules because the `container_hash` is optional.
+-
+ .. _filenamerulebase:
+ 
+ filenamerulebase
+@@ -351,6 +349,8 @@ get the basic necessary Kubernetes metadata from the filename:
+     input(type="imfile" file="/var/log/containers/*.log"
+           tag="kubernetes" addmetadata="on")
+ 
++(Add `reopenOnTruncate="on"` if using Docker; it is not required by CRI-O.)
++
+ and/or an `imjournal` input for docker journald container logs annotated by
+ Kubernetes:
+ 
diff --git a/SOURCES/rsyslog-8.24.0-doc-rhbz1696686-imjournal-fsync.patch b/SOURCES/rsyslog-8.24.0-doc-rhbz1696686-imjournal-fsync.patch
new file mode 100644
index 0000000..ef68b32
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-doc-rhbz1696686-imjournal-fsync.patch
@@ -0,0 +1,41 @@
+From 3ba46e563a3a5384fd6d783a8315273c237cb6af Mon Sep 17 00:00:00 2001
+From: Jiri Vymazal <jvymazal@redhat.com>
+Date: Tue, 23 Jul 2019 12:47:09 +0200
+Subject: [PATCH] Documentation for new 'fsync' imjournal option
+
+related to #3762 main repo PR
+---
+ source/configuration/modules/imjournal.rst | 13 +++++++++++--
+ 1 file changed, 11 insertions(+), 2 deletions(-)
+
+diff --git a/source/configuration/modules/imjournal.rst b/source/configuration/modules/imjournal.rst
+index d8523ae8..47428fc6 100644
+--- a/source/configuration/modules/imjournal.rst
++++ b/source/configuration/modules/imjournal.rst
+@@ -3,8 +3,7 @@
+ 
+ **Module Name:** imjournal
+ 
+-**Author:** Milan Bartos <mbartos@redhat.com> (This module is **not**
+-project-supported)
++**Author:** Milan Bartos <mbartos@redhat.com>
+ 
+ **Available since**: 7.3.11
+ 
+@@ -107,6 +106,16 @@
+     with each message to work around this problem. Be aware that in some cases this
+     might result in imjournal performance hit.
+ 
++-  **FSync** [**off**/on]
++
++    When there is a hard crash, power loss, or similar abrupt end of the rsyslog
++    process, there is a risk of the state file not being written to persistent
++    storage, or of it being corrupted. This then results in imjournal starting to
++    read elsewhere than desired, most probably causing message duplication. To
++    mitigate this problem you can turn this option on, which forces state file
++    writes to persistent physical storage. Please note that fsync calls are costly,
++    so especially with a low PersistStateInterval value this may cause a considerable performance hit.
++
+ **Caveats/Known Bugs:**
+ 
+ - As stated above, a corrupted systemd journal database can cause major
diff --git a/SOURCES/rsyslog-8.24.0-msg_c_nonoverwrite_merge.patch b/SOURCES/rsyslog-8.24.0-msg_c_nonoverwrite_merge.patch
new file mode 100644
index 0000000..1ca26ff
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-msg_c_nonoverwrite_merge.patch
@@ -0,0 +1,65 @@
+From fa7d98b0cb0512d84355e3aafdc5a3e366842f2a Mon Sep 17 00:00:00 2001
+From: Radovan Sroka <rsroka@redhat.com>
+Date: Mon, 21 Nov 2016 13:38:18 +0100
+Subject: [PATCH 2/4] Rebased from: Patch2:
+ rsyslog-7.2.1-msg_c_nonoverwrite_merge.patch
+
+Resolves:
+	no bugzilla addressed
+---
+ runtime/msg.c | 25 +++++++++++++++++++++++--
+ 1 file changed, 23 insertions(+), 2 deletions(-)
+
+diff --git a/runtime/msg.c b/runtime/msg.c
+index f6e017b..5430331 100644
+--- a/runtime/msg.c
++++ b/runtime/msg.c
+@@ -4632,6 +4632,27 @@ finalize_it:
+ 	RETiRet;
+ }
+ 
++static rsRetVal jsonMerge(struct json_object *existing, struct json_object *json);
++
++static rsRetVal
++jsonMergeNonOverwrite(struct json_object *existing, struct json_object *json)
++{
++	DEFiRet;
++
++	struct json_object_iterator it = json_object_iter_begin(existing);
++	struct json_object_iterator itEnd = json_object_iter_end(existing);
++	while (!json_object_iter_equal(&it, &itEnd)) {
++		json_object_object_add(json, json_object_iter_peek_name(&it),
++			json_object_get(json_object_iter_peek_value(&it)));
++		json_object_iter_next(&it);
++	}
++
++	CHKiRet(jsonMerge(existing, json));
++finalize_it:
++	RETiRet;
++}
++
++
+ static rsRetVal
+ jsonMerge(struct json_object *existing, struct json_object *json)
+ {
+@@ -4714,7 +4735,7 @@ msgAddJSON(msg_t * const pM, uchar *name, struct json_object *json, int force_re
+ 		if(*pjroot == NULL)
+ 			*pjroot = json;
+ 		else
+-			CHKiRet(jsonMerge(*pjroot, json));
++			CHKiRet(jsonMergeNonOverwrite(*pjroot, json));
+ 	} else {
+ 		if(*pjroot == NULL) {
+ 			/* now we need a root obj */
+@@ -4742,7 +4763,7 @@ msgAddJSON(msg_t * const pM, uchar *name, struct json_object *json, int force_re
+ 			json_object_object_add(parent, (char*)leaf, json);
+ 		} else {
+ 			if(json_object_get_type(json) == json_type_object) {
+-				CHKiRet(jsonMerge(*pjroot, json));
++				CHKiRet(jsonMergeNonOverwrite(*pjroot, json));
+ 			} else {
+ 				/* TODO: improve the code below, however, the current
+ 				 *       state is not really bad */
+-- 
+2.7.4
+
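The semantics of `jsonMergeNonOverwrite` above - on key conflicts, the values already present win over newly added ones - can be sketched for flat objects (illustrative Python, not the json-c implementation):

```python
def merge_non_overwrite(existing, new):
    """Merge `new` into `existing`, keeping `existing` values on key conflicts."""
    merged = dict(new)
    merged.update(existing)   # existing keys overwrite the incoming ones
    return merged
```

This matches the patched `msgAddJSON` behavior at the top level: properties already attached to a message are not clobbered by a later merge.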
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1056548-getaddrinfo.patch b/SOURCES/rsyslog-8.24.0-rhbz1056548-getaddrinfo.patch
new file mode 100644
index 0000000..82012cd
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1056548-getaddrinfo.patch
@@ -0,0 +1,56 @@
+diff --git a/runtime/net.c b/runtime/net.c
+index 3610fc5..2d8de94 100644
+--- a/runtime/net.c
++++ b/runtime/net.c
+@@ -1181,26 +1181,24 @@ getLocalHostname(uchar **ppName)
+ 	}
+ 
+ 	char *dot = strstr(hnbuf, ".");
++	struct addrinfo *res = NULL;
+ 	if(!empty_hostname && dot == NULL) {
+ 		/* we need to (try) to find the real name via resolver */
+-		struct hostent *hent = gethostbyname((char*)hnbuf);
+-		if(hent) {
+-			int i = 0;
+-			if(hent->h_aliases) {
+-				const size_t hnlen = strlen(hnbuf);
+-				for(i = 0; hent->h_aliases[i]; i++) {
+-					if(!strncmp(hent->h_aliases[i], hnbuf, hnlen)
+-					   && hent->h_aliases[i][hnlen] == '.') {
+-						break; /* match! */
+-					}
+-				}
+-			}
+-			if(hent->h_aliases && hent->h_aliases[i]) {
+-				CHKmalloc(fqdn = (uchar*)strdup(hent->h_aliases[i]));
+-			} else {
+-				CHKmalloc(fqdn = (uchar*)strdup(hent->h_name));
++		struct addrinfo flags;
++		memset(&flags, 0, sizeof(flags));
++		flags.ai_flags = AI_CANONNAME;
++		int error = getaddrinfo((char*)hnbuf, NULL, &flags, &res);
++		if (error != 0) {
++			dbgprintf("getaddrinfo: %s\n", gai_strerror(error));
++			ABORT_FINALIZE(RS_RET_IO_ERROR);
++		}
++		if (res != NULL) {
++			/* When AI_CANONNAME is set first member of res linked-list */
++			/* should contain what we need */
++			if (res->ai_canonname != NULL && res->ai_canonname[0] != '\0') {
++				CHKmalloc(fqdn = (uchar*)strdup(res->ai_canonname));
++				dot = strstr((char*)fqdn, ".");
+ 			}
+-			dot = strstr((char*)fqdn, ".");
+ 		}
+ 	}
+ 
+@@ -1215,6 +1213,9 @@ getLocalHostname(uchar **ppName)
+ 
+ 	*ppName = fqdn;
+ finalize_it:
++	if (res != NULL) {
++		freeaddrinfo(res);
++	}
+ 	RETiRet;
+ }
+ 
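The `AI_CANONNAME` lookup that this patch performs in C is also exposed by Python's socket module, which makes it easy to check what canonical name the local resolver returns for a host (a sketch; the result depends entirely on resolver configuration):

```python
import socket

def canonical_name(hostname):
    """Return the canonical (possibly fully qualified) name for hostname."""
    infos = socket.getaddrinfo(hostname, None, flags=socket.AI_CANONNAME)
    # With AI_CANONNAME set, the first result carries the canonical name.
    return infos[0][3] or hostname
```

Unlike the removed `gethostbyname` path, `getaddrinfo` is thread-safe and protocol-independent, which is the motivation for the patch.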
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1088021-systemd-time-backwards.patch b/SOURCES/rsyslog-8.24.0-rhbz1088021-systemd-time-backwards.patch
new file mode 100644
index 0000000..2800f03
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1088021-systemd-time-backwards.patch
@@ -0,0 +1,28 @@
+diff -up ./plugins/imjournal/imjournal.c.time ./plugins/imjournal/imjournal.c
+--- ./plugins/imjournal/imjournal.c.time	2016-12-21 17:50:13.849000000 +0100
++++ ./plugins/imjournal/imjournal.c	2016-12-21 18:20:03.908000000 +0100
+@@ -538,7 +538,24 @@ loadJournalState(void)
+ 						"couldn't seek to cursor `%s'\n", readCursor);
+ 					iRet = RS_RET_ERR;
+ 				} else {
++					char * tmp_cursor = NULL;
+ 					sd_journal_next(j);
++					/*
++ 					* This resolves the situation after a reboot when the boot_id doesn't match,
++ 					* so the cursor points into the "future". Usually sd_journal_next jumps to the head of the journal due to journal approximation,
++ 					* but when the system time goes backwards and the cursor is still invalid, rsyslog stops logging. We use
++ 					* sd_journal_get_cursor to validate our cursor. When the cursor is invalid we try to jump to the head of the journal.
++ 					* This time problem does not affect a persistent journal.
++ 					* */
++					if (sd_journal_get_cursor(j, &tmp_cursor) < 0 && sd_journal_has_persistent_files(j) == 0) {
++						errmsg.LogError(0, RS_RET_IO_ERROR, "imjournal: "
++                                        	"loaded invalid cursor, seeking to the head of journal\n");
++						if (sd_journal_seek_head(j) < 0) {
++							errmsg.LogError(0, RS_RET_ERR, "imjournal: "
++                                                	"sd_journal_seek_head() failed, when cursor is invalid\n");
++							iRet = RS_RET_ERR;
++						}
++					} 
+ 				}
+ 			} else {
+ 				errmsg.LogError(0, RS_RET_IO_ERROR, "imjournal: "
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1165236-snmp-mib.patch b/SOURCES/rsyslog-8.24.0-rhbz1165236-snmp-mib.patch
new file mode 100644
index 0000000..efdae89
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1165236-snmp-mib.patch
@@ -0,0 +1,20 @@
+@@ -, +, @@ 
+---
+ plugins/omsnmp/omsnmp.c | 6 ++++++
+ 1 file changed, 6 insertions(+)
+--- a/plugins/omsnmp/omsnmp.c	
++++ a/plugins/omsnmp/omsnmp.c	
+@@ -454,6 +454,12 @@ CODESTARTnewActInst
+ 		}
+ 	}
+ 
++	/* Init NetSNMP library and read in MIB database */
++	init_snmp("rsyslog");
++
++	/* Set some defaults in the NetSNMP library */
++	netsnmp_ds_set_int(NETSNMP_DS_LIBRARY_ID, NETSNMP_DS_LIB_DEFAULT_PORT, pData->iPort );
++
+ 	CHKiRet(OMSRsetEntry(*ppOMSR, 0, (uchar*)strdup((pData->tplName == NULL) ? 
+ 						"RSYSLOG_FileFormat" : (char*)pData->tplName),
+ 						OMSR_NO_RQD_TPL_OPTS));
+-- 
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1196230-ratelimit-add-source.patch b/SOURCES/rsyslog-8.24.0-rhbz1196230-ratelimit-add-source.patch
new file mode 100644
index 0000000..b1d140c
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1196230-ratelimit-add-source.patch
@@ -0,0 +1,52 @@
+From f4958b548776e8b9c9c5ef211116eb503aff8e5b Mon Sep 17 00:00:00 2001
+From: Jiri Vymazal <jvymazal@redhat.com>
+Date: Tue, 18 Apr 2017 16:42:22 +0200
+Subject: [PATCH] Putting process name into ratelimiter with imuxsock
+
+---
+ plugins/imuxsock/imuxsock.c | 25 ++++++++++++++++++++-----
+ 1 file changed, 20 insertions(+), 5 deletions(-)
+
+diff --git a/plugins/imuxsock/imuxsock.c b/plugins/imuxsock/imuxsock.c
+index 277c30e..1a57f6e 100644
+--- a/plugins/imuxsock/imuxsock.c
++++ b/plugins/imuxsock/imuxsock.c
+@@ -594,7 +594,7 @@ findRatelimiter(lstn_t *pLstn, struct ucred *cred, ratelimit_t **prl)
+ 	ratelimit_t *rl = NULL;
+ 	int r;
+ 	pid_t *keybuf;
+-	char pidbuf[256];
++	char pinfobuf[512];
+ 	DEFiRet;
+ 
+ 	if(cred == NULL)
+@@ -616,10 +616,25 @@ findRatelimiter(lstn_t *pLstn, struct ucred *cred, ratelimit_t **prl)
+ 		DBGPRINTF("imuxsock: no ratelimiter for pid %lu, creating one\n",
+ 			  (unsigned long) cred->pid);
+ 		STATSCOUNTER_INC(ctrNumRatelimiters, mutCtrNumRatelimiters);
+-		snprintf(pidbuf, sizeof(pidbuf), "pid %lu",
+-			(unsigned long) cred->pid);
+-		pidbuf[sizeof(pidbuf)-1] = '\0'; /* to be on safe side */
+-		CHKiRet(ratelimitNew(&rl, "imuxsock", pidbuf));
++		/* read process name from system  */
++		char procName[256]; /* enough for any sane process name  */
++		snprintf(procName, sizeof(procName), "/proc/%lu/cmdline", (unsigned long) cred->pid);
++		FILE *f = fopen(procName, "r");
++		size_t len = 0;
++		if (f) {
++			len = fread(procName, sizeof(char), sizeof(procName) - 1, f);
++			procName[len] = '\0'; /* fread does not NUL-terminate */
++			fclose(f);
++		}
++		if (len > 0) {
++			snprintf(pinfobuf, sizeof(pinfobuf), "pid: %lu, name: %s",
++				(unsigned long) cred->pid, procName);
++		} else {
++			snprintf(pinfobuf, sizeof(pinfobuf), "pid: %lu",
++				(unsigned long) cred->pid);
++		}
++		pinfobuf[sizeof(pinfobuf)-1] = '\0'; /* to be on safe side */
++		CHKiRet(ratelimitNew(&rl, "imuxsock", pinfobuf));
+ 		ratelimitSetLinuxLike(rl, pLstn->ratelimitInterval, pLstn->ratelimitBurst);
+ 		ratelimitSetSeverity(rl, pLstn->ratelimitSev);
+ 		CHKmalloc(keybuf = malloc(sizeof(pid_t)));
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1245194-imjournal-ste-file.patch b/SOURCES/rsyslog-8.24.0-rhbz1245194-imjournal-ste-file.patch
new file mode 100644
index 0000000..cd8b910
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1245194-imjournal-ste-file.patch
@@ -0,0 +1,24 @@
+From: Jiri Vymazal <jvymazal@redhat.com>
+Date: Fri, 13 Jan 2017 13:38:53 -0500
+Subject: [PATCH 1/1] Rebased from: Patch33:
+ rsyslog-7.4.7-rhbz1245194-imjournal-ste-file.patch
+
+Resolves:
+rhbz#1245194
+---
+ plugins/imjournal/imjournal.c | 2 ++
+ 1 file changed, 2 insertions(+), 0 deletions(-)
+
+diff --git a/plugins/imjournal/imjournal.c b/plugins/imjournal/imjournal.c
+index d4ea0d5..9d40e04 100644
+--- a/plugins/imjournal/imjournal.c
++++ b/plugins/imjournal/imjournal.c
+@@ -793,6 +793,8 @@
+ 		NULL, &cs.stateFile, STD_LOADABLE_MODULE_ID));
+ 	CHKiRet(omsdRegCFSLineHdlr((uchar *)"imjournalignorepreviousmessages", 0, eCmdHdlrBinary,
+ 		NULL, &cs.bIgnorePrevious, STD_LOADABLE_MODULE_ID));
++	CHKiRet(omsdRegCFSLineHdlr((uchar *)"imjournalignorenonvalidstatefile", 0, eCmdHdlrBinary,
++		NULL, &cs.bIgnoreNonValidStatefile, STD_LOADABLE_MODULE_ID)); 
+ 	CHKiRet(omsdRegCFSLineHdlr((uchar *)"imjournaldefaultseverity", 0, eCmdHdlrSeverity,
+ 		NULL, &cs.iDfltSeverity, STD_LOADABLE_MODULE_ID));
+ 	CHKiRet(omsdRegCFSLineHdlr((uchar *)"imjournaldefaultfacility", 0, eCmdHdlrCustomHandler,
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1303617-imfile-wildcards.patch b/SOURCES/rsyslog-8.24.0-rhbz1303617-imfile-wildcards.patch
new file mode 100644
index 0000000..be1f0ce
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1303617-imfile-wildcards.patch
@@ -0,0 +1,30 @@
+From 75dc28c1cb4d3988608352f83f7dc420c17d7c64 Mon Sep 17 00:00:00 2001
+From: Tomas Sykora <tosykora@redhat.com>
+Date: Thu, 24 Nov 2016 10:27:48 -0500
+Subject: [PATCH 2/4] Rebased from:
+ rsyslog-7.4.7-rhbz1303617-imfile-wildcards.patch
+
+Resolves:
+rhbz#1303617
+
+-This patch was already upstreamed; we just have a different default option (polling) than upstream.
+---
+ plugins/imfile/imfile.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/plugins/imfile/imfile.c b/plugins/imfile/imfile.c
+index 519768c..09a321e 100644
+--- a/plugins/imfile/imfile.c
++++ b/plugins/imfile/imfile.c
+@@ -1121,7 +1121,7 @@ BEGINsetModCnf
+ 	struct cnfparamvals *pvals = NULL;
+ 	int i;
+ CODESTARTsetModCnf
+-	loadModConf->opMode = OPMODE_INOTIFY; /* new style config has different default! */
++	loadModConf->opMode = OPMODE_POLLING; /* Difference from upstream, upstream has default option INOTIFY */
+ 	pvals = nvlstGetParams(lst, &modpblk, NULL);
+ 	if(pvals == NULL) {
+ 		errmsg.LogError(0, RS_RET_MISSING_CNFPARAMS, "imfile: error processing module "
+-- 
+2.7.4
+
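The difference between the two imfile modes is that polling re-stats and re-reads the file on a timer instead of waiting for inotify events. A minimal polling loop might look like this (an illustrative sketch, not imfile's implementation):

```python
import os
import time

def poll_new_lines(path, offset, n_polls=3, interval=0.01):
    """Poll `path` for data appended after byte `offset` (illustrative)."""
    lines = []
    for _ in range(n_polls):
        size = os.stat(path).st_size     # re-stat on every poll cycle
        if size > offset:
            with open(path) as f:
                f.seek(offset)
                lines.extend(f.read().splitlines())
                offset = f.tell()
        time.sleep(interval)
    return lines, offset
```

The periodic stat/read cycle is what makes polling more resource-intensive and slower to react than inotify, which is why upstream made inotify the default.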
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1309698-imudp-case-sensitive-option.patch b/SOURCES/rsyslog-8.24.0-rhbz1309698-imudp-case-sensitive-option.patch
new file mode 100644
index 0000000..900ce2f
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1309698-imudp-case-sensitive-option.patch
@@ -0,0 +1,285 @@
+From 9ac54f0d7d70b8a9879889b4522a1d552fca1100 Mon Sep 17 00:00:00 2001
+From: Noriko Hosoi <nhosoi@momo7.localdomain>
+Date: Thu, 12 Jul 2018 11:52:04 -0700
+Subject: [PATCH] Introducing an option preservecase to imudp and imtcp module
+ for managing the case of FROMHOST value.
+
+Usage:
+module(load="imudp" [preservecase="on"|"off"])
+module(load="imtcp" [preservecase="on"|"off"])
+
+If preservecase="on", FROMHOST value is handled in the case sensitive manner.
+If preservecase="off", FROMHOST value is handled in the case insensitive manner.
+
+To maintain the current behaviour, the default value of preservecase is
+"on" for imtcp and "off" for imudp.
+
+Incremented tcpsrvCURR_IF_VERSION by 1.
+
+References:
+https://github.com/rsyslog/rsyslog/pull/2774
+https://bugzilla.redhat.com/show_bug.cgi?id=1309698
+---
+ plugins/imtcp/imtcp.c | 14 ++++++++++++--
+ plugins/imudp/imudp.c | 15 ++++++++++++---
+ runtime/msg.c         |  6 +++++-
+ runtime/msg.h         |  2 ++
+ runtime/net.c         |  2 +-
+ runtime/tcpsrv.c      | 21 +++++++++++++++++++++
+ runtime/tcpsrv.h      |  5 ++++-
+ 7 files changed, 57 insertions(+), 8 deletions(-)
+
+diff --git a/plugins/imtcp/imtcp.c b/plugins/imtcp/imtcp.c
+index 8e3dcc0a21..45fa240b59 100644
+--- a/plugins/imtcp/imtcp.c
++++ b/plugins/imtcp/imtcp.c
+@@ -100,6 +100,7 @@ static struct configSettings_s {
+ 	int iAddtlFrameDelim;
+ 	int bDisableLFDelim;
+ 	int bUseFlowControl;
++	int bPreserveCase;
+ 	uchar *pszStrmDrvrAuthMode;
+ 	uchar *pszInputName;
+ 	uchar *pszBindRuleset;
+@@ -144,6 +145,7 @@ struct modConfData_s {
+ 	uchar *pszStrmDrvrAuthMode; /* authentication mode to use */
+ 	struct cnfarray *permittedPeers;
+ 	sbool configSetViaV2Method;
++	sbool bPreserveCase; /* preserve case of fromhost; true by default */
+ };
+ 
+ static modConfData_t *loadModConf = NULL;/* modConf ptr to use for the current load process */
+@@ -169,7 +171,8 @@ static struct cnfparamdescr modpdescr[] = {
+	{ "keepalive", eCmdHdlrBinary, 0 },
+ 	{ "keepalive.probes", eCmdHdlrPositiveInt, 0 },
+ 	{ "keepalive.time", eCmdHdlrPositiveInt, 0 },
+-	{ "keepalive.interval", eCmdHdlrPositiveInt, 0 }
++	{ "keepalive.interval", eCmdHdlrPositiveInt, 0 },
++	{ "preservecase", eCmdHdlrBinary, 0 }
+ };
+ static struct cnfparamblk modpblk =
+ 	{ CNFPARAMBLK_VERSION,
+@@ -375,6 +378,7 @@ addListner(modConfData_t *modConf, instanceConf_t *inst)
+ 		if(pPermPeersRoot != NULL) {
+ 			CHKiRet(tcpsrv.SetDrvrPermPeers(pOurTcpsrv, pPermPeersRoot));
+ 		}
++		CHKiRet(tcpsrv.SetPreserveCase(pOurTcpsrv, modConf->bPreserveCase));
+ 	}
+ 
+ 	/* initialized, now add socket and listener params */
+@@ -473,6 +477,7 @@ CODESTARTbeginCnfLoad
+ 	loadModConf->pszStrmDrvrAuthMode = NULL;
+ 	loadModConf->permittedPeers = NULL;
+ 	loadModConf->configSetViaV2Method = 0;
++	loadModConf->bPreserveCase = 1; /* default to true */
+ 	bLegacyCnfModGlobalsPermitted = 1;
+ 	/* init legacy config variables */
+ 	cs.pszStrmDrvrAuthMode = NULL;
+@@ -543,6 +548,8 @@ CODESTARTsetModCnf
+ 			loadModConf->pszStrmDrvrName = (uchar*)es_str2cstr(pvals[i].val.d.estr, NULL);
+ 		} else if(!strcmp(modpblk.descr[i].name, "permittedpeer")) {
+ 			loadModConf->permittedPeers = cnfarrayDup(pvals[i].val.d.ar);
++		} else if(!strcmp(modpblk.descr[i].name, "preservecase")) {
++			loadModConf->bPreserveCase = (int) pvals[i].val.d.n;
+ 		} else {
+ 			dbgprintf("imtcp: program error, non-handled "
+ 			  "param '%s' in beginCnfLoad\n", modpblk.descr[i].name);
+@@ -584,6 +591,7 @@ CODESTARTendCnfLoad
+ 			loadModConf->pszStrmDrvrAuthMode = cs.pszStrmDrvrAuthMode;
+ 			cs.pszStrmDrvrAuthMode = NULL;
+ 		}
++		pModConf->bPreserveCase = cs.bPreserveCase;
+ 	}
+ 	free(cs.pszStrmDrvrAuthMode);
+ 	cs.pszStrmDrvrAuthMode = NULL;
+@@ -731,6 +739,7 @@ resetConfigVariables(uchar __attribute__((unused)) *pp, void __attribute__((unus
+ 	cs.pszInputName = NULL;
+ 	free(cs.pszStrmDrvrAuthMode);
+ 	cs.pszStrmDrvrAuthMode = NULL;
++	cs.bPreserveCase = 1;
+ 	return RS_RET_OK;
+ }
+ 
+@@ -797,7 +806,8 @@ CODEmodInit_QueryRegCFSLineHdlr
+ 			   NULL, &cs.bEmitMsgOnClose, STD_LOADABLE_MODULE_ID, &bLegacyCnfModGlobalsPermitted));
+ 	CHKiRet(regCfSysLineHdlr2(UCHAR_CONSTANT("inputtcpserverstreamdrivermode"), 0, eCmdHdlrInt,
+ 			   NULL, &cs.iStrmDrvrMode, STD_LOADABLE_MODULE_ID, &bLegacyCnfModGlobalsPermitted));
+-
++	CHKiRet(regCfSysLineHdlr2(UCHAR_CONSTANT("inputtcpserverpreservecase"), 1, eCmdHdlrBinary,
++			   NULL, &cs.bPreserveCase, STD_LOADABLE_MODULE_ID, &bLegacyCnfModGlobalsPermitted));
+ 	CHKiRet(omsdRegCFSLineHdlr(UCHAR_CONSTANT("resetconfigvariables"), 1, eCmdHdlrCustomHandler,
+ 				   resetConfigVariables, NULL, STD_LOADABLE_MODULE_ID));
+ ENDmodInit
+diff --git a/plugins/imudp/imudp.c b/plugins/imudp/imudp.c
+index 51a9d712a0..74437781ca 100644
+--- a/plugins/imudp/imudp.c
++++ b/plugins/imudp/imudp.c
+@@ -152,6 +152,7 @@ struct modConfData_s {
+ 	int batchSize;			/* max nbr of input batch --> also recvmmsg() max count */
+ 	int8_t wrkrMax;			/* max nbr of worker threads */
+ 	sbool configSetViaV2Method;
++	sbool bPreserveCase;	/* preserves the case of fromhost; "off" by default */
+ };
+ static modConfData_t *loadModConf = NULL;/* modConf ptr to use for the current load process */
+ static modConfData_t *runModConf = NULL;/* modConf ptr to use for the current load process */
+@@ -162,7 +163,8 @@ static struct cnfparamdescr modpdescr[] = {
+ 	{ "schedulingpriority", eCmdHdlrInt, 0 },
+ 	{ "batchsize", eCmdHdlrInt, 0 },
+ 	{ "threads", eCmdHdlrPositiveInt, 0 },
+-	{ "timerequery", eCmdHdlrInt, 0 }
++	{ "timerequery", eCmdHdlrInt, 0 },
++	{ "preservecase", eCmdHdlrBinary, 0 }
+ };
+ static struct cnfparamblk modpblk =
+ 	{ CNFPARAMBLK_VERSION,
+@@ -447,8 +449,12 @@ processPacket(struct lstn_s *lstn, struct sockaddr_storage *frominetPrev, int *p
+ 		if(lstn->dfltTZ != NULL)
+ 			MsgSetDfltTZ(pMsg, (char*) lstn->dfltTZ);
+ 		pMsg->msgFlags  = NEEDS_PARSING | PARSE_HOSTNAME | NEEDS_DNSRESOL;
+-		if(*pbIsPermitted == 2)
+-			pMsg->msgFlags  |= NEEDS_ACLCHK_U; /* request ACL check after resolution */
++		if(*pbIsPermitted == 2) {
++			pMsg->msgFlags |= NEEDS_ACLCHK_U; /* request ACL check after resolution */
++		}
++		if(runModConf->bPreserveCase) {
++			pMsg->msgFlags |= PRESERVE_CASE; /* preserve case of fromhost */
++		}
+ 		CHKiRet(msgSetFromSockinfo(pMsg, frominet));
+ 		CHKiRet(ratelimitAddMsg(lstn->ratelimiter, multiSub, pMsg));
+ 		STATSCOUNTER_INC(lstn->ctrSubmit, lstn->mutCtrSubmit);
+@@ -1030,6 +1036,7 @@ CODESTARTbeginCnfLoad
+ 	loadModConf->iTimeRequery = TIME_REQUERY_DFLT;
+ 	loadModConf->iSchedPrio = SCHED_PRIO_UNSET;
+ 	loadModConf->pszSchedPolicy = NULL;
++	loadModConf->bPreserveCase = 0; /* off */
+ 	bLegacyCnfModGlobalsPermitted = 1;
+ 	/* init legacy config vars */
+ 	cs.pszBindRuleset = NULL;
+@@ -1079,6 +1086,8 @@ CODESTARTsetModCnf
+ 			} else {
+ 				loadModConf->wrkrMax = wrkrMax;
+ 			}
++		} else if(!strcmp(modpblk.descr[i].name, "preservecase")) {
++			loadModConf->bPreserveCase = (int) pvals[i].val.d.n;
+ 		} else {
+ 			dbgprintf("imudp: program error, non-handled "
+ 			  "param '%s' in beginCnfLoad\n", modpblk.descr[i].name);
+diff --git a/runtime/msg.c b/runtime/msg.c
+index c43f813142..9ed4eaf84d 100644
+--- a/runtime/msg.c
++++ b/runtime/msg.c
+@@ -506,7 +506,11 @@ resolveDNS(smsg_t * const pMsg) {
+ 	MsgLock(pMsg);
+ 	CHKiRet(objUse(net, CORE_COMPONENT));
+ 	if(pMsg->msgFlags & NEEDS_DNSRESOL) {
+-		localRet = net.cvthname(pMsg->rcvFrom.pfrominet, &localName, NULL, &ip);
++		if (pMsg->msgFlags & PRESERVE_CASE) {
++			localRet = net.cvthname(pMsg->rcvFrom.pfrominet, NULL, &localName, &ip);
++		} else {
++			localRet = net.cvthname(pMsg->rcvFrom.pfrominet, &localName, NULL, &ip);
++		}
+ 		if(localRet == RS_RET_OK) {
+ 			/* we pass down the props, so no need for AddRef */
+ 			MsgSetRcvFromWithoutAddRef(pMsg, localName);
+diff --git a/runtime/msg.h b/runtime/msg.h
+index cd530aca38..1287cb7a4b 100644
+--- a/runtime/msg.h
++++ b/runtime/msg.h
+@@ -144,6 +144,7 @@  struct msg {
+ #define NEEDS_DNSRESOL	0x040	/* fromhost address is unresolved and must be locked up via DNS reverse lookup first */
+ #define NEEDS_ACLCHK_U	0x080	/* check UDP ACLs after DNS resolution has been done in main queue consumer */
+ #define NO_PRI_IN_RAW	0x100	/* rawmsg does not include a PRI (Solaris!), but PRI is already set correctly in the msg object */
++#define PRESERVE_CASE	0x200	/* preserve case in fromhost */
+ 
+ /* (syslog) protocol types */
+ #define MSG_LEGACY_PROTOCOL 0
+diff --git a/runtime/net.c b/runtime/net.c
+index d6ff8a3d4d..aef906601c 100644
+--- a/runtime/net.c
++++ b/runtime/net.c
+@@ -1152,7 +1152,7 @@ cvthname(struct sockaddr_storage *f, prop_t **localName, prop_t **fqdn, prop_t *
+ {
+ 	DEFiRet;
+ 	assert(f != NULL);
+-	iRet = dnscacheLookup(f, NULL, fqdn, localName, ip);
++	iRet = dnscacheLookup(f, fqdn, NULL, localName, ip);
+ 	RETiRet;
+ }
+ 
+diff --git a/runtime/tcpsrv.c b/runtime/tcpsrv.c
+index 61e9ff4d22..d5993b4f00 100644
+--- a/runtime/tcpsrv.c
++++ b/runtime/tcpsrv.c
+@@ -495,6 +495,15 @@ SessAccept(tcpsrv_t *pThis, tcpLstnPortList_t *pLstnInfo, tcps_sess_t **ppSess,
+ 
+ 	/* get the host name */
+ 	CHKiRet(netstrm.GetRemoteHName(pNewStrm, &fromHostFQDN));
++	if (!pThis->bPreserveCase) {
++		/* preserve_case = off */
++		uchar *p;
++		for(p = fromHostFQDN; *p; p++) {
++			if (isupper((int) *p)) {
++				*p = tolower((int) *p);
++			}
++		}
++	}
+ 	CHKiRet(netstrm.GetRemoteIP(pNewStrm, &fromHostIP));
+ 	CHKiRet(netstrm.GetRemAddr(pNewStrm, &addr));
+ 	/* TODO: check if we need to strip the domain name here -- rgerhards, 2008-04-24 */
+@@ -1001,6 +1010,7 @@ BEGINobjConstruct(tcpsrv) /* be sure to specify the object type also in END macr
+ 	pThis->ratelimitBurst = 10000;
+ 	pThis->bUseFlowControl = 1;
+ 	pThis->pszDrvrName = NULL;
++	pThis->bPreserveCase = 1; /* preserve case in fromhost; default to true. */
+ ENDobjConstruct(tcpsrv)
+ 
+ 
+@@ -1433,6 +1443,16 @@ SetSessMax(tcpsrv_t *pThis, int iMax)
+ }
+ 
+ 
++static rsRetVal
++SetPreserveCase(tcpsrv_t *pThis, int bPreserveCase)
++{
++	DEFiRet;
++	ISOBJ_TYPE_assert(pThis, tcpsrv);
++	pThis-> bPreserveCase = bPreserveCase;
++	RETiRet;
++}
++
++
+ /* queryInterface function
+  * rgerhards, 2008-02-29
+  */
+@@ -1491,6 +1511,7 @@ CODESTARTobjQueryInterface(tcpsrv)
+ 	pIf->SetRuleset = SetRuleset;
+ 	pIf->SetLinuxLikeRatelimiters = SetLinuxLikeRatelimiters;
+ 	pIf->SetNotificationOnRemoteClose = SetNotificationOnRemoteClose;
++	pIf->SetPreserveCase = SetPreserveCase;
+ 
+ finalize_it:
+ ENDobjQueryInterface(tcpsrv)
+diff --git a/runtime/tcpsrv.h b/runtime/tcpsrv.h
+index 22a65c20a0..f17b1b4384 100644
+--- a/runtime/tcpsrv.h
++++ b/runtime/tcpsrv.h
+@@ -81,6 +81,7 @@  struct tcpsrv_s {
+ 
+ 	int addtlFrameDelim;	/**< additional frame delimiter for plain TCP syslog framing (e.g. to handle NetScreen) */
+ 	int bDisableLFDelim;	/**< if 1, standard LF frame delimiter is disabled (*very dangerous*) */
++	sbool bPreserveCase;	/**< preserve case in fromhost */
+ 	int ratelimitInterval;
+ 	int ratelimitBurst;
+ 	tcps_sess_t **pSessions;/**< array of all of our sessions */
+@@ -168,8 +169,10 @@  BEGINinterface(tcpsrv) /* name must also be changed in ENDinterface macro! */
+ 	rsRetVal (*SetKeepAliveTime)(tcpsrv_t*, int);
+ 	/* added v18 */
+ 	rsRetVal (*SetbSPFramingFix)(tcpsrv_t*, sbool);
++	/* added v21 -- Preserve case in fromhost, 2018-08-16 */
++	rsRetVal (*SetPreserveCase)(tcpsrv_t *pThis, int bPreserveCase);
+ ENDinterface(tcpsrv)
+-#define tcpsrvCURR_IF_VERSION 18 /* increment whenever you change the interface structure! */
++#define tcpsrvCURR_IF_VERSION 21 /* increment whenever you change the interface structure! */
+ /* change for v4:
+  * - SetAddtlFrameDelim() added -- rgerhards, 2008-12-10
+  * - SetInputName() added -- rgerhards, 2008-12-10
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1399569-flushontxend.patch b/SOURCES/rsyslog-8.24.0-rhbz1399569-flushontxend.patch
new file mode 100644
index 0000000..abe61bc
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1399569-flushontxend.patch
@@ -0,0 +1,36 @@
+From 1a80a71e91f445f29763fbd724a1c6f3fbf2077a Mon Sep 17 00:00:00 2001
+From: Tomas Sykora <tosykora@redhat.com>
+Date: Fri, 23 Dec 2016 06:49:22 -0500
+Subject: [PATCH 1/3] When flushOnTXEnd is off, messages should be written to a
+ file only when the buffer is full. This was broken by upstream commit
+ 6de0103, which is reverted by this patch.
+
+Resolves: RHBZ#1399569
+---
+ tools/omfile.c | 9 ++-------
+ 1 file changed, 2 insertions(+), 7 deletions(-)
+
+diff --git a/tools/omfile.c b/tools/omfile.c
+index 77bf65c..4d849c5 100644
+--- a/tools/omfile.c
++++ b/tools/omfile.c
+@@ -1046,14 +1046,9 @@ CODESTARTcommitTransaction
+ 		writeFile(pData, pParams, i);
+ 	}
+ 	/* Note: pStrm may be NULL if there was an error opening the stream */
+-	if(pData->bUseAsyncWriter) {
+-		if(pData->bFlushOnTXEnd && pData->pStrm != NULL) {
++	if(pData->bFlushOnTXEnd && pData->pStrm != NULL) {
++		if(!pData->bUseAsyncWriter)
+ 			CHKiRet(strm.Flush(pData->pStrm));
+-		}
+-	} else {
+-		if(pData->pStrm != NULL) {
+-			CHKiRet(strm.Flush(pData->pStrm));
+-		}
+ 	}
+ 
+ finalize_it:
+-- 
+2.9.3
+
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1400594-tls-config.patch b/SOURCES/rsyslog-8.24.0-rhbz1400594-tls-config.patch
new file mode 100644
index 0000000..5011000
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1400594-tls-config.patch
@@ -0,0 +1,29 @@
+From 83ea7bc475cc722033082e51416947842810d1fc Mon Sep 17 00:00:00 2001
+From: Tomas Sykora <tosykora@redhat.com>
+Date: Fri, 23 Dec 2016 06:51:52 -0500
+Subject: [PATCH 2/3] In rsyslog v7, because of a bug, a mix of old and new
+ syntax had to be used to configure a TLS server-client setup. In rsyslog v8
+ this bug was fixed, but the mixed configuration no longer worked, which was
+ a regression. With this patch, the mixed configuration works again.
+
+Resolves: RHBZ#1400594
+---
+ tools/omfwd.c | 2 ++
+ 1 file changed, 2 insertions(+)
+
+diff --git a/tools/omfwd.c b/tools/omfwd.c
+index 45fcfd6..c1d3e64 100644
+--- a/tools/omfwd.c
++++ b/tools/omfwd.c
+@@ -925,6 +925,8 @@ initTCP(wrkrInstanceData_t *pWrkrData)
+ 		CHKiRet(tcpclt.SetSendPrepRetry(pWrkrData->pTCPClt, TCPSendPrepRetry));
+ 		CHKiRet(tcpclt.SetFraming(pWrkrData->pTCPClt, pData->tcp_framing));
+ 		CHKiRet(tcpclt.SetRebindInterval(pWrkrData->pTCPClt, pData->iRebindInterval));
++		if (cs.iStrmDrvrMode)
++                        pData->iStrmDrvrMode = cs.iStrmDrvrMode;
+ 	}
+ finalize_it:
+ 	RETiRet;
+-- 
+2.9.3
+
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1401456-sd-service-network.patch b/SOURCES/rsyslog-8.24.0-rhbz1401456-sd-service-network.patch
new file mode 100644
index 0000000..f3cbf26
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1401456-sd-service-network.patch
@@ -0,0 +1,12 @@
+diff -up ./rsyslog.service.in.network ./rsyslog.service.in
+--- ./rsyslog.service.in.network	2017-08-30 09:19:50.845218557 -0400
++++ ./rsyslog.service.in	2017-08-30 09:21:34.141218557 -0400
+@@ -1,6 +1,8 @@
+ [Unit]
+ Description=System Logging Service
+ Requires=syslog.socket
++Wants=network.target network-online.target
++After=network.target network-online.target
+ Documentation=man:rsyslogd(8)
+ Documentation=http://www.rsyslog.com/doc/
+ 
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1401870-watermark.patch b/SOURCES/rsyslog-8.24.0-rhbz1401870-watermark.patch
new file mode 100644
index 0000000..7d931e5
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1401870-watermark.patch
@@ -0,0 +1,31 @@
+From f1cff52cc3bca3ed050f5e8e2c25698bebcf3258 Mon Sep 17 00:00:00 2001
+From: Tomas Sykora <tosykora@redhat.com>
+Date: Fri, 23 Dec 2016 06:56:41 -0500
+Subject: [PATCH 3/3] When the high watermark is reached, messages should be
+ immediately written to a queue file. Instead, messages were not written
+ until rsyslog was stopped. This was caused by the watermark values being
+ overwritten with -1.
+
+Resolves: RHBZ#1401870
+---
+ action.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/action.c b/action.c
+index 45828cc..3f8c82c 100644
+--- a/action.c
++++ b/action.c
+@@ -276,8 +276,8 @@ actionResetQueueParams(void)
+ 	cs.ActionQueType = QUEUETYPE_DIRECT;		/* type of the main message queue above */
+ 	cs.iActionQueueSize = 1000;			/* size of the main message queue above */
+ 	cs.iActionQueueDeqBatchSize = 16;		/* default batch size */
+-	cs.iActionQHighWtrMark = -1;			/* high water mark for disk-assisted queues */
+-	cs.iActionQLowWtrMark = -1;			/* low water mark for disk-assisted queues */
++	cs.iActionQHighWtrMark = 800;			/* high water mark for disk-assisted queues */
++	cs.iActionQLowWtrMark = 200;			/* low water mark for disk-assisted queues */
+ 	cs.iActionQDiscardMark = 980;			/* begin to discard messages */
+ 	cs.iActionQDiscardSeverity = 8;			/* discard warning and above */
+ 	cs.iActionQueueNumWorkers = 1;			/* number of worker threads for the mm queue above */
+-- 
+2.9.3
+
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1403831-missing-cmd-line-switches.patch b/SOURCES/rsyslog-8.24.0-rhbz1403831-missing-cmd-line-switches.patch
new file mode 100644
index 0000000..74ec4e9
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1403831-missing-cmd-line-switches.patch
@@ -0,0 +1,123 @@
+From e62f3e1a46a6599a6b8dc22e9df0ebefe00b7d08 Mon Sep 17 00:00:00 2001
+From: jvymazal <jvymazal@redhat.com>
+Date: Fri, 13 Jan 2017 12:06:04 +0100
+Subject: [PATCH] added obsolete command-line switches for backward
+ compatibility (#1)
+
+* added obsolete command-line switches for backward compatibility
+
+* removed the added warnings since we are not obsoleting any cmd-line opts
+
+Resolves: RHBZ#1403831
+---
+ tools/rsyslogd.c | 49 +++++++++++++++++++------------------------------
+ 1 file changed, 19 insertions(+), 30 deletions(-)
+
+diff --git a/tools/rsyslogd.c b/tools/rsyslogd.c
+index b53eeaf..5e3cde2 100644
+--- a/tools/rsyslogd.c
++++ b/tools/rsyslogd.c
+@@ -1191,6 +1191,15 @@ initAll(int argc, char **argv)
+ 		case 'w': /* disable disallowed host warnings */
+ 		case 'C':
+ 		case 'x': /* disable dns for remote messages */
++		case 'a': /* obsolete switches from here below for backwards compatibility */
++		case 'c':
++		case 'g':
++		case 'h':
++		case 'm':
++		case 'o':
++		case 'p':
++		case 'r':
++		case 't':
+ 			CHKiRet(bufOptAdd(ch, optarg));
+ 			break;
+ #if defined(_AIX)
+@@ -1249,27 +1258,15 @@ initAll(int argc, char **argv)
+ 		DBGPRINTF("deque option %c, optarg '%s'\n", ch, (arg == NULL) ? "" : arg);
+ 		switch((char)ch) {
+                 case '4':
+-			fprintf (stderr, "rsyslogd: the -4 command line option will go away "
+-				 "soon.\nPlease use the global(net.ipprotocol=\"ipv4-only\") "
+-				 "configuration parameter instead.\n");
+ 	                glbl.SetDefPFFamily(PF_INET);
+                         break;
+                 case '6':
+-			fprintf (stderr, "rsyslogd: the -6 command line option will go away "
+-				 "soon.\nPlease use the global(net.ipprotocol=\"ipv6-only\") "
+-				 "configuration parameter instead.\n");
+                         glbl.SetDefPFFamily(PF_INET6);
+                         break;
+                 case 'A':
+-			fprintf (stderr, "rsyslogd: the -A command line option will go away "
+-				 "soon.\n"
+-				 "Please use the omfwd parameter \"upd.sendToAll\" instead.\n");
+                         send_to_all++;
+                         break;
+ 		case 'S':		/* Source IP for local client to be used on multihomed host */
+-			fprintf (stderr, "rsyslogd: the -S command line option will go away "
+-				 "soon.\n"
+-				 "Please use the omrelp parameter \"localClientIP\" instead.\n");
+ 			if(glbl.GetSourceIPofLocalClient() != NULL) {
+ 				fprintf (stderr, "rsyslogd: Only one -S argument allowed, the first one is taken.\n");
+ 			} else {
+@@ -1283,9 +1280,6 @@ initAll(int argc, char **argv)
+ 			PidFile = arg;
+ 			break;
+ 		case 'l':
+-			fprintf (stderr, "rsyslogd: the -l command line option will go away "
+-				 "soon.\n Make yourself heard on the rsyslog mailing "
+-				 "list if you need it any longer.\n");
+ 			if(glbl.GetLocalHosts() != NULL) {
+ 				fprintf (stderr, "rsyslogd: Only one -l argument allowed, the first one is taken.\n");
+ 			} else {
+@@ -1299,21 +1293,12 @@ initAll(int argc, char **argv)
+ 			iConfigVerify = (arg == NULL) ? 0 : atoi(arg);
+ 			break;
+ 		case 'q':               /* add hostname if DNS resolving has failed */
+-			fprintf (stderr, "rsyslogd: the -q command line option will go away "
+-				 "soon.\nPlease use the global(net.aclAddHostnameOnFail=\"on\") "
+-				 "configuration parameter instead.\n");
+ 		        *(net.pACLAddHostnameOnFail) = 1;
+ 		        break;
+ 		case 'Q':               /* dont resolve hostnames in ACL to IPs */
+-			fprintf (stderr, "rsyslogd: the -Q command line option will go away "
+-				 "soon.\nPlease use the global(net.aclResolveHostname=\"off\") "
+-				 "configuration parameter instead.\n");
+ 		        *(net.pACLDontResolve) = 1;
+ 		        break;
+ 		case 's':
+-			fprintf (stderr, "rsyslogd: the -s command line option will go away "
+-				 "soon.\n Make yourself heard on the rsyslog mailing "
+-				 "list if you need it any longer.\n");
+ 			if(glbl.GetStripDomains() != NULL) {
+ 				fprintf (stderr, "rsyslogd: Only one -s argument allowed, the first one is taken.\n");
+ 			} else {
+@@ -1353,17 +1338,21 @@ initAll(int argc, char **argv)
+ 			bChDirRoot = 0;
+ 			break;
+ 		case 'w':		/* disable disallowed host warnigs */
+-			fprintf (stderr, "rsyslogd: the -w command line option will go away "
+-				 "soon.\nPlease use the global(net.permitWarning=\"off\") "
+-				 "configuration parameter instead.\n");
+ 			glbl.SetOption_DisallowWarning(0);
+ 			break;
+ 		case 'x':		/* disable dns for remote messages */
+-			fprintf (stderr, "rsyslogd: the -x command line option will go away "
+-				 "soon.\nPlease use the global(net.enableDNS=\"off\") "
+-				 "configuration parameter instead.\n");
+ 			glbl.SetDisableDNS(1);
+ 			break;
++		case 'a':
++		case 'c':
++		case 'g':
++		case 'h':
++		case 'm':
++		case 'o':
++		case 'p':
++		case 'r':
++		case 't':
++			break;
+                case '?':
+ 		default:
+ 			rsyslogd_usage();
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1403907-imudp-deprecated-parameter.patch b/SOURCES/rsyslog-8.24.0-rhbz1403907-imudp-deprecated-parameter.patch
new file mode 100644
index 0000000..dbb8685
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1403907-imudp-deprecated-parameter.patch
@@ -0,0 +1,30 @@
+From: Jiri Vymazal <jvymazal@redhat.com>
+Date: Fri, 03 Feb 2017 15:12:42 -0500
+Subject: [PATCH 1/1] rsyslog-8.24.0-rhbz1403907-imudp-deprecated-parameter.patch
+
+Resolves:
+rhbz#1403907
+---
+ plugins/imudp/imudp.c | 4 ----
+ 1 file changed, 4 deletions(-)
+
+--- a/plugins/imudp/imudp.c	2017-01-10 10:00:04.000000000 +0100
++++ b/plugins/imudp/imudp.c	2017-02-03 14:46:59.075987660 +0100
+@@ -920,8 +920,6 @@
+ 			inst->bAppendPortToInpname = (int) pvals[i].val.d.n;
+ 			bAppendPortUsed = 1;
+ 		} else if(!strcmp(inppblk.descr[i].name, "inputname")) {
+-			errmsg.LogError(0, RS_RET_DEPRECATED , "imudp: deprecated parameter inputname "
+-					"used. Suggest to use name instead");
+ 			if(inst->inputname != NULL) {
+ 				errmsg.LogError(0, RS_RET_INVALID_PARAMS, "imudp: name and inputname "
+ 						"parameter specified - only one can be used");
+@@ -929,8 +927,6 @@
+ 			}
+ 			inst->inputname = (uchar*)es_str2cstr(pvals[i].val.d.estr, NULL);
+ 		} else if(!strcmp(inppblk.descr[i].name, "inputname.appendport")) {
+-			errmsg.LogError(0, RS_RET_DEPRECATED , "imudp: deprecated parameter inputname.appendport "
+-					"used. Suggest to use name.appendport instead");
+ 			if(bAppendPortUsed) {
+ 				errmsg.LogError(0, RS_RET_INVALID_PARAMS, "imudp: name.appendport and "
+ 						"inputname.appendport parameter specified - only one can be used");
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1419228-journal-switch-persistent.patch b/SOURCES/rsyslog-8.24.0-rhbz1419228-journal-switch-persistent.patch
new file mode 100644
index 0000000..bd1d320
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1419228-journal-switch-persistent.patch
@@ -0,0 +1,116 @@
+diff -up ./plugins/imjournal/imjournal.c.journal ./plugins/imjournal/imjournal.c
+--- ./plugins/imjournal/imjournal.c.journal	2017-03-13 14:38:11.820000000 +0100
++++ ./plugins/imjournal/imjournal.c	2017-03-13 14:45:16.006000000 +0100
+@@ -115,6 +115,25 @@ static const char *pid_field_name;	/* re
+ static ratelimit_t *ratelimiter = NULL;
+ static sd_journal *j;
+ 
++static rsRetVal persistJournalState(void);
++static rsRetVal loadJournalState(void);
++
++static rsRetVal openJournal(sd_journal** jj) {
++	DEFiRet;
++
++	if (sd_journal_open(jj, SD_JOURNAL_LOCAL_ONLY) < 0)
++		iRet = RS_RET_IO_ERROR;
++	RETiRet;
++}
++
++static void closeJournal(sd_journal** jj) {
++
++	if (cs.stateFile) { /* can't persist without a state file */
++		persistJournalState();
++	}
++	sd_journal_close(*jj);
++}
++
+ 
+ /* ugly workaround to handle facility numbers; values
+  * derived from names need to be eight times smaller,
+@@ -436,20 +455,25 @@ persistJournalState (void)
+ }
+ 
+ 
++static rsRetVal skipOldMessages(void);
+ /* Polls the journal for new messages. Similar to sd_journal_wait()
+  * except for the special handling of EINTR.
+  */
++
++#define POLL_TIMEOUT 1000 /* timeout for poll is 1s */
++
+ static rsRetVal
+ pollJournal(void)
+ {
+ 	DEFiRet;
+ 	struct pollfd pollfd;
+-	int r;
++	int pr = 0;
++	int jr = 0;
+ 
+ 	pollfd.fd = sd_journal_get_fd(j);
+ 	pollfd.events = sd_journal_get_events(j);
+-	r = poll(&pollfd, 1, -1);
+-	if (r == -1) {
++	pr = poll(&pollfd, 1, POLL_TIMEOUT);
++	if (pr == -1) {
+ 		if (errno == EINTR) {
+ 			/* EINTR is also received during termination
+ 			 * so return now to check the term state.
+@@ -465,12 +489,30 @@ pollJournal(void)
+ 		}
+ 	}
+ 
+-	assert(r == 1);
+ 
+-	r = sd_journal_process(j);
+-	if (r < 0) {
+-		char errStr[256];
++	jr = sd_journal_process(j);
++	
++	if (pr == 1 && jr == SD_JOURNAL_INVALIDATE) {
++		/* do not persist stateFile sd_journal_get_cursor will fail! */
++		char* tmp = cs.stateFile;
++		cs.stateFile = NULL;
++		closeJournal(&j);
++		cs.stateFile = tmp;
++
++		iRet = openJournal(&j);
++		if (iRet != RS_RET_OK) {
++			char errStr[256];
++			rs_strerror_r(errno, errStr, sizeof(errStr));
++			errmsg.LogError(0, RS_RET_IO_ERROR,
++				"sd_journal_open() failed: '%s'", errStr);
++			ABORT_FINALIZE(RS_RET_ERR);
++		}
+ 
++		iRet = loadJournalState();
++		errmsg.LogError(0, RS_RET_OK, "imjournal: "
++			"journal reloaded...");	
++	} else if (jr < 0) {
++		char errStr[256];
+ 		rs_strerror_r(errno, errStr, sizeof(errStr));
+ 		errmsg.LogError(0, RS_RET_ERR,
+ 			"sd_journal_process() failed: '%s'", errStr);
+@@ -694,20 +736,13 @@ ENDfreeCnf
+ /* open journal */
+ BEGINwillRun
+ CODESTARTwillRun
+-	int ret;
+-	ret = sd_journal_open(&j, SD_JOURNAL_LOCAL_ONLY);
+-	if (ret < 0) {
+-		iRet = RS_RET_IO_ERROR;
+-	}
++	iRet = openJournal(&j);
+ ENDwillRun
+ 
+ /* close journal */
+ BEGINafterRun
+ CODESTARTafterRun
+-	if (cs.stateFile) { /* can't persist without a state file */
+-		persistJournalState();
+-	}
+-	sd_journal_close(j);
++	closeJournal(&j);
+ 	ratelimitDestruct(ratelimiter);
+ ENDafterRun
+ 
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1422414-glbDoneLoadCnf-segfault.patch b/SOURCES/rsyslog-8.24.0-rhbz1422414-glbDoneLoadCnf-segfault.patch
new file mode 100644
index 0000000..bfb171f
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1422414-glbDoneLoadCnf-segfault.patch
@@ -0,0 +1,84 @@
+From d649d77cd585f70c6122d7e4ce95e82ad0e9af5e Mon Sep 17 00:00:00 2001
+From: Radovan Sroka <rsroka@redhat.com>
+Date: Thu, 23 Feb 2017 15:05:27 +0100
+Subject: [PATCH] Fixed segfault in glblDoneLoadCnf()
+
+This was caused by the uninitialized static net structure in glbl;
+objUse(net) is not called when the glbl module starts.
+
+It does not seem possible to call objUse(net) when the glbl module
+starts, so we call it inside glblDoneLoadCnf() instead.
+---
+ runtime/glbl.c   | 6 ++++--
+ runtime/glbl.h   | 2 +-
+ runtime/rsconf.c | 6 +++---
+ 3 files changed, 8 insertions(+), 6 deletions(-)
+
+diff --git a/runtime/glbl.c b/runtime/glbl.c
+index 2079c46..1a32c2d 100644
+--- a/runtime/glbl.c
++++ b/runtime/glbl.c
+@@ -1076,11 +1076,13 @@ do_setenv(const char *const var)
+ /* This processes the "regular" parameters which are to be set after the
+  * config has been fully loaded.
+  */
+-void
++rsRetVal
+ glblDoneLoadCnf(void)
+ {
+ 	int i;
+ 	unsigned char *cstr;
++	DEFiRet;
++	CHKiRet(objUse(net, CORE_COMPONENT));
+ 
+ 	qsort(tzinfos, ntzinfos, sizeof(tzinfo_t), qs_arrcmp_tzinfo);
+ 	DBGPRINTF("Timezone information table (%d entries):\n", ntzinfos);
+@@ -1210,7 +1212,7 @@ glblDoneLoadCnf(void)
+ 		stddbg = -1;
+ 	}
+ 
+-finalize_it:	return;
++finalize_it:	RETiRet;
+ }
+ 
+ 
+diff --git a/runtime/glbl.h b/runtime/glbl.h
+index 6d18a2c..048c0a6 100644
+--- a/runtime/glbl.h
++++ b/runtime/glbl.h
+@@ -125,7 +125,7 @@ void glblProcessCnf(struct cnfobj *o);
+ void glblProcessTimezone(struct cnfobj *o);
+ void glblProcessMainQCnf(struct cnfobj *o);
+ void glblDestructMainqCnfObj(void);
+-void glblDoneLoadCnf(void);
++rsRetVal glblDoneLoadCnf(void);
+ const uchar * glblGetWorkDirRaw(void);
+ tzinfo_t* glblFindTimezoneInfo(char *id);
+ int GetGnuTLSLoglevel(void);
+diff --git a/runtime/rsconf.c b/runtime/rsconf.c
+index dc6bd7f..f337e1f 100644
+--- a/runtime/rsconf.c
++++ b/runtime/rsconf.c
+@@ -627,11 +627,11 @@ dropPrivileges(rsconf_t *cnf)
+ /* tell the rsysog core (including ourselfs) that the config load is done and
+  * we need to prepare to move over to activate mode.
+  */
+-static inline void
++static inline rsRetVal
+ tellCoreConfigLoadDone(void)
+ {
+ 	DBGPRINTF("telling rsyslog core that config load for %p is done\n", loadConf);
+-	glblDoneLoadCnf();
++	return glblDoneLoadCnf();
+ }
+ 
+ 
+@@ -1345,7 +1345,7 @@ ourConf = loadConf; // TODO: remove, once ourConf is gone!
+ 	DBGPRINTF("Number of actions in this configuration: %d\n", iActionNbr);
+ 	rulesetOptimizeAll(loadConf);
+ 
+-	tellCoreConfigLoadDone();
++	CHKiRet(tellCoreConfigLoadDone());
+ 	tellModulesConfigLoadDone();
+ 
+ 	tellModulesCheckConfig();
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1422789-missing-chdir-w-chroot.patch b/SOURCES/rsyslog-8.24.0-rhbz1422789-missing-chdir-w-chroot.patch
new file mode 100644
index 0000000..e67e13f
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1422789-missing-chdir-w-chroot.patch
@@ -0,0 +1,15 @@
+diff --git a/tools/rsyslogd.c b/tools/rsyslogd.c
+index c099705..12b037f 100644
+--- a/tools/rsyslogd.c
++++ b/tools/rsyslogd.c
+@@ -1350,6 +1350,10 @@ initAll(int argc, char **argv)
+ 				perror("chroot");
+ 				exit(1);
+ 			}
++            if(chdir("/") != 0) {
++                perror("chdir");
++                exit(1);
++            }
+ 			break;
+ 		case 'u':		/* misc user settings */
+ 			iHelperUOpt = (arg == NULL) ? 0 : atoi(arg);
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1427821-backport-num2ipv4.patch b/SOURCES/rsyslog-8.24.0-rhbz1427821-backport-num2ipv4.patch
new file mode 100644
index 0000000..2822fa0
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1427821-backport-num2ipv4.patch
@@ -0,0 +1,243 @@
+From d6f180ec175f3660e36478b9e32ec6ca73e33604 Mon Sep 17 00:00:00 2001
+From: Jan Gerhards <jgerhards@adiscon.com>
+Date: Fri, 10 Feb 2017 14:30:01 +0100
+Subject: [PATCH] add num2ipv4 function and test
+
+closes https://github.com/rsyslog/rsyslog/issues/1322
+
+testbench: add testcase empty string for num2ipv4 function
+
+see https://github.com/rsyslog/rsyslog/issues/1412
+---
+ grammar/rainerscript.c    | 115 ++++++++++++++++++++++++++++++++++++++++++++++
+ grammar/rainerscript.h    |   4 +-
+ tests/Makefile.am         |   2 +
+ tests/rscript_num2ipv4.sh |  41 +++++++++++++++++
+ 4 files changed, 161 insertions(+), 1 deletion(-)
+ create mode 100755 tests/rscript_num2ipv4.sh
+
+diff --git a/grammar/rainerscript.c b/grammar/rainerscript.c
+index 30af5e7b..2f0fc2d8 100644
+--- a/grammar/rainerscript.c
++++ b/grammar/rainerscript.c
+@@ -1710,6 +1710,113 @@ doRandomGen(struct svar *__restrict__ const sourceVal) {
+ 	return x % max;
+ }
+ 
++static long long
++ipv42num(char *str)
++{
++	unsigned num[4] = {0, 0, 0, 0};
++	long long value = -1;
++	size_t len = strlen(str);
++	int cyc = 0;
++	int prevdot = 0;
++	int startblank = 0;
++	int endblank = 0;
++	DBGPRINTF("rainerscript: (ipv42num) arg: '%s'\n", str);
++	for(unsigned int i = 0 ; i < len ; i++) {
++		switch(str[i]){
++		case '0':
++		case '1':
++		case '2':
++		case '3':
++		case '4':
++		case '5':
++		case '6':
++		case '7':
++		case '8':
++		case '9':
++			if(endblank == 1){
++				DBGPRINTF("rainerscript: (ipv42num) error: wrong IP-Address format (invalid space(1))\n");
++				goto done;
++			}
++			prevdot = 0;
++			startblank = 0;
++			DBGPRINTF("rainerscript: (ipv42num) cycle: %d\n", cyc);
++			num[cyc] = num[cyc]*10+(str[i]-'0');
++			break;
++		case ' ':
++			prevdot = 0;
++			if(i == 0 || startblank == 1){
++				startblank = 1;
++				break;
++			}
++			else{
++				endblank = 1;
++				break;
++			}
++		case '.':
++			if(endblank == 1){
++				DBGPRINTF("rainerscript: (ipv42num) error: wrong IP-Address format (invalid space(2))\n");
++				goto done;
++			}
++			startblank = 0;
++			if(prevdot == 1){
++				DBGPRINTF("rainerscript: (ipv42num) error: wrong IP-Address format (two dots after one another)\n");
++				goto done;
++			}
++			prevdot = 1;
++			cyc++;
++			if(cyc > 3){
++				DBGPRINTF("rainerscript: (ipv42num) error: wrong IP-Address format (too many dots)\n");
++				goto done;
++			}
++			break;
++		default:
++			DBGPRINTF("rainerscript: (ipv42num) error: wrong IP-Address format (invalid character)\n");
++			goto done;
++		}
++	}
++	if(cyc != 3){
++		DBGPRINTF("rainerscript: (ipv42num) error: wrong IP-Address format (wrong number of dots)\n");
++		goto done;
++	}
++	value = num[0]*256*256*256+num[1]*256*256+num[2]*256+num[3];
++done:
++	DBGPRINTF("rainerscript: (ipv42num): return value:'%lld'\n",value);
++	return(value);
++}
++
++
++static es_str_t*
++num2ipv4(struct svar *__restrict__ const sourceVal) {
++	int success = 0;
++	int numip[4];
++	char str[16];
++	size_t len;
++	es_str_t *estr;
++	long long num = var2Number(sourceVal, &success);
++	DBGPRINTF("rainerscript: (num2ipv4) var2Number output: '%lld'\n", num);
++	if (! success) {
++		DBGPRINTF("rainerscript: (num2ipv4) couldn't access number\n");
++		len = snprintf(str, 16, "-1");
++		goto done;
++	}
++	if(num < 0 || num > 4294967295) {
++		DBGPRINTF("rainerscript: (num2ipv4) invalid number(too big/negative); does not represent IPv4 address\n");
++		len = snprintf(str, 16, "-1");
++		goto done;
++	}
++	for(int i = 0 ; i < 4 ; i++){
++		numip[i] = num % 256;
++		num = num / 256;
++	}
++	DBGPRINTF("rainerscript: (num2ipv4) Numbers: 1:'%d' 2:'%d' 3:'%d' 4:'%d'\n", numip[0], numip[1], numip[2], numip[3]);
++	len = snprintf(str, 16, "%d.%d.%d.%d", numip[3], numip[2], numip[1], numip[0]);
++done:
++	DBGPRINTF("rainerscript: (num2ipv4) ipv4-Address: %s, length: %zu\n", str, len);
++	estr = es_newStrFromCStr(str, len);
++	return(estr);
++}
++
++
+ /* Perform a function call. This has been moved out of cnfExprEval in order
+  * to keep the code small and easier to maintain.
+  */
+@@ -1775,6 +1882,12 @@ doFuncCall(struct cnffunc *__restrict__ const func, struct svar *__restrict__ co
+ 		ret->datatype = 'N';
+ 		varFreeMembers(&r[0]);
+ 		break;
++	case CNFFUNC_NUM2IPV4:
++		cnfexprEval(func->expr[0], &r[0], usrptr);
++		ret->d.estr = num2ipv4(&r[0]);
++		ret->datatype = 'S';
++		varFreeMembers(&r[0]);
++		break;
+ 	case CNFFUNC_GETENV:
+ 		/* note: the optimizer shall have replaced calls to getenv()
+ 		 * with a constant argument to a single string (once obtained via
+@@ -3958,6 +4071,8 @@ funcName2ID(es_str_t *fname, unsigned short nParams)
+ 		GENERATE_FUNC("strlen", 1, CNFFUNC_STRLEN);
+ 	} else if(FUNC_NAME("getenv")) {
+ 		GENERATE_FUNC("getenv", 1, CNFFUNC_GETENV);
++	} else if(FUNC_NAME("num2ipv4")) {
++		GENERATE_FUNC("num2ipv4", 1, CNFFUNC_NUM2IPV4);
+ 	} else if(FUNC_NAME("tolower")) {
+ 		GENERATE_FUNC("tolower", 1, CNFFUNC_TOLOWER);
+ 	} else if(FUNC_NAME("cstr")) {
+diff --git a/grammar/rainerscript.h b/grammar/rainerscript.h
+index 7bac7566..0fdbdc72 100644
+--- a/grammar/rainerscript.h
++++ b/grammar/rainerscript.h
+@@ -232,7 +232,9 @@ enum cnffuncid {
+ 	CNFFUNC_REPLACE,
+ 	CNFFUNC_WRAP,
+ 	CNFFUNC_RANDOM,
+-	CNFFUNC_DYN_INC
++	CNFFUNC_DYN_INC,
++	CNFFUNC_IPV42NUM,
++	CNFFUNC_NUM2IPV4
+ };
+ 
+ struct cnffunc {
+diff --git a/tests/Makefile.am b/tests/Makefile.am
+index f792b44a..572c6250 100644
+--- a/tests/Makefile.am
++++ b/tests/Makefile.am
+@@ -206,6 +206,7 @@ TESTS +=  \
+ 	rscript_lt_var.sh \
+ 	rscript_ne.sh \
+ 	rscript_ne_var.sh \
++	rscript_num2ipv4.sh \
+ 	empty-prop-comparison.sh \
+ 	rs_optimizer_pri.sh \
+ 	cee_simple.sh \
+@@ -728,6 +729,7 @@ EXTRA_DIST= \
+ 	rscript_ne.sh \
+ 	testsuites/rscript_ne.conf \
+ 	rscript_ne_var.sh \
++	rscript_num2ipv4.sh \
+ 	testsuites/rscript_ne_var.conf \
+ 	rscript_eq.sh \
+ 	testsuites/rscript_eq.conf \
+diff --git a/tests/rscript_num2ipv4.sh b/tests/rscript_num2ipv4.sh
+new file mode 100755
+index 00000000..6e0026a9
+--- /dev/null
++++ b/tests/rscript_num2ipv4.sh
+@@ -0,0 +1,41 @@
++#!/bin/bash
++# add 2017-02-09 by Jan Gerhards, released under ASL 2.0
++. $srcdir/diag.sh init
++. $srcdir/diag.sh generate-conf
++. $srcdir/diag.sh add-conf '
++module(load="../plugins/imtcp/.libs/imtcp")
++input(type="imtcp" port="13514")
++
++set $!ip!v1 = num2ipv4("0");
++set $!ip!v2 = num2ipv4("1");
++set $!ip!v3 = num2ipv4("256");
++set $!ip!v4 = num2ipv4("65536");
++set $!ip!v5 = num2ipv4("16777216");
++set $!ip!v6 = num2ipv4("135");
++set $!ip!v7 = num2ipv4("16843009");
++set $!ip!v8 = num2ipv4("3777036554");
++set $!ip!v9 = num2ipv4("2885681153");
++set $!ip!v10 = num2ipv4("4294967295");
++
++set $!ip!e1 = num2ipv4("a");
++set $!ip!e2 = num2ipv4("-123");
++set $!ip!e3 = num2ipv4("1725464567890");
++set $!ip!e4 = num2ipv4("4294967296");
++set $!ip!e5 = num2ipv4("2839.");
++set $!ip!e6 = num2ipv4("");
++
++
++template(name="outfmt" type="string" string="%!ip%\n")
++local4.* action(type="omfile" file="rsyslog.out.log" template="outfmt")
++'
++. $srcdir/diag.sh startup
++. $srcdir/diag.sh tcpflood -m1 -y
++. $srcdir/diag.sh shutdown-when-empty
++. $srcdir/diag.sh wait-shutdown
++echo '{ "v1": "0.0.0.0", "v2": "0.0.0.1", "v3": "0.0.1.0", "v4": "0.1.0.0", "v5": "1.0.0.0", "v6": "0.0.0.135", "v7": "1.1.1.1", "v8": "225.33.1.10", "v9": "172.0.0.1", "v10": "255.255.255.255", "e1": "-1", "e2": "-1", "e3": "-1", "e4": "-1", "e5": "-1", "e6": "-1" }' | cmp rsyslog.out.log
++if [ ! $? -eq 0 ]; then
++  echo "invalid function output detected, rsyslog.out.log is:"
++  cat rsyslog.out.log
++  . $srcdir/diag.sh error-exit 1
++fi;
++. $srcdir/diag.sh exit
+-- 
+2.12.2
+
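The conversion pair this patch backports can be sketched outside rsyslog as two small helpers; `u32_to_ipv4` and `ipv4_to_u32` are invented names for illustration, not rsyslog API. The expected values below come from the patch's own `rscript_num2ipv4.sh` test data. Unlike `ipv42num()` above, this simplified sketch does not tolerate surrounding blanks.

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <assert.h>

/* Render a 32-bit value as a dotted quad, as num2ipv4() does:
 * the most significant byte becomes the first octet. */
void u32_to_ipv4(uint32_t num, char out[16])
{
    snprintf(out, 16, "%u.%u.%u.%u",
             (num >> 24) & 0xffu, (num >> 16) & 0xffu,
             (num >> 8) & 0xffu, num & 0xffu);
}

/* Inverse direction, mirroring ipv42num(): -1 signals malformed input. */
long long ipv4_to_u32(const char *str)
{
    unsigned a, b, c, d;
    char trailing;
    if (sscanf(str, "%u.%u.%u.%u%c", &a, &b, &c, &d, &trailing) != 4)
        return -1;                      /* wrong shape or trailing junk */
    if (a > 255 || b > 255 || c > 255 || d > 255)
        return -1;                      /* octet out of range */
    return ((long long)a << 24) | (b << 16) | (c << 8) | d;
}
```

For example, 16843009 (0x01010101) maps to "1.1.1.1" and 2885681153 to "172.0.0.1", matching the `v7` and `v9` cases in the test script.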
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1427821-str2num-emty-string-handle.patch b/SOURCES/rsyslog-8.24.0-rhbz1427821-str2num-emty-string-handle.patch
new file mode 100644
index 0000000..ab35376
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1427821-str2num-emty-string-handle.patch
@@ -0,0 +1,28 @@
+@@ -, +, @@ 
+---
+ grammar/rainerscript.c | 8 ++++++++
+ 1 file changed, 8 insertions(+)
+--- a/grammar/rainerscript.c	
++++ a/grammar/rainerscript.c	
+@@ -1276,6 +1276,13 @@ str2num(es_str_t *s, int *bSuccess)
+ 	int64_t num = 0;
+ 	const uchar *const c = es_getBufAddr(s);
+ 
++	if(s->lenStr == 0) {
++		DBGPRINTF("rainerscript: str2num: strlen == 0; invalid input (no string)\n");
++		if(bSuccess != NULL) {
++			*bSuccess = 1;
++		}
++		goto done;
++	}
+ 	if(c[0] == '-') {
+ 		neg = -1;
+ 		i = -1;
+@@ -1290,6 +1297,7 @@ str2num(es_str_t *s, int *bSuccess)
+ 	num *= neg;
+ 	if(bSuccess != NULL)
+ 		*bSuccess = (i == s->lenStr) ? 1 : 0;
++done:
+ 	return num;
+ }
+ 
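The early-exit guard this patch adds can be mirrored in a standalone sketch. `str2num_checked` is an invented name, and its success-flag convention (failure for empty input) is a simplification; note the patch above instead sets `*bSuccess = 1` for the empty string.

```c
#include <stdint.h>
#include <string.h>
#include <assert.h>

/* Minimal analogue of rainerscript's str2num(): parse an optionally
 * negative decimal string, reporting via *ok whether the whole string
 * was consumed. The empty string is caught up front, as in the patch,
 * rather than dereferencing the buffer unconditionally. */
int64_t str2num_checked(const char *s, int *ok)
{
    int64_t num = 0;
    int64_t neg = 1;
    size_t i = 0, len = strlen(s);

    if (len == 0) {               /* the case the patch guards against */
        if (ok != NULL)
            *ok = 0;
        return 0;
    }
    if (s[0] == '-') {
        neg = -1;
        i = 1;
    }
    for (; i < len && s[i] >= '0' && s[i] <= '9'; i++)
        num = num * 10 + (s[i] - '0');
    if (ok != NULL)
        *ok = (i == len);         /* trailing junk clears the flag */
    return num * neg;
}
```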
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1427828-set-unset-not-checking-varName.patch b/SOURCES/rsyslog-8.24.0-rhbz1427828-set-unset-not-checking-varName.patch
new file mode 100644
index 0000000..8d89d2b
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1427828-set-unset-not-checking-varName.patch
@@ -0,0 +1,90 @@
+From e2767839bc23f1a2f70543efabfe0ca1be166ee9 Mon Sep 17 00:00:00 2001
+From: Rainer Gerhards <rgerhards@adiscon.com>
+Date: Tue, 24 Jan 2017 13:24:29 +0100
+Subject: [PATCH] rainerscript: set/unset statements do not check variable name
+ validity
+
+Only JSON-based variables can be used with set and unset. Unfortunately,
+this restriction is not checked. If an invalid variable is given
+(e.g. $invalid), this is not detected upon config processing on
+startup. During execution phase, this can lead to a segfault, a
+memory leak or other types of problems.
+
+see also https://github.com/rsyslog/rsyslog/issues/1376
+closes https://github.com/rsyslog/rsyslog/issues/1377
+---
+ grammar/rainerscript.c | 43 +++++++++++++++++++++++++++++++++++++++----
+ 1 file changed, 39 insertions(+), 4 deletions(-)
+
+diff --git a/grammar/rainerscript.c b/grammar/rainerscript.c
+index 0ebd6f1..2106ef9 100644
+--- a/grammar/rainerscript.c
++++ b/grammar/rainerscript.c
+@@ -3062,6 +3062,19 @@ cnfstmtNew(unsigned s_type)
+ 	return cnfstmt;
+ }
+ 
++/* This function disables a cnfstmt by setting it to NOP. This is
++ * useful when we detect errors late in the parsing processing, where
++ * we need to return a valid cnfstmt. The optimizer later removes the
++ * NOPs, so all is well.
++ * NOTE: this call assumes that no dynamic data structures have been
++ * allocated. If so, these MUST be freed before calling cnfstmtDisable().
++ */
++static void
++cnfstmtDisable(struct cnfstmt *cnfstmt)
++{
++	cnfstmt->nodetype = S_NOP;
++}
++
+ void cnfstmtDestructLst(struct cnfstmt *root);
+ 
+ static void cnfIteratorDestruct(struct cnfitr *itr);
+@@ -3166,11 +3179,22 @@ cnfIteratorDestruct(struct cnfitr *itr)
+ struct cnfstmt *
+ cnfstmtNewSet(char *var, struct cnfexpr *expr, int force_reset)
+ {
++	propid_t propid;
+ 	struct cnfstmt* cnfstmt;
+ 	if((cnfstmt = cnfstmtNew(S_SET)) != NULL) {
+-		cnfstmt->d.s_set.varname = (uchar*) var;
+-		cnfstmt->d.s_set.expr = expr;
+-		cnfstmt->d.s_set.force_reset = force_reset;
++		if(propNameToID((uchar *)var, &propid) == RS_RET_OK
++		   && (   propid == PROP_CEE
++		       || propid == PROP_LOCAL_VAR
++		       || propid == PROP_GLOBAL_VAR)
++		   ) {
++			cnfstmt->d.s_set.varname = (uchar*) var;
++			cnfstmt->d.s_set.expr = expr;
++			cnfstmt->d.s_set.force_reset = force_reset;
++		} else {
++			parser_errmsg("invalid variable '%s' in set statement.", var);
++			free(var);
++			cnfstmtDisable(cnfstmt);
++		}
+ 	}
+ 	return cnfstmt;
+ }
+@@ -3254,9 +3278,20 @@ cnfstmtNewReloadLookupTable(struct cnffparamlst *fparams)
+ struct cnfstmt *
+ cnfstmtNewUnset(char *var)
+ {
++	propid_t propid;
+ 	struct cnfstmt* cnfstmt;
+ 	if((cnfstmt = cnfstmtNew(S_UNSET)) != NULL) {
+-		cnfstmt->d.s_unset.varname = (uchar*) var;
++		if(propNameToID((uchar *)var, &propid) == RS_RET_OK
++		   && (   propid == PROP_CEE
++		       || propid == PROP_LOCAL_VAR
++		       || propid == PROP_GLOBAL_VAR)
++		   ) {
++			cnfstmt->d.s_unset.varname = (uchar*) var;
++		} else {
++			parser_errmsg("invalid variable '%s' in unset statement.", var);
++			free(var);
++			cnfstmtDisable(cnfstmt);
++		}
+ 	}
+ 	return cnfstmt;
+ }
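The check added to both constructors boils down to a prefix test: only JSON-based variables — message/CEE (`$!`), local (`$.`) and global (`$/`) — may appear in `set`/`unset`, so a plain property like `$invalid` is rejected at config-parse time instead of crashing later. A hedged standalone sketch, assuming for illustration that the leading `$` has already been removed when the name reaches the constructor; `is_settable_varname` is an invented helper, not the real `propNameToID()`:

```c
#include <assert.h>

/* Illustrative stand-in for the propNameToID() classification the patch
 * relies on: accept only JSON-based variable names, i.e. message/CEE
 * ("!..."), local ("....") and global ("/...") ones. Anything else
 * (e.g. "invalid" from "set $invalid = ...") is reported as an error. */
int is_settable_varname(const char *var)
{
    return var[0] == '!' || var[0] == '.' || var[0] == '/';
}
```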
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1431616-pmrfc3164sd-backport.patch b/SOURCES/rsyslog-8.24.0-rhbz1431616-pmrfc3164sd-backport.patch
new file mode 100644
index 0000000..39e16b6
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1431616-pmrfc3164sd-backport.patch
@@ -0,0 +1,413 @@
+diff -up ./configure.ac.pmrfc3164sd ./configure.ac
+--- ./configure.ac.pmrfc3164sd	2017-01-10 10:01:24.000000000 +0100
++++ ./configure.ac	2017-04-03 10:10:40.912388923 +0200
+@@ -1559,6 +1559,18 @@ AC_ARG_ENABLE(pmpanngfw,
+ )
+ AM_CONDITIONAL(ENABLE_PMPANNGFW, test x$enable_pmpanngfw = xyes)
+ 
++# settings for pmrfc3164sd
++AC_ARG_ENABLE(pmrfc3164sd,
++        [AS_HELP_STRING([--enable-pmrfc3164sd],[Compiles rfc3164sd parser module @<:@default=no@:>@])],
++        [case "${enableval}" in
++         yes) enable_pmrfc3164sd="yes" ;;
++          no) enable_pmrfc3164sd="no" ;;
++           *) AC_MSG_ERROR(bad value ${enableval} for --enable-pmrfc3164sd) ;;
++         esac],
++        [enable_pmrfc3164sd=no]
++)
++AM_CONDITIONAL(ENABLE_PMRFC3164SD, test x$enable_pmrfc3164sd = xyes)
++
+ 
+ # settings for omruleset
+ AC_ARG_ENABLE(omruleset,
+@@ -1953,6 +1965,7 @@ AC_CONFIG_FILES([Makefile \
+ 		plugins/mmexternal/Makefile \
+ 		plugins/omstdout/Makefile \
+ 		plugins/omjournal/Makefile \
++		plugins/pmrfc3164sd/Makefile \
+ 		plugins/pmciscoios/Makefile \
+ 		plugins/pmnull/Makefile \
+ 		plugins/omruleset/Makefile \
+@@ -2057,6 +2070,7 @@ echo "    omamqp1 module will be compile
+ echo "    omtcl module will be compiled:            $enable_omtcl"
+ echo
+ echo "---{ parser modules }---"
++echo "    pmrfc3164sd module will be compiled:      $enable_pmrfc3164sd"
+ echo "    pmlastmsg module will be compiled:        $enable_pmlastmsg"
+ echo "    pmcisconames module will be compiled:     $enable_pmcisconames"
+ echo "    pmciscoios module will be compiled:       $enable_pmciscoios"
+diff -up ./Makefile.am.pmrfc3164sd ./Makefile.am
+--- ./Makefile.am.pmrfc3164sd	2017-01-10 10:00:04.000000000 +0100
++++ ./Makefile.am	2017-04-03 10:10:40.912388923 +0200
+@@ -115,6 +115,10 @@ if ENABLE_PMLASTMSG
+ SUBDIRS += plugins/pmlastmsg
+ endif
+ 
++if ENABLE_PMRFC3164SD
++SUBDIRS += plugins/pmrfc3164sd
++endif
++
+ if ENABLE_OMRULESET
+ SUBDIRS += plugins/omruleset
+ endif
+diff -up ./plugins/pmrfc3164sd/Makefile.am.pmrfc3164sd ./plugins/pmrfc3164sd/Makefile.am
+--- ./plugins/pmrfc3164sd/Makefile.am.pmrfc3164sd	2017-04-03 10:10:40.912388923 +0200
++++ ./plugins/pmrfc3164sd/Makefile.am	2017-04-03 10:10:40.912388923 +0200
+@@ -0,0 +1,8 @@
++pkglib_LTLIBRARIES = pmrfc3164sd.la
++
++pmrfc3164sd_la_SOURCES = pmrfc3164sd.c
++pmrfc3164sd_la_CPPFLAGS =  $(RSRT_CFLAGS) $(PTHREADS_CFLAGS) -I ../../tools
++pmrfc3164sd_la_LDFLAGS = -module -avoid-version
++pmrfc3164sd_la_LIBADD = 
++
++EXTRA_DIST = 
+diff -up ./plugins/pmrfc3164sd/pmrfc3164sd.c.pmrfc3164sd ./plugins/pmrfc3164sd/pmrfc3164sd.c
+--- ./plugins/pmrfc3164sd/pmrfc3164sd.c.pmrfc3164sd	2017-04-03 10:10:40.913388923 +0200
++++ ./plugins/pmrfc3164sd/pmrfc3164sd.c	2017-04-03 10:26:52.012341658 +0200
+@@ -0,0 +1,345 @@
++/* pmrfc3164sd.c
++ * This is a parser module for RFC3164(legacy syslog)-formatted messages.
++ *
++ * NOTE: read comments in module-template.h to understand how this file
++ *       works!
++ *
++ * File begun on 2009-11-04 by RGerhards
++ *
++ * Copyright 2007, 2009 Rainer Gerhards and Adiscon GmbH.
++ *
++ * This file is part of rsyslog.
++ *
++ * Rsyslog is free software: you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License as published by
++ * the Free Software Foundation, either version 3 of the License, or
++ * (at your option) any later version.
++ *
++ * Rsyslog is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
++ * GNU General Public License for more details.
++ *
++ * You should have received a copy of the GNU General Public License
++ * along with Rsyslog.  If not, see <http://www.gnu.org/licenses/>.
++ *
++ * A copy of the GPL can be found in the file "COPYING" in this distribution.
++ */
++#include "config.h"
++#include "rsyslog.h"
++#include <stdlib.h>
++#include <string.h>
++#include <assert.h>
++#include <errno.h>
++#include <ctype.h>
++#include "syslogd.h"
++#include "conf.h"
++#include "syslogd-types.h"
++#include "template.h"
++#include "msg.h"
++#include "module-template.h"
++#include "glbl.h"
++#include "errmsg.h"
++#include "parser.h"
++#include "datetime.h"
++#include "unicode-helper.h"
++
++MODULE_TYPE_PARSER
++MODULE_TYPE_NOKEEP
++MODULE_CNFNAME("pmrfc3164sd")
++PARSER_NAME("contrib.rfc3164sd")
++
++/* internal structures
++ */
++DEF_PMOD_STATIC_DATA
++DEFobjCurrIf(errmsg)
++DEFobjCurrIf(glbl)
++DEFobjCurrIf(parser)
++DEFobjCurrIf(datetime)
++
++
++/* static data */
++static int bParseHOSTNAMEandTAG;	/* cache for the equally-named global param - performance enhancement */
++
++
++BEGINisCompatibleWithFeature
++CODESTARTisCompatibleWithFeature
++	if(eFeat == sFEATUREAutomaticSanitazion)
++		iRet = RS_RET_OK;
++	if(eFeat == sFEATUREAutomaticPRIParsing)
++		iRet = RS_RET_OK;
++ENDisCompatibleWithFeature
++
++/* Helper to parseRFCSyslogMsg. This function parses the structured
++ * data field of a message. It does NOT parse inside structured data,
++ * just gets the field as whole. Parsing the single entities is left
++ * to other functions. The parsepointer is advanced
++ * to after the terminating SP. The caller must ensure that the 
++ * provided buffer is large enough to hold the value to be extracted.
++ * Returns 0 if everything is fine or 1 if either the field is not
++ * SP-terminated or any other error occurs. -- rger, 2005-11-24
++ * The function now receives the size of the string and makes sure
++ * that it does not process more than that. The *pLenStr counter is
++ * updated on exit. -- rgerhards, 2009-09-23
++ */
++static int parseRFCStructuredData(uchar **pp2parse, uchar *pResult, int *pLenStr)
++{
++	uchar *p2parse;
++	int bCont = 1;
++	int iRet = 0;
++	int lenStr;
++
++	assert(pp2parse != NULL);
++	assert(*pp2parse != NULL);
++	assert(pResult != NULL);
++
++	p2parse = *pp2parse;
++	lenStr = *pLenStr;
++
++	/* this is the actual parsing loop
++	 * Remember: structured data starts with [ and includes any characters
++	 * until the first ] followed by a SP. There may be spaces inside
++	 * structured data. There may also be \] inside the structured data, which
++	 * do NOT terminate an element.
++	 */
++	 
++	/* trim */
++	while(lenStr > 0 && *p2parse == ' ') {
++		++p2parse; /* eat SP, but only if not at end of string */
++		--lenStr;
++	}
++		 
++	if(lenStr == 0 || *p2parse != '[')
++		return 1; /* this is NOT structured data! */
++
++	if(*p2parse == '-') { /* empty structured data? */
++		*pResult++ = '-';
++		++p2parse;
++		--lenStr;
++	} else {
++		while(bCont) {
++			if(lenStr < 2) {
++				/* we now need to check if we have only structured data */
++				if(lenStr > 0 && *p2parse == ']') {
++					*pResult++ = *p2parse;
++					p2parse++;
++					lenStr--;
++					bCont = 0;
++				} else {
++					iRet = 1; /* this is not valid! */
++					bCont = 0;
++				}
++			} else if(*p2parse == '\\' && *(p2parse+1) == ']') {
++				/* this is escaped, need to copy both */
++				*pResult++ = *p2parse++;
++				*pResult++ = *p2parse++;
++				lenStr -= 2;
++			} else if(*p2parse == ']' && *(p2parse+1) == ' ') {
++				/* found end, just need to copy the ] and eat the SP */
++				*pResult++ = *p2parse;
++				p2parse += 2;
++				lenStr -= 2;
++				bCont = 0;
++			} else {
++				*pResult++ = *p2parse++;
++				--lenStr;
++			}
++		}
++	}
++
++	if(lenStr > 0 && *p2parse == ' ') {
++		++p2parse; /* eat SP, but only if not at end of string */
++		--lenStr;
++	} else {
++		iRet = 1; /* there MUST be an SP! */
++	}
++	*pResult = '\0';
++
++	/* set the new parse pointer */
++	*pp2parse = p2parse;
++	*pLenStr = lenStr;
++	return iRet;
++}
++
++/* parse a legacy-formatted syslog message.
++ */
++BEGINparse
++	uchar *p2parse;
++	int lenMsg;
++	int bTAGCharDetected;
++	int i;	/* general index for parsing */
++	uchar bufParseTAG[CONF_TAG_MAXSIZE];
++	uchar bufParseHOSTNAME[CONF_HOSTNAME_MAXSIZE];
++	uchar *pBuf = NULL;
++CODESTARTparse
++	dbgprintf("Message will now be parsed by the legacy syslog parser with structured-data support.\n");
++	assert(pMsg != NULL);
++	assert(pMsg->pszRawMsg != NULL);
++	lenMsg = pMsg->iLenRawMsg - pMsg->offAfterPRI; /* note: offAfterPRI is already the number of PRI chars (do not add one!) */
++	p2parse = pMsg->pszRawMsg + pMsg->offAfterPRI; /* point to start of text, after PRI */
++	setProtocolVersion(pMsg, MSG_LEGACY_PROTOCOL);
++
++	/* Check to see if msg contains a timestamp. We start by assuming
++	 * that the message timestamp is the time of reception (which we 
++	 * generated ourselves and then try to actually find one inside the
++	 * message. There we go from high to low precision and are done
++	 * when we find a matching one. -- rgerhards, 2008-09-16
++	 */
++	if(datetime.ParseTIMESTAMP3339(&(pMsg->tTIMESTAMP), &p2parse, &lenMsg) == RS_RET_OK) {
++		/* we are done - parse pointer is moved by ParseTIMESTAMP3339 */;
++	} else if(datetime.ParseTIMESTAMP3164(&(pMsg->tTIMESTAMP), &p2parse, &lenMsg, NO_PARSE3164_TZSTRING, NO_PERMIT_YEAR_AFTER_TIME) == RS_RET_OK) {
++		/* we are done - parse pointer is moved by ParseTIMESTAMP3164 */;
++	} else if(*p2parse == ' ' && lenMsg > 1) { /* try to see if it is slightly malformed - HP procurve seems to do that sometimes */
++		++p2parse;	/* move over space */
++		--lenMsg;
++		if(datetime.ParseTIMESTAMP3164(&(pMsg->tTIMESTAMP), &p2parse, &lenMsg, NO_PARSE3164_TZSTRING, NO_PERMIT_YEAR_AFTER_TIME) == RS_RET_OK) {
++			/* indeed, we got it! */
++			/* we are done - parse pointer is moved by ParseTIMESTAMP3164 */;
++		} else {/* parse pointer needs to be restored, as we moved it off-by-one
++			 * for this try.
++			 */
++			--p2parse;
++			++lenMsg;
++		}
++	}
++
++	if(pMsg->msgFlags & IGNDATE) {
++		/* we need to ignore the msg data, so simply copy over reception date */
++		memcpy(&pMsg->tTIMESTAMP, &pMsg->tRcvdAt, sizeof(struct syslogTime));
++	}
++
++	/* rgerhards, 2006-03-13: next, we parse the hostname and tag. But we 
++	 * do this only when the user has not forbidden this. I now introduce some
++	 * code that allows a user to configure rsyslogd to treat the rest of the
++	 * message as MSG part completely. In this case, the hostname will be the
++	 * machine that we received the message from and the tag will be empty. This
++	 * is meant to be an interim solution, but for now it is in the code.
++	 */
++	if(bParseHOSTNAMEandTAG && !(pMsg->msgFlags & INTERNAL_MSG)) {
++		/* parse HOSTNAME - but only if this is network-received!
++		 * rger, 2005-11-14: we still have a problem with BSD messages. These messages
++		 * do NOT include a host name. In most cases, this leads to the TAG to be treated
++		 * as hostname and the first word of the message as the TAG. Clearly, this is not
++		 * of advantage ;) I think I have now found a way to handle this situation: there
++		 * are certain characters which are frequently used in TAG (e.g. ':'), which are
++		 * *invalid* in host names. So while parsing the hostname, I check for these characters.
++		 * If I find them, I set a simple flag but continue. After parsing, I check the flag.
++		 * If it was set, then we most probably do not have a hostname but a TAG. Thus, I change
++		 * the fields. I think this logic shall work with any type of syslog message.
++		 * rgerhards, 2009-06-23: and I now have extended this logic to every character
++		 * that is not a valid hostname.
++		 */
++		bTAGCharDetected = 0;
++		if(lenMsg > 0 && pMsg->msgFlags & PARSE_HOSTNAME) {
++			i = 0;
++			while(i < lenMsg && (isalnum(p2parse[i]) || p2parse[i] == '.'
++				|| p2parse[i] == '_' || p2parse[i] == '-') && i < (CONF_HOSTNAME_MAXSIZE - 1)) {
++				bufParseHOSTNAME[i] = p2parse[i];
++				++i;
++			}
++
++			if(i == lenMsg) {
++				/* we have a message that is empty immediately after the hostname,
++				* but the hostname thus is valid! -- rgerhards, 2010-02-22
++				*/
++				p2parse += i;
++				lenMsg -= i;
++				bufParseHOSTNAME[i] = '\0';
++				MsgSetHOSTNAME(pMsg, bufParseHOSTNAME, i);
++			} else if(i > 0 && p2parse[i] == ' ' && isalnum(p2parse[i-1])) {
++				/* we got a hostname! */
++				p2parse += i + 1; /* "eat" it (including SP delimiter) */
++				lenMsg -= i + 1;
++				bufParseHOSTNAME[i] = '\0';
++				MsgSetHOSTNAME(pMsg, bufParseHOSTNAME, i);
++			}
++		}
++
++		/* now parse TAG - that should be present in message from all sources.
++		 * This code is somewhat not compliant with RFC 3164. As of 3164,
++		 * the TAG field is ended by any non-alphanumeric character. In
++		 * practice, however, the TAG often contains dashes and other things,
++		 * which would end the TAG. So it is not desirable. As such, we only
++		 * accept colon and SP to be terminators. Even there is a slight difference:
++		 * a colon is PART of the TAG, while a SP is NOT part of the tag
++		 * (it is CONTENT). Starting 2008-04-04, we have removed the 32 character
++		 * size limit (from RFC3164) on the tag. This had bad effects on existing
++		 * environments, as sysklogd didn't obey it either (probably another bug
++		 * in RFC3164...). We now receive the full size, but will modify the
++		 * outputs so that only 32 characters max are used by default.
++		 */
++		i = 0;
++		while(lenMsg > 0 && *p2parse != ':' && *p2parse != ' ' && i < CONF_TAG_MAXSIZE) {
++			bufParseTAG[i++] = *p2parse++;
++			--lenMsg;
++		}
++		if(lenMsg > 0 && *p2parse == ':') {
++			++p2parse; 
++			--lenMsg;
++			bufParseTAG[i++] = ':';
++		}
++
++		/* no TAG can only be detected if the message immediately ends, in which case an empty TAG
++		 * is considered OK. So we do not need to check for empty TAG. -- rgerhards, 2009-06-23
++		 */
++		bufParseTAG[i] = '\0';	/* terminate string */
++		MsgSetTAG(pMsg, bufParseTAG, i);
++	} else {/* we enter this code area when the user has instructed rsyslog NOT
++		 * to parse HOSTNAME and TAG - rgerhards, 2006-03-13
++		 */
++		if(!(pMsg->msgFlags & INTERNAL_MSG)) {
++			DBGPRINTF("HOSTNAME and TAG not parsed by user configuration.\n");
++		}
++	}
++
++	CHKmalloc(pBuf = MALLOC(sizeof(uchar) * (lenMsg + 1)));
++
++	/* STRUCTURED-DATA */
++	if (parseRFCStructuredData(&p2parse, pBuf, &lenMsg) == 0)
++		MsgSetStructuredData(pMsg, (char*)pBuf);
++	else
++		MsgSetStructuredData(pMsg, "-");
++
++	/* The rest is the actual MSG */
++	MsgSetMSGoffs(pMsg, p2parse - pMsg->pszRawMsg);
++
++finalize_it:
++	if(pBuf != NULL)
++		free(pBuf);
++ENDparse
++
++
++BEGINmodExit
++CODESTARTmodExit
++	/* release what we no longer need */
++	objRelease(errmsg, CORE_COMPONENT);
++	objRelease(glbl, CORE_COMPONENT);
++	objRelease(parser, CORE_COMPONENT);
++	objRelease(datetime, CORE_COMPONENT);
++ENDmodExit
++
++
++BEGINqueryEtryPt
++CODESTARTqueryEtryPt
++CODEqueryEtryPt_STD_PMOD_QUERIES
++CODEqueryEtryPt_IsCompatibleWithFeature_IF_OMOD_QUERIES
++ENDqueryEtryPt
++
++
++BEGINmodInit()
++CODESTARTmodInit
++	*ipIFVersProvided = CURR_MOD_IF_VERSION; /* we only support the current interface specification */
++CODEmodInit_QueryRegCFSLineHdlr
++	CHKiRet(objUse(glbl, CORE_COMPONENT));
++	CHKiRet(objUse(errmsg, CORE_COMPONENT));
++	CHKiRet(objUse(parser, CORE_COMPONENT));
++	CHKiRet(objUse(datetime, CORE_COMPONENT));
++
++	dbgprintf("rfc3164sd parser init called\n");
++ 	bParseHOSTNAMEandTAG = glbl.GetParseHOSTNAMEandTAG(); /* cache value, is set only during rsyslogd option processing */
++
++
++ENDmodInit
++
++/* vim:set ai:
++ */
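The escape-aware copy loop at the heart of `parseRFCStructuredData()` can be condensed into a standalone sketch. `copy_structured_data` is an invented name, and the real function additionally tracks the remaining message length and advances the caller's parse pointer.

```c
#include <string.h>
#include <assert.h>

/* Simplified analogue of parseRFCStructuredData(): copy the
 * STRUCTURED-DATA field (from '[' up to the first ']' followed by SP
 * or end of string, honoring "\]" escapes) into out, returning 0 on
 * success or 1 if the input is not structured data. */
int copy_structured_data(const char *p, char *out)
{
    while (*p == ' ')                        /* trim leading spaces */
        p++;
    if (*p != '[')
        return 1;                            /* not structured data */
    for (;;) {
        if (*p == '\\' && p[1] == ']') {     /* escaped ']': copy both */
            *out++ = *p++;
            *out++ = *p++;
        } else if (*p == ']' && (p[1] == ' ' || p[1] == '\0')) {
            *out++ = *p++;                   /* closing bracket ends field */
            break;
        } else if (*p == '\0') {
            return 1;                        /* unterminated element */
        } else {
            *out++ = *p++;
        }
    }
    *out = '\0';
    return 0;
}
```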
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1462160-set.statement-crash.patch b/SOURCES/rsyslog-8.24.0-rhbz1462160-set.statement-crash.patch
new file mode 100644
index 0000000..112f75c
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1462160-set.statement-crash.patch
@@ -0,0 +1,150 @@
+diff -up ./plugins/imjournal/imjournal.c.default_tag ./plugins/imjournal/imjournal.c
+--- ./plugins/imjournal/imjournal.c.default_tag	2017-01-10 04:00:04.000000000 -0500
++++ ./plugins/imjournal/imjournal.c	2017-08-28 07:55:19.545930923 -0400
+@@ -78,6 +78,7 @@ static struct configSettings_s {
+ 	int iDfltSeverity;
+ 	int iDfltFacility;
+ 	int bUseJnlPID;
++	char *dfltTag;
+ } cs;
+ 
+ static rsRetVal facilityHdlr(uchar **pp, void *pVal);
+@@ -93,6 +94,7 @@ static struct cnfparamdescr modpdescr[]
+ 	{ "defaultseverity", eCmdHdlrSeverity, 0 },
+ 	{ "defaultfacility", eCmdHdlrString, 0 },
+ 	{ "usepidfromsystem", eCmdHdlrBinary, 0 },
++	{ "defaulttag", eCmdHdlrGetWord, 0 },
+ };
+ static struct cnfparamblk modpblk =
+ 	{ CNFPARAMBLK_VERSION,
+@@ -103,6 +105,7 @@ static struct cnfparamblk modpblk =
+ #define DFLT_persiststateinterval 10
+ #define DFLT_SEVERITY pri2sev(LOG_NOTICE)
+ #define DFLT_FACILITY pri2fac(LOG_USER)
++#define DFLT_TAG "journal"
+ 
+ static int bLegacyCnfModGlobalsPermitted = 1;/* are legacy module-global config parameters permitted? */
+ 
+@@ -194,8 +197,13 @@ enqMsg(uchar *msg, uchar *pszTag, int iF
+ 	}
+ 	MsgSetFlowControlType(pMsg, eFLOWCTL_LIGHT_DELAY);
+ 	MsgSetInputName(pMsg, pInputName);
++	/* Recalculating the message length shouldn't cause problems as all
++	 * potential zero-bytes have been escaped in sanitizeValue(). */
+ 	len = strlen((char*)msg);
+ 	MsgSetRawMsg(pMsg, (char*)msg, len);
++	/* NB: SanitizeMsg() only touches the raw message and its
++ 	 * length which only contain the msg part. Thus the TAG and
++ 	 * other fields are not sanitized. */ 
+ 	if(len > 0)
+ 		parser.SanitizeMsg(pMsg);
+ 	MsgSetMSGoffs(pMsg, 0);	/* we do not have a header... */
+@@ -233,7 +241,7 @@ readjournal(void)
+ 
+ 	/* Information from messages */
+ 	char *message = NULL;
+-	char *sys_iden;
++	char *sys_iden = NULL;
+ 	char *sys_iden_help = NULL;
+ 
+ 	const void *get;
+@@ -294,29 +302,34 @@ readjournal(void)
+ 	/* Get message identifier, client pid and add ':' */
+ 	if (sd_journal_get_data(j, "SYSLOG_IDENTIFIER", &get, &length) >= 0) {
+ 		CHKiRet(sanitizeValue(((const char *)get) + 18, length - 18, &sys_iden));
+-	} else {
+-		CHKmalloc(sys_iden = strdup("journal"));
+ 	}
+ 
+-	if (sd_journal_get_data(j, pid_field_name, &pidget, &pidlength) >= 0) {
+-		char *sys_pid;
+-		int val_ofs;
+-
+-		val_ofs = strlen(pid_field_name) + 1; /* name + '=' */
+-		CHKiRet_Hdlr(sanitizeValue(((const char *)pidget) + val_ofs, pidlength - val_ofs, &sys_pid)) {
+-			free (sys_iden);
+-			FINALIZE;
++	if (sys_iden == NULL && !cs.dfltTag[0]) {
++               /* This is a special case: if no tag was obtained from
++                * the message and the user has set the default tag to
++                * an empty string, nothing is inserted.
++                */
++               CHKmalloc(sys_iden_help = calloc(1, 1));
++       	} else {
++        	if (sys_iden == NULL) {
++                	/* Use a predefined tag if it can't be obtained from the message */
++                        CHKmalloc(sys_iden = strdup(cs.dfltTag));
++               	}
++               	if (sd_journal_get_data(j, "SYSLOG_PID", &pidget, &pidlength) >= 0) {
++                	char *sys_pid;
++                        CHKiRet_Hdlr(sanitizeValue(((const char *)pidget) + 11, pidlength - 11, &sys_pid)) {
++                                free (sys_iden);
++                                FINALIZE;
++                        }
++                        r = asprintf(&sys_iden_help, "%s[%s]:", sys_iden, sys_pid);
++                        free (sys_pid);
++                } else {
++                        r = asprintf(&sys_iden_help, "%s:", sys_iden);
++		}
++		free (sys_iden);
++		if (-1 == r) {
++			ABORT_FINALIZE(RS_RET_OUT_OF_MEMORY);
+ 		}
+-		r = asprintf(&sys_iden_help, "%s[%s]:", sys_iden, sys_pid);
+-		free (sys_pid);
+-	} else {
+-		r = asprintf(&sys_iden_help, "%s:", sys_iden);
+-	}
+-
+-	free (sys_iden);
+-
+-	if (-1 == r) {
+-		ABORT_FINALIZE(RS_RET_OUT_OF_MEMORY);
+ 	}
+ 
+ 	json = json_object_new_object();
+@@ -585,6 +598,10 @@ CODESTARTrunInput
+ 		pid_field_name = "SYSLOG_PID";
+ 	}
+ 
++	if (cs.dfltTag == NULL) {
++		cs.dfltTag = strdup(DFLT_TAG);
++	}
++
+ 	/* this is an endless loop - it is terminated when the thread is
+ 	 * signalled to do so. This, however, is handled by the framework.
+ 	 */
+@@ -633,6 +650,7 @@ CODESTARTbeginCnfLoad
+ 	cs.ratelimitInterval = 600;
+ 	cs.iDfltSeverity = DFLT_SEVERITY;
+ 	cs.iDfltFacility = DFLT_FACILITY;
++	cs.dfltTag = NULL;
+ 	cs.bUseJnlPID = 0;
+ ENDbeginCnfLoad
+ 
+@@ -655,6 +673,7 @@ ENDactivateCnf
+ BEGINfreeCnf
+ CODESTARTfreeCnf
+ 	free(cs.stateFile);
++	free(cs.dfltTag);
+ ENDfreeCnf
+ 
+ /* open journal */
+@@ -739,6 +758,8 @@ CODESTARTsetModCnf
+ 			free(fac);
+ 		} else if (!strcmp(modpblk.descr[i].name, "usepidfromsystem")) {
+ 			cs.bUseJnlPID = (int) pvals[i].val.d.n;
++		} else if (!strcmp(modpblk.descr[i].name, "defaulttag")) {
++			cs.dfltTag = (char *)es_str2cstr(pvals[i].val.d.estr, NULL);
+ 		} else {
+ 			dbgprintf("imjournal: program error, non-handled "
+ 				"param '%s' in beginCnfLoad\n", modpblk.descr[i].name);
+@@ -799,6 +820,8 @@ CODEmodInit_QueryRegCFSLineHdlr
+ 		facilityHdlr, &cs.iDfltFacility, STD_LOADABLE_MODULE_ID));
+ 	CHKiRet(omsdRegCFSLineHdlr((uchar *)"imjournalusepidfromsystem", 0, eCmdHdlrBinary,
+ 		NULL, &cs.bUseJnlPID, STD_LOADABLE_MODULE_ID));
++	CHKiRet(omsdRegCFSLineHdlr((uchar *)"imjournaldefaulttag", 0, eCmdHdlrGetWord,
++		NULL, &cs.dfltTag, STD_LOADABLE_MODULE_ID));
+ ENDmodInit
+ /* vim:set ai:
+  */
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1488186-fixed-nullptr-check.patch b/SOURCES/rsyslog-8.24.0-rhbz1488186-fixed-nullptr-check.patch
new file mode 100644
index 0000000..19f84d4
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1488186-fixed-nullptr-check.patch
@@ -0,0 +1,27 @@
+From a5b40bb57cf47d964ad3873ddf550e7885df9f8e Mon Sep 17 00:00:00 2001
+From: Marek Tamaskovic <tamaskovic.marek@gmail.com>
+Date: Wed, 6 Sep 2017 17:08:45 +0200
+Subject: [PATCH] fixed nullptr check
+
+---
+ plugins/imjournal/imjournal.c | 4 +++-
+ 1 file changed, 3 insertions(+), 1 deletion(-)
+
+diff --git a/plugins/imjournal/imjournal.c b/plugins/imjournal/imjournal.c
+index e3e0f39..938e8b8 100644
+--- a/plugins/imjournal/imjournal.c
++++ b/plugins/imjournal/imjournal.c
+@@ -514,7 +514,9 @@ pollJournal(void)
+ 			ABORT_FINALIZE(RS_RET_ERR);
+ 		}
+ 
+-		iRet = loadJournalState();
++		if(cs.stateFile != NULL){
++			iRet = loadJournalState();
++		}
+ 		LogMsg(0, RS_RET_OK, LOG_NOTICE, "imjournal: journal reloaded...");
+ 	} else if (jr < 0) {
+ 		char errStr[256];
+-- 
+2.9.5
+
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1497985-journal-reloaded-message.patch b/SOURCES/rsyslog-8.24.0-rhbz1497985-journal-reloaded-message.patch
new file mode 100644
index 0000000..31237c4
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1497985-journal-reloaded-message.patch
@@ -0,0 +1,46 @@
+diff -up ./plugins/imjournal/imjournal.c.journal-reloaded ./plugins/imjournal/imjournal.c
+--- ./plugins/imjournal/imjournal.c.journal-reloaded	2017-10-09 11:05:53.698885473 -0400
++++ ./plugins/imjournal/imjournal.c	2017-10-09 11:08:54.179885473 -0400
+@@ -509,8 +509,7 @@ pollJournal(void)
+ 		}
+ 
+ 		iRet = loadJournalState();
+-		errmsg.LogError(0, RS_RET_OK, "imjournal: "
+-			"journal reloaded...");	
++		LogMsg(0, RS_RET_OK, LOG_NOTICE, "imjournal: journal reloaded...");
+ 	} else if (jr < 0) {
+ 		char errStr[256];
+ 		rs_strerror_r(errno, errStr, sizeof(errStr));
+diff -up ./runtime/errmsg.c.journal-reloaded ./runtime/errmsg.c
+--- ./runtime/errmsg.c.journal-reloaded	2016-12-03 12:41:03.000000000 -0500
++++ ./runtime/errmsg.c	2017-10-09 11:05:53.704885473 -0400
+@@ -115,7 +115,7 @@ doLogMsg(const int iErrno, const int iEr
+  * maps to a specific error event).
+  * rgerhards, 2008-06-27
+  */
+-static void __attribute__((format(printf, 3, 4)))
++void __attribute__((format(printf, 3, 4)))
+ LogError(const int iErrno, const int iErrCode, const char *fmt, ... )
+ {
+ 	va_list ap;
+@@ -144,7 +144,7 @@ LogError(const int iErrno, const int iEr
+  * maps to a specific error event).
+  * rgerhards, 2008-06-27
+  */
+-static void __attribute__((format(printf, 4, 5)))
++void __attribute__((format(printf, 4, 5)))
+ LogMsg(const int iErrno, const int iErrCode, const int severity, const char *fmt, ... )
+ {
+ 	va_list ap;
+diff -up ./runtime/errmsg.h.journal-reloaded ./runtime/errmsg.h
+--- ./runtime/errmsg.h.journal-reloaded	2016-12-03 12:41:03.000000000 -0500
++++ ./runtime/errmsg.h	2017-10-09 11:05:53.704885473 -0400
+@@ -44,5 +44,8 @@ ENDinterface(errmsg)
+ PROTOTYPEObj(errmsg);
+ void resetErrMsgsFlag(void);
+ int hadErrMsgs(void);
++void __attribute__((format(printf, 3, 4))) LogError(const int iErrno, const int iErrCode, const char *fmt, ... );
++void __attribute__((format(printf, 4, 5)))
++	LogMsg(const int iErrno, const int iErrCode, const int severity, const char *fmt, ... );
+ 
+ #endif /* #ifndef INCLUDED_ERRMSG_H */
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1505103-omrelp-rebindinterval.patch b/SOURCES/rsyslog-8.24.0-rhbz1505103-omrelp-rebindinterval.patch
new file mode 100644
index 0000000..39fda2b
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1505103-omrelp-rebindinterval.patch
@@ -0,0 +1,28 @@
+From cc09e7a6e893157a4d7a173c78f4b0a0496e8fbd Mon Sep 17 00:00:00 2001
+From: Rainer Gerhards <rgerhards@adiscon.com>
+Date: Thu, 28 Sep 2017 19:08:35 +0200
+Subject: [PATCH] omrelp bugfix: segfault if rebindinterval config param is
+ used
+
+closes https://github.com/rsyslog/rsyslog/issues/120
+---
+ plugins/omrelp/omrelp.c | 6 ++++--
+ 1 file changed, 4 insertions(+), 2 deletions(-)
+
+diff --git a/plugins/omrelp/omrelp.c b/plugins/omrelp/omrelp.c
+index 3df062e0e..d32a66e07 100644
+--- a/plugins/omrelp/omrelp.c
++++ b/plugins/omrelp/omrelp.c
+@@ -566,8 +566,10 @@ ENDdoAction
+
+ BEGINendTransaction
+ CODESTARTendTransaction
+-	dbgprintf("omrelp: endTransaction\n");
+-	relpCltHintBurstEnd(pWrkrData->pRelpClt);
++	DBGPRINTF("omrelp: endTransaction, connected %d\n", pWrkrData->bIsConnected);
++	if(pWrkrData->bIsConnected) {
++		relpCltHintBurstEnd(pWrkrData->pRelpClt);
++	}
+ ENDendTransaction
+
+ BEGINparseSelectorAct
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1507145-omelastic-client-cert.patch b/SOURCES/rsyslog-8.24.0-rhbz1507145-omelastic-client-cert.patch
new file mode 100644
index 0000000..84e9d0f
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1507145-omelastic-client-cert.patch
@@ -0,0 +1,208 @@
+From 02772eb5f28b3c3a98f0d739b6210ca82d58f7ee Mon Sep 17 00:00:00 2001
+From: Rich Megginson <rmeggins@redhat.com>
+Date: Thu, 8 Feb 2018 18:13:13 -0700
+Subject: [PATCH] omelasticsearch - add support for CA cert, client cert auth
+
+This allows omelasticsearch to perform client cert based authentication
+to Elasticsearch.
+Add the following parameters:
+`tls.cacert` - Full path and filename of the file containing the CA cert
+               for the CA that issued the Elasticsearch server(s) cert(s)
+`tls.mycert` - Full path and filename of the file containing the client
+               cert used to authenticate to Elasticsearch
+`tls.myprivkey` - Full path and filename of the file containing the client
+                  key used to authenticate to Elasticsearch
+---
+ plugins/omelasticsearch/omelasticsearch.c | 79 ++++++++++++++++++++++++++++---
+ 1 file changed, 73 insertions(+), 6 deletions(-)
+
+diff --git a/plugins/omelasticsearch/omelasticsearch.c b/plugins/omelasticsearch/omelasticsearch.c
+index 97d8fb233..88bd5e16c 100644
+--- a/plugins/omelasticsearch/omelasticsearch.c
++++ b/plugins/omelasticsearch/omelasticsearch.c
+@@ -110,6 +110,9 @@ typedef struct _instanceData {
+ 	size_t maxbytes;
+ 	sbool useHttps;
+ 	sbool allowUnsignedCerts;
++	uchar *caCertFile;
++	uchar *myCertFile;
++	uchar *myPrivKeyFile;
+ } instanceData;
+ 
+ typedef struct wrkrInstanceData {
+@@ -154,7 +157,10 @@ static struct cnfparamdescr actpdescr[] = {
+ 	{ "template", eCmdHdlrGetWord, 0 },
+ 	{ "dynbulkid", eCmdHdlrBinary, 0 },
+ 	{ "bulkid", eCmdHdlrGetWord, 0 },
+-	{ "allowunsignedcerts", eCmdHdlrBinary, 0 }
++	{ "allowunsignedcerts", eCmdHdlrBinary, 0 },
++	{ "tls.cacert", eCmdHdlrString, 0 },
++	{ "tls.mycert", eCmdHdlrString, 0 },
++	{ "tls.myprivkey", eCmdHdlrString, 0 }
+ };
+ static struct cnfparamblk actpblk =
+ 	{ CNFPARAMBLK_VERSION,
+@@ -168,6 +174,9 @@ BEGINcreateInstance
+ CODESTARTcreateInstance
+ 	pData->fdErrFile = -1;
+ 	pthread_mutex_init(&pData->mutErrFile, NULL);
++	pData->caCertFile = NULL;
++	pData->myCertFile = NULL;
++	pData->myPrivKeyFile = NULL;
+ ENDcreateInstance
+ 
+ BEGINcreateWrkrInstance
+@@ -216,6 +225,9 @@ CODESTARTfreeInstance
+ 	free(pData->timeout);
+ 	free(pData->errorFile);
+ 	free(pData->bulkId);
++	free(pData->caCertFile);
++	free(pData->myCertFile);
++	free(pData->myPrivKeyFile);
+ ENDfreeInstance
+ 
+ BEGINfreeWrkrInstance
+@@ -270,6 +282,9 @@ CODESTARTdbgPrintInstInfo
+ 	dbgprintf("\tinterleaved=%d\n", pData->interleaved);
+ 	dbgprintf("\tdynbulkid=%d\n", pData->dynBulkId);
+ 	dbgprintf("\tbulkid='%s'\n", pData->bulkId);
++	dbgprintf("\ttls.cacert='%s'\n", pData->caCertFile);
++	dbgprintf("\ttls.mycert='%s'\n", pData->myCertFile);
++	dbgprintf("\ttls.myprivkey='%s'\n", pData->myPrivKeyFile);
+ ENDdbgPrintInstInfo
+ 
+ 
+@@ -311,7 +326,7 @@ computeBaseUrl(const char*const serverParam,
+ 		r = useHttps ? es_addBuf(&urlBuf, SCHEME_HTTPS, sizeof(SCHEME_HTTPS)-1) :
+ 			es_addBuf(&urlBuf, SCHEME_HTTP, sizeof(SCHEME_HTTP)-1);
+ 
+-	if (r == 0) r = es_addBuf(&urlBuf, serverParam, strlen(serverParam));
++	if (r == 0) r = es_addBuf(&urlBuf, (char *)serverParam, strlen(serverParam));
+ 	if (r == 0 && !strchr(host, ':')) {
+ 		snprintf(portBuf, sizeof(portBuf), ":%d", defaultPort);
+ 		r = es_addBuf(&urlBuf, portBuf, strlen(portBuf));
+@@ -1296,7 +1311,7 @@ finalize_it:
+ }
+ 
+ static void
+-curlCheckConnSetup(CURL *handle, HEADER *header, long timeout, sbool allowUnsignedCerts)
++curlCheckConnSetup(CURL *handle, HEADER *header, long timeout, sbool allowUnsignedCerts, wrkrInstanceData_t *pWrkrData)
+ {
+ 	curl_easy_setopt(handle, CURLOPT_HTTPHEADER, header);
+ 	curl_easy_setopt(handle, CURLOPT_NOBODY, TRUE);
+@@ -1305,13 +1320,21 @@ curlCheckConnSetup(CURL *handle, HEADER *header, long timeout, sbool allowUnsign
+ 
+ 	if(allowUnsignedCerts)
+ 		curl_easy_setopt(handle, CURLOPT_SSL_VERIFYPEER, FALSE);
++	if(pWrkrData->pData->caCertFile)
++		curl_easy_setopt(handle, CURLOPT_CAINFO, pWrkrData->pData->caCertFile);
++	if(pWrkrData->pData->myCertFile)
++		curl_easy_setopt(handle, CURLOPT_SSLCERT, pWrkrData->pData->myCertFile);
++	if(pWrkrData->pData->myPrivKeyFile)
++		curl_easy_setopt(handle, CURLOPT_SSLKEY, pWrkrData->pData->myPrivKeyFile);
++	/* uncomment for in-depth debugging:
++	curl_easy_setopt(handle, CURLOPT_VERBOSE, TRUE); */
+ 
+ 	/* Only enable for debugging
+ 	curl_easy_setopt(curl, CURLOPT_VERBOSE, TRUE); */
+ }
+ 
+ static void
+-curlPostSetup(CURL *handle, HEADER *header, uchar* authBuf)
++curlPostSetup(CURL *handle, HEADER *header, uchar* authBuf, wrkrInstanceData_t *pWrkrData)
+ {
+ 	curl_easy_setopt(handle, CURLOPT_HTTPHEADER, header);
+ 	curl_easy_setopt(handle, CURLOPT_WRITEFUNCTION, curlResult);
+@@ -1322,6 +1345,12 @@ curlPostSetup(CURL *handle, HEADER *header, uchar* authBuf)
+ 		curl_easy_setopt(handle, CURLOPT_USERPWD, authBuf);
+ 		curl_easy_setopt(handle, CURLOPT_PROXYAUTH, CURLAUTH_ANY);
+ 	}
++	if(pWrkrData->pData->caCertFile)
++		curl_easy_setopt(handle, CURLOPT_CAINFO, pWrkrData->pData->caCertFile);
++	if(pWrkrData->pData->myCertFile)
++		curl_easy_setopt(handle, CURLOPT_SSLCERT, pWrkrData->pData->myCertFile);
++	if(pWrkrData->pData->myPrivKeyFile)
++		curl_easy_setopt(handle, CURLOPT_SSLKEY, pWrkrData->pData->myPrivKeyFile);
+ }
+ 
+ static rsRetVal
+@@ -1332,7 +1361,7 @@ curlSetup(wrkrInstanceData_t *pWrkrData, instanceData *pData)
+ 	if (pWrkrData->curlPostHandle == NULL) {
+ 		return RS_RET_OBJ_CREATION_FAILED;
+ 	}
+-	curlPostSetup(pWrkrData->curlPostHandle, pWrkrData->curlHeader, pData->authBuf);
++	curlPostSetup(pWrkrData->curlPostHandle, pWrkrData->curlHeader, pData->authBuf, pWrkrData);
+ 
+ 	pWrkrData->curlCheckConnHandle = curl_easy_init();
+ 	if (pWrkrData->curlCheckConnHandle == NULL) {
+@@ -1341,7 +1370,7 @@ curlSetup(wrkrInstanceData_t *pWrkrData, instanceData *pData)
+ 		return RS_RET_OBJ_CREATION_FAILED;
+ 	}
+ 	curlCheckConnSetup(pWrkrData->curlCheckConnHandle, pWrkrData->curlHeader,
+-		pData->healthCheckTimeout, pData->allowUnsignedCerts);
++		pData->healthCheckTimeout, pData->allowUnsignedCerts, pWrkrData);
+ 
+ 	return RS_RET_OK;
+ }
+@@ -1372,6 +1401,9 @@ setInstParamDefaults(instanceData *pData)
+ 	pData->interleaved=0;
+ 	pData->dynBulkId= 0;
+ 	pData->bulkId = NULL;
++	pData->caCertFile = NULL;
++	pData->myCertFile = NULL;
++	pData->myPrivKeyFile = NULL;
+ }
+ 
+ BEGINnewActInst
+@@ -1380,6 +1412,8 @@ BEGINnewActInst
+ 	struct cnfarray* servers = NULL;
+ 	int i;
+ 	int iNumTpls;
++	FILE *fp;
++	char errStr[1024];
+ CODESTARTnewActInst
+ 	if((pvals = nvlstGetParams(lst, &actpblk, NULL)) == NULL) {
+ 		ABORT_FINALIZE(RS_RET_MISSING_CNFPARAMS);
+@@ -1435,6 +1469,39 @@ CODESTARTnewActInst
+ 			pData->dynBulkId = pvals[i].val.d.n;
+ 		} else if(!strcmp(actpblk.descr[i].name, "bulkid")) {
+ 			pData->bulkId = (uchar*)es_str2cstr(pvals[i].val.d.estr, NULL);
++		} else if(!strcmp(actpblk.descr[i].name, "tls.cacert")) {
++			pData->caCertFile = (uchar*)es_str2cstr(pvals[i].val.d.estr, NULL);
++			fp = fopen((const char*)pData->caCertFile, "r");
++			if(fp == NULL) {
++				rs_strerror_r(errno, errStr, sizeof(errStr));
++				errmsg.LogError(0, RS_RET_NO_FILE_ACCESS,
++						"error: 'tls.cacert' file %s couldn't be accessed: %s\n",
++						pData->caCertFile, errStr);
++			} else {
++				fclose(fp);
++			}
++		} else if(!strcmp(actpblk.descr[i].name, "tls.mycert")) {
++			pData->myCertFile = (uchar*)es_str2cstr(pvals[i].val.d.estr, NULL);
++			fp = fopen((const char*)pData->myCertFile, "r");
++			if(fp == NULL) {
++				rs_strerror_r(errno, errStr, sizeof(errStr));
++				errmsg.LogError(0, RS_RET_NO_FILE_ACCESS,
++						"error: 'tls.mycert' file %s couldn't be accessed: %s\n",
++						pData->myCertFile, errStr);
++			} else {
++				fclose(fp);
++			}
++		} else if(!strcmp(actpblk.descr[i].name, "tls.myprivkey")) {
++			pData->myPrivKeyFile = (uchar*)es_str2cstr(pvals[i].val.d.estr, NULL);
++			fp = fopen((const char*)pData->myPrivKeyFile, "r");
++			if(fp == NULL) {
++				rs_strerror_r(errno, errStr, sizeof(errStr));
++				errmsg.LogError(0, RS_RET_NO_FILE_ACCESS,
++						"error: 'tls.myprivkey' file %s couldn't be accessed: %s\n",
++						pData->myPrivKeyFile, errStr);
++			} else {
++				fclose(fp);
++			}
+ 		} else {
+ 			dbgprintf("omelasticsearch: program error, non-handled "
+ 			  "param '%s'\n", actpblk.descr[i].name);
+-- 
+2.14.3
+
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1511485-deserialize-property-name.patch b/SOURCES/rsyslog-8.24.0-rhbz1511485-deserialize-property-name.patch
new file mode 100644
index 0000000..2bf5f9e
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1511485-deserialize-property-name.patch
@@ -0,0 +1,59 @@
+From c49e42f4f8381fc8e92579c41cefb2c85fe45929 Mon Sep 17 00:00:00 2001
+From: Rainer Gerhards <rgerhards@adiscon.com>
+Date: Tue, 7 Feb 2017 13:09:40 +0100
+Subject: [PATCH] core: fix sequence error in msg object deserializer
+
+Corruption of disk queue (or disk part of DA queue) always happens if
+the "json" property (message variables) is present and "structured-data"
+property is also present. This causes rsyslog to serialize to the
+queue in wrong property sequence, which will lead to error -2308 on
+deserialization.
+
+Seems to be a long-standing bug. Depending on version used, some or
+all messages in disk queue may be lost.
+
+closes https://github.com/rsyslog/rsyslog/issues/1404
+---
+ runtime/msg.c | 14 ++++++++------
+ 1 file changed, 8 insertions(+), 6 deletions(-)
+
+diff --git a/runtime/msg.c b/runtime/msg.c
+index 7cfeca843..cfa95517e 100644
+--- a/runtime/msg.c
++++ b/runtime/msg.c
+@@ -1350,6 +1350,11 @@ MsgDeserialize(smsg_t * const pMsg, strm_t *pStrm)
+ 		reinitVar(pVar);
+ 		CHKiRet(objDeserializeProperty(pVar, pStrm));
+ 	}
++	if(isProp("pszStrucData")) {
++		MsgSetStructuredData(pMsg, (char*) rsCStrGetSzStrNoNULL(pVar->val.pStr));
++		reinitVar(pVar);
++		CHKiRet(objDeserializeProperty(pVar, pStrm));
++	}
+ 	if(isProp("json")) {
+ 		tokener = json_tokener_new();
+ 		pMsg->json = json_tokener_parse_ex(tokener, (char*)rsCStrGetSzStrNoNULL(pVar->val.pStr),
+@@ -1366,11 +1371,6 @@ MsgDeserialize(smsg_t * const pMsg, strm_t *pStrm)
+ 		reinitVar(pVar);
+ 		CHKiRet(objDeserializeProperty(pVar, pStrm));
+ 	}
+-	if(isProp("pszStrucData")) {
+-		MsgSetStructuredData(pMsg, (char*) rsCStrGetSzStrNoNULL(pVar->val.pStr));
+-		reinitVar(pVar);
+-		CHKiRet(objDeserializeProperty(pVar, pStrm));
+-	}
+ 	if(isProp("pCSAPPNAME")) {
+ 		MsgSetAPPNAME(pMsg, (char*) rsCStrGetSzStrNoNULL(pVar->val.pStr));
+ 		reinitVar(pVar);
+@@ -1401,8 +1401,10 @@ MsgDeserialize(smsg_t * const pMsg, strm_t *pStrm)
+ 	 * but on the other hand it works decently AND we will probably replace
+ 	 * the whole persisted format soon in any case. -- rgerhards, 2012-11-06
+ 	 */
+-	if(!isProp("offMSG"))
++	if(!isProp("offMSG")) {
++		DBGPRINTF("error property: %s\n", rsCStrGetSzStrNoNULL(pVar->pcsName));
+ 		ABORT_FINALIZE(RS_RET_DS_PROP_SEQ_ERR);
++	}
+ 	MsgSetMSGoffs(pMsg, pVar->val.num);
+ finalize_it:
+ 	if(pVar != NULL)
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1512551-caching-sockaddr.patch b/SOURCES/rsyslog-8.24.0-rhbz1512551-caching-sockaddr.patch
new file mode 100644
index 0000000..cf36062
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1512551-caching-sockaddr.patch
@@ -0,0 +1,70 @@
+From 5f828658a317c86095fc2b982801b58bf8b8ee6f Mon Sep 17 00:00:00 2001
+From: mosvald <mosvald@redhat.com>
+Date: Mon, 4 Dec 2017 08:10:37 +0100
+Subject: [PATCH] cache sin_addr instead of the whole sockaddr structure
+
+---
+ runtime/dnscache.c | 41 +++++++++++++++++++++++++++++++----------
+ 1 file changed, 31 insertions(+), 10 deletions(-)
+
+diff --git a/runtime/dnscache.c b/runtime/dnscache.c
+index 388a64f5e..d1d6e10a1 100644
+--- a/runtime/dnscache.c
++++ b/runtime/dnscache.c
+@@ -79,22 +79,43 @@ static prop_t *staticErrValue;
+ static unsigned int
+ hash_from_key_fn(void *k) 
+ {
+-    int len;
+-    uchar *rkey = (uchar*) k; /* we treat this as opaque bytes */
+-    unsigned hashval = 1;
+-
+-    len = SALEN((struct sockaddr*)k);
+-    while(len--)
+-        hashval = hashval * 33 + *rkey++;
++	int len = 0;
++	uchar *rkey; /* we treat this as opaque bytes */
++	unsigned hashval = 1;
++
++	switch (((struct sockaddr *)k)->sa_family) {
++		case AF_INET:
++			len = sizeof (struct in_addr);
++			rkey = (uchar*) &(((struct sockaddr_in *)k)->sin_addr);
++			break;
++		case AF_INET6:
++			len = sizeof (struct in6_addr);
++			rkey = (uchar*) &(((struct sockaddr_in6 *)k)->sin6_addr);
++			break;
++	}
++	while(len--)
++		hashval = hashval * 33 + *rkey++;
+ 
+-    return hashval;
++	return hashval;
+ }
+ 
+ static int
+ key_equals_fn(void *key1, void *key2)
+ {
+-	return (SALEN((struct sockaddr*)key1) == SALEN((struct sockaddr*) key2) 
+-		   && !memcmp(key1, key2, SALEN((struct sockaddr*) key1)));
++	int RetVal = 0;
++
++	if (((struct sockaddr *)key1)->sa_family != ((struct sockaddr *)key2)->sa_family)
++		return 0;
++	 switch (((struct sockaddr *)key1)->sa_family) {
++		case AF_INET:
++			RetVal = !memcmp(&((struct sockaddr_in *)key1)->sin_addr, &((struct sockaddr_in *)key2)->sin_addr, sizeof (struct in_addr));
++			break;
++		case AF_INET6:
++			RetVal = !memcmp(&((struct sockaddr_in6 *)key1)->sin6_addr, &((struct sockaddr_in6 *)key2)->sin6_addr, sizeof (struct in6_addr));
++			break;
++	}
++
++	return RetVal;
+ }
+ 
+ /* destruct a cache entry.
+-- 
+2.14.3
+
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1531295-imfile-rewrite-with-symlink.patch b/SOURCES/rsyslog-8.24.0-rhbz1531295-imfile-rewrite-with-symlink.patch
new file mode 100644
index 0000000..d0f5721
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1531295-imfile-rewrite-with-symlink.patch
@@ -0,0 +1,3160 @@
+From: Jiri Vymazal <jvymazal@redhat.com>
+Date: Mon, 28 Jun 2018 15:07:55 +0100
+Subject: Imfile rewrite with symlink support
+
+This commit greatly refactors imfile internal workings. It changes the
+handling of inotify, FEN, and polling modes. Mostly unchanged is the
+processing of the way a file is read and state files are kept.
+
+This is about a 50% rewrite of the module.
+
+Polling, inotify, and FEN modes now use greatly unified code. Some
+differences still exists and may be changed with further commits. The
+internal handling of wildcards and file detection has been completely
+re-written from scratch. For example, previously when multi-level
+wildcards were used these were not reliably detected. The code also
+now provides much of the same functionality in all modes, most importantly
+wildcards are now also supported in polling mode.
+
+The refactoring sets ground for further enhancements and smaller
+refactorings. This commit provides the same feature set that imfile
+had previously.
+
+Some specific changes:
+bugfix: imfile did not pick up all files when not present
+at startup
+
+bugfix: directories only support "*" wildcard, no others
+
+bugfix: parameter "sortfiles" did only work in FEN mode
+
+provides the ability to dynamically add and remove files via
+multi-level wildcards
+
+the state file name has been changed to the inode number
+
+We change it to json and also change the way it is stored and loaded.
+This sets base to additional improvements in imfile.
+
+When imfile rewrites state files, it does not truncate previous
+content. If the new content is smaller than the existing one, the
+existing part will not be overwritten, resulting in invalid json.
+That in turn can lead to some other failures.
+
+This introduces symlink detection and following as well
+as monitoring changes on them.
+
+stream/bugfix: memory leak on stream open if filename was already generated -
+this can happen if imfile reads a state file. On each open, memory for the
+file name can be lost.
+
+(cherry picked from commit a03dccf8484d621fe06cb2d11816fbe921751e54 - https://gitlab.cee.redhat.com/rsyslog/rsyslog)
+---
+ plugins/imfile/imfile.c                          | 2264 ++++++++++++++++++++++---------------------
+ runtime/msg.c                                    |   22 ++++++++++++++++++++++
+ runtime/msg.h                                    |    1 +
+ runtime/stream.c                                 |  136 ++++++++++++++++++++-------
+ runtime/stream.h                                 |   17 ++++++++++++++---
+ 5 files changed, 1303 insertions(+), 1137 deletions(-)
+
+diff --git a/plugins/imfile/imfile.c b/plugins/imfile/imfile.c
+index b0bc860bcd16beaecd67ce1b7c61991356ea5471..f8225d7068d8fc98edde7bbed194be1105b1696b 100644
+--- a/plugins/imfile/imfile.c
++++ b/plugins/imfile/imfile.c
+@@ -35,6 +35,7 @@
+ #include <unistd.h>
+ #include <glob.h>
+ #include <poll.h>
++#include <json.h>
+ #include <fnmatch.h>
+ #ifdef HAVE_SYS_INOTIFY_H
+ #include <sys/inotify.h>
+@@ -56,6 +57,7 @@
+ #include "stringbuf.h"
+ #include "ruleset.h"
+ #include "ratelimit.h"
++#include "parserif.h"
+ 
+ #include <regex.h> // TODO: fix via own module
+ 
+@@ -77,50 +81,19 @@ static int bLegacyCnfModGlobalsPermitted;/* are legacy module-global config para
+ 
+ #define NUM_MULTISUB 1024 /* default max number of submits */
+ #define DFLT_PollInterval 10
+-
+-#define INIT_FILE_TAB_SIZE 4 /* default file table size - is extended as needed, use 2^x value */
+-#define INIT_FILE_IN_DIR_TAB_SIZE 1 /* initial size for "associated files tab" in directory table */
+ #define INIT_WDMAP_TAB_SIZE 1 /* default wdMap table size - is extended as needed, use 2^x value */
+-
+ #define ADD_METADATA_UNSPECIFIED -1
++#define CONST_LEN_CEE_COOKIE 5
++#define CONST_CEE_COOKIE "@cee:"
++
++/* If set to 1, fileTableDisplay will be compiled and used for debugging */
++#define ULTRA_DEBUG 0
++
++/* Setting GLOB_BRACE to ZERO which disables support for GLOB_BRACE if not available on current platform */
++#ifndef GLOB_BRACE
++	#define GLOB_BRACE 0
++#endif
+ 
+-/* this structure is used in pure polling mode as well one of the support
+- * structures for inotify.
+- */
+-typedef struct lstn_s {
+-	struct lstn_s *next, *prev;
+-	struct lstn_s *masterLstn;/* if dynamic file (via wildcard), this points to the configured
+-				 * master entry. For master entries, it is always NULL. Only
+-				 * dynamic files can be deleted from the "files" list. */
+-	uchar *pszFileName;
+-	uchar *pszDirName;
+-	uchar *pszBaseName;
+-	uchar *pszTag;
+-	size_t lenTag;
+-	uchar *pszStateFile; /* file in which state between runs is to be stored (dynamic if NULL) */
+-	int readTimeout;
+-	int iFacility;
+-	int iSeverity;
+-	int maxLinesAtOnce;
+-	uint32_t trimLineOverBytes;
+-	int nRecords; /**< How many records did we process before persisting the stream? */
+-	int iPersistStateInterval; /**< how often should state be persisted? (0=on close only) */
+-	strm_t *pStrm;	/* its stream (NULL if not assigned) */
+-	sbool bRMStateOnDel;
+-	sbool hasWildcard;
+-	uint8_t readMode;	/* which mode to use in ReadMulteLine call? */
+-	uchar *startRegex;	/* regex that signifies end of message (NULL if unset) */
+-	regex_t end_preg;	/* compiled version of startRegex */
+-	uchar *prevLineSegment;	/* previous line segment (in regex mode) */
+-	sbool escapeLF;	/* escape LF inside the MSG content? */
+-	sbool reopenOnTruncate;
+-	sbool addMetadata;
+-	sbool addCeeTag;
+-	sbool freshStartTail; /* read from tail of file on fresh start? */
+-	ruleset_t *pRuleset;	/* ruleset to bind listener to (use system default if unspecified) */
+-	ratelimit_t *ratelimiter;
+-	multi_submit_t multiSub;
+-} lstn_t;
+ 
+ static struct configSettings_s {
+ 	uchar *pszFileName;
+@@ -138,9 +111,11 @@ static struct configSettings_s {
+ 
+ struct instanceConf_s {
+ 	uchar *pszFileName;
++	uchar *pszFileName_forOldStateFile; /* we unfortunately need this to read old state files */
+ 	uchar *pszDirName;
+ 	uchar *pszFileBaseName;
+ 	uchar *pszTag;
++	size_t lenTag;
+ 	uchar *pszStateFile;
+ 	uchar *pszBindRuleset;
+ 	int nMultiSub;
+@@ -151,11 +126,15 @@ struct instanceConf_s {
+ 	sbool bRMStateOnDel;
+ 	uint8_t readMode;
+ 	uchar *startRegex;
++	regex_t end_preg;	/* compiled version of startRegex */
++	sbool discardTruncatedMsg;
++	sbool msgDiscardingError;
+ 	sbool escapeLF;
+ 	sbool reopenOnTruncate;
+ 	sbool addCeeTag;
+ 	sbool addMetadata;
+ 	sbool freshStartTail;
++	sbool fileNotFoundError;
+ 	int maxLinesAtOnce;
+ 	uint32_t trimLineOverBytes;
+ 	ruleset_t *pBindRuleset;	/* ruleset to bind listener to (use system default if unspecified) */
+@@ -163,9 +142,54 @@ struct instanceConf_s {
+ };
+ 
+ 
++/* file system objects */
++typedef struct fs_edge_s fs_edge_t;
++typedef struct fs_node_s fs_node_t;
++typedef struct act_obj_s act_obj_t;
++struct act_obj_s {
++	act_obj_t *prev;
++	act_obj_t *next;
++	fs_edge_t *edge;	/* edge which this object belongs to */
++	char *name;		/* full path name of active object */
++	char *basename;		/* only basename */ //TODO: remove when refactoring rename support
++	char *source_name;  /* if this object is target of a symlink, source_name is its name (else NULL) */
++	//char *statefile;	/* base name of state file (for move operations) */
++	int wd;
++	time_t timeoutBase; /* what time to calculate the timeout against? */
++	/* file dynamic data */
++	int in_move;	/* workaround for inotify move: if set, state file must not be deleted */
++	ino_t ino;	/* current inode nbr */
++	strm_t *pStrm;	/* its stream (NULL if not assigned) */
++	int nRecords; /**< How many records did we process before persisting the stream? */
++	ratelimit_t *ratelimiter;
++	multi_submit_t multiSub;
++	int is_symlink;
++};
++struct fs_edge_s {
++	fs_node_t *parent;
++	fs_node_t *node;	/* node this edge points to */
++	fs_edge_t *next;
++	uchar *name;
++	uchar *path;
++	act_obj_t *active;
++	int is_file;
++	int ninst;	/* nbr of instances in instarr */
++	instanceConf_t **instarr;
++};
++struct fs_node_s {
++	fs_edge_t *edges;
++	fs_node_t *root;
++};
++
++
+ /* forward definitions */
+-static rsRetVal persistStrmState(lstn_t *pInfo);
++static rsRetVal persistStrmState(act_obj_t *);
+ static rsRetVal resetConfigVariables(uchar __attribute__((unused)) *pp, void __attribute__((unused)) *pVal);
++static rsRetVal pollFile(act_obj_t *act);
++static int getBasename(uchar *const __restrict__ basen, uchar *const __restrict__ path);
++static void act_obj_unlink(act_obj_t *act);
++static uchar * getStateFileName(const act_obj_t *, uchar *, const size_t);
++static int getFullStateFileName(const uchar *const, uchar *const pszout, const size_t ilenout);
+ 
+ 
+ #define OPMODE_POLLING 0
+@@ -178,57 +200,23 @@ struct modConfData_s {
+ 	int readTimeout;
+ 	int timeoutGranularity;		/* value in ms */
+ 	instanceConf_t *root, *tail;
+-	lstn_t *pRootLstn;
+-	lstn_t *pTailLstn;
++	fs_node_t *conf_tree;
+ 	uint8_t opMode;
+ 	sbool configSetViaV2Method;
++	sbool sortFiles;
++	sbool normalizePath;	/* normalize file system paths (all start with root dir) */
+ 	sbool haveReadTimeouts;	/* use special processing if read timeouts exist */
++	sbool bHadFileData;	/* actually a global variable:
++				   1 - last call to pollFile() had data
++				   0 - last call to pollFile() had NO data
++				   Must be manually reset to 0 if desired. Helper for
++				   polling mode.
++				 */
+ };
+ static modConfData_t *loadModConf = NULL;/* modConf ptr to use for the current load process */
+ static modConfData_t *runModConf = NULL;/* modConf ptr to use for the current load process */
+ 
+ #ifdef HAVE_INOTIFY_INIT
+-/* support for inotify mode */
+-
+-/* we need to track directories */
+-struct dirInfoFiles_s { /* associated files */
+-	lstn_t *pLstn;
+-	int refcnt;	/* due to inotify's async nature, we may have multiple
+-			 * references to a single file inside our cache - e.g. when
+-			 * inodes are removed, and the file name is re-created BUT another
+-			 * process (like rsyslogd ;)) holds open the old inode.
+-			 */
+-};
+-typedef struct dirInfoFiles_s dirInfoFiles_t;
+-
+-/* This structure is a dynamic table to track file entries */
+-struct fileTable_s {
+-	dirInfoFiles_t *listeners;
+-	int currMax;
+-	int allocMax;
+-};
+-typedef struct fileTable_s fileTable_t;
+-
+-/* The dirs table (defined below) contains one entry for each directory that
+- * is to be monitored. For each directory, it contains array which point to
+- * the associated *active* files as well as *configured* files. Note that
+- * the configured files may currently not exist, but will be processed
+- * when they are created.
+- */
+-struct dirInfo_s {
+-	uchar *dirName;
+-	fileTable_t active; /* associated active files */
+-	fileTable_t configured; /* associated configured files */
+-};
+-typedef struct dirInfo_s dirInfo_t;
+-static dirInfo_t *dirs = NULL;
+-static int allocMaxDirs;
+-static int currMaxDirs;
+-/* the following two macros are used to select the correct file table */
+-#define ACTIVE_FILE 1
+-#define CONFIGURED_FILE 0
+-
+-
+ /* We need to map watch descriptors to our actual objects. Unfortunately, the
+  * inotify API does not provide us with any cookie, so a simple O(1) algorithm
+  * cannot be done (what a shame...). We assume that maintaining the array is much
+@@ -238,9 +226,7 @@ static int currMaxDirs;
+  */
+ struct wd_map_s {
+ 	int wd;		/* ascending sort key */
+-	lstn_t *pLstn;	/* NULL, if this is a dir entry, otherwise pointer into listener(file) table */
+-	int dirIdx;	/* index into dirs table, undefined if pLstn == NULL */
+-	time_t timeoutBase; /* what time to calculate the timeout against? */
++	act_obj_t *act; /* point to related active object */
+ };
+ typedef struct wd_map_s wd_map_t;
+ static wd_map_t *wdmap = NULL;
+@@ -257,6 +243,8 @@ static struct cnfparamdescr modpdescr[] = {
+ 	{ "pollinginterval", eCmdHdlrPositiveInt, 0 },
+ 	{ "readtimeout", eCmdHdlrPositiveInt, 0 },
+ 	{ "timeoutgranularity", eCmdHdlrPositiveInt, 0 },
++	{ "sortfiles", eCmdHdlrBinary, 0 },
++	{ "normalizepath", eCmdHdlrBinary, 0 },
+ 	{ "mode", eCmdHdlrGetWord, 0 }
+ };
+ static struct cnfparamblk modpblk =
+@@ -286,7 +274,8 @@ static struct cnfparamdescr inppdescr[] = {
+ 	{ "addceetag", eCmdHdlrBinary, 0 },
+ 	{ "statefile", eCmdHdlrString, CNFPARAM_DEPRECATED },
+ 	{ "readtimeout", eCmdHdlrPositiveInt, 0 },
+-	{ "freshstarttail", eCmdHdlrBinary, 0}
++	{ "freshstarttail", eCmdHdlrBinary, 0},
++	{ "filenotfounderror", eCmdHdlrBinary, 0}
+ };
+ static struct cnfparamblk inppblk =
+ 	{ CNFPARAMBLK_VERSION,
+@@ -297,18 +286,106 @@ static struct cnfparamblk inppblk =
+ #include "im-helper.h" /* must be included AFTER the type definitions! */
+ 
+ 
+-#ifdef HAVE_INOTIFY_INIT
+-/* support for inotify mode */
++/* Support for "old cruft" state files will potentially become optional in the
++ * future (hopefully). To prepare so, we use conditional compilation with a
++ * fixed-true condition ;-) -- rgerhards, 2018-03-28
++ * reason: https://github.com/rsyslog/rsyslog/issues/2231#issuecomment-376862280
++ */
++#define ENABLE_V1_STATE_FILE_FORMAT_SUPPORT 1
++#ifdef ENABLE_V1_STATE_FILE_FORMAT_SUPPORT
++static uchar *
++OLD_getStateFileName(const instanceConf_t *const inst,
++	 uchar *const __restrict__ buf,
++	 const size_t lenbuf)
++{
++	DBGPRINTF("OLD_getStateFileName trying '%s'\n", inst->pszFileName_forOldStateFile);
++	snprintf((char*)buf, lenbuf - 1, "imfile-state:%s", inst->pszFileName_forOldStateFile);
++	buf[lenbuf-1] = '\0'; /* be on the safe side... */
++	uchar *p = buf;
++	for( ; *p ; ++p) {
++		if(*p == '/')
++			*p = '-';
++	}
++	return buf;
++}
+ 
+-#if 0 /* enable if you need this for debugging */
++/* try to open an old-style state file for given file. If the state file does not
++ * exist or cannot be read, an error is returned.
++ */
++static rsRetVal
++OLD_openFileWithStateFile(act_obj_t *const act)
++{
++	DEFiRet;
++	strm_t *psSF = NULL;
++	uchar pszSFNam[MAXFNAME];
++	size_t lenSFNam;
++	struct stat stat_buf;
++	uchar statefile[MAXFNAME];
++	const instanceConf_t *const inst = act->edge->instarr[0];// TODO: same file, multiple instances?
++
++	uchar *const statefn = OLD_getStateFileName(inst, statefile, sizeof(statefile));
++	DBGPRINTF("OLD_openFileWithStateFile: trying to open state for '%s', state file '%s'\n",
++		  act->name, statefn);
++
++	/* Get full path and file name */
++	lenSFNam = getFullStateFileName(statefn, pszSFNam, sizeof(pszSFNam));
++
++	/* check if the file exists */
++	if(stat((char*) pszSFNam, &stat_buf) == -1) {
++		if(errno == ENOENT) {
++			DBGPRINTF("OLD_openFileWithStateFile: NO state file (%s) exists for '%s'\n",
++				pszSFNam, act->name);
++			ABORT_FINALIZE(RS_RET_FILE_NOT_FOUND);
++		} else {
++			char errStr[1024];
++			rs_strerror_r(errno, errStr, sizeof(errStr));
++			DBGPRINTF("OLD_openFileWithStateFile: error trying to access state "
++				"file for '%s':%s\n", act->name, errStr);
++			ABORT_FINALIZE(RS_RET_IO_ERROR);
++		}
++	}
++
++	/* If we reach this point, we have a state file */
++
++	DBGPRINTF("old state file found - instantiating from it\n");
++	CHKiRet(strm.Construct(&psSF));
++	CHKiRet(strm.SettOperationsMode(psSF, STREAMMODE_READ));
++	CHKiRet(strm.SetsType(psSF, STREAMTYPE_FILE_SINGLE));
++	CHKiRet(strm.SetFName(psSF, pszSFNam, lenSFNam));
++	CHKiRet(strm.SetFileNotFoundError(psSF, inst->fileNotFoundError));
++	CHKiRet(strm.ConstructFinalize(psSF));
++
++	/* read back in the object */
++	CHKiRet(obj.Deserialize(&act->pStrm, (uchar*) "strm", psSF, NULL, act));
++	free(act->pStrm->pszFName);
++	CHKmalloc(act->pStrm->pszFName = ustrdup(act->name));
++
++	strm.CheckFileChange(act->pStrm);
++	CHKiRet(strm.SeekCurrOffs(act->pStrm));
++
++	/* we now persist the new state file and delete the old one, so we will
++	 * never have to deal with the old one. */
++	persistStrmState(act);
++	unlink((char*)pszSFNam);
++
++finalize_it:
++	if(psSF != NULL)
++		strm.Destruct(&psSF);
++	RETiRet;
++}
++#endif /* #ifdef ENABLE_V1_STATE_FILE_FORMAT_SUPPORT */
++
++
++#ifdef HAVE_INOTIFY_INIT
++#if ULTRA_DEBUG == 1
+ static void
+-dbg_wdmapPrint(char *msg)
++dbg_wdmapPrint(const char *msg)
+ {
+ 	int i;
+ 	DBGPRINTF("%s\n", msg);
+ 	for(i = 0 ; i < nWdmap ; ++i)
+-		DBGPRINTF("wdmap[%d]: wd: %d, file %d, dir %d\n", i,
+-			  wdmap[i].wd, wdmap[i].fIdx, wdmap[i].dirIdx);
++		DBGPRINTF("wdmap[%d]: wd: %d, act %p, name: %s\n",
++			i, wdmap[i].wd, wdmap[i].act, wdmap[i].act->name);
+ }
+ #endif
+ 
+@@ -324,48 +401,10 @@ finalize_it:
+ 	RETiRet;
+ }
+ 
+-/* looks up a wdmap entry by dirIdx and returns it's index if found
+- * or -1 if not found.
+- */
+-static int
+-wdmapLookupListner(lstn_t* pLstn)
+-{
+-	int i = 0;
+-	int wd = -1;
+-	/* Loop through */
+-	for(i = 0 ; i < nWdmap; ++i) {
+-		if (wdmap[i].pLstn == pLstn)
+-			wd = wdmap[i].wd;
+-	}
+-
+-	return wd;
+-}
+-
+-/* compare function for bsearch() */
+-static int
+-wdmap_cmp(const void *k, const void *a)
+-{
+-	int key = *((int*) k);
+-	wd_map_t *etry = (wd_map_t*) a;
+-	if(key < etry->wd)
+-		return -1;
+-	else if(key > etry->wd)
+-		return 1;
+-	else
+-		return 0;
+-}
+-/* looks up a wdmap entry and returns it's index if found
+- * or -1 if not found.
+- */
+-static wd_map_t *
+-wdmapLookup(int wd)
+-{
+-	return bsearch(&wd, wdmap, nWdmap, sizeof(wd_map_t), wdmap_cmp);
+-}
+ 
+ /* note: we search backwards, as inotify tends to return increasing wd's */
+ static rsRetVal
+-wdmapAdd(int wd, const int dirIdx, lstn_t *const pLstn)
++wdmapAdd(int wd, act_obj_t *const act)
+ {
+ 	wd_map_t *newmap;
+ 	int newmapsize;
+@@ -375,7 +414,7 @@ wdmapAdd(int wd, const int dirIdx, lstn_t *const pLstn)
+ 	for(i = nWdmap-1 ; i >= 0 && wdmap[i].wd > wd ; --i)
+ 		; 	/* just scan */
+ 	if(i >= 0 && wdmap[i].wd == wd) {
+-		DBGPRINTF("imfile: wd %d already in wdmap!\n", wd);
++		LogError(0, RS_RET_INTERNAL_ERROR, "imfile: wd %d already in wdmap!", wd);
+ 		ABORT_FINALIZE(RS_RET_FILE_ALREADY_IN_TABLE);
+ 	}
+ 	++i;
+@@ -392,17 +431,59 @@ wdmapAdd(int wd, const int dirIdx, lstn_t *const pLstn)
+ 		memmove(wdmap + i + 1, wdmap + i, sizeof(wd_map_t) * (nWdmap - i));
+ 	}
+ 	wdmap[i].wd = wd;
+-	wdmap[i].dirIdx = dirIdx;
+-	wdmap[i].pLstn = pLstn;
++	wdmap[i].act = act;
+ 	++nWdmap;
+-	DBGPRINTF("imfile: enter into wdmap[%d]: wd %d, dir %d, lstn %s:%s\n",i,wd,dirIdx,
+-		  (pLstn == NULL) ? "DIRECTORY" : "FILE",
+-	          (pLstn == NULL) ? dirs[dirIdx].dirName : pLstn->pszFileName);
++	DBGPRINTF("add wdmap[%d]: wd %d, act obj %p, path %s\n", i, wd, act, act->name);
+ 
+ finalize_it:
+ 	RETiRet;
+ }
+ 
++/* return wd or -1 on error */
++static int
++in_setupWatch(act_obj_t *const act, const int is_file)
++{
++	int wd = -1;
++	if(runModConf->opMode != OPMODE_INOTIFY)
++		goto done;
++
++	wd = inotify_add_watch(ino_fd, act->name,
++		(is_file) ? IN_MODIFY|IN_DONT_FOLLOW : IN_CREATE|IN_DELETE|IN_MOVED_FROM|IN_MOVED_TO);
++	if(wd < 0) { /* There is high probability of selinux denial on top-level paths */
++		if (errno != EACCES)
++			LogError(errno, RS_RET_IO_ERROR, "imfile: cannot watch object '%s'", act->name);
++		else
++			DBGPRINTF("Access denied when creating watch on '%s'\n", act->name);
++		goto done;
++	}
++	wdmapAdd(wd, act);
++	DBGPRINTF("in_setupWatch: watch %d added for %s(object %p)\n", wd, act->name, act);
++done:	return wd;
++}
++
++/* compare function for bsearch() */
++static int
++wdmap_cmp(const void *k, const void *a)
++{
++	int key = *((int*) k);
++	wd_map_t *etry = (wd_map_t*) a;
++	if(key < etry->wd)
++		return -1;
++	else if(key > etry->wd)
++		return 1;
++	else
++		return 0;
++}
++/* looks up a wdmap entry by watch descriptor and returns a pointer
++ * to it, or NULL if not found.
++ */
++static wd_map_t *
++wdmapLookup(int wd)
++{
++	return bsearch(&wd, wdmap, nWdmap, sizeof(wd_map_t), wdmap_cmp);
++}
++
++
+ static rsRetVal
+ wdmapDel(const int wd)
+ {
+@@ -427,46 +506,570 @@ finalize_it:
+ 	RETiRet;
+ }
+ 
+-#endif /* #if HAVE_INOTIFY_INIT */
++#endif // #ifdef HAVE_INOTIFY_INIT
++
++static void
++fen_setupWatch(act_obj_t *const __attribute__((unused)) act)
++{
++	DBGPRINTF("fen_setupWatch: DUMMY CALLED - not on Solaris?\n");
++}
++
++static void
++fs_node_print(const fs_node_t *const node, const int level)
++{
++	fs_edge_t *chld;
++	act_obj_t *act;
++	dbgprintf("node print[%2.2d]: %p edges:\n", level, node);
++
++	for(chld = node->edges ; chld != NULL ; chld = chld->next) {
++		dbgprintf("node print[%2.2d]:     child %p '%s' isFile %d, path: '%s'\n",
++			level, chld->node, chld->name, chld->is_file, chld->path);
++		for(int i = 0 ; i < chld->ninst ; ++i) {
++			dbgprintf("\tinst: %p\n", chld->instarr[i]);
++		}
++		for(act = chld->active ; act != NULL ; act = act->next) {
++			dbgprintf("\tact : %p\n", act);
++			dbgprintf("\tact : %p: name '%s', wd: %d\n",
++				act, act->name, act->wd);
++		}
++	}
++	for(chld = node->edges ; chld != NULL ; chld = chld->next) {
++		fs_node_print(chld->node, level+1);
++	}
++}
++
++/* add a new file system object if it does not yet exist; ignore the
++ * call if it already does.
++ */
++static rsRetVal
++act_obj_add(fs_edge_t *const edge, const char *const name, const int is_file,
++	const ino_t ino, const int is_symlink, const char *const source)
++{
++	act_obj_t *act;
++	char basename[MAXFNAME];
++	DEFiRet;
++	
++	DBGPRINTF("act_obj_add: edge %p, name '%s' (source '%s')\n", edge, name, source? source : "---");
++	for(act = edge->active ; act != NULL ; act = act->next) {
++		if(!strcmp(act->name, name)) {
++			if(!source || !act->source_name || !strcmp(act->source_name, source)) {
++				DBGPRINTF("active object '%s' already exists in '%s' - no need to add\n",
++					name, edge->path);
++				FINALIZE;
++			}
++		}
++	}
++	DBGPRINTF("add new active object '%s' in '%s'\n", name, edge->path);
++	CHKmalloc(act = calloc(sizeof(act_obj_t), 1));
++	CHKmalloc(act->name = strdup(name));
++	if(-1 == getBasename((uchar*)basename, (uchar*)name)) {
++		CHKmalloc(act->basename = strdup(name)); /* assume basename is same as name */
++	} else {
++		CHKmalloc(act->basename = strdup(basename));
++	}
++	act->edge = edge;
++	act->ino = ino;
++	act->is_symlink = is_symlink;
++	if(source) { /* we are target of symlink */
++		CHKmalloc(act->source_name = strdup(source));
++	} else {
++		act->source_name = NULL;
++	}
++	#ifdef HAVE_INOTIFY_INIT
++	act->wd = in_setupWatch(act, is_file);
++	#endif
++	fen_setupWatch(act);
++	if(is_file && !is_symlink) {
++		const instanceConf_t *const inst = edge->instarr[0];// TODO: same file, multiple instances?
++		CHKiRet(ratelimitNew(&act->ratelimiter, "imfile", name));
++		CHKmalloc(act->multiSub.ppMsgs = MALLOC(inst->nMultiSub * sizeof(smsg_t *)));
++		act->multiSub.maxElem = inst->nMultiSub;
++		act->multiSub.nElem = 0;
++		pollFile(act);
++	}
++
++	/* all well, add to active list */
++	if(edge->active != NULL) {
++		edge->active->prev = act;
++	}
++	act->next = edge->active;
++	edge->active = act;
++//dbgprintf("printout of fs tree after act_obj_add for '%s'\n", name);
++//fs_node_print(runModConf->conf_tree, 0);
++//dbg_wdmapPrint("wdmap after act_obj_add");
++finalize_it:
++	if(iRet != RS_RET_OK) {
++		if(act != NULL) {
++			free(act->name);
++			free(act);
++		}
++	}
++	RETiRet;
++}
++
++
++/* this walks an edges active list and detects and acts on any changes
++ * seen there. It does NOT detect newly appeared files, as they are not
++ * inside the active list!
++ */
++static void
++detect_updates(fs_edge_t *const edge)
++{
++	act_obj_t *act;
++	struct stat fileInfo;
++	int restart = 0;
++
++	for(act = edge->active ; act != NULL ; ) {
++		DBGPRINTF("detect_updates checking active obj '%s'\n", act->name);
++		const int r = lstat(act->name, &fileInfo);
++		if(r == -1) { /* object gone away? */
++			DBGPRINTF("object gone away, unlinking: '%s'\n", act->name);
++			act_obj_unlink(act);
++			restart = 1;
++			break;
++		}
++		// TODO: add inode check for change notification!
++
++		/* Note: active nodes may get deleted, so we need to do the
++		 * pointer advancement at the end of the for loop!
++		 */
++		act = act->next;
++	}
++	if (restart)
++		detect_updates(edge);
++}
++
++
++/* check if active files need to be processed. This is only needed in
++ * polling mode.
++ */
++static void
++poll_active_files(fs_edge_t *const edge)
++{
++	if(   runModConf->opMode != OPMODE_POLLING
++	   || !edge->is_file
++	   || glbl.GetGlobalInputTermState() != 0) {
++		return;
++	}
++
++	act_obj_t *act;
++	for(act = edge->active ; act != NULL ; act = act->next) {
++		fen_setupWatch(act);
++		DBGPRINTF("poll_active_files: polling '%s'\n", act->name);
++		pollFile(act);
++	}
++}
++
++static rsRetVal
++process_symlink(fs_edge_t *const chld, const char *symlink)
++{
++	DEFiRet;
++	char *target = NULL;
++	CHKmalloc(target = realpath(symlink, target));
++	struct stat fileInfo;
++	if(lstat(target, &fileInfo) != 0) {
++		LogError(errno, RS_RET_ERR, "imfile: process_symlink cannot stat file '%s' - ignored", target);
++		FINALIZE;
++	}
++	const int is_file = (S_ISREG(fileInfo.st_mode));
++	DBGPRINTF("process_symlink: found '%s', File: %d (config file: %d), symlink: %d\n",
++		target, is_file, chld->is_file, 0);
++	if (act_obj_add(chld, target, is_file, fileInfo.st_ino, 0, symlink) == RS_RET_OK) {
++		/* need to watch parent target as well for proper rotation support */
++		uint idx = ustrlen(chld->active->name) - ustrlen(chld->active->basename);
++		if (idx) { /* basename is different from name */
++			char parent[MAXFNAME];
++			memcpy(parent, chld->active->name, idx-1);
++			parent[idx-1] = '\0';
++			if(lstat(parent, &fileInfo) != 0) {
++				LogError(errno, RS_RET_ERR,
++						 "imfile: process_symlink: cannot stat directory '%s' - ignored", parent);
++				FINALIZE;
++			}
++			DBGPRINTF("process_symlink:	adding parent '%s' of target '%s'\n", parent, target);
++			act_obj_add(chld->parent->root->edges, parent, 0, fileInfo.st_ino, 0, NULL);
++		}
++	}
++
++finalize_it:
++	free(target);
++	RETiRet;
++}
++
++static void
++poll_tree(fs_edge_t *const chld)
++{
++	struct stat fileInfo;
++	glob_t files;
++	int issymlink;
++	DBGPRINTF("poll_tree: chld %p, name '%s', path: %s\n", chld, chld->name, chld->path);
++	detect_updates(chld);
++	const int ret = glob((char*)chld->path, runModConf->sortFiles|GLOB_BRACE, NULL, &files);
++	DBGPRINTF("poll_tree: glob returned %d\n", ret);
++	if(ret == 0) {
++		DBGPRINTF("poll_tree: processing %d files\n", (int) files.gl_pathc);
++		for(unsigned i = 0 ; i < files.gl_pathc ; i++) {
++			if(glbl.GetGlobalInputTermState() != 0) {
++				goto done;
++			}
++			char *const file = files.gl_pathv[i];
++			if(lstat(file, &fileInfo) != 0) {
++				LogError(errno, RS_RET_ERR,
++					"imfile: poll_tree cannot stat file '%s' - ignored", file);
++				continue;
++			}
++
++			if (S_ISLNK(fileInfo.st_mode)) {
++				rsRetVal slink_ret = process_symlink(chld, file);
++				if (slink_ret != RS_RET_OK) {
++					continue;
++				}
++				issymlink = 1;
++			} else {
++				issymlink = 0;
++			}
++			const int is_file = (S_ISREG(fileInfo.st_mode) || issymlink);
++			DBGPRINTF("poll_tree:  found '%s', File: %d (config file: %d), symlink: %d\n",
++				file, is_file, chld->is_file, issymlink);
++			if(!is_file && !S_ISDIR(fileInfo.st_mode)) {
++				LogMsg(0, RS_RET_ERR, LOG_WARNING,
++					"imfile: '%s' is neither a regular file, symlink, nor a "
++					"directory - ignored", file);
++				continue;
++			}
++			if(chld->is_file != is_file) {
++				LogMsg(0, RS_RET_ERR, LOG_WARNING,
++					"imfile: '%s' is %s but %s expected - ignored",
++					file, (is_file) ? "FILE" : "DIRECTORY",
++					(chld->is_file) ? "FILE" : "DIRECTORY");
++				continue;
++			}
++			act_obj_add(chld, file, is_file, fileInfo.st_ino, issymlink, NULL);
++		}
++		globfree(&files);
++	}
++
++	poll_active_files(chld);
++
++done:	return;
++}
++
++#ifdef HAVE_INOTIFY_INIT // TODO: shouldn't we use that in polling as well?
++static void
++poll_timeouts(fs_edge_t *const edge)
++{
++	if(edge->is_file) {
++		act_obj_t *act;
++		for(act = edge->active ; act != NULL ; act = act->next) {
++			if(strmReadMultiLine_isTimedOut(act->pStrm)) {
++				DBGPRINTF("timeout occurred on %s\n", act->name);
++				pollFile(act);
++			}
++		}
++	}
++}
++#endif
++
++
++/* destruct a single act_obj object */
++static void
++act_obj_destroy(act_obj_t *const act, const int is_deleted)
++{
++	uchar *statefn;
++	uchar statefile[MAXFNAME];
++	uchar toDel[MAXFNAME];
++
++	if(act == NULL)
++		return;
++
++	DBGPRINTF("act_obj_destroy: act %p '%s', (source '%s'), wd %d, pStrm %p, is_deleted %d, in_move %d\n",
++		act, act->name, act->source_name? act->source_name : "---", act->wd, act->pStrm, is_deleted, act->in_move);
++	if(act->is_symlink && is_deleted) {
++		act_obj_t *target_act;
++		for(target_act = act->edge->active ; target_act != NULL ; target_act = target_act->next) {
++			if(target_act->source_name && !strcmp(target_act->source_name, act->name)) {
++				DBGPRINTF("act_obj_destroy: unlinking slink target %s of %s "
++						"symlink\n", target_act->name, act->name);
++				act_obj_unlink(target_act);
++				break;
++			}
++		}
++	}
++	if(act->ratelimiter != NULL) {
++		ratelimitDestruct(act->ratelimiter);
++	}
++	if(act->pStrm != NULL) {
++		const instanceConf_t *const inst = act->edge->instarr[0];// TODO: same file, multiple instances?
++		pollFile(act); /* get any left-over data */
++		if(inst->bRMStateOnDel) {
++			statefn = getStateFileName(act, statefile, sizeof(statefile));
++			getFullStateFileName(statefn, toDel, sizeof(toDel));
++			statefn = toDel;
++		}
++		persistStrmState(act);
++		strm.Destruct(&act->pStrm);
++		/* we delete state file after destruct in case strm obj initiated a write */
++		if(is_deleted && !act->in_move && inst->bRMStateOnDel) {
++			DBGPRINTF("act_obj_destroy: deleting state file %s\n", statefn);
++			unlink((char*)statefn);
++		}
++	}
++	#ifdef HAVE_INOTIFY_INIT
++	if(act->wd != -1) {
++		wdmapDel(act->wd);
++	}
++	#endif
++	#if defined(OS_SOLARIS) && defined (HAVE_PORT_SOURCE_FILE)
++	if(act->pfinf != NULL) {
++		free(act->pfinf->fobj.fo_name);
++		free(act->pfinf);
++	}
++	#endif
++	free(act->basename);
++	free(act->source_name);
++	//free(act->statefile);
++	free(act->multiSub.ppMsgs);
++	#if defined(OS_SOLARIS) && defined (HAVE_PORT_SOURCE_FILE)
++		act->is_deleted = 1;
++	#else
++		free(act->name);
++		free(act);
++	#endif
++}
++
+ 
++/* destroy complete act list starting at given node */
++static void
++act_obj_destroy_all(act_obj_t *act)
++{
++	if(act == NULL)
++		return;
++
++	DBGPRINTF("act_obj_destroy_all: act %p '%s', wd %d, pStrm %p\n", act, act->name, act->wd, act->pStrm);
++	while(act != NULL) {
++		act_obj_t *const toDel = act;
++		act = act->next;
++		act_obj_destroy(toDel, 0);
++	}
++}
++
++#if 0
++/* debug: find if ptr is still present in list */
++static void
++chk_active(const act_obj_t *act, const act_obj_t *const deleted)
++{
++	while(act != NULL) {
++		DBGPRINTF("chk_active %p vs %p\n", act, deleted);
++		if(act->prev == deleted)
++			DBGPRINTF("chk_active %p prev points to %p\n", act, deleted);
++		if(act->next == deleted)
++			DBGPRINTF("chk_active %p next points to %p\n", act, deleted);
++		act = act->next;
++		DBGPRINTF("chk_active next %p\n", act);
++	}
++}
++#endif
++
++/* unlink act object from linked list and then
++ * destruct it.
++ */
++static void
++act_obj_unlink(act_obj_t *act)
++{
++	DBGPRINTF("act_obj_unlink %p: %s\n", act, act->name);
++	if(act->prev == NULL) {
++		act->edge->active = act->next;
++	} else {
++		act->prev->next = act->next;
++	}
++	if(act->next != NULL) {
++		act->next->prev = act->prev;
++	}
++	act_obj_destroy(act, 1);
++	act = NULL;
++//dbgprintf("printout of fs tree post unlink\n");
++//fs_node_print(runModConf->conf_tree, 0);
++//dbg_wdmapPrint("wdmap after");
++}
++
++static void
++fs_node_destroy(fs_node_t *const node)
++{
++	fs_edge_t *edge;
++	DBGPRINTF("node destroy: %p edges:\n", node);
++
++	for(edge = node->edges ; edge != NULL ; ) {
++		fs_node_destroy(edge->node);
++		fs_edge_t *const toDel = edge;
++		edge = edge->next;
++		act_obj_destroy_all(toDel->active);
++		free(toDel->name);
++		free(toDel->path);
++		free(toDel->instarr);
++		free(toDel);
++	}
++	free(node);
++}
++
++static void
++fs_node_walk(fs_node_t *const node,
++	void (*f_usr)(fs_edge_t*const))
++{
++	DBGPRINTF("node walk: %p edges:\n", node);
++
++	fs_edge_t *edge;
++	for(edge = node->edges ; edge != NULL ; edge = edge->next) {
++		DBGPRINTF("node walk: child %p '%s'\n", edge->node, edge->name);
++		f_usr(edge);
++		fs_node_walk(edge->node, f_usr);
++	}
++}
++
++
++
++/* add a file system object to config tree (or update existing node with new monitor)
++ */
++static rsRetVal
++fs_node_add(fs_node_t *const node, fs_node_t *const source,
++	const uchar *const toFind,
++	const size_t pathIdx,
++	instanceConf_t *const inst)
++{
++	DEFiRet;
++	fs_edge_t *newchld = NULL;
++	int i;
++
++	DBGPRINTF("fs_node_add(%p, '%s') enter, idx %zd\n",
++		node, toFind+pathIdx, pathIdx);
++	assert(toFind[0] != '\0');
++	for(i = pathIdx ; (toFind[i] != '\0') && (toFind[i] != '/') ; ++i)
++		/*JUST SKIP*/;
++	const int isFile = (toFind[i] == '\0') ? 1 : 0;
++	uchar ourPath[PATH_MAX];
++	if(i == 0) {
++		ourPath[0] = '/';
++		ourPath[1] = '\0';
++	} else {
++		memcpy(ourPath, toFind, i);
++		ourPath[i] = '\0';
++	}
++	const size_t nextPathIdx = i+1;
++	const size_t len = i - pathIdx;
++	uchar name[PATH_MAX];
++	memcpy(name, toFind+pathIdx, len);
++	name[len] = '\0';
++	DBGPRINTF("fs_node_add: name '%s'\n", name); node->root = source;
++
++	fs_edge_t *chld;
++	for(chld = node->edges ; chld != NULL ; chld = chld->next) {
++		if(!ustrcmp(chld->name, name)) {
++			DBGPRINTF("fs_node_add(%p, '%s') found '%s'\n", chld->node, toFind, name);
++			/* add new instance */
++			chld->ninst++;
++			CHKmalloc(chld->instarr = realloc(chld->instarr, sizeof(instanceConf_t*) * chld->ninst));
++			chld->instarr[chld->ninst-1] = inst;
++			/* recurse */
++			if(!isFile) {
++				CHKiRet(fs_node_add(chld->node, node, toFind, nextPathIdx, inst));
++			}
++			FINALIZE;
++		}
++	}
++
++	/* could not find node --> add it */
++	DBGPRINTF("fs_node_add(%p, '%s') did not find '%s' - adding it\n",
++		node, toFind, name);
++	CHKmalloc(newchld = calloc(sizeof(fs_edge_t), 1));
++	CHKmalloc(newchld->name = ustrdup(name));
++	CHKmalloc(newchld->node = calloc(sizeof(fs_node_t), 1));
++	CHKmalloc(newchld->path = ustrdup(ourPath));
++	CHKmalloc(newchld->instarr = calloc(sizeof(instanceConf_t*), 1));
++	newchld->instarr[0] = inst;
++	newchld->is_file = isFile;
++	newchld->ninst = 1;
++	newchld->parent = node;
++
++	DBGPRINTF("fs_node_add(%p, '%s') returns %p\n", node, toFind, newchld->node);
++
++	if(!isFile) {
++		CHKiRet(fs_node_add(newchld->node, node, toFind, nextPathIdx, inst));
++	}
++
++	/* link to list */
++	newchld->next = node->edges;
++	node->edges = newchld;
++finalize_it:
++	if(iRet != RS_RET_OK) {
++		if(newchld != NULL) {
++		free(newchld->name);
++		free(newchld->node);
++		free(newchld->path);
++		free(newchld->instarr);
++		free(newchld);
++		}
++	}
++	RETiRet;
++}
++
++/* Helper function to combine statefile and workdir
++ * This function is guaranteed to work only on config data and DOES NOT
++ * open or otherwise modify disk file state.
++ */
++static int
++getFullStateFileName(const uchar *const pszstatefile, uchar *const pszout, const size_t ilenout)
++{
++	int lenout;
++	const uchar* pszworkdir;
+ 
+-/* this generates a state file name suitable for the current file. To avoid
++	/* Get raw workdir; if it is NULL we need to handle it properly */
++	pszworkdir = glblGetWorkDirRaw();
++
++	/* Construct file name */
++	lenout = snprintf((char*)pszout, ilenout, "%s/%s",
++			     (char*) (pszworkdir == NULL ? "." : (char*) pszworkdir), (char*)pszstatefile);
++
++	/* return out length */
++	return lenout;
++}
++
++
++/* this generates a state file name suitable for the given file. To avoid
+  * malloc calls, it must be passed a buffer which should be MAXFNAME large.
+  * Note: the buffer is not necessarily populated ... always ONLY use the
+  * RETURN VALUE!
++ * This function is guaranteed to work only on config data and DOES NOT
++ * open or otherwise modify disk file state.
+  */
+ static uchar *
+-getStateFileName(lstn_t *const __restrict__ pLstn,
++getStateFileName(const act_obj_t *const act,
+ 	 	 uchar *const __restrict__ buf,
+ 		 const size_t lenbuf)
+ {
+-	uchar *ret;
+-	if(pLstn->pszStateFile == NULL) {
+-		snprintf((char*)buf, lenbuf - 1, "imfile-state:%s", pLstn->pszFileName);
+-		buf[lenbuf-1] = '\0'; /* be on the safe side... */
+-		uchar *p = buf;
+-		for( ; *p ; ++p) {
+-			if(*p == '/')
+-				*p = '-';
+-		}
+-		ret = buf;
+-	} else {
+-		ret = pLstn->pszStateFile;
+-	}
+-	return ret;
++	DBGPRINTF("getStateFileName for '%s'\n", act->name);
++	snprintf((char*)buf, lenbuf - 1, "imfile-state:%lld", (long long) act->ino);
++	DBGPRINTF("getStateFileName: state file name now is %s\n", buf);
++	return buf;
+ }
+ 
+ 
+ /* enqueue the read file line as a message. The provided string is
+- * not freed - thuis must be done by the caller.
++ * not freed - this must be done by the caller.
+  */
+-static rsRetVal enqLine(lstn_t *const __restrict__ pLstn,
+-                        cstr_t *const __restrict__ cstrLine)
++#define MAX_OFFSET_REPRESENTATION_NUM_BYTES 20
++static rsRetVal
++enqLine(act_obj_t *const act,
++	cstr_t *const __restrict__ cstrLine,
++	const int64 strtOffs)
+ {
+ 	DEFiRet;
++	const instanceConf_t *const inst = act->edge->instarr[0];// TODO: same file, multiple instances?
+ 	smsg_t *pMsg;
++	uchar file_offset[MAX_OFFSET_REPRESENTATION_NUM_BYTES+1];
++	const uchar *metadata_names[2] = {(uchar *)"filename", (uchar *)"fileoffset"};
++	const uchar *metadata_values[2];
++	const size_t msgLen = cstrLen(cstrLine);
+ 
+-	if(rsCStrLen(cstrLine) == 0) {
++	if(msgLen == 0) {
+ 		/* we do not process empty lines */
+ 		FINALIZE;
+ 	}
+@@ -474,27 +1180,34 @@ static rsRetVal enqLine(lstn_t *const __restrict__ pLstn,
+ 	CHKiRet(msgConstruct(&pMsg));
+ 	MsgSetFlowControlType(pMsg, eFLOWCTL_FULL_DELAY);
+ 	MsgSetInputName(pMsg, pInputName);
+-	if (pLstn->addCeeTag) {
+-		size_t msgLen = cstrLen(cstrLine);
+-		const char *const ceeToken = "@cee:";
+-		size_t ceeMsgSize = msgLen + strlen(ceeToken) +1;
++	if(inst->addCeeTag) {
++		/* Make sure we account for terminating null byte */
++		size_t ceeMsgSize = msgLen + CONST_LEN_CEE_COOKIE + 1;
+ 		char *ceeMsg;
+ 		CHKmalloc(ceeMsg = MALLOC(ceeMsgSize));
+-		strcpy(ceeMsg, ceeToken);
++		strcpy(ceeMsg, CONST_CEE_COOKIE);
+ 		strcat(ceeMsg, (char*)rsCStrGetSzStrNoNULL(cstrLine));
+ 		MsgSetRawMsg(pMsg, ceeMsg, ceeMsgSize);
+ 		free(ceeMsg);
+ 	} else {
+-		MsgSetRawMsg(pMsg, (char*)rsCStrGetSzStrNoNULL(cstrLine), cstrLen(cstrLine));
++		MsgSetRawMsg(pMsg, (char*)rsCStrGetSzStrNoNULL(cstrLine), msgLen);
+ 	}
+ 	MsgSetMSGoffs(pMsg, 0);	/* we do not have a header... */
+ 	MsgSetHOSTNAME(pMsg, glbl.GetLocalHostName(), ustrlen(glbl.GetLocalHostName()));
+-	MsgSetTAG(pMsg, pLstn->pszTag, pLstn->lenTag);
+-	msgSetPRI(pMsg, pLstn->iFacility | pLstn->iSeverity);
+-	MsgSetRuleset(pMsg, pLstn->pRuleset);
+-	if(pLstn->addMetadata)
+-		msgAddMetadata(pMsg, (uchar*)"filename", pLstn->pszFileName);
+-	ratelimitAddMsg(pLstn->ratelimiter, &pLstn->multiSub, pMsg);
++	MsgSetTAG(pMsg, inst->pszTag, inst->lenTag);
++	msgSetPRI(pMsg, inst->iFacility | inst->iSeverity);
++	MsgSetRuleset(pMsg, inst->pBindRuleset);
++	if(inst->addMetadata) {
++		if (act->source_name) {
++			metadata_values[0] = (const uchar*)act->source_name;
++		} else {
++			metadata_values[0] = (const uchar*)act->name;
++		}
++		snprintf((char *)file_offset, MAX_OFFSET_REPRESENTATION_NUM_BYTES+1, "%lld", strtOffs);
++		metadata_values[1] = file_offset;
++		msgAddMultiMetadata(pMsg, metadata_names, metadata_values, 2);
++	}
++	ratelimitAddMsg(act->ratelimiter, &act->multiSub, pMsg);
+ finalize_it:
+ 	RETiRet;
+ }
+@@ -504,70 +1213,89 @@ finalize_it:
+  * exist or cannot be read, an error is returned.
+  */
+ static rsRetVal
+-openFileWithStateFile(lstn_t *const __restrict__ pLstn)
++openFileWithStateFile(act_obj_t *const act)
+ {
+ 	DEFiRet;
+-	strm_t *psSF = NULL;
+ 	uchar pszSFNam[MAXFNAME];
+-	size_t lenSFNam;
+-	struct stat stat_buf;
+ 	uchar statefile[MAXFNAME];
++	int fd = -1;
++	const instanceConf_t *const inst = act->edge->instarr[0];// TODO: same file, multiple instances?
+ 
+-	uchar *const statefn = getStateFileName(pLstn, statefile, sizeof(statefile));
+-	DBGPRINTF("imfile: trying to open state for '%s', state file '%s'\n",
+-		  pLstn->pszFileName, statefn);
+-	/* Construct file name */
+-	lenSFNam = snprintf((char*)pszSFNam, sizeof(pszSFNam), "%s/%s",
+-			     (char*) glbl.GetWorkDir(), (char*)statefn);
++	uchar *const statefn = getStateFileName(act, statefile, sizeof(statefile));
++
++	getFullStateFileName(statefn, pszSFNam, sizeof(pszSFNam));
++	DBGPRINTF("trying to open state for '%s', state file '%s'\n", act->name, pszSFNam);
+ 
+ 	/* check if the file exists */
+-	if(stat((char*) pszSFNam, &stat_buf) == -1) {
++	fd = open((char*)pszSFNam, O_CLOEXEC | O_NOCTTY | O_RDONLY, 0600);
++	if(fd < 0) {
+ 		if(errno == ENOENT) {
+-			DBGPRINTF("imfile: NO state file exists for '%s'\n", pLstn->pszFileName);
+-			ABORT_FINALIZE(RS_RET_FILE_NOT_FOUND);
++			DBGPRINTF("NO state file (%s) exists for '%s' - trying to see if "
++				"old-style file exists\n", pszSFNam, act->name);
++			CHKiRet(OLD_openFileWithStateFile(act));
++			FINALIZE;
+ 		} else {
+-			char errStr[1024];
+-			rs_strerror_r(errno, errStr, sizeof(errStr));
+-			DBGPRINTF("imfile: error trying to access state file for '%s':%s\n",
+-			          pLstn->pszFileName, errStr);
++			LogError(errno, RS_RET_IO_ERROR,
++				"imfile error trying to access state file for '%s'",
++			        act->name);
+ 			ABORT_FINALIZE(RS_RET_IO_ERROR);
+ 		}
+ 	}
+ 
+-	/* If we reach this point, we have a state file */
++	CHKiRet(strm.Construct(&act->pStrm));
+ 
+-	CHKiRet(strm.Construct(&psSF));
+-	CHKiRet(strm.SettOperationsMode(psSF, STREAMMODE_READ));
+-	CHKiRet(strm.SetsType(psSF, STREAMTYPE_FILE_SINGLE));
+-	CHKiRet(strm.SetFName(psSF, pszSFNam, lenSFNam));
+-	CHKiRet(strm.ConstructFinalize(psSF));
++	struct json_object *jval;
++	struct json_object *json = fjson_object_from_fd(fd);
++	if(json == NULL) {
++		LogError(0, RS_RET_ERR, "imfile: error reading state file for '%s'", act->name);
++	}
+ 
+-	/* read back in the object */
+-	CHKiRet(obj.Deserialize(&pLstn->pStrm, (uchar*) "strm", psSF, NULL, pLstn));
+-	DBGPRINTF("imfile: deserialized state file, state file base name '%s', "
+-		  "configured base name '%s'\n", pLstn->pStrm->pszFName,
+-		  pLstn->pszFileName);
+-	if(ustrcmp(pLstn->pStrm->pszFName, pLstn->pszFileName)) {
+-		errmsg.LogError(0, RS_RET_STATEFILE_WRONG_FNAME, "imfile: state file '%s' "
+-				"contains file name '%s', but is used for file '%s'. State "
+-				"file deleted, starting from begin of file.",
+-				pszSFNam, pLstn->pStrm->pszFName, pLstn->pszFileName);
++	/* we access some data items a bit dirty, as we need to refactor the whole
++	 * thing in any case - TODO
++	 */
++	/* Note: we ignore the filename property - it is just an aid to the user. Most
++	 * importantly it *is wrong* after a file move!
++	 */
++	fjson_object_object_get_ex(json, "prev_was_nl", &jval);
++	act->pStrm->bPrevWasNL = fjson_object_get_int(jval);
+ 
+-		unlink((char*)pszSFNam);
+-		ABORT_FINALIZE(RS_RET_STATEFILE_WRONG_FNAME);
++	fjson_object_object_get_ex(json, "curr_offs", &jval);
++	act->pStrm->iCurrOffs = fjson_object_get_int64(jval);
++
++	fjson_object_object_get_ex(json, "strt_offs", &jval);
++	act->pStrm->strtOffs = fjson_object_get_int64(jval);
++
++	fjson_object_object_get_ex(json, "prev_line_segment", &jval);
++	const uchar *const prev_line_segment = (const uchar*)fjson_object_get_string(jval);
++	if(jval != NULL) {
++		CHKiRet(rsCStrConstructFromszStr(&act->pStrm->prevLineSegment, prev_line_segment));
++		cstrFinalize(act->pStrm->prevLineSegment);
++		uchar *ret = rsCStrGetSzStrNoNULL(act->pStrm->prevLineSegment);
++		DBGPRINTF("prev_line_segment present in state file 2, is: %s\n", ret);
+ 	}
+ 
+-	strm.CheckFileChange(pLstn->pStrm);
+-	CHKiRet(strm.SeekCurrOffs(pLstn->pStrm));
++	fjson_object_object_get_ex(json, "prev_msg_segment", &jval);
++	const uchar *const prev_msg_segment = (const uchar*)fjson_object_get_string(jval);
++	if(jval != NULL) {
++		CHKiRet(rsCStrConstructFromszStr(&act->pStrm->prevMsgSegment, prev_msg_segment));
++		cstrFinalize(act->pStrm->prevMsgSegment);
++		uchar *ret = rsCStrGetSzStrNoNULL(act->pStrm->prevMsgSegment);
++		DBGPRINTF("prev_msg_segment present in state file 2, is: %s\n", ret);
++	}
++	fjson_object_put(json);
+ 
+-	/* note: we do not delete the state file, so that the last position remains
+-	 * known even in the case that rsyslogd aborts for some reason (like powerfail)
+-	 */
++	CHKiRet(strm.SetFName(act->pStrm, (uchar*)act->name, strlen(act->name)));
++	CHKiRet(strm.SettOperationsMode(act->pStrm, STREAMMODE_READ));
++	CHKiRet(strm.SetsType(act->pStrm, STREAMTYPE_FILE_MONITOR));
++	CHKiRet(strm.SetFileNotFoundError(act->pStrm, inst->fileNotFoundError));
++	CHKiRet(strm.ConstructFinalize(act->pStrm));
+ 
+-finalize_it:
+-	if(psSF != NULL)
+-		strm.Destruct(&psSF);
++	CHKiRet(strm.SeekCurrOffs(act->pStrm));
+ 
++finalize_it:
++	if(fd >= 0) {
++		close(fd);
++	}
+ 	RETiRet;
+ }
+ 
+@@ -576,30 +1304,32 @@ finalize_it:
+  * checked before calling it.
+  */
+ static rsRetVal
+-openFileWithoutStateFile(lstn_t *const __restrict__ pLstn)
++openFileWithoutStateFile(act_obj_t *const act)
+ {
+ 	DEFiRet;
+ 	struct stat stat_buf;
+ 
+-	DBGPRINTF("imfile: clean startup withOUT state file for '%s'\n", pLstn->pszFileName);
+-	if(pLstn->pStrm != NULL)
+-		strm.Destruct(&pLstn->pStrm);
+-	CHKiRet(strm.Construct(&pLstn->pStrm));
+-	CHKiRet(strm.SettOperationsMode(pLstn->pStrm, STREAMMODE_READ));
+-	CHKiRet(strm.SetsType(pLstn->pStrm, STREAMTYPE_FILE_MONITOR));
+-	CHKiRet(strm.SetFName(pLstn->pStrm, pLstn->pszFileName, strlen((char*) pLstn->pszFileName)));
+-	CHKiRet(strm.ConstructFinalize(pLstn->pStrm));
++	const instanceConf_t *const inst = act->edge->instarr[0];// TODO: same file, multiple instances?
++
++	DBGPRINTF("clean startup withOUT state file for '%s'\n", act->name);
++	if(act->pStrm != NULL)
++		strm.Destruct(&act->pStrm);
++	CHKiRet(strm.Construct(&act->pStrm));
++	CHKiRet(strm.SettOperationsMode(act->pStrm, STREAMMODE_READ));
++	CHKiRet(strm.SetsType(act->pStrm, STREAMTYPE_FILE_MONITOR));
++	CHKiRet(strm.SetFName(act->pStrm, (uchar*)act->name, strlen(act->name)));
++	CHKiRet(strm.SetFileNotFoundError(act->pStrm, inst->fileNotFoundError));
++	CHKiRet(strm.ConstructFinalize(act->pStrm));
+ 
+ 	/* As a state file not exist, this is a fresh start. seek to file end
+ 	 * when freshStartTail is on.
+ 	 */
+-	if(pLstn->freshStartTail){
+-		if(stat((char*) pLstn->pszFileName, &stat_buf) != -1) {
+-			pLstn->pStrm->iCurrOffs = stat_buf.st_size;
+-			CHKiRet(strm.SeekCurrOffs(pLstn->pStrm));
++	if(inst->freshStartTail){
++		if(stat((char*) act->name, &stat_buf) != -1) {
++			act->pStrm->iCurrOffs = stat_buf.st_size;
++			CHKiRet(strm.SeekCurrOffs(act->pStrm));
+ 		}
+ 	}
+-	strmSetReadTimeout(pLstn->pStrm, pLstn->readTimeout);
+ 
+ finalize_it:
+ 	RETiRet;
+@@ -608,17 +1338,18 @@ finalize_it:
+  * if so, reading it in. Processing continues from the last know location.
+  */
+ static rsRetVal
+-openFile(lstn_t *const __restrict__ pLstn)
++openFile(act_obj_t *const act)
+ {
+ 	DEFiRet;
++	const instanceConf_t *const inst = act->edge->instarr[0];// TODO: same file, multiple instances?
+ 
+-	CHKiRet_Hdlr(openFileWithStateFile(pLstn)) {
+-		CHKiRet(openFileWithoutStateFile(pLstn));
++	CHKiRet_Hdlr(openFileWithStateFile(act)) {
++		CHKiRet(openFileWithoutStateFile(act));
+ 	}
+ 
+-	DBGPRINTF("imfile: breopenOnTruncate %d for '%s'\n",
+-		pLstn->reopenOnTruncate, pLstn->pszFileName);
+-	CHKiRet(strm.SetbReopenOnTruncate(pLstn->pStrm, pLstn->reopenOnTruncate));
++	DBGPRINTF("breopenOnTruncate %d for '%s'\n", inst->reopenOnTruncate, act->name);
++	CHKiRet(strm.SetbReopenOnTruncate(act->pStrm, inst->reopenOnTruncate));
++	strmSetReadTimeout(act->pStrm, inst->readTimeout);
+ 
+ finalize_it:
+ 	RETiRet;
+@@ -638,58 +1369,72 @@ static void pollFileCancelCleanup(void *pArg)
+ }
+ 
+ 
+-/* poll a file, need to check file rollover etc. open file if not open */
+-#if !defined(_AIX)
+-#pragma GCC diagnostic ignored "-Wempty-body"
+-#endif
++/* pollFile needs to be split due to the unfortunate pthread_cleanup_push() macros. */
+ static rsRetVal
+-pollFile(lstn_t *pLstn, int *pbHadFileData)
++pollFileReal(act_obj_t *act, cstr_t **pCStr)
+ {
+-	cstr_t *pCStr = NULL;
++	int64 strtOffs;
+ 	DEFiRet;
+-
+-	/* Note: we must do pthread_cleanup_push() immediately, because the POXIS macros
+-	 * otherwise do not work if I include the _cleanup_pop() inside an if... -- rgerhards, 2008-08-14
+-	 */
+-	pthread_cleanup_push(pollFileCancelCleanup, &pCStr);
+ 	int nProcessed = 0;
+-	if(pLstn->pStrm == NULL) {
+-		CHKiRet(openFile(pLstn)); /* open file */
++
++	DBGPRINTF("pollFileReal enter, pStrm %p, name '%s'\n", act->pStrm, act->name);
++	DBGPRINTF("pollFileReal enter, edge %p\n", act->edge);
++	DBGPRINTF("pollFileReal enter, edge->instarr %p\n", act->edge->instarr);
++
++	instanceConf_t *const inst = act->edge->instarr[0];// TODO: same file, multiple instances?
++
++	if(act->pStrm == NULL) {
++		CHKiRet(openFile(act)); /* open file */
+ 	}
+ 
+ 	/* loop below will be exited when strmReadLine() returns EOF */
+ 	while(glbl.GetGlobalInputTermState() == 0) {
+-		if(pLstn->maxLinesAtOnce != 0 && nProcessed >= pLstn->maxLinesAtOnce)
++		if(inst->maxLinesAtOnce != 0 && nProcessed >= inst->maxLinesAtOnce)
+ 			break;
+-		if(pLstn->startRegex == NULL) {
+-			CHKiRet(strm.ReadLine(pLstn->pStrm, &pCStr, pLstn->readMode, pLstn->escapeLF, pLstn->trimLineOverBytes));
++		if(inst->startRegex == NULL) {
++			CHKiRet(strm.ReadLine(act->pStrm, pCStr, inst->readMode, inst->escapeLF,
++				inst->trimLineOverBytes, &strtOffs));
+ 		} else {
+-			CHKiRet(strmReadMultiLine(pLstn->pStrm, &pCStr, &pLstn->end_preg, pLstn->escapeLF));
++			CHKiRet(strmReadMultiLine(act->pStrm, pCStr, &inst->end_preg,
++				inst->escapeLF, &strtOffs));
+ 		}
+ 		++nProcessed;
+-		if(pbHadFileData != NULL)
+-			*pbHadFileData = 1; /* this is just a flag, so set it and forget it */
+-		CHKiRet(enqLine(pLstn, pCStr)); /* process line */
+-		rsCStrDestruct(&pCStr); /* discard string (must be done by us!) */
+-		if(pLstn->iPersistStateInterval > 0 && pLstn->nRecords++ >= pLstn->iPersistStateInterval) {
+-			persistStrmState(pLstn);
+-			pLstn->nRecords = 0;
++		runModConf->bHadFileData = 1; /* this is just a flag, so set it and forget it */
++		CHKiRet(enqLine(act, *pCStr, strtOffs)); /* process line */
++		rsCStrDestruct(pCStr); /* discard string (must be done by us!) */
++		if(inst->iPersistStateInterval > 0 && ++act->nRecords >= inst->iPersistStateInterval) {
++			persistStrmState(act);
++			act->nRecords = 0;
+ 		}
+ 	}
+ 
+ finalize_it:
+-	multiSubmitFlush(&pLstn->multiSub);
+-	pthread_cleanup_pop(0);
++	multiSubmitFlush(&act->multiSub);
+ 
+-	if(pCStr != NULL) {
+-		rsCStrDestruct(&pCStr);
++	if(*pCStr != NULL) {
++		rsCStrDestruct(pCStr);
+ 	}
+ 
+ 	RETiRet;
+ }
+-#if !defined(_AIX)
+-#pragma GCC diagnostic warning "-Wempty-body"
+-#endif
++
++/* poll a file, need to check file rollover etc. open file if not open */
++static rsRetVal
++pollFile(act_obj_t *const act)
++{
++	cstr_t *pCStr = NULL;
++	DEFiRet;
++	if (act->is_symlink) {
++		FINALIZE;    /* no reason to poll symlink file */
++	}
++	/* Note: we must do pthread_cleanup_push() immediately, because the POSIX macros
++	 * otherwise do not work if I include the _cleanup_pop() inside an if... -- rgerhards, 2008-08-14
++	 */
++	pthread_cleanup_push(pollFileCancelCleanup, &pCStr);
++	iRet = pollFileReal(act, &pCStr);
++	pthread_cleanup_pop(0);
++finalize_it: RETiRet;
++}
+ 
+ 
+ /* create input instance, set default parameters, and
+@@ -722,6 +1467,7 @@ createInstance(instanceConf_t **pinst)
+ 	inst->addMetadata = ADD_METADATA_UNSPECIFIED;
+ 	inst->addCeeTag = 0;
+ 	inst->freshStartTail = 0;
++	inst->fileNotFoundError = 1;
+ 	inst->readTimeout = loadModConf->readTimeout;
+ 
+ 	/* node created, let's add to config */
+@@ -767,19 +1513,11 @@ getBasename(uchar *const __restrict__ basen, uchar *const __restrict__ path)
+ }
+ 
+ /* this function checks instance parameters and does some required pre-processing
+- * (e.g. split filename in path and actual name)
+- * Note: we do NOT use dirname()/basename() as they have portability problems.
+  */
+ static rsRetVal
+-checkInstance(instanceConf_t *inst)
++checkInstance(instanceConf_t *const inst)
+ {
+-	char dirn[MAXFNAME];
+-	uchar basen[MAXFNAME];
+-	int i;
+-	struct stat sb;
+-	int r;
+-	int eno;
+-	char errStr[512];
++	uchar curr_wd[MAXFNAME];
+ 	DEFiRet;
+ 
+ 	/* this is primarily for the clang static analyzer, but also
+@@ -788,36 +1526,37 @@ checkInstance(instanceConf_t *inst)
+ 	if(inst->pszFileName == NULL)
+ 		ABORT_FINALIZE(RS_RET_INTERNAL_ERROR);
+ 
+-	i = getBasename(basen, inst->pszFileName);
+-	if (i == -1) {
+-		errmsg.LogError(0, RS_RET_CONFIG_ERROR, "imfile: file path '%s' does not include a basename component",
+-			inst->pszFileName);
+-		ABORT_FINALIZE(RS_RET_CONFIG_ERROR);
+-	}
+-	
+-	memcpy(dirn, inst->pszFileName, i); /* do not copy slash */
+-	dirn[i] = '\0';
+-	CHKmalloc(inst->pszFileBaseName = (uchar*) strdup((char*)basen));
+-	CHKmalloc(inst->pszDirName = (uchar*) strdup(dirn));
+-
+-	if(dirn[0] == '\0') {
+-		dirn[0] = '/';
+-		dirn[1] = '\0';
+-	}
+-	r = stat(dirn, &sb);
+-	if(r != 0)  {
+-		eno = errno;
+-		rs_strerror_r(eno, errStr, sizeof(errStr));
+-		errmsg.LogError(0, RS_RET_CONFIG_ERROR, "imfile warning: directory '%s': %s",
+-				dirn, errStr);
+-		ABORT_FINALIZE(RS_RET_CONFIG_ERROR);
+-	}
+-	if(!S_ISDIR(sb.st_mode)) {
+-		errmsg.LogError(0, RS_RET_CONFIG_ERROR, "imfile warning: configured directory "
+-				"'%s' is NOT a directory", dirn);
+-		ABORT_FINALIZE(RS_RET_CONFIG_ERROR);
++	CHKmalloc(inst->pszFileName_forOldStateFile = ustrdup(inst->pszFileName));
++	if(loadModConf->normalizePath) {
++		if(inst->pszFileName[0] == '.' && inst->pszFileName[1] == '/') {
++			DBGPRINTF("imfile: removing leading './' from name '%s'\n", inst->pszFileName);
++			memmove(inst->pszFileName, inst->pszFileName+2, ustrlen(inst->pszFileName) - 1);
++		}
++
++		if(inst->pszFileName[0] != '/') {
++			if(getcwd((char*)curr_wd, MAXFNAME) == NULL || curr_wd[0] != '/') {
++				LogError(errno, RS_RET_ERR, "imfile: error querying current working "
++					"directory - can not continue with %s", inst->pszFileName);
++				ABORT_FINALIZE(RS_RET_ERR);
++			}
++			const size_t len_curr_wd = ustrlen(curr_wd);
++			if(len_curr_wd + ustrlen(inst->pszFileName) + 1 >= MAXFNAME) {
++				LogError(0, RS_RET_ERR, "imfile: length of configured file and current "
++					"working directory exceeds permitted size - ignoring %s",
++					inst->pszFileName);
++				ABORT_FINALIZE(RS_RET_ERR);
++			}
++			curr_wd[len_curr_wd] = '/';
++			strcpy((char*)curr_wd+len_curr_wd+1, (char*)inst->pszFileName);
++			free(inst->pszFileName);
++			CHKmalloc(inst->pszFileName = ustrdup(curr_wd));
++		}
+ 	}
++	dbgprintf("imfile: adding file monitor for '%s'\n", inst->pszFileName);
+ 
++	if(inst->pszTag != NULL) {
++		inst->lenTag = ustrlen(inst->pszTag);
++	}
+ finalize_it:
+ 	RETiRet;
+ }
+@@ -869,140 +1608,14 @@ addInstance(void __attribute__((unused)) *pVal, uchar *pNewVal)
+ 	inst->bRMStateOnDel = 0;
+ 	inst->readTimeout = loadModConf->readTimeout;
+ 
+-	CHKiRet(checkInstance(inst));
+-
+-	/* reset legacy system */
+-	cs.iPersistStateInterval = 0;
+-	resetConfigVariables(NULL, NULL); /* values are both dummies */
+-
+-finalize_it:
+-	free(pNewVal); /* we do not need it, but we must free it! */
+-	RETiRet;
+-}
+-
+-
+-/* This adds a new listener object to the bottom of the list, but
+- * it does NOT initialize any data members except for the list
+- * pointers themselves.
+- */
+-static rsRetVal
+-lstnAdd(lstn_t **newLstn)
+-{
+-	lstn_t *pLstn;
+-	DEFiRet;
+-
+-	CHKmalloc(pLstn = (lstn_t*) MALLOC(sizeof(lstn_t)));
+-	if(runModConf->pRootLstn == NULL) {
+-		runModConf->pRootLstn = pLstn;
+-		pLstn->prev = NULL;
+-	} else {
+-		runModConf->pTailLstn->next = pLstn;
+-		pLstn->prev = runModConf->pTailLstn;
+-	}
+-	runModConf->pTailLstn = pLstn;
+-	pLstn->next = NULL;
+-	*newLstn = pLstn;
+-
+-finalize_it:
+-	RETiRet;
+-}
+-
+-/* delete a listener object */
+-static void
+-lstnDel(lstn_t *pLstn)
+-{
+-	DBGPRINTF("imfile: lstnDel called for %s\n", pLstn->pszFileName);
+-	if(pLstn->pStrm != NULL) { /* stream open? */
+-		persistStrmState(pLstn);
+-		strm.Destruct(&(pLstn->pStrm));
+-	}
+-	ratelimitDestruct(pLstn->ratelimiter);
+-	free(pLstn->multiSub.ppMsgs);
+-	free(pLstn->pszFileName);
+-	free(pLstn->pszTag);
+-	free(pLstn->pszStateFile);
+-	free(pLstn->pszBaseName);
+-	if(pLstn->startRegex != NULL)
+-		regfree(&pLstn->end_preg);
+-
+-	if(pLstn == runModConf->pRootLstn)
+-		runModConf->pRootLstn = pLstn->next;
+-	if(pLstn == runModConf->pTailLstn)
+-		runModConf->pTailLstn = pLstn->prev;
+-	if(pLstn->next != NULL)
+-		pLstn->next->prev = pLstn->prev;
+-	if(pLstn->prev != NULL)
+-		pLstn->prev->next = pLstn->next;
+-	free(pLstn);
+-}
+-
+-/* This function is called when a new listener shall be added.
+- * It also does some late stage error checking on the config
+- * and reports issues it finds.
+- */
+-static rsRetVal
+-addListner(instanceConf_t *inst)
+-{
+-	DEFiRet;
+-	lstn_t *pThis;
+-	sbool hasWildcard;
+-
+-	hasWildcard = containsGlobWildcard((char*)inst->pszFileBaseName);
+-	if(hasWildcard) {
+-		if(runModConf->opMode == OPMODE_POLLING) {
+-			errmsg.LogError(0, RS_RET_IMFILE_WILDCARD,
+-				"imfile: The to-be-monitored file \"%s\" contains "
+-				"wildcards. This is not supported in "
+-				"polling mode.", inst->pszFileName);
+-			ABORT_FINALIZE(RS_RET_IMFILE_WILDCARD);
+-		} else if(inst->pszStateFile != NULL) {
+-			errmsg.LogError(0, RS_RET_IMFILE_WILDCARD,
+-				"imfile: warning: it looks like to-be-monitored "
+-				"file \"%s\" contains wildcards. This usually "
+-				"does not work well with specifying a state file.",
+-				inst->pszFileName);
+-		}
+-	}
++	CHKiRet(checkInstance(inst));
++
++	/* reset legacy system */
++	cs.iPersistStateInterval = 0;
++	resetConfigVariables(NULL, NULL); /* values are both dummies */
+ 
+-	CHKiRet(lstnAdd(&pThis));
+-	pThis->hasWildcard = hasWildcard;
+-	pThis->pszFileName = (uchar*) strdup((char*) inst->pszFileName);
+-	pThis->pszDirName = inst->pszDirName; /* use memory from inst! */
+-	pThis->pszBaseName = (uchar*)strdup((char*)inst->pszFileBaseName); /* be consistent with expanded wildcards! */
+-	pThis->pszTag = (uchar*) strdup((char*) inst->pszTag);
+-	pThis->lenTag = ustrlen(pThis->pszTag);
+-	pThis->pszStateFile = inst->pszStateFile == NULL ? NULL : (uchar*) strdup((char*) inst->pszStateFile);
+-
+-	CHKiRet(ratelimitNew(&pThis->ratelimiter, "imfile", (char*)inst->pszFileName));
+-	CHKmalloc(pThis->multiSub.ppMsgs = MALLOC(inst->nMultiSub * sizeof(smsg_t *)));
+-	pThis->multiSub.maxElem = inst->nMultiSub;
+-	pThis->multiSub.nElem = 0;
+-	pThis->iSeverity = inst->iSeverity;
+-	pThis->iFacility = inst->iFacility;
+-	pThis->maxLinesAtOnce = inst->maxLinesAtOnce;
+-	pThis->trimLineOverBytes = inst->trimLineOverBytes;
+-	pThis->iPersistStateInterval = inst->iPersistStateInterval;
+-	pThis->readMode = inst->readMode;
+-	pThis->startRegex = inst->startRegex; /* no strdup, as it is read-only */
+-	if(pThis->startRegex != NULL)
+-		if(regcomp(&pThis->end_preg, (char*)pThis->startRegex, REG_EXTENDED)) {
+-			DBGPRINTF("imfile: error regex compile\n");
+-			ABORT_FINALIZE(RS_RET_ERR);
+-		}
+-	pThis->bRMStateOnDel = inst->bRMStateOnDel;
+-	pThis->escapeLF = inst->escapeLF;
+-	pThis->reopenOnTruncate = inst->reopenOnTruncate;
+-	pThis->addMetadata = (inst->addMetadata == ADD_METADATA_UNSPECIFIED) ?
+-			       hasWildcard : inst->addMetadata;
+-	pThis->addCeeTag = inst->addCeeTag;
+-	pThis->readTimeout = inst->readTimeout;
+-	pThis->freshStartTail = inst->freshStartTail;
+-	pThis->pRuleset = inst->pBindRuleset;
+-	pThis->nRecords = 0;
+-	pThis->pStrm = NULL;
+-	pThis->prevLineSegment = NULL;
+-	pThis->masterLstn = NULL; /* we *are* a master! */
+ finalize_it:
++	free(pNewVal); /* we do not need it, but we must free it! */
+ 	RETiRet;
+ }
+ 
+@@ -1055,6 +1668,8 @@ CODESTARTnewInpInst
+ 			inst->addCeeTag = (sbool) pvals[i].val.d.n;
+ 		} else if(!strcmp(inppblk.descr[i].name, "freshstarttail")) {
+ 			inst->freshStartTail = (sbool) pvals[i].val.d.n;
++		} else if(!strcmp(inppblk.descr[i].name, "filenotfounderror")) {
++			inst->fileNotFoundError = (sbool) pvals[i].val.d.n;
+ 		} else if(!strcmp(inppblk.descr[i].name, "escapelf")) {
+ 			inst->escapeLF = (sbool) pvals[i].val.d.n;
+ 		} else if(!strcmp(inppblk.descr[i].name, "reopenontruncate")) {
+@@ -1087,6 +1702,16 @@ CODESTARTnewInpInst
+ 			"at the same time --- remove one of them");
+ 			ABORT_FINALIZE(RS_RET_PARAM_NOT_PERMITTED);
+ 	}
++
++	if(inst->startRegex != NULL) {
++		const int errcode = regcomp(&inst->end_preg, (char*)inst->startRegex, REG_EXTENDED);
++		if(errcode != 0) {
++			char errbuff[512];
++			regerror(errcode, &inst->end_preg, errbuff, sizeof(errbuff));
++			parser_errmsg("imfile: error in regex expansion: %s", errbuff);
++			ABORT_FINALIZE(RS_RET_ERR);
++		}
++	}
+ 	if(inst->readTimeout != 0)
+ 		loadModConf->haveReadTimeouts = 1;
+ 	CHKiRet(checkInstance(inst));
+@@ -1106,6 +1731,10 @@ CODESTARTbeginCnfLoad
+ 	loadModConf->readTimeout = 0; /* default: no timeout */
+ 	loadModConf->timeoutGranularity = 1000; /* default: 1 second */
+ 	loadModConf->haveReadTimeouts = 0; /* default: no timeout */
++	loadModConf->normalizePath = 1;
++	loadModConf->sortFiles = GLOB_NOSORT;
++	loadModConf->conf_tree = calloc(sizeof(fs_node_t), 1);
++	loadModConf->conf_tree->edges = NULL;
+ 	bLegacyCnfModGlobalsPermitted = 1;
+ 	/* init legacy config vars */
+ 	cs.pszFileName = NULL;
+@@ -1148,6 +1777,10 @@ CODESTARTsetModCnf
+ 		} else if(!strcmp(modpblk.descr[i].name, "timeoutgranularity")) {
+ 			/* note: we need ms, thus "* 1000" */
+ 			loadModConf->timeoutGranularity = (int) pvals[i].val.d.n * 1000;
++		} else if(!strcmp(modpblk.descr[i].name, "sortfiles")) {
++			loadModConf->sortFiles = ((sbool) pvals[i].val.d.n) ? 0 : GLOB_NOSORT;
++		} else if(!strcmp(modpblk.descr[i].name, "normalizepath")) {
++			loadModConf->normalizePath = (sbool) pvals[i].val.d.n;
+ 		} else if(!strcmp(modpblk.descr[i].name, "mode")) {
+ 			if(!es_strconstcmp(pvals[i].val.d.estr, "polling"))
+ 				loadModConf->opMode = OPMODE_POLLING;
+@@ -1217,19 +1850,31 @@ BEGINactivateCnf
+ 	instanceConf_t *inst;
+ CODESTARTactivateCnf
+ 	runModConf = pModConf;
+-	runModConf->pRootLstn = NULL,
+-	runModConf->pTailLstn = NULL;
++	if(runModConf->root == NULL) {
++		LogError(0, NO_ERRCODE, "imfile: no file monitors configured, "
++				"input not activated.\n");
++		ABORT_FINALIZE(RS_RET_NO_RUN);
++	}
+ 
+ 	for(inst = runModConf->root ; inst != NULL ; inst = inst->next) {
+-		addListner(inst);
++		// TODO: provide switch to turn off this warning?
++		if(!containsGlobWildcard((char*)inst->pszFileName)) {
++			if(access((char*)inst->pszFileName, R_OK) != 0) {
++				LogError(errno, RS_RET_ERR,
++					"imfile: on startup file '%s' does not exist "
++					"but is configured in static file monitor - this "
++					"may indicate a misconfiguration. If the file "
++					"appears at a later time, it will automatically "
++					"be processed. Reason", inst->pszFileName);
++			}
++		}
++		fs_node_add(runModConf->conf_tree, NULL, inst->pszFileName, 0, inst);
+ 	}
+ 
+-	/* if we could not set up any listeners, there is no point in running... */
+-	if(runModConf->pRootLstn == 0) {
+-		errmsg.LogError(0, NO_ERRCODE, "imfile: no file monitors could be started, "
+-				"input not activated.\n");
+-		ABORT_FINALIZE(RS_RET_NO_RUN);
++	if(Debug) {
++		fs_node_print(runModConf->conf_tree, 0);
+ 	}
++
+ finalize_it:
+ ENDactivateCnf
+ 
+@@ -1237,14 +1882,20 @@ ENDactivateCnf
+ BEGINfreeCnf
+ 	instanceConf_t *inst, *del;
+ CODESTARTfreeCnf
++	fs_node_destroy(pModConf->conf_tree);
++	//move_list_destruct(pModConf);
+ 	for(inst = pModConf->root ; inst != NULL ; ) {
++		if(inst->startRegex != NULL)
++			regfree(&inst->end_preg);
+ 		free(inst->pszBindRuleset);
+ 		free(inst->pszFileName);
+-		free(inst->pszDirName);
+-		free(inst->pszFileBaseName);
+ 		free(inst->pszTag);
+ 		free(inst->pszStateFile);
+-		free(inst->startRegex);
++		free(inst->pszFileName_forOldStateFile);
++		if(inst->startRegex != NULL) {
++			regfree(&inst->end_preg);
++			free(inst->startRegex);
++		}
+ 		del = inst;
+ 		inst = inst->next;
+ 		free(del);
+@@ -1252,45 +1903,25 @@ CODESTARTfreeCnf
+ ENDfreeCnf
+ 
+ 
+-/* Monitor files in traditional polling mode.
+- *
+- * We go through all files and remember if at least one had data. If so, we do
+- * another run (until no data was present in any file). Then we sleep for
+- * PollInterval seconds and restart the whole process. This ensures that as
+- * long as there is some data present, it will be processed at the fastest
+- * possible pace - probably important for busy systmes. If we monitor just a
+- * single file, the algorithm is slightly modified. In that case, the sleep
+- * hapens immediately. The idea here is that if we have just one file, we
+- * returned from the file processer because that file had no additional data.
+- * So even if we found some lines, it is highly unlikely to find a new one
+- * just now. Trying it would result in a performance-costly additional try
+- * which in the very, very vast majority of cases will never find any new
+- * lines.
+- * On spamming the main queue: keep in mind that it will automatically rate-limit
+- * ourselfes if we begin to overrun it. So we really do not need to care here.
+- */
++/* Monitor files in polling mode. */
+ static rsRetVal
+ doPolling(void)
+ {
+-	int bHadFileData; /* were there at least one file with data during this run? */
+ 	DEFiRet;
+ 	while(glbl.GetGlobalInputTermState() == 0) {
++		DBGPRINTF("doPolling: new poll run\n");
+ 		do {
+-			lstn_t *pLstn;
+-			bHadFileData = 0;
+-			for(pLstn = runModConf->pRootLstn ; pLstn != NULL ; pLstn = pLstn->next) {
+-				if(glbl.GetGlobalInputTermState() == 1)
+-					break; /* terminate input! */
+-				pollFile(pLstn, &bHadFileData);
+-			}
+-		} while(bHadFileData == 1 && glbl.GetGlobalInputTermState() == 0);
+-		  /* warning: do...while()! */
++			runModConf->bHadFileData = 0;
++			fs_node_walk(runModConf->conf_tree, poll_tree);
++			DBGPRINTF("doPolling: end poll walk, hadData %d\n", runModConf->bHadFileData);
++		} while(runModConf->bHadFileData); /* warning: do...while()! */
+ 
+ 		/* Note: the additional 10ns wait is vitally important. It guards rsyslog
+ 		 * against totally hogging the CPU if the users selects a polling interval
+ 		 * of 0 seconds. It doesn't hurt any other valid scenario. So do not remove.
+ 		 * rgerhards, 2008-02-14
+ 		 */
++		DBGPRINTF("doPolling: poll going to sleep\n");
+ 		if(glbl.GetGlobalInputTermState() == 0)
+ 			srSleep(runModConf->iPollInterval, 10);
+ 	}
+@@ -1298,631 +1929,122 @@ doPolling(void)
+ 	RETiRet;
+ }
+ 
++#if defined(HAVE_INOTIFY_INIT)
+ 
+-#ifdef HAVE_INOTIFY_INIT
+-static rsRetVal
+-fileTableInit(fileTable_t *const __restrict__ tab, const int nelem)
+-{
+-	DEFiRet;
+-	CHKmalloc(tab->listeners = malloc(sizeof(dirInfoFiles_t) * nelem));
+-	tab->allocMax = nelem;
+-	tab->currMax = 0;
+-finalize_it:
+-	RETiRet;
+-}
+-/* uncomment if needed
+ static void
+-fileTableDisplay(fileTable_t *tab)
++in_dbg_showEv(const struct inotify_event *ev)
+ {
+-	int f;
+-	uchar *baseName;
+-	DBGPRINTF("imfile: dirs.currMaxfiles %d\n", tab->currMax);
+-	for(f = 0 ; f < tab->currMax ; ++f) {
+-		baseName = tab->listeners[f].pLstn->pszBaseName;
+-		DBGPRINTF("imfile: TABLE %p CONTENTS, %d->%p:'%s'\n", tab, f, tab->listeners[f].pLstn, (char*)baseName);
+-	}
+-}
+-*/
+-
+-static int
+-fileTableSearch(fileTable_t *const __restrict__ tab, uchar *const __restrict__ fn)
+-{
+-	int f;
+-	uchar *baseName = NULL;
+-	/* UNCOMMENT FOR DEBUG fileTableDisplay(tab); */
+-	for(f = 0 ; f < tab->currMax ; ++f) {
+-		baseName = tab->listeners[f].pLstn->pszBaseName;
+-		if(!fnmatch((char*)baseName, (char*)fn, FNM_PATHNAME | FNM_PERIOD))
+-			break; /* found */
+-	}
+-	if(f == tab->currMax)
+-		f = -1;
+-	DBGPRINTF("imfile: fileTableSearch file '%s' - '%s', found:%d\n", fn, baseName, f);
+-	return f;
+-}
+-
+-static int
+-fileTableSearchNoWildcard(fileTable_t *const __restrict__ tab, uchar *const __restrict__ fn)
+-{
+-	int f;
+-	uchar *baseName = NULL;
+-	/* UNCOMMENT FOR DEBUG fileTableDisplay(tab); */
+-	for(f = 0 ; f < tab->currMax ; ++f) {
+-		baseName = tab->listeners[f].pLstn->pszBaseName;
+-		if (strcmp((const char*)baseName, (const char*)fn) == 0)
+-			break; /* found */
+-	}
+-	if(f == tab->currMax)
+-		f = -1;
+-	DBGPRINTF("imfile: fileTableSearchNoWildcard file '%s' - '%s', found:%d\n", fn, baseName, f);
+-	return f;
+-}
+-
+-/* add file to file table */
+-static rsRetVal
+-fileTableAddFile(fileTable_t *const __restrict__ tab, lstn_t *const __restrict__ pLstn)
+-{
+-	int j;
+-	DEFiRet;
+-	/* UNCOMMENT FOR DEBUG fileTableDisplay(tab); */
+-	for(j = 0 ; j < tab->currMax && tab->listeners[j].pLstn != pLstn ; ++j)
+-		; /* just scan */
+-	if(j < tab->currMax) {
+-		++tab->listeners[j].refcnt;
+-		DBGPRINTF("imfile: file '%s' already registered, refcnt now %d\n",
+-			pLstn->pszFileName, tab->listeners[j].refcnt);
+-		FINALIZE;
++	if(ev->mask & IN_IGNORED) {
++		dbgprintf("INOTIFY event: watch was REMOVED\n");
+ 	}
+-
+-	if(tab->currMax == tab->allocMax) {
+-		const int newMax = 2 * tab->allocMax;
+-		dirInfoFiles_t *newListenerTab = realloc(tab->listeners, newMax * sizeof(dirInfoFiles_t));
+-		if(newListenerTab == NULL) {
+-			errmsg.LogError(0, RS_RET_OUT_OF_MEMORY,
+-					"cannot alloc memory to map directory/file relationship "
+-					"for '%s' - ignoring", pLstn->pszFileName);
+-			ABORT_FINALIZE(RS_RET_OUT_OF_MEMORY);
+-		}
+-		tab->listeners = newListenerTab;
+-		tab->allocMax = newMax;
+-		DBGPRINTF("imfile: increased dir table to %d entries\n", allocMaxDirs);
++	if(ev->mask & IN_MODIFY) {
++		dbgprintf("INOTIFY event: watch was MODIFIED\n");
+ 	}
+-
+-	tab->listeners[tab->currMax].pLstn = pLstn;
+-	tab->listeners[tab->currMax].refcnt = 1;
+-	tab->currMax++;
+-finalize_it:
+-	RETiRet;
+-}
+-
+-/* delete a file from file table */
+-static rsRetVal
+-fileTableDelFile(fileTable_t *const __restrict__ tab, lstn_t *const __restrict__ pLstn)
+-{
+-	int j;
+-	DEFiRet;
+-
+-	for(j = 0 ; j < tab->currMax && tab->listeners[j].pLstn != pLstn ; ++j)
+-		; /* just scan */
+-	if(j == tab->currMax) {
+-		DBGPRINTF("imfile: no association for file '%s'\n", pLstn->pszFileName);
+-		FINALIZE;
++	if(ev->mask & IN_ACCESS) {
++		dbgprintf("INOTIFY event: watch IN_ACCESS\n");
+ 	}
+-	tab->listeners[j].refcnt--;
+-	if(tab->listeners[j].refcnt == 0) {
+-		/* we remove that entry (but we never shrink the table) */
+-		if(j < tab->currMax - 1) {
+-			/* entry in middle - need to move others */
+-			memmove(tab->listeners+j, tab->listeners+j+1,
+-				(tab->currMax -j-1) * sizeof(dirInfoFiles_t));
+-		}
+-		--tab->currMax;
++	if(ev->mask & IN_ATTRIB) {
++		dbgprintf("INOTIFY event: watch IN_ATTRIB\n");
+ 	}
+-finalize_it:
+-	RETiRet;
+-}
+-/* add entry to dirs array */
+-static rsRetVal
+-dirsAdd(uchar *dirName)
+-{
+-	int newMax;
+-	dirInfo_t *newDirTab;
+-	DEFiRet;
+-
+-	if(currMaxDirs == allocMaxDirs) {
+-		newMax = 2 * allocMaxDirs;
+-		newDirTab = realloc(dirs, newMax * sizeof(dirInfo_t));
+-		if(newDirTab == NULL) {
+-			errmsg.LogError(0, RS_RET_OUT_OF_MEMORY,
+-					"cannot alloc memory to monitor directory '%s' - ignoring",
+-					dirName);
+-			ABORT_FINALIZE(RS_RET_OUT_OF_MEMORY);
+-		}
+-		dirs = newDirTab;
+-		allocMaxDirs = newMax;
+-		DBGPRINTF("imfile: increased dir table to %d entries\n", allocMaxDirs);
++	if(ev->mask & IN_CLOSE_WRITE) {
++		dbgprintf("INOTIFY event: watch IN_CLOSE_WRITE\n");
+ 	}
+-
+-	/* if we reach this point, there is space in the file table for the new entry */
+-	dirs[currMaxDirs].dirName = dirName;
+-	CHKiRet(fileTableInit(&dirs[currMaxDirs].active, INIT_FILE_IN_DIR_TAB_SIZE));
+-	CHKiRet(fileTableInit(&dirs[currMaxDirs].configured, INIT_FILE_IN_DIR_TAB_SIZE));
+-
+-	++currMaxDirs;
+-	DBGPRINTF("imfile: added to dirs table: '%s'\n", dirName);
+-finalize_it:
+-	RETiRet;
+-}
+-
+-
+-/* checks if a dir name is already inside the dirs array. If so, returns
+- * its index. If not present, -1 is returned.
+- */
+-static int
+-dirsFindDir(uchar *dir)
+-{
+-	int i;
+-
+-	for(i = 0 ; i < currMaxDirs && ustrcmp(dir, dirs[i].dirName) ; ++i)
+-		; /* just scan, all done in for() */
+-	if(i == currMaxDirs)
+-		i = -1;
+-	return i;
+-}
+-
+-static rsRetVal
+-dirsInit(void)
+-{
+-	instanceConf_t *inst;
+-	DEFiRet;
+-
+-	free(dirs);
+-	CHKmalloc(dirs = malloc(sizeof(dirInfo_t) * INIT_FILE_TAB_SIZE));
+-	allocMaxDirs = INIT_FILE_TAB_SIZE;
+-	currMaxDirs = 0;
+-
+-	for(inst = runModConf->root ; inst != NULL ; inst = inst->next) {
+-		if(dirsFindDir(inst->pszDirName) == -1)
+-			dirsAdd(inst->pszDirName);
++	if(ev->mask & IN_CLOSE_NOWRITE) {
++		dbgprintf("INOTIFY event: watch IN_CLOSE_NOWRITE\n");
+ 	}
+-
+-finalize_it:
+-	RETiRet;
+-}
+-
+-/* add file to directory (create association)
+- * fIdx is index into file table, all other information is pulled from that table.
+- * bActive is 1 if the file is to be added to active set, else zero
+- */
+-static rsRetVal
+-dirsAddFile(lstn_t *__restrict__ pLstn, const int bActive)
+-{
+-	int dirIdx;
+-	dirInfo_t *dir;
+-	DEFiRet;
+-
+-	dirIdx = dirsFindDir(pLstn->pszDirName);
+-	if(dirIdx == -1) {
+-		errmsg.LogError(0, RS_RET_INTERNAL_ERROR, "imfile: could not find "
+-			"directory '%s' in dirs array - ignoring",
+-			pLstn->pszDirName);
+-		FINALIZE;
++	if(ev->mask & IN_CREATE) {
++		dbgprintf("INOTIFY event: file was CREATED: %s\n", ev->name);
+ 	}
+-
+-	dir = dirs + dirIdx;
+-	CHKiRet(fileTableAddFile((bActive ? &dir->active : &dir->configured), pLstn));
+-	DBGPRINTF("imfile: associated file [%s] to directory %d[%s], Active = %d\n",
+-		pLstn->pszFileName, dirIdx, dir->dirName, bActive);
+-	/* UNCOMMENT FOR DEBUG fileTableDisplay(bActive ? &dir->active : &dir->configured); */
+-finalize_it:
+-	RETiRet;
+-}
+-
+-
+-static void
+-in_setupDirWatch(const int dirIdx)
+-{
+-	int wd;
+-	wd = inotify_add_watch(ino_fd, (char*)dirs[dirIdx].dirName, IN_CREATE|IN_DELETE|IN_MOVED_FROM);
+-	if(wd < 0) {
+-		DBGPRINTF("imfile: could not create dir watch for '%s'\n",
+-			dirs[dirIdx].dirName);
+-		goto done;
++	if(ev->mask & IN_DELETE) {
++		dbgprintf("INOTIFY event: watch IN_DELETE\n");
+ 	}
+-	wdmapAdd(wd, dirIdx, NULL);
+-	DBGPRINTF("imfile: watch %d added for dir %s\n", wd, dirs[dirIdx].dirName);
+-done:	return;
+-}
+-
+-/* Setup a new file watch for a known active file. It must already have
+- * been entered into the correct tables.
+- * Note: we need to try to read this file, as it may already contain data this
+- * needs to be processed, and we won't get an event for that as notifications
+- * happen only for things after the watch has been activated.
+- * Note: newFileName is NULL for configured files, and non-NULL for dynamically
+- * detected files (e.g. wildcards!)
+- */
+-static void
+-startLstnFile(lstn_t *const __restrict__ pLstn)
+-{
+-	rsRetVal localRet;
+-	const int wd = inotify_add_watch(ino_fd, (char*)pLstn->pszFileName, IN_MODIFY);
+-	if(wd < 0) {
+-		char errStr[512];
+-		rs_strerror_r(errno, errStr, sizeof(errStr));
+-		DBGPRINTF("imfile: could not create file table entry for '%s' - "
+-			  "not processing it now: %s\n",
+-			  pLstn->pszFileName, errStr);
+-		goto done;
++	if(ev->mask & IN_DELETE_SELF) {
++		dbgprintf("INOTIFY event: watch IN_DELETE_SELF\n");
+ 	}
+-	if((localRet = wdmapAdd(wd, -1, pLstn)) != RS_RET_OK) {
+-		DBGPRINTF("imfile: error %d adding file to wdmap, ignoring\n", localRet);
+-		goto done;
++	if(ev->mask & IN_MOVE_SELF) {
++		dbgprintf("INOTIFY event: watch IN_MOVE_SELF\n");
+ 	}
+-	DBGPRINTF("imfile: watch %d added for file %s\n", wd, pLstn->pszFileName);
+-	dirsAddFile(pLstn, ACTIVE_FILE);
+-	pollFile(pLstn, NULL);
+-done:	return;
+-}
+-
+-/* Duplicate an existing listener. This is called when a new file is to
+- * be monitored due to wildcard detection. Returns the new pLstn in
+- * the ppExisting parameter.
+- */
+-static rsRetVal
+-lstnDup(lstn_t **ppExisting, uchar *const __restrict__ newname)
+-{
+-	DEFiRet;
+-	lstn_t *const existing = *ppExisting;
+-	lstn_t *pThis;
+-
+-	CHKiRet(lstnAdd(&pThis));
+-	pThis->pszDirName = existing->pszDirName; /* read-only */
+-	pThis->pszBaseName = (uchar*)strdup((char*)newname);
+-	if(asprintf((char**)&pThis->pszFileName, "%s/%s", (char*)pThis->pszDirName, (char*)newname) == -1) {
+-		DBGPRINTF("imfile/lstnDup: asprintf failed, malfunction can happen\n");
+-		ABORT_FINALIZE(RS_RET_OUT_OF_MEMORY);
+-	}
+-	pThis->pszTag = (uchar*) strdup((char*) existing->pszTag);
+-	pThis->lenTag = ustrlen(pThis->pszTag);
+-	pThis->pszStateFile = existing->pszStateFile == NULL ? NULL : (uchar*) strdup((char*) existing->pszStateFile);
+-
+-	CHKiRet(ratelimitNew(&pThis->ratelimiter, "imfile", (char*)pThis->pszFileName));
+-	pThis->multiSub.maxElem = existing->multiSub.maxElem;
+-	pThis->multiSub.nElem = 0;
+-	CHKmalloc(pThis->multiSub.ppMsgs = MALLOC(pThis->multiSub.maxElem * sizeof(smsg_t*)));
+-	pThis->iSeverity = existing->iSeverity;
+-	pThis->iFacility = existing->iFacility;
+-	pThis->maxLinesAtOnce = existing->maxLinesAtOnce;
+-	pThis->trimLineOverBytes = existing->trimLineOverBytes;
+-	pThis->iPersistStateInterval = existing->iPersistStateInterval;
+-	pThis->readMode = existing->readMode;
+-	pThis->startRegex = existing->startRegex; /* no strdup, as it is read-only */
+-	if(pThis->startRegex != NULL) // TODO: make this a single function with better error handling
+-		if(regcomp(&pThis->end_preg, (char*)pThis->startRegex, REG_EXTENDED)) {
+-			DBGPRINTF("imfile: error regex compile\n");
+-			ABORT_FINALIZE(RS_RET_ERR);
+-		}
+-	pThis->bRMStateOnDel = existing->bRMStateOnDel;
+-	pThis->hasWildcard = existing->hasWildcard;
+-	pThis->escapeLF = existing->escapeLF;
+-	pThis->reopenOnTruncate = existing->reopenOnTruncate;
+-	pThis->addMetadata = existing->addMetadata;
+-	pThis->addCeeTag = existing->addCeeTag;
+-	pThis->readTimeout = existing->readTimeout;
+-	pThis->freshStartTail = existing->freshStartTail;
+-	pThis->pRuleset = existing->pRuleset;
+-	pThis->nRecords = 0;
+-	pThis->pStrm = NULL;
+-	pThis->prevLineSegment = NULL;
+-	pThis->masterLstn = existing;
+-	*ppExisting = pThis;
+-finalize_it:
+-	RETiRet;
+-}
+-
+-/* Setup a new file watch for dynamically discovered files (via wildcards).
+- * Note: we need to try to read this file, as it may already contain data this
+- * needs to be processed, and we won't get an event for that as notifications
+- * happen only for things after the watch has been activated.
+- */
+-static void
+-in_setupFileWatchDynamic(lstn_t *pLstn, uchar *const __restrict__ newBaseName)
+-{
+-	char fullfn[MAXFNAME];
+-	struct stat fileInfo;
+-	snprintf(fullfn, MAXFNAME, "%s/%s", pLstn->pszDirName, newBaseName);
+-	if(stat(fullfn, &fileInfo) != 0) {
+-		char errStr[1024];
+-		rs_strerror_r(errno, errStr, sizeof(errStr));
+-		DBGPRINTF("imfile: ignoring file '%s' cannot stat(): %s\n",
+-			fullfn, errStr);
+-		goto done;
++	if(ev->mask & IN_MOVED_FROM) {
++		dbgprintf("INOTIFY event: watch IN_MOVED_FROM, cookie %u, name '%s'\n", ev->cookie, ev->name);
+ 	}
+-
+-	if(S_ISDIR(fileInfo.st_mode)) {
+-		DBGPRINTF("imfile: ignoring directory '%s'\n", fullfn);
+-		goto done;
++	if(ev->mask & IN_MOVED_TO) {
++		dbgprintf("INOTIFY event: watch IN_MOVED_TO, cookie %u, name '%s'\n", ev->cookie, ev->name);
+ 	}
+-
+-	if(lstnDup(&pLstn, newBaseName) != RS_RET_OK)
+-		goto done;
+-
+-	startLstnFile(pLstn);
+-done:	return;
+-}
+-
+-/* Setup a new file watch for static (configured) files.
+- * Note: we need to try to read this file, as it may already contain data this
+- * needs to be processed, and we won't get an event for that as notifications
+- * happen only for things after the watch has been activated.
+- */
+-static void
+-in_setupFileWatchStatic(lstn_t *pLstn)
+-{
+-	DBGPRINTF("imfile: adding file '%s' to configured table\n",
+-		  pLstn->pszFileName);
+-	dirsAddFile(pLstn, CONFIGURED_FILE);
+-
+-	if(pLstn->hasWildcard) {
+-		DBGPRINTF("imfile: file '%s' has wildcard, doing initial "
+-			  "expansion\n", pLstn->pszFileName);
+-		glob_t files;
+-		const int ret = glob((char*)pLstn->pszFileName,
+-					GLOB_MARK|GLOB_NOSORT|GLOB_BRACE, NULL, &files);
+-		if(ret == 0) {
+-			for(unsigned i = 0 ; i < files.gl_pathc ; i++) {
+-				uchar basen[MAXFNAME];
+-				uchar *const file = (uchar*)files.gl_pathv[i];
+-				if(file[strlen((char*)file)-1] == '/')
+-					continue;/* we cannot process subdirs! */
+-				getBasename(basen, file);
+-				in_setupFileWatchDynamic(pLstn, basen);
+-			}
+-			globfree(&files);
+-		}
+-	} else {
+-		/* Duplicate static object as well, otherwise the configobject could be deleted later! */
+-		if(lstnDup(&pLstn, pLstn->pszBaseName) != RS_RET_OK) {
+-			DBGPRINTF("imfile: in_setupFileWatchStatic failed to duplicate listener for '%s'\n", pLstn->pszFileName);
+-			goto done;
+-		}
+-		startLstnFile(pLstn);
++	if(ev->mask & IN_OPEN) {
++		dbgprintf("INOTIFY event: watch IN_OPEN\n");
+ 	}
+-done:	return;
+-}
+-
+-/* setup our initial set of watches, based on user config */
+-static void
+-in_setupInitialWatches(void)
+-{
+-	int i;
+-	for(i = 0 ; i < currMaxDirs ; ++i) {
+-		in_setupDirWatch(i);
+-	}
+-	lstn_t *pLstn;
+-	for(pLstn = runModConf->pRootLstn ; pLstn != NULL ; pLstn = pLstn->next) {
+-		if(pLstn->masterLstn == NULL) {
+-			/* we process only static (master) entries */
+-			in_setupFileWatchStatic(pLstn);
+-		}
++	if(ev->mask & IN_ISDIR) {
++		dbgprintf("INOTIFY event: watch IN_ISDIR\n");
+ 	}
+ }
+ 
+-static void
+-in_dbg_showEv(struct inotify_event *ev)
+-{
+-	if(ev->mask & IN_IGNORED) {
+-		DBGPRINTF("INOTIFY event: watch was REMOVED\n");
+-	} else if(ev->mask & IN_MODIFY) {
+-		DBGPRINTF("INOTIFY event: watch was MODIFID\n");
+-	} else if(ev->mask & IN_ACCESS) {
+-		DBGPRINTF("INOTIFY event: watch IN_ACCESS\n");
+-	} else if(ev->mask & IN_ATTRIB) {
+-		DBGPRINTF("INOTIFY event: watch IN_ATTRIB\n");
+-	} else if(ev->mask & IN_CLOSE_WRITE) {
+-		DBGPRINTF("INOTIFY event: watch IN_CLOSE_WRITE\n");
+-	} else if(ev->mask & IN_CLOSE_NOWRITE) {
+-		DBGPRINTF("INOTIFY event: watch IN_CLOSE_NOWRITE\n");
+-	} else if(ev->mask & IN_CREATE) {
+-		DBGPRINTF("INOTIFY event: file was CREATED: %s\n", ev->name);
+-	} else if(ev->mask & IN_DELETE) {
+-		DBGPRINTF("INOTIFY event: watch IN_DELETE\n");
+-	} else if(ev->mask & IN_DELETE_SELF) {
+-		DBGPRINTF("INOTIFY event: watch IN_DELETE_SELF\n");
+-	} else if(ev->mask & IN_MOVE_SELF) {
+-		DBGPRINTF("INOTIFY event: watch IN_MOVE_SELF\n");
+-	} else if(ev->mask & IN_MOVED_FROM) {
+-		DBGPRINTF("INOTIFY event: watch IN_MOVED_FROM\n");
+-	} else if(ev->mask & IN_MOVED_TO) {
+-		DBGPRINTF("INOTIFY event: watch IN_MOVED_TO\n");
+-	} else if(ev->mask & IN_OPEN) {
+-		DBGPRINTF("INOTIFY event: watch IN_OPEN\n");
+-	} else if(ev->mask & IN_ISDIR) {
+-		DBGPRINTF("INOTIFY event: watch IN_ISDIR\n");
+-	} else {
+-		DBGPRINTF("INOTIFY event: unknown mask code %8.8x\n", ev->mask);
+-	 }
+-}
+-
+ 
+-/* inotify told us that a file's wd was closed. We now need to remove
+- * the file from our internal structures. Remember that a different inode
+- * with the same name may already be in processing.
+- */
+ static void
+-in_removeFile(const int dirIdx,
+-	      lstn_t *const __restrict__ pLstn)
++in_handleFileEvent(struct inotify_event *ev, const wd_map_t *const etry)
+ {
+-	uchar statefile[MAXFNAME];
+-	uchar toDel[MAXFNAME];
+-	int bDoRMState;
+-        int wd;
+-	uchar *statefn;
+-	DBGPRINTF("imfile: remove listener '%s', dirIdx %d\n",
+-	          pLstn->pszFileName, dirIdx);
+-	if(pLstn->bRMStateOnDel) {
+-		statefn = getStateFileName(pLstn, statefile, sizeof(statefile));
+-		snprintf((char*)toDel, sizeof(toDel), "%s/%s",
+-				     glbl.GetWorkDir(), (char*)statefn);
+-		bDoRMState = 1;
++	if(ev->mask & IN_MODIFY) {
++		DBGPRINTF("fs_node_notify_file_update: act->name '%s'\n", etry->act->name);
++		pollFile(etry->act);
+ 	} else {
+-		bDoRMState = 0;
+-	}
+-	pollFile(pLstn, NULL); /* one final try to gather data */
+-	/*	delete listener data */
+-	DBGPRINTF("imfile: DELETING listener data for '%s' - '%s'\n", pLstn->pszBaseName, pLstn->pszFileName);
+-	lstnDel(pLstn);
+-	fileTableDelFile(&dirs[dirIdx].active, pLstn);
+-	if(bDoRMState) {
+-		DBGPRINTF("imfile: unlinking '%s'\n", toDel);
+-		if(unlink((char*)toDel) != 0) {
+-			char errStr[1024];
+-			rs_strerror_r(errno, errStr, sizeof(errStr));
+-			errmsg.LogError(0, RS_RET_ERR, "imfile: could not remove state "
+-				"file \"%s\": %s", toDel, errStr);
+-		}
++		DBGPRINTF("got non-expected inotify event:\n");
++		in_dbg_showEv(ev);
+ 	}
+-        wd = wdmapLookupListner(pLstn);
+-        wdmapDel(wd);
+ }
+ 
+-static void
+-in_handleDirEventCREATE(struct inotify_event *ev, const int dirIdx)
+-{
+-	lstn_t *pLstn;
+-	int ftIdx;
+-	ftIdx = fileTableSearch(&dirs[dirIdx].active, (uchar*)ev->name);
+-	if(ftIdx >= 0) {
+-		pLstn = dirs[dirIdx].active.listeners[ftIdx].pLstn;
+-	} else {
+-		DBGPRINTF("imfile: file '%s' not active in dir '%s'\n",
+-			ev->name, dirs[dirIdx].dirName);
+-		ftIdx = fileTableSearch(&dirs[dirIdx].configured, (uchar*)ev->name);
+-		if(ftIdx == -1) {
+-			DBGPRINTF("imfile: file '%s' not associated with dir '%s'\n",
+-				ev->name, dirs[dirIdx].dirName);
+-			goto done;
+-		}
+-		pLstn = dirs[dirIdx].configured.listeners[ftIdx].pLstn;
+-	}
+-	DBGPRINTF("imfile: file '%s' associated with dir '%s'\n", ev->name, dirs[dirIdx].dirName);
+-	in_setupFileWatchDynamic(pLstn, (uchar*)ev->name);
+-done:	return;
+-}
+ 
+-/* note: we need to care only for active files in the DELETE case.
+- * Two reasons: a) if this is a configured file, it should be active
+- * b) if not for some reason, there still is nothing we can do against
+- * it, and trying to process a *deleted* file really makes no sense
+- * (remeber we don't have it open, so it actually *is gone*).
++/* workaround for IN_MOVED: walk active list and prevent state file deletion of
++ * IN_MOVED_IN active object
++ * TODO: replace by a more generic solution.
+  */
+ static void
+-in_handleDirEventDELETE(struct inotify_event *const ev, const int dirIdx)
+-{
+-	const int ftIdx = fileTableSearch(&dirs[dirIdx].active, (uchar*)ev->name);
+-	if(ftIdx == -1) {
+-		DBGPRINTF("imfile: deleted file '%s' not active in dir '%s'\n",
+-			ev->name, dirs[dirIdx].dirName);
+-		goto done;
+-	}
+-	DBGPRINTF("imfile: imfile delete processing for '%s'\n",
+-	          dirs[dirIdx].active.listeners[ftIdx].pLstn->pszFileName);
+-	in_removeFile(dirIdx, dirs[dirIdx].active.listeners[ftIdx].pLstn);
+-done:	return;
+-}
+-
+-static void
+-in_handleDirEvent(struct inotify_event *const ev, const int dirIdx)
++flag_in_move(fs_edge_t *const edge, const char *name_moved)
+ {
+-	DBGPRINTF("imfile: handle dir event for %s\n", dirs[dirIdx].dirName);
+-	if((ev->mask & IN_CREATE)) {
+-		in_handleDirEventCREATE(ev, dirIdx);
+-	} else if((ev->mask & IN_DELETE)) {
+-		in_handleDirEventDELETE(ev, dirIdx);
+-	} else {
+-		DBGPRINTF("imfile: got non-expected inotify event:\n");
+-		in_dbg_showEv(ev);
+-	}
+-}
++	act_obj_t *act;
+ 
+-
+-static void
+-in_handleFileEvent(struct inotify_event *ev, const wd_map_t *const etry)
+-{
+-	if(ev->mask & IN_MODIFY) {
+-		pollFile(etry->pLstn, NULL);
+-	} else {
+-		DBGPRINTF("imfile: got non-expected inotify event:\n");
+-		in_dbg_showEv(ev);
++	for(act = edge->active ; act != NULL ; act = act->next) {
++		DBGPRINTF("checking active object %s\n", act->basename);
++		if(!strcmp(act->basename, name_moved)){
++			DBGPRINTF("found file\n");
++			act->in_move = 1;
++			break;
++		} else {
++			DBGPRINTF("name check fails, '%s' != '%s'\n", act->basename, name_moved);
++		}
+ 	}
+ }
+ 
+ static void
+ in_processEvent(struct inotify_event *ev)
+ {
+-	wd_map_t *etry;
+-	lstn_t *pLstn;
+-	int iRet;
+-	int ftIdx;
+-	int wd;
+-
+ 	if(ev->mask & IN_IGNORED) {
+-		goto done;
+-	} else if(ev->mask & IN_MOVED_FROM) {
+-		/* Find wd entry and remove it */
+-		etry =  wdmapLookup(ev->wd);
+-		if(etry != NULL) {
+-			ftIdx = fileTableSearchNoWildcard(&dirs[etry->dirIdx].active, (uchar*)ev->name);
+-			DBGPRINTF("imfile: IN_MOVED_FROM Event (ftIdx=%d, name=%s)\n", ftIdx, ev->name);
+-			if(ftIdx >= 0) {
+-				/* Find listener and wd table index*/
+-				pLstn = dirs[etry->dirIdx].active.listeners[ftIdx].pLstn;
+-				wd = wdmapLookupListner(pLstn);
+-
+-				/* Remove file from inotify watch */
+-				iRet = inotify_rm_watch(ino_fd, wd); /* Note this will TRIGGER IN_IGNORED Event! */
+-				if (iRet != 0) {
+-					DBGPRINTF("imfile: inotify_rm_watch error %d (ftIdx=%d, wd=%d, name=%s)\n", errno, ftIdx, wd, ev->name);
+-				} else {
+-					DBGPRINTF("imfile: inotify_rm_watch successfully removed file from watch (ftIdx=%d, wd=%d, name=%s)\n", ftIdx, wd, ev->name);
+-				}
+-				in_removeFile(etry->dirIdx, pLstn);
+-				DBGPRINTF("imfile: IN_MOVED_FROM Event file removed file (wd=%d, name=%s)\n", wd, ev->name);
+-			}
+-		}
++		DBGPRINTF("imfile: got IN_IGNORED event\n");
+ 		goto done;
+ 	}
+-	etry =  wdmapLookup(ev->wd);
++
++	DBGPRINTF("in_processEvent process Event %x for %s\n", ev->mask, ev->name);
++	const wd_map_t *const etry =  wdmapLookup(ev->wd);
+ 	if(etry == NULL) {
+-		DBGPRINTF("imfile: could not lookup wd %d\n", ev->wd);
++		LogMsg(0, RS_RET_INTERNAL_ERROR, LOG_WARNING, "imfile: internal error? "
++			"inotify provided watch descriptor %d which we could not find "
++			"in our tables - ignored", ev->wd);
+ 		goto done;
+ 	}
+-	if(etry->pLstn == NULL) { /* directory? */
+-		in_handleDirEvent(ev, etry->dirIdx);
++	DBGPRINTF("in_processEvent process Event %x is_file %d, act->name '%s'\n",
++		ev->mask, etry->act->edge->is_file, etry->act->name);
++
++	if((ev->mask & IN_MOVED_FROM)) {
++		flag_in_move(etry->act->edge->node->edges, ev->name);
++	}
++	if(ev->mask & (IN_MOVED_FROM | IN_MOVED_TO))  {
++		fs_node_walk(etry->act->edge->node, poll_tree);
++	} else if(etry->act->edge->is_file && !(etry->act->is_symlink)) {
++		in_handleFileEvent(ev, etry); // essentially poll_file()!
+ 	} else {
+-		in_handleFileEvent(ev, etry);
++		fs_node_walk(etry->act->edge->node, poll_tree);
+ 	}
+ done:	return;
+ }
+ 
+-static void
+-in_do_timeout_processing(void)
+-{
+-	int i;
+-	DBGPRINTF("imfile: readTimeouts are configured, checking if some apply\n");
+-
+-	for(i = 0 ; i < nWdmap ; ++i) {
+-		dbgprintf("imfile: wdmap %d, plstn %p\n", i, wdmap[i].pLstn);
+-		lstn_t *const pLstn = wdmap[i].pLstn;
+-		if(pLstn != NULL && strmReadMultiLine_isTimedOut(pLstn->pStrm)) {
+-			dbgprintf("imfile: wdmap %d, timeout occured\n", i);
+-			pollFile(pLstn, NULL);
+-		}
+-	}
+-
+-}
+-
+ 
+ /* Monitor files in inotify mode */
+ #if !defined(_AIX)
+@@ -1940,14 +2062,16 @@ do_inotify(void)
+ 	DEFiRet;
+ 
+ 	CHKiRet(wdmapInit());
+-	CHKiRet(dirsInit());
+ 	ino_fd = inotify_init();
+-        if(ino_fd < 0) {
+-            errmsg.LogError(1, RS_RET_INOTIFY_INIT_FAILED, "imfile: Init inotify instance failed ");
+-            return RS_RET_INOTIFY_INIT_FAILED;
+-        }
+-	DBGPRINTF("imfile: inotify fd %d\n", ino_fd);
+-	in_setupInitialWatches();
++	if(ino_fd < 0) {
++		LogError(errno, RS_RET_INOTIFY_INIT_FAILED, "imfile: initializing inotify "
++			"instance failed");
++		return RS_RET_INOTIFY_INIT_FAILED;
++	}
++	DBGPRINTF("inotify fd %d\n", ino_fd);
++
++	/* do watch initialization */
++	fs_node_walk(runModConf->conf_tree, poll_tree);
+ 
+ 	while(glbl.GetGlobalInputTermState() == 0) {
+ 		if(runModConf->haveReadTimeouts) {
+@@ -1959,7 +2083,8 @@ do_inotify(void)
+ 				r = poll(&pollfd, 1, runModConf->timeoutGranularity);
+ 			} while(r  == -1 && errno == EINTR);
+ 			if(r == 0) {
+-				in_do_timeout_processing();
++				DBGPRINTF("readTimeouts are configured, checking if some apply\n");
++				fs_node_walk(runModConf->conf_tree, poll_timeouts);
+ 				continue;
+ 			} else if (r == -1) {
+ 				char errStr[1024];
+@@ -2035,49 +2160,96 @@ CODESTARTwillRun
+ 	CHKiRet(prop.Construct(&pInputName));
+ 	CHKiRet(prop.SetString(pInputName, UCHAR_CONSTANT("imfile"), sizeof("imfile") - 1));
+ 	CHKiRet(prop.ConstructFinalize(pInputName));
+-
+ finalize_it:
+ ENDwillRun
+ 
++// TODO: refactor this into a generically-usable "atomic file creation" utility for
++// all kinds of "state files"
++static rsRetVal
++atomicWriteStateFile(const char *fn, const char *content)
++{
++	DEFiRet;
++	const int fd = open(fn, O_CLOEXEC | O_NOCTTY | O_WRONLY | O_CREAT | O_TRUNC, 0600);
++	if(fd < 0) {
++		LogError(errno, RS_RET_IO_ERROR, "imfile: cannot open state file '%s' for "
++			"persisting file state - some data will probably be duplicated "
++			"on next startup", fn);
++		ABORT_FINALIZE(RS_RET_IO_ERROR);
++	}
++
++	const size_t toWrite = strlen(content);
++	const ssize_t w = write(fd, content, toWrite);
++	if(w != (ssize_t) toWrite) {
++		LogError(errno, RS_RET_IO_ERROR, "imfile: partial write to state file '%s' - "
++			"this may cause trouble in the future. We will try to delete the "
++			"state file, as this provides the most consistent state", fn);
++		unlink(fn);
++		ABORT_FINALIZE(RS_RET_IO_ERROR);
++	}
++
++finalize_it:
++	if(fd >= 0) {
++		close(fd);
++	}
++	RETiRet;
++}
++
++
+ /* This function persists information for a specific file being monitored.
+  * To do so, it simply persists the stream object. We do NOT abort on error
+  * iRet as that makes matters worse (at least we can try persisting the others...).
+  * rgerhards, 2008-02-13
+  */
+ static rsRetVal
+-persistStrmState(lstn_t *pLstn)
++persistStrmState(act_obj_t *const act)
+ {
+ 	DEFiRet;
+-	strm_t *psSF = NULL; /* state file (stream) */
+-	size_t lenDir;
+ 	uchar statefile[MAXFNAME];
++	uchar statefname[MAXFNAME];
++
++	uchar *const statefn = getStateFileName(act, statefile, sizeof(statefile));
++	getFullStateFileName(statefn, statefname, sizeof(statefname));
++	DBGPRINTF("persisting state for '%s', state file '%s'\n", act->name, statefname);
++
++	struct json_object *jval = NULL;
++	struct json_object *json = NULL;
++	CHKmalloc(json = json_object_new_object());
++	jval = json_object_new_string((char*) act->name);
++	json_object_object_add(json, "filename", jval);
++	jval = json_object_new_int(strmGetPrevWasNL(act->pStrm));
++	json_object_object_add(json, "prev_was_nl", jval);
++
++	/* we access some data items in a somewhat dirty way, as we need to refactor
++	 * the whole thing in any case - TODO
++	 */
++	jval = json_object_new_int64(act->pStrm->iCurrOffs);
++	json_object_object_add(json, "curr_offs", jval);
++	jval = json_object_new_int64(act->pStrm->strtOffs);
++	json_object_object_add(json, "strt_offs", jval);
+ 
+-	uchar *const statefn = getStateFileName(pLstn, statefile, sizeof(statefile));
+-	DBGPRINTF("imfile: persisting state for '%s' to file '%s'\n",
+-		  pLstn->pszFileName, statefn);
+-	CHKiRet(strm.Construct(&psSF));
+-	lenDir = ustrlen(glbl.GetWorkDir());
+-	if(lenDir > 0)
+-		CHKiRet(strm.SetDir(psSF, glbl.GetWorkDir(), lenDir));
+-	CHKiRet(strm.SettOperationsMode(psSF, STREAMMODE_WRITE_TRUNC));
+-	CHKiRet(strm.SetsType(psSF, STREAMTYPE_FILE_SINGLE));
+-	CHKiRet(strm.SetFName(psSF, statefn, strlen((char*) statefn)));
+-	CHKiRet(strm.ConstructFinalize(psSF));
++	const uchar *const prevLineSegment = strmGetPrevLineSegment(act->pStrm);
++	if(prevLineSegment != NULL) {
++		jval = json_object_new_string((const char*) prevLineSegment);
++		json_object_object_add(json, "prev_line_segment", jval);
++	}
+ 
+-	CHKiRet(strm.Serialize(pLstn->pStrm, psSF));
+-	CHKiRet(strm.Flush(psSF));
++	const uchar *const prevMsgSegment = strmGetPrevMsgSegment(act->pStrm);
++	if(prevMsgSegment != NULL) {
++		jval = json_object_new_string((const char*) prevMsgSegment);
++		json_object_object_add(json, "prev_msg_segment", jval);
++	}
+ 
+-	CHKiRet(strm.Destruct(&psSF));
++	const char *jstr = json_object_to_json_string_ext(json, JSON_C_TO_STRING_SPACED);
+ 
+-finalize_it:
+-	if(psSF != NULL)
+-		strm.Destruct(&psSF);
++	CHKiRet(atomicWriteStateFile((const char*)statefname, jstr));
++	json_object_put(json);
+ 
++finalize_it:
+ 	if(iRet != RS_RET_OK) {
+ 		errmsg.LogError(0, iRet, "imfile: could not persist state "
+ 				"file %s - data may be repeated on next "
+ 				"startup. Is WorkDirectory set?",
+-				statefn);
++				statefname);
+ 	}
+ 
+ 	RETiRet;
+@@ -2089,11 +2261,6 @@ finalize_it:
+  */
+ BEGINafterRun
+ CODESTARTafterRun
+-	while(runModConf->pRootLstn != NULL) {
+-		/* Note: lstnDel() reasociates root! */
+-		lstnDel(runModConf->pRootLstn);
+-	}
+-
+ 	if(pInputName != NULL)
+ 		prop.Destruct(&pInputName);
+ ENDafterRun
+@@ -2118,12 +2285,6 @@ CODESTARTmodExit
+ 	objRelease(prop, CORE_COMPONENT);
+ 	objRelease(ruleset, CORE_COMPONENT);
+ #ifdef HAVE_INOTIFY_INIT
+-	/* we use these vars only in inotify mode */
+-	if(dirs != NULL) {
+-		free(dirs->active.listeners);
+-		free(dirs->configured.listeners);
+-		free(dirs);
+-	}
+ 	free(wdmap);
+ #endif
+ ENDmodExit
+diff --git a/runtime/msg.c b/runtime/msg.c
+index a885d2368bbaeea90a6e92dc0d569d169b1dd2e5..f45d6175283097974023905fc072508a18a8270a 100644
+--- a/runtime/msg.c
++++ b/runtime/msg.c
+@@ -4890,6 +4890,28 @@ finalize_it:
+ 	RETiRet;
+ }
+ 
++rsRetVal
++msgAddMultiMetadata(smsg_t *const __restrict__ pMsg,
++	       const uchar ** __restrict__ metaname,
++	       const uchar ** __restrict__ metaval,
++	       const int count)
++{
++	DEFiRet;
++	int i = 0;
++	struct json_object *const json = json_object_new_object();
++	CHKmalloc(json);
++	for ( i = 0 ; i < count ; i++ ) {
++		struct json_object *const jval = json_object_new_string((char*)metaval[i]);
++		if(jval == NULL) {
++			json_object_put(json);
++			ABORT_FINALIZE(RS_RET_OUT_OF_MEMORY);
++ 		}
++		json_object_object_add(json, (const char *const)metaname[i], jval);
++	}
++	iRet = msgAddJSON(pMsg, (uchar*)"!metadata", json, 0, 0);
++finalize_it:
++	RETiRet;
++}
+ 
+ static struct json_object *
+ jsonDeepCopy(struct json_object *src)
+diff --git a/runtime/msg.h b/runtime/msg.h
+index 6521e19b28b013f0d06e357bdb0f33a94dab638b..0e92da43398156f4871b2e567a242cb089f67a08 100644
+--- a/runtime/msg.h
++++ b/runtime/msg.h
+@@ -195,6 +195,7 @@ int getPRIi(const smsg_t * const pM);
+ void getRawMsg(smsg_t *pM, uchar **pBuf, int *piLen);
+ rsRetVal msgAddJSON(smsg_t *pM, uchar *name, struct json_object *json, int force_reset, int sharedReference);
+ rsRetVal msgAddMetadata(smsg_t *msg, uchar *metaname, uchar *metaval);
++rsRetVal msgAddMultiMetadata(smsg_t *msg, const uchar **metaname, const uchar **metaval, const int count);
+ rsRetVal MsgGetSeverity(smsg_t *pThis, int *piSeverity);
+ rsRetVal MsgDeserialize(smsg_t *pMsg, strm_t *pStrm);
+ rsRetVal MsgSetPropsViaJSON(smsg_t *__restrict__ const pMsg, const uchar *__restrict__ const json);
+diff --git a/runtime/stream.c b/runtime/stream.c
+index 701144c0e39d6fbcf9dd63fe60421e1dcd6f01c6..fb1ff11d1890bbaee107658dd3568c2bc67c223d 100644
+--- a/runtime/stream.c
++++ b/runtime/stream.c
+@@ -91,6 +91,41 @@ static rsRetVal strmSeekCurrOffs(strm_t *pThis);
+ 
+ /* methods */
+ 
++/* note: this may return NULL if no line segment is currently set */
++// TODO: due to the cstrFinalize() this is not totally clean, albeit for our
++// current use case it does not hurt -- refactor! rgerhards, 2018-03-27
++const uchar *
++strmGetPrevLineSegment(strm_t *const pThis)
++{
++	const uchar *ret = NULL;
++	if(pThis->prevLineSegment != NULL) {
++		cstrFinalize(pThis->prevLineSegment);
++		ret = rsCStrGetSzStrNoNULL(pThis->prevLineSegment);
++	}
++	return ret;
++}
++/* note: this may return NULL if no msg segment is currently set */
++// TODO: due to the cstrFinalize() this is not totally clean, albeit for our
++// current use case it does not hurt -- refactor! rgerhards, 2018-03-27
++const uchar *
++strmGetPrevMsgSegment(strm_t *const pThis)
++{
++	const uchar *ret = NULL;
++	if(pThis->prevMsgSegment != NULL) {
++		cstrFinalize(pThis->prevMsgSegment);
++		ret = rsCStrGetSzStrNoNULL(pThis->prevMsgSegment);
++	}
++	return ret;
++}
++
++
++int
++strmGetPrevWasNL(const strm_t *const pThis)
++{
++	return pThis->bPrevWasNL;
++}
++
++
+ /* output (current) file name for debug log purposes. Falls back to various
+  * levels of impreciseness if more precise name is not known.
+  */
+@@ -242,17 +277,18 @@ doPhysOpen(strm_t *pThis)
+ 	}
+ 
+ 	pThis->fd = open((char*)pThis->pszCurrFName, iFlags | O_LARGEFILE, pThis->tOpenMode);
++	const int errno_save = errno; /* dbgprintf can mangle it! */
+ 	DBGPRINTF("file '%s' opened as #%d with mode %d\n", pThis->pszCurrFName,
+ 		  pThis->fd, (int) pThis->tOpenMode);
+ 	if(pThis->fd == -1) {
+-		char errStr[1024];
+-		int err = errno;
+-		rs_strerror_r(err, errStr, sizeof(errStr));
+-		DBGOPRINT((obj_t*) pThis, "open error %d, file '%s': %s\n", errno, pThis->pszCurrFName, errStr);
+-		if(err == ENOENT)
+-			ABORT_FINALIZE(RS_RET_FILE_NOT_FOUND);
+-		else
+-			ABORT_FINALIZE(RS_RET_FILE_OPEN_ERROR);
++		const rsRetVal errcode = (errno_save == ENOENT)
++			? RS_RET_FILE_NOT_FOUND : RS_RET_FILE_OPEN_ERROR;
++		if(pThis->fileNotFoundError) {
++			LogError(errno_save, errcode, "file '%s': open error", pThis->pszCurrFName);
++		} else {
++			DBGPRINTF("file '%s': open error", pThis->pszCurrFName);
++		}
++		ABORT_FINALIZE(errcode);
+ 	}
+ 
+ 	if(pThis->tOperationsMode == STREAMMODE_READ) {
+@@ -344,6 +380,8 @@ static rsRetVal strmOpenFile(strm_t *pThis)
+ 
+ 	if(pThis->fd != -1)
+ 		ABORT_FINALIZE(RS_RET_OK);
++
++	free(pThis->pszCurrFName);
+ 	pThis->pszCurrFName = NULL; /* used to prevent mem leak in case of error */
+ 
+ 	if(pThis->pszFName == NULL)
+@@ -733,11 +771,11 @@ static rsRetVal strmUnreadChar(strm_t *pThis, uchar c)
+  * a line, but following lines that are indented are part of the same log entry
+  */
+ static rsRetVal
+-strmReadLine(strm_t *pThis, cstr_t **ppCStr, uint8_t mode, sbool bEscapeLF, uint32_t trimLineOverBytes)
++strmReadLine(strm_t *pThis, cstr_t **ppCStr, uint8_t mode, sbool bEscapeLF,
++	uint32_t trimLineOverBytes, int64 *const strtOffs)
+ {
+         uchar c;
+ 	uchar finished;
+-	rsRetVal readCharRet;
+         DEFiRet;
+ 
+         ASSERT(pThis != NULL);
+@@ -756,12 +794,7 @@ strmReadLine(strm_t *pThis, cstr_t **ppCStr, uint8_t mode, sbool bEscapeLF, uint
+         if(mode == 0) {
+ 		while(c != '\n') {
+ 			CHKiRet(cstrAppendChar(*ppCStr, c));
+-			readCharRet = strmReadChar(pThis, &c);
+-			if((readCharRet == RS_RET_TIMED_OUT) ||
+-			   (readCharRet == RS_RET_EOF) ) { /* end reached without \n? */
+-				CHKiRet(rsCStrConstructFromCStr(&pThis->prevLineSegment, *ppCStr));
+-                	}
+-                	CHKiRet(readCharRet);
++			CHKiRet(strmReadChar(pThis, &c));
+         	}
+ 		if (trimLineOverBytes > 0 && (uint32_t) cstrLen(*ppCStr) > trimLineOverBytes) {
+ 			/* Truncate long line at trimLineOverBytes position */
+@@ -850,12 +883,19 @@ strmReadLine(strm_t *pThis, cstr_t **ppCStr, uint8_t mode, sbool bEscapeLF, uint
+ 	}
+ 
+ finalize_it:
+-        if(iRet != RS_RET_OK && *ppCStr != NULL) {
+-		if(cstrLen(*ppCStr) > 0) {
+-		/* we may have an empty string in an unsuccsfull poll or after restart! */
+-			rsCStrConstructFromCStr(&pThis->prevLineSegment, *ppCStr);
++        if(iRet == RS_RET_OK) {
++		if(strtOffs != NULL) {
++			*strtOffs = pThis->strtOffs;
++		}
++		pThis->strtOffs = pThis->iCurrOffs; /* we are at begin of next line */
++	} else {
++		if(*ppCStr != NULL) {
++			if(cstrLen(*ppCStr) > 0) {
++			/* we may have an empty string in an unsuccessful poll or after restart! */
++				rsCStrConstructFromCStr(&pThis->prevLineSegment, *ppCStr);
++			}
++			cstrDestruct(ppCStr);
+ 		}
+-                cstrDestruct(ppCStr);
+ 	}
+ 
+         RETiRet;
+@@ -882,7 +922,8 @@ strmReadMultiLine_isTimedOut(const strm_t *const __restrict__ pThis)
+  * added 2015-05-12 rgerhards
+  */
+ rsRetVal
+-strmReadMultiLine(strm_t *pThis, cstr_t **ppCStr, regex_t *preg, const sbool bEscapeLF)
++strmReadMultiLine(strm_t *pThis, cstr_t **ppCStr, regex_t *preg, const sbool bEscapeLF,
++	int64 *const strtOffs)
+ {
+         uchar c;
+ 	uchar finished = 0;
+@@ -946,16 +987,24 @@ strmReadMultiLine(strm_t *pThis, cstr_t **ppCStr, regex_t *preg, const sbool bEs
+ 	} while(finished == 0);
+ 
+ finalize_it:
+-	if(   pThis->readTimeout
+-	   && (iRet != RS_RET_OK)
+-	   && (pThis->prevMsgSegment != NULL)
+-	   && (tCurr > pThis->lastRead + pThis->readTimeout)) {
+-		CHKiRet(rsCStrConstructFromCStr(ppCStr, pThis->prevMsgSegment));
+-		cstrDestruct(&pThis->prevMsgSegment);
+-		pThis->lastRead = tCurr;
+-		dbgprintf("stream: generated msg based on timeout: %s\n", cstrGetSzStrNoNULL(*ppCStr));
+-			FINALIZE;
+-		iRet = RS_RET_OK;
++	*strtOffs = pThis->strtOffs;
++	if(thisLine != NULL) {
++		cstrDestruct(&thisLine);
++	}
++	if(iRet == RS_RET_OK) {
++		pThis->strtOffs = pThis->iCurrOffs; /* we are at begin of next line */
++	} else {
++		if(   pThis->readTimeout
++		   && (pThis->prevMsgSegment != NULL)
++		   && (tCurr > pThis->lastRead + pThis->readTimeout)) {
++			CHKiRet(rsCStrConstructFromCStr(ppCStr, pThis->prevMsgSegment));
++			cstrDestruct(&pThis->prevMsgSegment);
++			pThis->lastRead = tCurr;
++			pThis->strtOffs = pThis->iCurrOffs; /* we are at begin of next line */
++			dbgprintf("stream: generated msg based on timeout: %s\n", cstrGetSzStrNoNULL(*ppCStr));
++			iRet = RS_RET_OK;
++			FINALIZE;
++		}
+ 	}
+         RETiRet;
+ }
+@@ -974,7 +1023,10 @@ BEGINobjConstruct(strm) /* be sure to specify the object type also in END macro!
+ 	pThis->pszSizeLimitCmd = NULL;
+ 	pThis->prevLineSegment = NULL;
+ 	pThis->prevMsgSegment = NULL;
++	pThis->strtOffs = 0;
++	pThis->ignoringMsg = 0;
+ 	pThis->bPrevWasNL = 0;
++	pThis->fileNotFoundError = 1;
+ ENDobjConstruct(strm)
+ 
+ 
+@@ -1686,7 +1738,7 @@ static rsRetVal strmSeek(strm_t *pThis, off64_t offs)
+ 		DBGPRINTF("strmSeek: error %lld seeking to offset %lld\n", i, (long long) offs);
+ 		ABORT_FINALIZE(RS_RET_IO_ERROR);
+ 	}
+-	pThis->iCurrOffs = offs; /* we are now at *this* offset */
++	pThis->strtOffs = pThis->iCurrOffs = offs; /* we are now at *this* offset */
+ 	pThis->iBufPtr = 0; /* buffer invalidated */
+ 
+ finalize_it:
+@@ -1738,7 +1790,7 @@ strmMultiFileSeek(strm_t *pThis, unsigned int FNum, off64_t offs, off64_t *bytes
+ 	} else {
+ 		*bytesDel = 0;
+ 	}
+-	pThis->iCurrOffs = offs;
++	pThis->strtOffs = pThis->iCurrOffs = offs;
+ 
+ finalize_it:
+ 	RETiRet;
+@@ -1763,7 +1815,7 @@ static rsRetVal strmSeekCurrOffs(strm_t *pThis)
+ 
+ 	/* As the cryprov may use CBC or similiar things, we need to read skip data */
+ 	targetOffs = pThis->iCurrOffs;
+-	pThis->iCurrOffs = 0;
++	pThis->strtOffs = pThis->iCurrOffs = 0;
+ 	DBGOPRINT((obj_t*) pThis, "encrypted, doing skip read of %lld bytes\n",
+ 		(long long) targetOffs);
+ 	while(targetOffs != pThis->iCurrOffs) {
+@@ -1935,6 +1987,12 @@ static rsRetVal strmSetiMaxFiles(strm_t *pThis, int iNewVal)
+ 	return RS_RET_OK;
+ }
+ 
++static rsRetVal strmSetFileNotFoundError(strm_t *pThis, int pFileNotFoundError)
++{
++	pThis->fileNotFoundError = pFileNotFoundError;
++	return RS_RET_OK;
++}
++
+ 
+ /* set the stream's file prefix
+  * The passed-in string is duplicated. So if the caller does not need
+@@ -2076,6 +2134,9 @@ static rsRetVal strmSerialize(strm_t *pThis, strm_t *pStrm)
+ 	l = pThis->inode;
+ 	objSerializeSCALAR_VAR(pStrm, inode, INT64, l);
+ 
++	l = pThis->strtOffs;
++	objSerializeSCALAR_VAR(pStrm, strtOffs, INT64, l);
++
+ 	if(pThis->prevLineSegment != NULL) {
+ 		cstrFinalize(pThis->prevLineSegment);
+ 		objSerializePTR(pStrm, prevLineSegment, CSTR);
+@@ -2188,8 +2249,12 @@ static rsRetVal strmSetProperty(strm_t *pThis, var_t *pProp)
+ 		pThis->iCurrOffs = pProp->val.num;
+  	} else if(isProp("inode")) {
+ 		pThis->inode = (ino_t) pProp->val.num;
++ 	} else if(isProp("strtOffs")) {
++		pThis->strtOffs = pProp->val.num;
+  	} else if(isProp("iMaxFileSize")) {
+ 		CHKiRet(strmSetiMaxFileSize(pThis, pProp->val.num));
++ 	} else if(isProp("fileNotFoundError")) {
++		CHKiRet(strmSetFileNotFoundError(pThis, pProp->val.num));
+  	} else if(isProp("iMaxFiles")) {
+ 		CHKiRet(strmSetiMaxFiles(pThis, pProp->val.num));
+  	} else if(isProp("iFileNumDigits")) {
+@@ -2253,6 +2318,7 @@ CODESTARTobjQueryInterface(strm)
+ 	pIf->WriteChar = strmWriteChar;
+ 	pIf->WriteLong = strmWriteLong;
+ 	pIf->SetFName = strmSetFName;
++	pIf->SetFileNotFoundError = strmSetFileNotFoundError;
+ 	pIf->SetDir = strmSetDir;
+ 	pIf->Flush = strmFlush;
+ 	pIf->RecordBegin = strmRecordBegin;
+diff --git a/runtime/stream.h b/runtime/stream.h
+index 1eee34979db34620b82e6351111864645187b035..bcb81a14f60f9effa52fffa42d18d66c484ae86d 100644
+--- a/runtime/stream.h
++++ b/runtime/stream.h
+@@ -159,6 +159,10 @@ typedef struct strm_s {
+ 	sbool	bIsTTY;		/* is this a tty file? */
+ 	cstr_t *prevLineSegment; /* for ReadLine, previous, unprocessed part of file */
+ 	cstr_t *prevMsgSegment; /* for ReadMultiLine, previous, yet unprocessed part of msg */
++	int64 strtOffs;		/* start offset in file for current line/msg */
++	int fileNotFoundError;
++	int noRepeatedErrorOutput; /* if a file is missing the Error is only given once */
++	int ignoringMsg;
+ } strm_t;
+ 
+ 
+@@ -174,6 +178,7 @@ BEGINinterface(strm) /* name must also be changed in ENDinterface macro! */
+ 	rsRetVal (*Write)(strm_t *const pThis, const uchar *const pBuf, size_t lenBuf);
+ 	rsRetVal (*WriteChar)(strm_t *pThis, uchar c);
+ 	rsRetVal (*WriteLong)(strm_t *pThis, long i);
++	rsRetVal (*SetFileNotFoundError)(strm_t *pThis, int pFileNotFoundError);
+ 	rsRetVal (*SetFName)(strm_t *pThis, uchar *pszPrefix, size_t iLenPrefix);
+ 	rsRetVal (*SetDir)(strm_t *pThis, uchar *pszDir, size_t iLenDir);
+ 	rsRetVal (*Flush)(strm_t *pThis);
+@@ -198,7 +203,8 @@ BEGINinterface(strm) /* name must also be changed in ENDinterface macro! */
+ 	INTERFACEpropSetMeth(strm, iFlushInterval, int);
+ 	INTERFACEpropSetMeth(strm, pszSizeLimitCmd, uchar*);
+ 	/* v6 added */
+-	rsRetVal (*ReadLine)(strm_t *pThis, cstr_t **ppCStr, uint8_t mode, sbool bEscapeLF, uint32_t trimLineOverBytes);
++	rsRetVal (*ReadLine)(strm_t *pThis, cstr_t **ppCStr, uint8_t mode, sbool bEscapeLF,
++		uint32_t trimLineOverBytes, int64 *const strtOffs);
+ 	/* v7 added  2012-09-14 */
+ 	INTERFACEpropSetMeth(strm, bVeryReliableZip, int);
+ 	/* v8 added  2013-03-21 */
+@@ -207,19 +213,24 @@ BEGINinterface(strm) /* name must also be changed in ENDinterface macro! */
+ 	INTERFACEpropSetMeth(strm, cryprov, cryprov_if_t*);
+ 	INTERFACEpropSetMeth(strm, cryprovData, void*);
+ ENDinterface(strm)
+-#define strmCURR_IF_VERSION 12 /* increment whenever you change the interface structure! */
++#define strmCURR_IF_VERSION 13 /* increment whenever you change the interface structure! */
+ /* V10, 2013-09-10: added new parameter bEscapeLF, changed mode to uint8_t (rgerhards) */
+ /* V11, 2015-12-03: added new parameter bReopenOnTruncate */
+ /* V12, 2015-12-11: added new parameter trimLineOverBytes, changed mode to uint32_t */
++/* V13, 2017-09-06: added new parameter strtoffs to ReadLine() */
+ 
+ #define strmGetCurrFileNum(pStrm) ((pStrm)->iCurrFNum)
+ 
+ /* prototypes */
+ PROTOTYPEObjClassInit(strm);
+ rsRetVal strmMultiFileSeek(strm_t *pThis, unsigned int fileNum, off64_t offs, off64_t *bytesDel);
+-rsRetVal strmReadMultiLine(strm_t *pThis, cstr_t **ppCStr, regex_t *preg, sbool bEscapeLF);
++rsRetVal strmReadMultiLine(strm_t *pThis, cstr_t **ppCStr, regex_t *preg,
++	sbool bEscapeLF, int64 *const strtOffs);
+ int strmReadMultiLine_isTimedOut(const strm_t *const __restrict__ pThis);
+ void strmDebugOutBuf(const strm_t *const pThis);
+ void strmSetReadTimeout(strm_t *const __restrict__ pThis, const int val);
++const uchar * strmGetPrevLineSegment(strm_t *const pThis);
++const uchar * strmGetPrevMsgSegment(strm_t *const pThis);
++int strmGetPrevWasNL(const strm_t *const pThis);
+ 
+ #endif /* #ifndef STREAM_H_INCLUDED */
+
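The V13 interface change above threads an `int64 *strtOffs` out-parameter through `ReadLine()`/`strmReadMultiLine()` so callers (e.g. imfile state handling) can learn the file offset at which each returned line began. A minimal standalone sketch of that bookkeeping over an in-memory buffer (hypothetical function name, plain C instead of rsyslog's stream types):

```c
#include <stdint.h>
#include <string.h>

/* Read the next '\n'-terminated line from buf starting at *offs.
 * On success, *strtOffs receives the offset where the line began,
 * and *offs is advanced past the newline -- mirroring how the
 * patched ReadLine() reports the start offset of each message. */
static int read_line(const char *buf, size_t len, int64_t *offs,
                     int64_t *strtOffs, char *out, size_t outsz)
{
    if ((size_t)*offs >= len)
        return -1;                 /* end of data */
    *strtOffs = *offs;             /* remember where this line starts */
    const char *nl = memchr(buf + *offs, '\n', len - *offs);
    size_t n = nl ? (size_t)(nl - (buf + *offs)) : len - *offs;
    if (n >= outsz)
        n = outsz - 1;
    memcpy(out, buf + *offs, n);
    out[n] = '\0';
    *offs = nl ? (int64_t)(nl - buf) + 1 : (int64_t)len;
    return 0;
}
```

Persisting `strtOffs` rather than the current read position lets a restarted reader re-emit a partially processed line instead of silently skipping it.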
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1538372-imjournal-duplicates.patch b/SOURCES/rsyslog-8.24.0-rhbz1538372-imjournal-duplicates.patch
new file mode 100644
index 0000000..1788d6c
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1538372-imjournal-duplicates.patch
@@ -0,0 +1,370 @@
+From: Jiri Vymazal <jvymazal@redhat.com>
+Date: Wed, 25 Jul 2018 15:05:01 -0500
+
+Modification and merge of the patches below for RHEL consumption;
+also modified the journal invalidate/rotation handling to keep the
+possibility of continuing after a switch of the persistent journal.
+original:
+%From 3bede5ba768975c8b6fe3d1f3e11075910f52fdd Mon Sep 17 00:00:00 2001
+%From: Jiri Vymazal <jvymazal@redhat.com>
+%Date: Wed, 7 Mar 2018 11:57:29 +0100
+%Subject: [PATCH] Fetching cursor on readJournal() and simplified pollJournal()
+%
+%Fetching the journal cursor in persistJournalState() could cause us to save
+%an invalid cursor, leading to duplicated messages further on; now we are
+%saving it on each readJournal(), where we know that the state is good.
+%This results in simplifying persistJournalState() a bit as well.
+%
+%pollJournal() is now cleaner and faster, correctly handles the INVALIDATE
+%status from journald, and is able to continue polling after a journal
+%flush. Also reduced POLL_TIMEOUT a bit, as it caused rsyslog to exit
+%with an error in corner cases on some ppc machines when left at a full second.
+plus
+%
+%From a99f9b4b42d261c384aee09306fc421df2cca7a5 Mon Sep 17 00:00:00 2001
+%From: Peter Portante <peter.a.portante@gmail.com>
+%Date: Wed, 24 Jan 2018 19:34:41 -0500
+%Subject: [PATCH] Proposed fix for handling journal correctly
+%
+%The fix is to immediately setup the inotify file descriptor via
+%`sd_journal_get_fd()` right after a journal open, and then
+%periodically call `sd_journal_process()` to give the client API
+%library a chance to detect deleted journal files on disk that need to
+%be closed so they can be properly erased by the file system.
+%
+%We remove the open/close dance and simplify that code as a result.
+%
+%Fixes issue #2436.
+and also:
+%From 27f96c84d34ee000fbb5d45b00233f2ec3cf2d8a Mon Sep 17 00:00:00 2001
+%From: Rainer Gerhards <rgerhards@adiscon.com>
+%Date: Tue, 24 Oct 2017 16:14:13 +0200
+%Subject: [PATCH] imjournal bugfix: do not disable itself on error
+%
+%If some functions calls inside the main loop failed, imjournal exited
+%with an error code, actually disabling all logging from the journal.
+%This was probably never intended.
+%
+%This patch makes imjournal recover the situation instead.
+%
+%closes https://github.com/rsyslog/rsyslog/issues/1895
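The reworked persistJournalState() in this patch follows the classic write-temp-then-rename pattern, so the state file always holds either the old or the new cursor, never a torn write. A minimal sketch of that pattern in isolation (hypothetical names; rsyslog's error macros omitted):

```c
#include <stdio.h>

/* Persist `cursor` to `statefile` atomically: write a ".tmp"
 * sibling first, then rename() it over the real file.  rename(2)
 * within one filesystem is atomic, so readers never observe a
 * partially written state file. */
static int persist_cursor(const char *statefile, const char *cursor)
{
    char tmp[4096];
    snprintf(tmp, sizeof(tmp), "%s.tmp", statefile);

    FILE *sf = fopen(tmp, "wb");
    if (sf == NULL)
        return -1;
    if (fputs(cursor, sf) < 0) {
        fclose(sf);
        return -1;
    }
    if (fclose(sf) != 0)
        return -1;
    return rename(tmp, statefile) == 0 ? 0 : -1;
}
```

Checking the return value of fclose() matters here: buffered data is flushed at close time, so an I/O error can surface only then, and the rename must be skipped in that case.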
+---
+ plugins/imjournal/imjournal.c | 211 ++++++++++++++++++++++--------------------
+ 1 file changed, 110 insertions(+), 102 deletions(-)
+
+--- a/plugins/imjournal/imjournal.c
++++ b/plugins/imjournal/imjournal.c
+@@ -80,6 +80,7 @@ static struct configSettings_s {
+ 	int iDfltFacility;
+ 	int bUseJnlPID;
+	char *dfltTag;
++	int bWorkAroundJournalBug;
+ } cs;
+ 
+ static rsRetVal facilityHdlr(uchar **pp, void *pVal);
+@@ -95,6 +96,7 @@ static struct cnfparamdescr modpdescr[] = {
+ 	{ "defaultfacility", eCmdHdlrString, 0 },
+ 	{ "usepidfromsystem", eCmdHdlrBinary, 0 },
+	{ "defaulttag", eCmdHdlrGetWord, 0 },
++	{ "workaroundjournalbug", eCmdHdlrBinary, 0 }
+ };
+ static struct cnfparamblk modpblk =
+ 	{ CNFPARAMBLK_VERSION,
+@@ -114,6 +114,10 @@ /* module-global parameters */
+ static const char *pid_field_name;	/* read-only after startup */
+ static ratelimit_t *ratelimiter = NULL;
+ static sd_journal *j;
++static int j_inotify_fd;
++static char *last_cursor = NULL;
++
++#define J_PROCESS_PERIOD 1024  /* Call sd_journal_process() every 1,024 records */
+ 
+ static rsRetVal persistJournalState(void);
+ static rsRetVal loadJournalState(void);
+@@ -123,6 +127,14 @@ openJournal(sd_journal** jj)
+ 
+ 	if (sd_journal_open(jj, SD_JOURNAL_LOCAL_ONLY) < 0)
+ 		iRet = RS_RET_IO_ERROR;
++	int r;
++
++	if ((r = sd_journal_get_fd(j)) < 0) {
++		errmsg.LogError(-r, RS_RET_IO_ERROR, "imjournal: sd_journal_get_fd() failed");
++		iRet = RS_RET_IO_ERROR;
++	} else {
++		j_inotify_fd = r;
++	}	
+ 	RETiRet;
+ }
+ 
+@@ -132,6 +144,7 @@ closeJournal(sd_journal** jj)
+ 		persistJournalState();
+ 	}
+ 	sd_journal_close(*jj);
++	j_inotify_fd = 0;
+ }
+ 
+ 
+@@ -262,6 +275,7 @@ readjournal(void)
+ 	char *message = NULL;
+ 	char *sys_iden = NULL;
+ 	char *sys_iden_help = NULL;
++	char *c = NULL;
+ 
+ 	const void *get;
+ 	const void *pidget;
+@@ -433,6 +437,15 @@ readjournal(void)
+ 		tv.tv_usec = timestamp % 1000000;
+ 	}
+ 
++	if (cs.bWorkAroundJournalBug) {
++		/* save journal cursor (at this point we can be sure it is valid) */
++		sd_journal_get_cursor(j, &c);
++		if (c) {
++			free(last_cursor);
++			last_cursor = c;
++		}
++	}
++
+ 	/* submit message */
+ 	enqMsg((uchar *)message, (uchar *) sys_iden_help, facility, severity, &tv, json, 0);
+ 
+@@ -413,44 +433,49 @@ persistJournalState (void)
+ 	DEFiRet;
+ 	FILE *sf; /* state file */
+ 	char tmp_sf[MAXFNAME];
+-	char *cursor;
+ 	int ret = 0;
+ 
+-	/* On success, sd_journal_get_cursor()  returns 1 in systemd
+-	   197 or older and 0 in systemd 198 or newer */
+-	if ((ret = sd_journal_get_cursor(j, &cursor)) >= 0) {
+-               /* we create a temporary name by adding a ".tmp"
+-                * suffix to the end of our state file's name
+-                */
+-               snprintf(tmp_sf, sizeof(tmp_sf), "%s.tmp", cs.stateFile);
+-               if ((sf = fopen(tmp_sf, "wb")) != NULL) {
+-			if (fprintf(sf, "%s", cursor) < 0) {
+-				iRet = RS_RET_IO_ERROR;
+-			}
+-			fclose(sf);
+-			free(cursor);
+-                       /* change the name of the file to the configured one */
+-                       if (iRet == RS_RET_OK && rename(tmp_sf, cs.stateFile) == -1) {
+-                               char errStr[256];
+-                               rs_strerror_r(errno, errStr, sizeof(errStr));
+-                               iRet = RS_RET_IO_ERROR;
+-                               errmsg.LogError(0, iRet, "rename() failed: "
+-                                       "'%s', new path: '%s'\n", errStr, cs.stateFile);
+-                       }
++	if (cs.bWorkAroundJournalBug) {
++		if (!last_cursor)
++			ABORT_FINALIZE(RS_RET_OK);
+ 
+-		} else {
+-			char errStr[256];
+-			rs_strerror_r(errno, errStr, sizeof(errStr));
+-			errmsg.LogError(0, RS_RET_FOPEN_FAILURE, "fopen() failed: "
+-				"'%s', path: '%s'\n", errStr, tmp_sf);
+-			iRet = RS_RET_FOPEN_FAILURE;
+-		}
+-	} else {
++	} else if ((ret = sd_journal_get_cursor(j, &last_cursor)) < 0) {
+ 		char errStr[256];
+ 		rs_strerror_r(-(ret), errStr, sizeof(errStr));
+ 		errmsg.LogError(0, RS_RET_ERR, "sd_journal_get_cursor() failed: '%s'\n", errStr);
+-		iRet = RS_RET_ERR;
++		ABORT_FINALIZE(RS_RET_ERR);
+	}
++	/* we create a temporary name by adding a ".tmp"
++	 * suffix to the end of our state file's name
++	 */
++	snprintf(tmp_sf, sizeof(tmp_sf), "%s.tmp", cs.stateFile);
++
++	sf = fopen(tmp_sf, "wb");
++	if (!sf) {
++		errmsg.LogError(errno, RS_RET_FOPEN_FAILURE, "imjournal: fopen() failed for path: '%s'", tmp_sf);
++		ABORT_FINALIZE(RS_RET_FOPEN_FAILURE);
++	}
++
++	ret = fputs(last_cursor, sf);
++	if (ret < 0) {
++		errmsg.LogError(errno, RS_RET_IO_ERROR, "imjournal: failed to save cursor to: '%s'", tmp_sf);
++		ret = fclose(sf);
++		ABORT_FINALIZE(RS_RET_IO_ERROR);
++	}
++
++	ret = fclose(sf);
++	if (ret < 0) {
++		errmsg.LogError(errno, RS_RET_IO_ERROR, "imjournal: fclose() failed for path: '%s'", tmp_sf);
++		ABORT_FINALIZE(RS_RET_IO_ERROR);
++	}
++
++	ret = rename(tmp_sf, cs.stateFile);
++	if (ret < 0) {
++		errmsg.LogError(errno, iRet, "imjournal: rename() failed for new path: '%s'", cs.stateFile);
++		ABORT_FINALIZE(RS_RET_IO_ERROR);
++	}
++
++finalize_it:
+ 	RETiRet;
+ }
+ 
+@@ -473,64 +473,26 @@
+  * except for the special handling of EINTR.
+  */
+ 
+-#define POLL_TIMEOUT 1000 /* timeout for poll is 1s */
++#define POLL_TIMEOUT 900000 /* timeout for poll is 900ms */
+ 
+ static rsRetVal
+ pollJournal(void)
+ {
+ 	DEFiRet;
+-	struct pollfd pollfd;
+-	int pr = 0;
+-	int jr = 0;
+-
+-	pollfd.fd = sd_journal_get_fd(j);
+-	pollfd.events = sd_journal_get_events(j);
+-	pr = poll(&pollfd, 1, POLL_TIMEOUT);
+-	if (pr == -1) {
+-		if (errno == EINTR) {
+-			/* EINTR is also received during termination
+-			 * so return now to check the term state.
+-			 */
+-			ABORT_FINALIZE(RS_RET_OK);
+-		} else {
+-			char errStr[256];
+-
+-			rs_strerror_r(errno, errStr, sizeof(errStr));
+-			errmsg.LogError(0, RS_RET_ERR,
+-				"poll() failed: '%s'", errStr);
+-			ABORT_FINALIZE(RS_RET_ERR);
+-		}
+-	}
++	int r;
+ 
++	r = sd_journal_wait(j, POLL_TIMEOUT);
+ 
+-	jr = sd_journal_process(j);
+-	
+-	if (pr == 1 && jr == SD_JOURNAL_INVALIDATE) {
+-		/* do not persist stateFile sd_journal_get_cursor will fail! */
+-		char* tmp = cs.stateFile;
+-		cs.stateFile = NULL;
++	if (r == SD_JOURNAL_INVALIDATE) {
+ 		closeJournal(&j);
+-		cs.stateFile = tmp;
+ 
+ 		iRet = openJournal(&j);
+-		if (iRet != RS_RET_OK) {
+-			char errStr[256];
+-			rs_strerror_r(errno, errStr, sizeof(errStr));
+-			errmsg.LogError(0, RS_RET_IO_ERROR,
+-				"sd_journal_open() failed: '%s'", errStr);
++		if (iRet != RS_RET_OK)
+ 			ABORT_FINALIZE(RS_RET_ERR);
+-		}
+ 
+-		if(cs.stateFile != NULL){
++		if (cs.stateFile)
+ 			iRet = loadJournalState();
+-		}
+-		LogMsg(0, RS_RET_OK, LOG_NOTICE, "imjournal: journal reloaded...");
+-	} else if (jr < 0) {
+-		char errStr[256];
+-		rs_strerror_r(errno, errStr, sizeof(errStr));
+-		errmsg.LogError(0, RS_RET_ERR,
+-			"sd_journal_process() failed: '%s'", errStr);
+-		ABORT_FINALIZE(RS_RET_ERR);
++		errmsg.LogMsg(0, RS_RET_OK, LOG_NOTICE, "imjournal: journal reloaded...");
+ 	}
+ 
+ finalize_it:
+@@ -631,8 +612,17 @@ loadJournalState(void)
+ 	RETiRet;
+ }
+ 
++static void
++tryRecover(void) {
++	errmsg.LogMsg(0, RS_RET_OK, LOG_INFO, "imjournal: trying to recover from unexpected "
++		"journal error");
++	closeJournal(&j);
++	srSleep(10, 0);	// do not hammer machine with too-frequent retries
++	openJournal(&j);
++}
++
+ BEGINrunInput
+-	int count = 0;
++	uint64_t count = 0;
+ CODESTARTrunInput
+ 	CHKiRet(ratelimitNew(&ratelimiter, "imjournal", NULL));
+ 	dbgprintf("imjournal: ratelimiting burst %d, interval %d\n", cs.ratelimitBurst,
+@@ -665,26 +655,38 @@ CODESTARTrunInput
+ 
+ 		r = sd_journal_next(j);
+ 		if (r < 0) {
+-			char errStr[256];
+-
+-			rs_strerror_r(errno, errStr, sizeof(errStr));
+-			errmsg.LogError(0, RS_RET_ERR,
+-				"sd_journal_next() failed: '%s'", errStr);
+-			ABORT_FINALIZE(RS_RET_ERR);
++			tryRecover();
++			continue;
+ 		}
+ 
+ 		if (r == 0) {
+ 			/* No new messages, wait for activity. */
+-			CHKiRet(pollJournal());
++			if (pollJournal() != RS_RET_OK) {
++ 				tryRecover();
++ 			}
+ 			continue;
+ 		}
+ 
+-		CHKiRet(readjournal());
++		if (readjournal() != RS_RET_OK) {
++ 			tryRecover();
++ 			continue;
++ 		}
++
++		count++;
++
++		if ((count % J_PROCESS_PERIOD) == 0) {
++			/* Give the journal a periodic chance to detect rotated journal files to be cleaned up. */
++			r = sd_journal_process(j);
++			if (r < 0) {
++				errmsg.LogError(-r, RS_RET_ERR, "imjournal: sd_journal_process() failed");
++				tryRecover();
++				continue;
++			}
++		}
++
+ 		if (cs.stateFile) { /* can't persist without a state file */
+ 			/* TODO: This could use some finer metric. */
+-			count++;
+-			if (count == cs.iPersistStateInterval) {
+-				count = 0;
++			if ((count % cs.iPersistStateInterval) == 0) {
+ 				persistJournalState();
+ 			}
+ 		}
+@@ -901,6 +909,8 @@ CODESTARTsetModCnf
+ 			cs.bUseJnlPID = (int) pvals[i].val.d.n;
+		} else if (!strcmp(modpblk.descr[i].name, "defaulttag")) {
+			cs.dfltTag = (char *)es_str2cstr(pvals[i].val.d.estr, NULL);
++		} else if (!strcmp(modpblk.descr[i].name, "workaroundjournalbug")) {
++			cs.bWorkAroundJournalBug = (int) pvals[i].val.d.n;
+ 		} else {
+ 			dbgprintf("imjournal: program error, non-handled "
+ 				"param '%s' in beginCnfLoad\n", modpblk.descr[i].name);
+@@ -961,6 +971,8 @@ CODEmodInit_QueryRegCFSLineHdlr
+		NULL, &cs.bUseJnlPID, STD_LOADABLE_MODULE_ID));
+	CHKiRet(omsdRegCFSLineHdlr((uchar *)"imjournaldefaulttag", 0, eCmdHdlrGetWord,
+		NULL, &cs.dfltTag, STD_LOADABLE_MODULE_ID));
++	CHKiRet(omsdRegCFSLineHdlr((uchar *)"workaroundjournalbug", 0, eCmdHdlrBinary,
++		NULL, &cs.bWorkAroundJournalBug, STD_LOADABLE_MODULE_ID));
+ ENDmodInit
+ /* vim:set ai:
+  */
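In the reworked runInput loop above, a single running record counter drives two periodic tasks via modulo checks: sd_journal_process() every J_PROCESS_PERIOD (1024) records, and persistJournalState() every iPersistStateInterval records. The trigger test in isolation (a sketch with a hypothetical helper name):

```c
#include <stdint.h>

#define J_PROCESS_PERIOD 1024  /* same period the patch uses */

/* Nonzero when `count` records have been read and a periodic task is
 * due -- the same (count % period) == 0 test the patched loop uses.
 * The counter is incremented before the check, so the first firing
 * happens at record `period`, not at record 0. */
static int maintenance_due(uint64_t count, uint64_t period)
{
    return count != 0 && (count % period) == 0;
}
```

Using one counter with modulo checks (instead of resetting a per-task counter to zero, as the old code did) keeps both periods independent of each other.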
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1539193-mmkubernetes-new-plugin.patch b/SOURCES/rsyslog-8.24.0-rhbz1539193-mmkubernetes-new-plugin.patch
new file mode 100644
index 0000000..f37557c
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1539193-mmkubernetes-new-plugin.patch
@@ -0,0 +1,1659 @@
+From: Jiri Vymazal <jvymazal@redhat.com>
+Date: Mon, 28 Jun 2018 12:07:55 +0100
+Subject: Kubernetes Metadata plugin - mmkubernetes
+
+This plugin is used to annotate records logged by Kubernetes containers.
+It will add the namespace UUID, pod UUID, pod and namespace labels and
+annotations, and other metadata associated with the pod and namespace.
+It works with either log files in `/var/log/containers/*.log` or with
+journald entries carrying `CONTAINER_NAME` and `CONTAINER_ID_FULL`.
+
+For usage and configuration, see the rsyslog-doc package.
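The k8s_filename.rulebase added by this patch extracts `pod_name`, `namespace_name`, and `container_name_and_id` from paths like `/var/log/containers/<pod>_<namespace>_<container>-<id>.log`. A rough standalone illustration of the same split using sscanf (an approximation of the liblognorm rule, not the plugin's actual parser):

```c
#include <stdio.h>

/* Split a Kubernetes container log path of the form
 *   /var/log/containers/<pod>_<namespace>_<container-and-id>.log
 * into its three fields, roughly as k8s_filename.rulebase does with
 * liblognorm char-to matchers.  pod/ns must hold >= 64 bytes and
 * cont >= 128 bytes. */
static int parse_k8s_filename(const char *path,
                              char *pod, char *ns, char *cont)
{
    int n = sscanf(path,
                   "/var/log/containers/%63[^_]_%63[^_]_%127[^.].log",
                   pod, ns, cont);
    return n == 3 ? 0 : -1;
}
```

The real rulebase handles edge cases (such as the two CONTAINER_NAME layouts shown in k8s_container_name.rulebase) that a single sscanf format cannot express; this sketch only shows the filename field order.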
+
+*Credits*
+
+This work is based on https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter
+and has many of the same features.
+
+(cherry picked from commit a6264bf8f91975c9bc0fc602dcdc6881486f1579)
+(cherry picked from commit b8e68366422052dca9e0a9409baa410f20ae88c8)
+
+(cherry picked from commit 77886e21292d8220f93b3404236da0e8f7159255)
+(cherry picked from commit e4d1c7b3832eedc8a1545c2ee6bf022f545d0c76)
+(cherry picked from commit 3d9f820642b0edc78da0b5bed818590dcd31fa9c)
+(cherry picked from commit 1d49aac5cb101704486bfb065fac362ca69f06bc)
+(cherry picked from commit fc2ad45f78dd666b8c9e706ad88c17aaff146d2d)
+(cherry picked from commit 8cf87f64f6c74a4544112ec7fddc5bf4d43319a7)
+---
+ Makefile.am                                      |    5 +
+ configure.ac                                     |   35 +
+ contrib/mmkubernetes/Makefile.am                 |    6 +
+ contrib/mmkubernetes/k8s_container_name.rulebase |    3 +
+ contrib/mmkubernetes/k8s_filename.rulebase       |    2 +
+ contrib/mmkubernetes/mmkubernetes.c              | 1491 +++++++++++++++++++++++
+ contrib/mmkubernetes/sample.conf                 |    7 +
+ 7 files changed, 1549 insertions(+)
+ create mode 100644 contrib/mmkubernetes/Makefile.am
+ create mode 100644 contrib/mmkubernetes/k8s_container_name.rulebase
+ create mode 100644 contrib/mmkubernetes/k8s_filename.rulebase
+ create mode 100644 contrib/mmkubernetes/mmkubernetes.c
+ create mode 100644 contrib/mmkubernetes/sample.conf
+
+diff --git a/Makefile.am b/Makefile.am
+index a276ef9ea..b58ebaf93 100644
+--- a/Makefile.am
++++ b/Makefile.am
+@@ -275,6 +275,11 @@ if ENABLE_OMTCL
+ SUBDIRS += contrib/omtcl
+ endif
+ 
++# mmkubernetes
++if ENABLE_MMKUBERNETES
++SUBDIRS += contrib/mmkubernetes
++endif
++
+ # tests are added as last element, because tests may need different
+ # modules that need to be generated first
+ SUBDIRS += tests
+diff --git a/configure.ac b/configure.ac
+index a9411f4be..c664222b9 100644
+--- a/configure.ac
++++ b/configure.ac
+@@ -1889,6 +1889,39 @@ AM_CONDITIONAL(ENABLE_OMTCL, test x$enable_omtcl = xyes)
+ 
+ # END TCL SUPPORT
+ 
++# mmkubernetes - Kubernetes metadata support
++
++AC_ARG_ENABLE(mmkubernetes,
++        [AS_HELP_STRING([--enable-mmkubernetes],
++            [Enable compilation of the mmkubernetes module @<:@default=no@:>@])],
++        [case "${enableval}" in
++         yes) enable_mmkubernetes="yes" ;;
++          no) enable_mmkubernetes="no" ;;
++           *) AC_MSG_ERROR(bad value ${enableval} for --enable-mmkubernetes) ;;
++         esac],
++        [enable_mmkubernetes=no]
++)
++if test "x$enable_mmkubernetes" = "xyes"; then
++        PKG_CHECK_MODULES([CURL], [libcurl])
++        PKG_CHECK_MODULES(LIBLOGNORM, lognorm >= 2.0.3)
++
++        save_CFLAGS="$CFLAGS"
++        save_LIBS="$LIBS"
++
++        CFLAGS="$CFLAGS $LIBLOGNORM_CFLAGS"
++        LIBS="$LIBS $LIBLOGNORM_LIBS"
++
++        AC_CHECK_FUNC([ln_loadSamplesFromString],
++                      [AC_DEFINE([HAVE_LOADSAMPLESFROMSTRING], [1], [Define if ln_loadSamplesFromString exists.])],
++                      [AC_DEFINE([NO_LOADSAMPLESFROMSTRING], [1], [Define if ln_loadSamplesFromString does not exist.])])
++
++        CFLAGS="$save_CFLAGS"
++        LIBS="$save_LIBS"
++fi
++AM_CONDITIONAL(ENABLE_MMKUBERNETES, test x$enable_mmkubernetes = xyes)
++
++# END Kubernetes metadata support
++
+ # man pages
+ AC_CHECKING([if required man pages already exist])
+ have_to_generate_man_pages="no"
+@@ -2016,6 +2035,7 @@ AC_CONFIG_FILES([Makefile \
+ 		contrib/omhttpfs/Makefile \
+ 		contrib/omamqp1/Makefile \
+ 		contrib/omtcl/Makefile \
++		contrib/mmkubernetes/Makefile \
+ 		tests/Makefile])
+ AC_OUTPUT
+ 
+@@ -2090,6 +2110,7 @@ echo "    mmrfc5424addhmac enabled:                 $enable_mmrfc5424addhmac"
+ echo "    mmpstrucdata enabled:                     $enable_mmpstrucdata"
+ echo "    mmsequence enabled:                       $enable_mmsequence"
+ echo "    mmdblookup enabled:                       $enable_mmdblookup"
++echo "    mmkubernetes enabled:                     $enable_mmkubernetes"
+ echo
+ echo "---{ database support }---"
+ echo "    MySql support enabled:                    $enable_mysql"
+diff --git a/contrib/mmkubernetes/Makefile.am b/contrib/mmkubernetes/Makefile.am
+new file mode 100644
+index 000000000..3dcc235a6
+--- /dev/null
++++ b/contrib/mmkubernetes/Makefile.am
+@@ -0,0 +1,6 @@
++pkglib_LTLIBRARIES = mmkubernetes.la
++
++mmkubernetes_la_SOURCES = mmkubernetes.c
++mmkubernetes_la_CPPFLAGS = $(RSRT_CFLAGS) $(PTHREADS_CFLAGS) $(CURL_CFLAGS) $(LIBLOGNORM_CFLAGS)
++mmkubernetes_la_LDFLAGS = -module -avoid-version
++mmkubernetes_la_LIBADD = $(CURL_LIBS) $(LIBLOGNORM_LIBS)
+diff --git a/contrib/mmkubernetes/k8s_container_name.rulebase b/contrib/mmkubernetes/k8s_container_name.rulebase
+new file mode 100644
+index 000000000..35fbb317c
+--- /dev/null
++++ b/contrib/mmkubernetes/k8s_container_name.rulebase
+@@ -0,0 +1,3 @@
++version=2
++rule=:%k8s_prefix:char-to:_%_%container_name:char-to:.%.%container_hash:char-to:_%_%pod_name:char-to:_%_%namespace_name:char-to:_%_%not_used_1:char-to:_%_%not_used_2:rest%
++rule=:%k8s_prefix:char-to:_%_%container_name:char-to:_%_%pod_name:char-to:_%_%namespace_name:char-to:_%_%not_used_1:char-to:_%_%not_used_2:rest%
+diff --git a/contrib/mmkubernetes/k8s_filename.rulebase b/contrib/mmkubernetes/k8s_filename.rulebase
+new file mode 100644
+index 000000000..24c0d9138
+--- /dev/null
++++ b/contrib/mmkubernetes/k8s_filename.rulebase
+@@ -0,0 +1,2 @@
++version=2
++rule=:/var/log/containers/%pod_name:char-to:_%_%namespace_name:char-to:_%_%container_name_and_id:char-to:.%.log
+diff --git a/contrib/mmkubernetes/mmkubernetes.c b/contrib/mmkubernetes/mmkubernetes.c
+new file mode 100644
+index 000000000..5012c54f6
+--- /dev/null
++++ b/contrib/mmkubernetes/mmkubernetes.c
+@@ -0,0 +1,1491 @@
++/* mmkubernetes.c
++ * This is a message modification module. It uses metadata obtained
++ * from the message to query Kubernetes and obtain additional metadata
++ * relating to the container instance.
++ *
++ * Inspired by:
++ * https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter
++ *
++ * NOTE: read comments in module-template.h for details on the calling interface!
++ *
++ * Copyright 2016 Red Hat Inc.
++ *
++ * This file is part of rsyslog.
++ *
++ * Licensed under the Apache License, Version 2.0 (the "License");
++ * you may not use this file except in compliance with the License.
++ * You may obtain a copy of the License at
++ *
++ *       http://www.apache.org/licenses/LICENSE-2.0
++ *       -or-
++ *       see COPYING.ASL20 in the source distribution
++ *
++ * Unless required by applicable law or agreed to in writing, software
++ * distributed under the License is distributed on an "AS IS" BASIS,
++ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
++ * See the License for the specific language governing permissions and
++ * limitations under the License.
++ */
++
++/* needed for asprintf */
++#ifndef _GNU_SOURCE
++#  define _GNU_SOURCE
++#endif
++
++#include "config.h"
++#include "rsyslog.h"
++#include <stdio.h>
++#include <stdarg.h>
++#include <stdlib.h>
++#include <string.h>
++#include <assert.h>
++#include <errno.h>
++#include <unistd.h>
++#include <sys/stat.h>
++#include <libestr.h>
++#include <liblognorm.h>
++#include <json.h>
++#include <curl/curl.h>
++#include <curl/easy.h>
++#include <pthread.h>
++#include "conf.h"
++#include "syslogd-types.h"
++#include "module-template.h"
++#include "errmsg.h"
++#include "regexp.h"
++#include "hashtable.h"
++#include "srUtils.h"
++
++/* static data */
++MODULE_TYPE_OUTPUT /* this is technically an output plugin */
++MODULE_TYPE_KEEP /* releasing the module would cause a leak through libcurl */
++MODULE_CNFNAME("mmkubernetes")
++DEF_OMOD_STATIC_DATA
++DEFobjCurrIf(errmsg)
++DEFobjCurrIf(regexp)
++
++#define HAVE_LOADSAMPLESFROMSTRING 1
++#if defined(NO_LOADSAMPLESFROMSTRING)
++#undef HAVE_LOADSAMPLESFROMSTRING
++#endif
++/* original from fluentd plugin:
++ * 'var\.log\.containers\.(?<pod_name>[a-z0-9]([-a-z0-9]*[a-z0-9])?\
++ *   (\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*)_(?<namespace>[^_]+)_\
++ *   (?<container_name>.+)-(?<docker_id>[a-z0-9]{64})\.log$'
++ * this is for _tag_ match, not actual filename match - in_tail turns filename
++ * into a fluentd tag
++ */
++#define DFLT_FILENAME_LNRULES "rule=:/var/log/containers/%pod_name:char-to:_%_"\
++	"%namespace_name:char-to:_%_%container_name:char-to:-%-%container_id:char-to:.%.log"
++#define DFLT_FILENAME_RULEBASE "/etc/rsyslog.d/k8s_filename.rulebase"
++/* original from fluentd plugin:
++ *   '^(?<name_prefix>[^_]+)_(?<container_name>[^\._]+)\
++ *     (\.(?<container_hash>[^_]+))?_(?<pod_name>[^_]+)_\
++ *     (?<namespace>[^_]+)_[^_]+_[^_]+$'
++ */
++#define DFLT_CONTAINER_LNRULES "rule=:%k8s_prefix:char-to:_%_%container_name:char-to:.%."\
++	"%container_hash:char-to:_%_"\
++	"%pod_name:char-to:_%_%namespace_name:char-to:_%_%not_used_1:char-to:_%_%not_used_2:rest%\n"\
++	"rule=:%k8s_prefix:char-to:_%_%container_name:char-to:_%_"\
++	"%pod_name:char-to:_%_%namespace_name:char-to:_%_%not_used_1:char-to:_%_%not_used_2:rest%"
++#define DFLT_CONTAINER_RULEBASE "/etc/rsyslog.d/k8s_container_name.rulebase"
++#define DFLT_SRCMD_PATH "$!metadata!filename"
++#define DFLT_DSTMD_PATH "$!"
++#define DFLT_DE_DOT 1 /* true */
++#define DFLT_DE_DOT_SEPARATOR "_"
++#define DFLT_CONTAINER_NAME "$!CONTAINER_NAME" /* name of variable holding CONTAINER_NAME value */
++#define DFLT_CONTAINER_ID_FULL "$!CONTAINER_ID_FULL" /* name of variable holding CONTAINER_ID_FULL value */
++#define DFLT_KUBERNETES_URL "https://kubernetes.default.svc.cluster.local:443"
++
++static struct cache_s {
++	const uchar *kbUrl;
++	struct hashtable *mdHt;
++	struct hashtable *nsHt;
++	pthread_mutex_t *cacheMtx;
++} **caches;
++
++typedef struct {
++	int nmemb;
++	uchar **patterns;
++	regex_t *regexps;
++} annotation_match_t;
++
++/* module configuration data */
++struct modConfData_s {
++	rsconf_t *pConf;	/* our overall config object */
++	uchar *kubernetesUrl;	/* scheme, host, port, and optional path prefix for Kubernetes API lookups */
++	uchar *srcMetadataPath;	/* where to get data for kubernetes queries */
++	uchar *dstMetadataPath;	/* where to put metadata obtained from kubernetes */
++	uchar *caCertFile; /* File holding the CA cert (+optional chain) of CA that issued the Kubernetes server cert */
++	sbool allowUnsignedCerts; /* For testing/debugging - do not check for CA certs (CURLOPT_SSL_VERIFYPEER FALSE) */
++	uchar *token; /* The token value to use to authenticate to Kubernetes - takes precedence over tokenFile */
++	uchar *tokenFile; /* The file whose contents is the token value to use to authenticate to Kubernetes */
++	sbool de_dot; /* If true (default), convert '.' characters in labels & annotations to de_dot_separator */
++	uchar *de_dot_separator; /* separator character (default '_') to use for de_dotting */
++	size_t de_dot_separator_len; /* length of separator character */
++	annotation_match_t annotation_match; /* annotation keys must match these to be included in record */
++	char *fnRules; /* lognorm rules for container log filename match */
++	uchar *fnRulebase; /* lognorm rulebase filename for container log filename match */
++	char *contRules; /* lognorm rules for CONTAINER_NAME value match */
++	uchar *contRulebase; /* lognorm rulebase filename for CONTAINER_NAME value match */
++};
++
++/* action (instance) configuration data */
++typedef struct _instanceData {
++	uchar *kubernetesUrl;	/* scheme, host, port, and optional path prefix for Kubernetes API lookups */
++	msgPropDescr_t *srcMetadataDescr;	/* where to get data for kubernetes queries */
++	uchar *dstMetadataPath;	/* where to put metadata obtained from kubernetes */
++	uchar *caCertFile; /* File holding the CA cert (+optional chain) of CA that issued the Kubernetes server cert */
++	sbool allowUnsignedCerts; /* For testing/debugging - do not check for CA certs (CURLOPT_SSL_VERIFYPEER FALSE) */
++	uchar *token; /* The token value to use to authenticate to Kubernetes - takes precedence over tokenFile */
++	uchar *tokenFile; /* The file whose contents is the token value to use to authenticate to Kubernetes */
++	sbool de_dot; /* If true (default), convert '.' characters in labels & annotations to de_dot_separator */
++	uchar *de_dot_separator; /* separator character (default '_') to use for de_dotting */
++	size_t de_dot_separator_len; /* length of separator character */
++	annotation_match_t annotation_match; /* annotation keys must match these to be included in record */
++	char *fnRules; /* lognorm rules for container log filename match */
++	uchar *fnRulebase; /* lognorm rulebase filename for container log filename match */
++	ln_ctx fnCtxln;	/**< context to be used for liblognorm */
++	char *contRules; /* lognorm rules for CONTAINER_NAME value match */
++	uchar *contRulebase; /* lognorm rulebase filename for CONTAINER_NAME value match */
++	ln_ctx contCtxln;	/**< context to be used for liblognorm */
++	msgPropDescr_t *contNameDescr; /* CONTAINER_NAME field */
++	msgPropDescr_t *contIdFullDescr; /* CONTAINER_ID_FULL field */
++	struct cache_s *cache;
++} instanceData;
++
++typedef struct wrkrInstanceData {
++	instanceData *pData;
++	CURL *curlCtx;
++	struct curl_slist *curlHdr;
++	char *curlRply;
++	size_t curlRplyLen;
++} wrkrInstanceData_t;
++
++/* module parameters (v6 config format) */
++static struct cnfparamdescr modpdescr[] = {
++	{ "kubernetesurl", eCmdHdlrString, 0 },
++	{ "srcmetadatapath", eCmdHdlrString, 0 },
++	{ "dstmetadatapath", eCmdHdlrString, 0 },
++	{ "tls.cacert", eCmdHdlrString, 0 },
++	{ "allowunsignedcerts", eCmdHdlrBinary, 0 },
++	{ "token", eCmdHdlrString, 0 },
++	{ "tokenfile", eCmdHdlrString, 0 },
++	{ "annotation_match", eCmdHdlrArray, 0 },
++	{ "de_dot", eCmdHdlrBinary, 0 },
++	{ "de_dot_separator", eCmdHdlrString, 0 },
++	{ "filenamerulebase", eCmdHdlrString, 0 },
++	{ "containerrulebase", eCmdHdlrString, 0 }
++#if HAVE_LOADSAMPLESFROMSTRING == 1
++	,
++	{ "filenamerules", eCmdHdlrArray, 0 },
++	{ "containerrules", eCmdHdlrArray, 0 }
++#endif
++};
++static struct cnfparamblk modpblk = {
++	CNFPARAMBLK_VERSION,
++	sizeof(modpdescr)/sizeof(struct cnfparamdescr),
++	modpdescr
++};
++
++/* action (instance) parameters (v6 config format) */
++static struct cnfparamdescr actpdescr[] = {
++	{ "kubernetesurl", eCmdHdlrString, 0 },
++	{ "srcmetadatapath", eCmdHdlrString, 0 },
++	{ "dstmetadatapath", eCmdHdlrString, 0 },
++	{ "tls.cacert", eCmdHdlrString, 0 },
++	{ "allowunsignedcerts", eCmdHdlrBinary, 0 },
++	{ "token", eCmdHdlrString, 0 },
++	{ "tokenfile", eCmdHdlrString, 0 },
++	{ "annotation_match", eCmdHdlrArray, 0 },
++	{ "de_dot", eCmdHdlrBinary, 0 },
++	{ "de_dot_separator", eCmdHdlrString, 0 },
++	{ "filenamerulebase", eCmdHdlrString, 0 },
++	{ "containerrulebase", eCmdHdlrString, 0 }
++#if HAVE_LOADSAMPLESFROMSTRING == 1
++	,
++	{ "filenamerules", eCmdHdlrArray, 0 },
++	{ "containerrules", eCmdHdlrArray, 0 }
++#endif
++};
++static struct cnfparamblk actpblk =
++	{ CNFPARAMBLK_VERSION,
++	  sizeof(actpdescr)/sizeof(struct cnfparamdescr),
++	  actpdescr
++	};
++
++static modConfData_t *loadModConf = NULL;	/* modConf ptr to use for the current load process */
++static modConfData_t *runModConf = NULL;	/* modConf ptr to use for the current exec process */
++
++static void free_annotationmatch(annotation_match_t *match) {
++	if (match) {
++		for(int ii = 0 ; ii < match->nmemb; ++ii) {
++			if (match->patterns)
++				free(match->patterns[ii]);
++			if (match->regexps)
++				regexp.regfree(&match->regexps[ii]);
++		}
++		free(match->patterns);
++		match->patterns = NULL;
++		free(match->regexps);
++		match->regexps = NULL;
++		match->nmemb = 0;
++	}
++}
++
++static int init_annotationmatch(annotation_match_t *match, struct cnfarray *ar) {
++	DEFiRet;
++
++	match->nmemb = ar->nmemb;
++	CHKmalloc(match->patterns = calloc(match->nmemb, sizeof(uchar*)));
++	CHKmalloc(match->regexps = calloc(match->nmemb, sizeof(regex_t)));
++	for(int jj = 0; jj < ar->nmemb; ++jj) {
++		int rexret = 0;
++		match->patterns[jj] = (uchar*)es_str2cstr(ar->arr[jj], NULL);
++		rexret = regexp.regcomp(&match->regexps[jj],
++				(char *)match->patterns[jj], REG_EXTENDED|REG_NOSUB);
++		if (0 != rexret) {
++			char errMsg[512];
++			regexp.regerror(rexret, &match->regexps[jj], errMsg, sizeof(errMsg));
++			iRet = RS_RET_CONFIG_ERROR;
++			errmsg.LogError(0, iRet,
++					"error: could not compile annotation_match string [%s]"
++					" into an extended regexp - %d: %s\n",
++					match->patterns[jj], rexret, errMsg);
++			break;
++		}
++	}
++finalize_it:
++	if (iRet)
++		free_annotationmatch(match);
++	RETiRet;
++}
++
++static int copy_annotationmatch(annotation_match_t *src, annotation_match_t *dest) {
++	DEFiRet;
++
++	dest->nmemb = src->nmemb;
++	CHKmalloc(dest->patterns = malloc(sizeof(uchar*) * dest->nmemb));
++	CHKmalloc(dest->regexps = calloc(dest->nmemb, sizeof(regex_t)));
++	for(int jj = 0 ; jj < src->nmemb ; ++jj) {
++		CHKmalloc(dest->patterns[jj] = (uchar*)strdup((char *)src->patterns[jj]));
++		/* assumes was already successfully compiled */
++		regexp.regcomp(&dest->regexps[jj], (char *)dest->patterns[jj], REG_EXTENDED|REG_NOSUB);
++	}
++finalize_it:
++	if (iRet)
++		free_annotationmatch(dest);
++	RETiRet;
++}
++
++/* takes a hash of annotations and returns another json object hash containing only the
++ * keys that match - this logic is taken directly from fluent-plugin-kubernetes_metadata_filter
++ * except that we do not add the key multiple times to the object to be returned
++ */
++static struct json_object *match_annotations(annotation_match_t *match,
++		struct json_object *annotations) {
++	struct json_object *ret = NULL;
++
++	for (int jj = 0; jj < match->nmemb; ++jj) {
++		struct json_object_iterator it = json_object_iter_begin(annotations);
++		struct json_object_iterator itEnd = json_object_iter_end(annotations);
++		for (;!json_object_iter_equal(&it, &itEnd); json_object_iter_next(&it)) {
++			const char *const key = json_object_iter_peek_name(&it);
++			if (!ret || !fjson_object_object_get_ex(ret, key, NULL)) {
++				if (!regexp.regexec(&match->regexps[jj], key, 0, NULL, 0)) {
++					if (!ret) {
++						ret = json_object_new_object();
++					}
++					json_object_object_add(ret, key,
++						json_object_get(json_object_iter_peek_value(&it)));
++				}
++			}
++		}
++	}
++	return ret;
++}
++
++/* This will take a hash of labels or annotations and will de_dot the keys.
++ * It will return a brand new hash.  AFAICT, there is no safe way to
++ * iterate over the hash while modifying it in place.
++ */
++static struct json_object *de_dot_json_object(struct json_object *jobj,
++		const char *delim, size_t delim_len) {
++	struct json_object *ret = NULL;
++	struct json_object_iterator it = json_object_iter_begin(jobj);
++	struct json_object_iterator itEnd = json_object_iter_end(jobj);
++	es_str_t *new_es_key = NULL;
++	DEFiRet;
++
++	ret = json_object_new_object();
++	while (!json_object_iter_equal(&it, &itEnd)) {
++		const char *const key = json_object_iter_peek_name(&it);
++		const char *cc = strstr(key, ".");
++		if (NULL == cc) {
++			json_object_object_add(ret, key,
++					json_object_get(json_object_iter_peek_value(&it)));
++		} else {
++			char *new_key = NULL;
++			const char *prevcc = key;
++			new_es_key = es_newStrFromCStr(key, (es_size_t)(cc-prevcc));
++			while (cc) {
++				if (es_addBuf(&new_es_key, (char *)delim, (es_size_t)delim_len))
++					ABORT_FINALIZE(RS_RET_OUT_OF_MEMORY);
++				cc += 1; /* one past . */
++				prevcc = cc; /* beginning of next substring */
++				if ((cc = strstr(prevcc, ".")) || (cc = strchr(prevcc, '\0'))) {
++					if (es_addBuf(&new_es_key, (char *)prevcc, (es_size_t)(cc-prevcc)))
++						ABORT_FINALIZE(RS_RET_OUT_OF_MEMORY);
++					if (!*cc)
++						cc = NULL; /* EOS - done */
++				}
++			}
++			new_key = es_str2cstr(new_es_key, NULL);
++			es_deleteStr(new_es_key);
++			new_es_key = NULL;
++			json_object_object_add(ret, new_key,
++					json_object_get(json_object_iter_peek_value(&it)));
++			free(new_key);
++		}
++		json_object_iter_next(&it);
++	}
++finalize_it:
++	if (iRet != RS_RET_OK) {
++		json_object_put(ret);
++		ret = NULL;
++	}
++	if (new_es_key)
++		es_deleteStr(new_es_key);
++	return ret;
++}
++
++/* given a "metadata" object field, do
++ * - make sure "annotations" field has only the matching keys
++ * - de_dot the "labels" and "annotations" fields keys
++ * This modifies the jMetadata object in place
++ */
++static void parse_labels_annotations(struct json_object *jMetadata,
++		annotation_match_t *match, sbool de_dot,
++		const char *delim, size_t delim_len) {
++	struct json_object *jo = NULL;
++
++	if (fjson_object_object_get_ex(jMetadata, "annotations", &jo)) {
++		if ((jo = match_annotations(match, jo)))
++			json_object_object_add(jMetadata, "annotations", jo);
++		else
++			json_object_object_del(jMetadata, "annotations");
++	}
++	/* dedot labels and annotations */
++	if (de_dot) {
++		struct json_object *jo2 = NULL;
++		if (fjson_object_object_get_ex(jMetadata, "annotations", &jo)) {
++			if ((jo2 = de_dot_json_object(jo, delim, delim_len))) {
++				json_object_object_add(jMetadata, "annotations", jo2);
++			}
++		}
++		if (fjson_object_object_get_ex(jMetadata, "labels", &jo)) {
++			if ((jo2 = de_dot_json_object(jo, delim, delim_len))) {
++				json_object_object_add(jMetadata, "labels", jo2);
++			}
++		}
++	}
++}
++
++#if HAVE_LOADSAMPLESFROMSTRING == 1
++static int array_to_rules(struct cnfarray *ar, char **rules) {
++	DEFiRet;
++	es_str_t *tmpstr = NULL;
++	es_size_t size = 0;
++
++	if (rules == NULL)
++		FINALIZE;
++	*rules = NULL;
++	if (!ar->nmemb)
++		FINALIZE;
++	for (int jj = 0; jj < ar->nmemb; jj++)
++		size += es_strlen(ar->arr[jj]);
++	if (!size)
++		FINALIZE;
++	CHKmalloc(tmpstr = es_newStr(size));
++	CHKiRet((es_addStr(&tmpstr, ar->arr[0])));
++	CHKiRet((es_addBufConstcstr(&tmpstr, "\n")));
++	for(int jj=1; jj < ar->nmemb; ++jj) {
++		CHKiRet((es_addStr(&tmpstr, ar->arr[jj])));
++		CHKiRet((es_addBufConstcstr(&tmpstr, "\n")));
++	}
++	CHKiRet((es_addBufConstcstr(&tmpstr, "\0")));
++	CHKmalloc(*rules = es_str2cstr(tmpstr, NULL));
++finalize_it:
++	if (tmpstr) {
++		es_deleteStr(tmpstr);
++	}
++	if (iRet != RS_RET_OK) {
++		free(*rules);
++		*rules = NULL;
++	}
++	RETiRet;
++}
++#endif
++
++/* callback for liblognorm error messages */
++static void
++errCallBack(void __attribute__((unused)) *cookie, const char *msg,
++	    size_t __attribute__((unused)) lenMsg)
++{
++	errmsg.LogError(0, RS_RET_ERR_LIBLOGNORM, "liblognorm error: %s", msg);
++}
++
++static rsRetVal
++set_lnctx(ln_ctx *ctxln, char *instRules, uchar *instRulebase, char *modRules, uchar *modRulebase)
++{
++	DEFiRet;
++	if (ctxln == NULL)
++		FINALIZE;
++	CHKmalloc(*ctxln = ln_initCtx());
++	ln_setErrMsgCB(*ctxln, errCallBack, NULL);
++	if(instRules) {
++#if HAVE_LOADSAMPLESFROMSTRING == 1
++		if(ln_loadSamplesFromString(*ctxln, instRules) !=0) {
++			errmsg.LogError(0, RS_RET_NO_RULEBASE, "error: normalization rules '%s' "
++					"could not be loaded", instRules);
++			ABORT_FINALIZE(RS_RET_ERR_LIBLOGNORM_SAMPDB_LOAD);
++		}
++#else
++		(void)instRules;
++#endif
++	} else if(instRulebase) {
++		if(ln_loadSamples(*ctxln, (char*) instRulebase) != 0) {
++			errmsg.LogError(0, RS_RET_NO_RULEBASE, "error: normalization rulebase '%s' "
++					"could not be loaded", instRulebase);
++			ABORT_FINALIZE(RS_RET_ERR_LIBLOGNORM_SAMPDB_LOAD);
++		}
++	} else if(modRules) {
++#if HAVE_LOADSAMPLESFROMSTRING == 1
++		if(ln_loadSamplesFromString(*ctxln, modRules) !=0) {
++			errmsg.LogError(0, RS_RET_NO_RULEBASE, "error: normalization rules '%s' "
++					"could not be loaded", modRules);
++			ABORT_FINALIZE(RS_RET_ERR_LIBLOGNORM_SAMPDB_LOAD);
++		}
++#else
++		(void)modRules;
++#endif
++	} else if(modRulebase) {
++		if(ln_loadSamples(*ctxln, (char*) modRulebase) != 0) {
++			errmsg.LogError(0, RS_RET_NO_RULEBASE, "error: normalization rulebase '%s' "
++					"could not be loaded", modRulebase);
++			ABORT_FINALIZE(RS_RET_ERR_LIBLOGNORM_SAMPDB_LOAD);
++		}
++	}
++finalize_it:
++	if (iRet != RS_RET_OK){
++		ln_exitCtx(*ctxln);
++		*ctxln = NULL;
++	}
++	RETiRet;
++}
++
++BEGINbeginCnfLoad
++CODESTARTbeginCnfLoad
++	loadModConf = pModConf;
++	pModConf->pConf = pConf;
++ENDbeginCnfLoad
++
++
++BEGINsetModCnf
++	struct cnfparamvals *pvals = NULL;
++	int i;
++	FILE *fp;
++	int ret;
++CODESTARTsetModCnf
++	pvals = nvlstGetParams(lst, &modpblk, NULL);
++	if(pvals == NULL) {
++		errmsg.LogError(0, RS_RET_MISSING_CNFPARAMS, "mmkubernetes: "
++			"error processing module config parameters [module(...)]");
++		ABORT_FINALIZE(RS_RET_MISSING_CNFPARAMS);
++	}
++
++	if(Debug) {
++		dbgprintf("module (global) param blk for mmkubernetes:\n");
++		cnfparamsPrint(&modpblk, pvals);
++	}
++
++	loadModConf->de_dot = DFLT_DE_DOT;
++	for(i = 0 ; i < modpblk.nParams ; ++i) {
++		if(!pvals[i].bUsed) {
++			continue;
++		} else if(!strcmp(modpblk.descr[i].name, "kubernetesurl")) {
++			free(loadModConf->kubernetesUrl);
++			loadModConf->kubernetesUrl = (uchar *) es_str2cstr(pvals[i].val.d.estr, NULL);
++		} else if(!strcmp(modpblk.descr[i].name, "srcmetadatapath")) {
++			free(loadModConf->srcMetadataPath);
++			loadModConf->srcMetadataPath = (uchar *) es_str2cstr(pvals[i].val.d.estr, NULL);
++			/* todo: sanitize the path */
++		} else if(!strcmp(modpblk.descr[i].name, "dstmetadatapath")) {
++			free(loadModConf->dstMetadataPath);
++			loadModConf->dstMetadataPath = (uchar *) es_str2cstr(pvals[i].val.d.estr, NULL);
++			/* todo: sanitize the path */
++		} else if(!strcmp(modpblk.descr[i].name, "tls.cacert")) {
++			free(loadModConf->caCertFile);
++			loadModConf->caCertFile = (uchar *) es_str2cstr(pvals[i].val.d.estr, NULL);
++			fp = fopen((const char*)loadModConf->caCertFile, "r");
++			if(fp == NULL) {
++				char errStr[1024];
++				rs_strerror_r(errno, errStr, sizeof(errStr));
++				iRet = RS_RET_NO_FILE_ACCESS;
++				errmsg.LogError(0, iRet,
++						"error: certificate file %s couldn't be accessed: %s\n",
++						loadModConf->caCertFile, errStr);
++				ABORT_FINALIZE(iRet);
++			} else {
++				fclose(fp);
++			}
++		} else if(!strcmp(modpblk.descr[i].name, "allowunsignedcerts")) {
++			loadModConf->allowUnsignedCerts = pvals[i].val.d.n;
++		} else if(!strcmp(modpblk.descr[i].name, "token")) {
++			free(loadModConf->token);
++			loadModConf->token = (uchar *) es_str2cstr(pvals[i].val.d.estr, NULL);
++		} else if(!strcmp(modpblk.descr[i].name, "tokenfile")) {
++			free(loadModConf->tokenFile);
++			loadModConf->tokenFile = (uchar *) es_str2cstr(pvals[i].val.d.estr, NULL);
++			fp = fopen((const char*)loadModConf->tokenFile, "r");
++			if(fp == NULL) {
++				char errStr[1024];
++				rs_strerror_r(errno, errStr, sizeof(errStr));
++				iRet = RS_RET_NO_FILE_ACCESS;
++				errmsg.LogError(0, iRet,
++						"error: token file %s couldn't be accessed: %s\n",
++						loadModConf->tokenFile, errStr);
++				ABORT_FINALIZE(iRet);
++			} else {
++				fclose(fp);
++			}
++		} else if(!strcmp(modpblk.descr[i].name, "annotation_match")) {
++			free_annotationmatch(&loadModConf->annotation_match);
++			if ((ret = init_annotationmatch(&loadModConf->annotation_match, pvals[i].val.d.ar)))
++				ABORT_FINALIZE(ret);
++		} else if(!strcmp(modpblk.descr[i].name, "de_dot")) {
++			loadModConf->de_dot = pvals[i].val.d.n;
++		} else if(!strcmp(modpblk.descr[i].name, "de_dot_separator")) {
++			free(loadModConf->de_dot_separator);
++			loadModConf->de_dot_separator = (uchar *) es_str2cstr(pvals[i].val.d.estr, NULL);
++#if HAVE_LOADSAMPLESFROMSTRING == 1
++		} else if(!strcmp(modpblk.descr[i].name, "filenamerules")) {
++			free(loadModConf->fnRules);
++			CHKiRet((array_to_rules(pvals[i].val.d.ar, &loadModConf->fnRules)));
++#endif
++		} else if(!strcmp(modpblk.descr[i].name, "filenamerulebase")) {
++			free(loadModConf->fnRulebase);
++			loadModConf->fnRulebase = (uchar *) es_str2cstr(pvals[i].val.d.estr, NULL);
++			fp = fopen((const char*)loadModConf->fnRulebase, "r");
++			if(fp == NULL) {
++				char errStr[1024];
++				rs_strerror_r(errno, errStr, sizeof(errStr));
++				iRet = RS_RET_NO_FILE_ACCESS;
++				errmsg.LogError(0, iRet,
++						"error: filenamerulebase file %s couldn't be accessed: %s\n",
++						loadModConf->fnRulebase, errStr);
++				ABORT_FINALIZE(iRet);
++			} else {
++				fclose(fp);
++			}
++#if HAVE_LOADSAMPLESFROMSTRING == 1
++		} else if(!strcmp(modpblk.descr[i].name, "containerrules")) {
++			free(loadModConf->contRules);
++			CHKiRet((array_to_rules(pvals[i].val.d.ar, &loadModConf->contRules)));
++#endif
++		} else if(!strcmp(modpblk.descr[i].name, "containerrulebase")) {
++			free(loadModConf->contRulebase);
++			loadModConf->contRulebase = (uchar *) es_str2cstr(pvals[i].val.d.estr, NULL);
++			fp = fopen((const char*)loadModConf->contRulebase, "r");
++			if(fp == NULL) {
++				char errStr[1024];
++				rs_strerror_r(errno, errStr, sizeof(errStr));
++				iRet = RS_RET_NO_FILE_ACCESS;
++				errmsg.LogError(0, iRet,
++						"error: containerrulebase file %s couldn't be accessed: %s\n",
++						loadModConf->contRulebase, errStr);
++				ABORT_FINALIZE(iRet);
++			} else {
++				fclose(fp);
++			}
++		} else {
++			dbgprintf("mmkubernetes: program error, non-handled "
++				"param '%s' in module() block\n", modpblk.descr[i].name);
++			/* todo: error message? */
++		}
++	}
++
++#if HAVE_LOADSAMPLESFROMSTRING == 1
++	if (loadModConf->fnRules && loadModConf->fnRulebase) {
++		errmsg.LogError(0, RS_RET_CONFIG_ERROR,
++				"mmkubernetes: only 1 of filenamerules or filenamerulebase may be used");
++		ABORT_FINALIZE(RS_RET_CONFIG_ERROR);
++	}
++	if (loadModConf->contRules && loadModConf->contRulebase) {
++		errmsg.LogError(0, RS_RET_CONFIG_ERROR,
++				"mmkubernetes: only 1 of containerrules or containerrulebase may be used");
++		ABORT_FINALIZE(RS_RET_CONFIG_ERROR);
++	}
++#endif
++
++	/* set defaults */
++	if(loadModConf->srcMetadataPath == NULL)
++		loadModConf->srcMetadataPath = (uchar *) strdup(DFLT_SRCMD_PATH);
++	if(loadModConf->dstMetadataPath == NULL)
++		loadModConf->dstMetadataPath = (uchar *) strdup(DFLT_DSTMD_PATH);
++	if(loadModConf->de_dot_separator == NULL)
++		loadModConf->de_dot_separator = (uchar *) strdup(DFLT_DE_DOT_SEPARATOR);
++	if(loadModConf->de_dot_separator)
++		loadModConf->de_dot_separator_len = strlen((const char *)loadModConf->de_dot_separator);
++#if HAVE_LOADSAMPLESFROMSTRING == 1
++	if (loadModConf->fnRules == NULL && loadModConf->fnRulebase == NULL)
++		loadModConf->fnRules = strdup(DFLT_FILENAME_LNRULES);
++	if (loadModConf->contRules == NULL && loadModConf->contRulebase == NULL)
++		loadModConf->contRules = strdup(DFLT_CONTAINER_LNRULES);
++#else
++	if (loadModConf->fnRulebase == NULL)
++		loadModConf->fnRulebase = (uchar *)strdup(DFLT_FILENAME_RULEBASE);
++	if (loadModConf->contRulebase == NULL)
++		loadModConf->contRulebase = (uchar *)strdup(DFLT_CONTAINER_RULEBASE);
++#endif
++	caches = calloc(1, sizeof(struct cache_s *));
++
++finalize_it:
++	if(pvals != NULL)
++		cnfparamvalsDestruct(pvals, &modpblk);
++ENDsetModCnf
++
++
++BEGINcreateInstance
++CODESTARTcreateInstance
++ENDcreateInstance
++
++
++BEGINfreeInstance
++CODESTARTfreeInstance
++	free(pData->kubernetesUrl);
++	msgPropDescrDestruct(pData->srcMetadataDescr);
++	free(pData->srcMetadataDescr);
++	free(pData->dstMetadataPath);
++	free(pData->caCertFile);
++	free(pData->token);
++	free(pData->tokenFile);
++	free(pData->fnRules);
++	free(pData->fnRulebase);
++	ln_exitCtx(pData->fnCtxln);
++	free(pData->contRules);
++	free(pData->contRulebase);
++	ln_exitCtx(pData->contCtxln);
++	free_annotationmatch(&pData->annotation_match);
++	free(pData->de_dot_separator);
++	msgPropDescrDestruct(pData->contNameDescr);
++	free(pData->contNameDescr);
++	msgPropDescrDestruct(pData->contIdFullDescr);
++	free(pData->contIdFullDescr);
++ENDfreeInstance
++
++static size_t curlCB(char *data, size_t size, size_t nmemb, void *usrptr)
++{
++	DEFiRet;
++	wrkrInstanceData_t *pWrkrData = (wrkrInstanceData_t *) usrptr;
++	char * buf;
++	size_t newlen;
++
++	newlen = pWrkrData->curlRplyLen + size * nmemb;
++	CHKmalloc(buf = realloc(pWrkrData->curlRply, newlen));
++	memcpy(buf + pWrkrData->curlRplyLen, data, size * nmemb);
++	pWrkrData->curlRply = buf;
++	pWrkrData->curlRplyLen = newlen;
++
++finalize_it:
++	if (iRet != RS_RET_OK) {
++		return 0;
++	}
++	return size * nmemb;
++}
++
++BEGINcreateWrkrInstance
++CODESTARTcreateWrkrInstance
++	CURL *ctx;
++	struct curl_slist *hdr = NULL;
++	char *tokenHdr = NULL;
++	FILE *fp = NULL;
++	char *token = NULL;
++
++	hdr = curl_slist_append(hdr, "Content-Type: text/json; charset=utf-8");
++	if (pWrkrData->pData->token) {
++		if ((-1 == asprintf(&tokenHdr, "Authorization: Bearer %s", pWrkrData->pData->token)) ||
++			(!tokenHdr)) {
++			ABORT_FINALIZE(RS_RET_OUT_OF_MEMORY);
++		}
++	} else if (pWrkrData->pData->tokenFile) {
++		struct stat statbuf;
++		fp = fopen((const char*)pWrkrData->pData->tokenFile, "r");
++		if (fp && !fstat(fileno(fp), &statbuf)) {
++			size_t bytesread;
++			CHKmalloc(token = malloc((statbuf.st_size+1)*sizeof(char)));
++			if (0 < (bytesread = fread(token, sizeof(char), statbuf.st_size, fp))) {
++				token[bytesread] = '\0';
++				if ((-1 == asprintf(&tokenHdr, "Authorization: Bearer %s", token)) ||
++					(!tokenHdr)) {
++					ABORT_FINALIZE(RS_RET_OUT_OF_MEMORY);
++				}
++			}
++			free(token);
++			token = NULL;
++		}
++		if (fp) {
++			fclose(fp);
++			fp = NULL;
++		}
++	}
++	if (tokenHdr) {
++		hdr = curl_slist_append(hdr, tokenHdr);
++		free(tokenHdr);
++	}
++	pWrkrData->curlHdr = hdr;
++	ctx = curl_easy_init();
++	curl_easy_setopt(ctx, CURLOPT_HTTPHEADER, hdr);
++	curl_easy_setopt(ctx, CURLOPT_WRITEFUNCTION, curlCB);
++	curl_easy_setopt(ctx, CURLOPT_WRITEDATA, pWrkrData);
++	if(pWrkrData->pData->caCertFile)
++		curl_easy_setopt(ctx, CURLOPT_CAINFO, pWrkrData->pData->caCertFile);
++	if(pWrkrData->pData->allowUnsignedCerts)
++		curl_easy_setopt(ctx, CURLOPT_SSL_VERIFYPEER, 0L);
++
++	pWrkrData->curlCtx = ctx;
++finalize_it:
++	free(token);
++	if (fp) {
++		fclose(fp);
++	}
++ENDcreateWrkrInstance
++
++
++BEGINfreeWrkrInstance
++CODESTARTfreeWrkrInstance
++	curl_easy_cleanup(pWrkrData->curlCtx);
++	curl_slist_free_all(pWrkrData->curlHdr);
++ENDfreeWrkrInstance
++
++
++static struct cache_s *cacheNew(const uchar *url)
++{
++	struct cache_s *cache;
++
++	if (NULL == (cache = calloc(1, sizeof(struct cache_s)))) {
++		FINALIZE;
++	}
++	cache->kbUrl = url;
++	cache->mdHt = create_hashtable(100, hash_from_string,
++		key_equals_string, (void (*)(void *)) json_object_put);
++	cache->nsHt = create_hashtable(100, hash_from_string,
++		key_equals_string, (void (*)(void *)) json_object_put);
++	cache->cacheMtx = malloc(sizeof(pthread_mutex_t));
++	if (!cache->mdHt || !cache->nsHt || !cache->cacheMtx) {
++		free (cache);
++		cache = NULL;
++		FINALIZE;
++	}
++	pthread_mutex_init(cache->cacheMtx, NULL);
++
++finalize_it:
++	return cache;
++}
++
++
++static void cacheFree(struct cache_s *cache)
++{
++	hashtable_destroy(cache->mdHt, 1);
++	hashtable_destroy(cache->nsHt, 1);
++	pthread_mutex_destroy(cache->cacheMtx);
++	free(cache->cacheMtx);
++	free(cache);
++}
++
++
++BEGINnewActInst
++	struct cnfparamvals *pvals = NULL;
++	int i;
++	FILE *fp;
++	char *rxstr = NULL;
++	char *srcMetadataPath = NULL;
++CODESTARTnewActInst
++	DBGPRINTF("newActInst (mmkubernetes)\n");
++
++	pvals = nvlstGetParams(lst, &actpblk, NULL);
++	if(pvals == NULL) {
++		errmsg.LogError(0, RS_RET_MISSING_CNFPARAMS, "mmkubernetes: "
++			"error processing config parameters [action(...)]");
++		ABORT_FINALIZE(RS_RET_MISSING_CNFPARAMS);
++	}
++
++	if(Debug) {
++		dbgprintf("action param blk in mmkubernetes:\n");
++		cnfparamsPrint(&actpblk, pvals);
++	}
++
++	CODE_STD_STRING_REQUESTnewActInst(1)
++	CHKiRet(OMSRsetEntry(*ppOMSR, 0, NULL, OMSR_TPL_AS_MSG));
++	CHKiRet(createInstance(&pData));
++
++	pData->de_dot = loadModConf->de_dot;
++	pData->allowUnsignedCerts = loadModConf->allowUnsignedCerts;
++	for(i = 0 ; i < actpblk.nParams ; ++i) {
++		if(!pvals[i].bUsed) {
++			continue;
++		} else if(!strcmp(actpblk.descr[i].name, "kubernetesurl")) {
++			free(pData->kubernetesUrl);
++			pData->kubernetesUrl = (uchar *) es_str2cstr(pvals[i].val.d.estr, NULL);
++		} else if(!strcmp(actpblk.descr[i].name, "srcmetadatapath")) {
++			msgPropDescrDestruct(pData->srcMetadataDescr);
++			free(pData->srcMetadataDescr);
++			CHKmalloc(pData->srcMetadataDescr = MALLOC(sizeof(msgPropDescr_t)));
++			srcMetadataPath = es_str2cstr(pvals[i].val.d.estr, NULL);
++			CHKiRet(msgPropDescrFill(pData->srcMetadataDescr, (uchar *)srcMetadataPath,
++				strlen(srcMetadataPath)));
++			/* todo: sanitize the path */
++		} else if(!strcmp(actpblk.descr[i].name, "dstmetadatapath")) {
++			free(pData->dstMetadataPath);
++			pData->dstMetadataPath = (uchar *) es_str2cstr(pvals[i].val.d.estr, NULL);
++			/* todo: sanitize the path */
++		} else if(!strcmp(actpblk.descr[i].name, "tls.cacert")) {
++			free(pData->caCertFile);
++			pData->caCertFile = (uchar *) es_str2cstr(pvals[i].val.d.estr, NULL);
++			fp = fopen((const char*)pData->caCertFile, "r");
++			if(fp == NULL) {
++				char errStr[1024];
++				rs_strerror_r(errno, errStr, sizeof(errStr));
++				iRet = RS_RET_NO_FILE_ACCESS;
++				errmsg.LogError(0, iRet,
++						"error: certificate file %s couldn't be accessed: %s\n",
++						pData->caCertFile, errStr);
++				ABORT_FINALIZE(iRet);
++			} else {
++				fclose(fp);
++			}
++		} else if(!strcmp(actpblk.descr[i].name, "allowunsignedcerts")) {
++			pData->allowUnsignedCerts = pvals[i].val.d.n;
++		} else if(!strcmp(actpblk.descr[i].name, "token")) {
++			free(pData->token);
++			pData->token = (uchar *) es_str2cstr(pvals[i].val.d.estr, NULL);
++		} else if(!strcmp(actpblk.descr[i].name, "tokenfile")) {
++			free(pData->tokenFile);
++			pData->tokenFile = (uchar *) es_str2cstr(pvals[i].val.d.estr, NULL);
++			fp = fopen((const char*)pData->tokenFile, "r");
++			if(fp == NULL) {
++				char errStr[1024];
++				rs_strerror_r(errno, errStr, sizeof(errStr));
++				iRet = RS_RET_NO_FILE_ACCESS;
++				errmsg.LogError(0, iRet,
++						"error: token file %s couldn't be accessed: %s\n",
++						pData->tokenFile, errStr);
++				ABORT_FINALIZE(iRet);
++			} else {
++				fclose(fp);
++			}
++		} else if(!strcmp(actpblk.descr[i].name, "annotation_match")) {
++			free_annotationmatch(&pData->annotation_match);
++			if (RS_RET_OK != (iRet = init_annotationmatch(&pData->annotation_match, pvals[i].val.d.ar)))
++				ABORT_FINALIZE(iRet);
++		} else if(!strcmp(actpblk.descr[i].name, "de_dot")) {
++			pData->de_dot = pvals[i].val.d.n;
++		} else if(!strcmp(actpblk.descr[i].name, "de_dot_separator")) {
++			free(pData->de_dot_separator);
++			pData->de_dot_separator = (uchar *) es_str2cstr(pvals[i].val.d.estr, NULL);
++#if HAVE_LOADSAMPLESFROMSTRING == 1
++		} else if(!strcmp(actpblk.descr[i].name, "filenamerules")) {
++			free(pData->fnRules);
++			CHKiRet((array_to_rules(pvals[i].val.d.ar, &pData->fnRules)));
++#endif
++		} else if(!strcmp(actpblk.descr[i].name, "filenamerulebase")) {
++			free(pData->fnRulebase);
++			pData->fnRulebase = (uchar *) es_str2cstr(pvals[i].val.d.estr, NULL);
++			fp = fopen((const char*)pData->fnRulebase, "r");
++			if(fp == NULL) {
++				char errStr[1024];
++				rs_strerror_r(errno, errStr, sizeof(errStr));
++				iRet = RS_RET_NO_FILE_ACCESS;
++				errmsg.LogError(0, iRet,
++						"error: filenamerulebase file %s couldn't be accessed: %s\n",
++						pData->fnRulebase, errStr);
++				ABORT_FINALIZE(iRet);
++			} else {
++				fclose(fp);
++			}
++#if HAVE_LOADSAMPLESFROMSTRING == 1
++		} else if(!strcmp(actpblk.descr[i].name, "containerrules")) {
++			free(pData->contRules);
++			CHKiRet((array_to_rules(pvals[i].val.d.ar, &pData->contRules)));
++#endif
++		} else if(!strcmp(actpblk.descr[i].name, "containerrulebase")) {
++			free(pData->contRulebase);
++			pData->contRulebase = (uchar *) es_str2cstr(pvals[i].val.d.estr, NULL);
++			fp = fopen((const char*)pData->contRulebase, "r");
++			if(fp == NULL) {
++				char errStr[1024];
++				rs_strerror_r(errno, errStr, sizeof(errStr));
++				iRet = RS_RET_NO_FILE_ACCESS;
++				errmsg.LogError(0, iRet,
++						"error: containerrulebase file %s couldn't be accessed: %s\n",
++						pData->contRulebase, errStr);
++				ABORT_FINALIZE(iRet);
++			} else {
++				fclose(fp);
++			}
++		} else {
++			dbgprintf("mmkubernetes: program error, non-handled "
++				"param '%s' in action() block\n", actpblk.descr[i].name);
++			/* todo: error message? */
++		}
++	}
++
++#if HAVE_LOADSAMPLESFROMSTRING == 1
++	if (pData->fnRules && pData->fnRulebase) {
++		errmsg.LogError(0, RS_RET_CONFIG_ERROR,
++		    "mmkubernetes: only 1 of filenamerules or filenamerulebase may be used");
++		ABORT_FINALIZE(RS_RET_CONFIG_ERROR);
++	}
++	if (pData->contRules && pData->contRulebase) {
++		errmsg.LogError(0, RS_RET_CONFIG_ERROR,
++			"mmkubernetes: only 1 of containerrules or containerrulebase may be used");
++		ABORT_FINALIZE(RS_RET_CONFIG_ERROR);
++	}
++#endif
++	CHKiRet(set_lnctx(&pData->fnCtxln, pData->fnRules, pData->fnRulebase,
++			loadModConf->fnRules, loadModConf->fnRulebase));
++	CHKiRet(set_lnctx(&pData->contCtxln, pData->contRules, pData->contRulebase,
++			loadModConf->contRules, loadModConf->contRulebase));
++
++	if(pData->kubernetesUrl == NULL) {
++		if(loadModConf->kubernetesUrl == NULL) {
++			CHKmalloc(pData->kubernetesUrl = (uchar *) strdup(DFLT_KUBERNETES_URL));
++		} else {
++			CHKmalloc(pData->kubernetesUrl = (uchar *) strdup((char *) loadModConf->kubernetesUrl));
++		}
++	}
++	if(pData->srcMetadataDescr == NULL) {
++		CHKmalloc(pData->srcMetadataDescr = MALLOC(sizeof(msgPropDescr_t)));
++		CHKiRet(msgPropDescrFill(pData->srcMetadataDescr, loadModConf->srcMetadataPath,
++			strlen((char *)loadModConf->srcMetadataPath)));
++	}
++	if(pData->dstMetadataPath == NULL)
++		pData->dstMetadataPath = (uchar *) strdup((char *) loadModConf->dstMetadataPath);
++	if(pData->caCertFile == NULL && loadModConf->caCertFile)
++		pData->caCertFile = (uchar *) strdup((char *) loadModConf->caCertFile);
++	if(pData->token == NULL && loadModConf->token)
++		pData->token = (uchar *) strdup((char *) loadModConf->token);
++	if(pData->tokenFile == NULL && loadModConf->tokenFile)
++		pData->tokenFile = (uchar *) strdup((char *) loadModConf->tokenFile);
++	if(pData->de_dot_separator == NULL && loadModConf->de_dot_separator)
++		pData->de_dot_separator = (uchar *) strdup((char *) loadModConf->de_dot_separator);
++	if((pData->annotation_match.nmemb == 0) && (loadModConf->annotation_match.nmemb > 0))
++		copy_annotationmatch(&loadModConf->annotation_match, &pData->annotation_match);
++
++	if(pData->de_dot_separator)
++		pData->de_dot_separator_len = strlen((const char *)pData->de_dot_separator);
++
++	CHKmalloc(pData->contNameDescr = MALLOC(sizeof(msgPropDescr_t)));
++	CHKiRet(msgPropDescrFill(pData->contNameDescr, (uchar*) DFLT_CONTAINER_NAME,
++			strlen(DFLT_CONTAINER_NAME)));
++	CHKmalloc(pData->contIdFullDescr = MALLOC(sizeof(msgPropDescr_t)));
++	CHKiRet(msgPropDescrFill(pData->contIdFullDescr, (uchar*) DFLT_CONTAINER_ID_FULL,
++			strlen(DFLT_CONTAINER_ID_FULL)));
++
++	/* get the cache for this url */
++	for(i = 0; caches[i] != NULL; i++) {
++		if(!strcmp((char *) pData->kubernetesUrl, (char *) caches[i]->kbUrl))
++			break;
++	}
++	if(caches[i] != NULL) {
++		pData->cache = caches[i];
++	} else {
++		CHKmalloc(pData->cache = cacheNew(pData->kubernetesUrl));
++
++		CHKmalloc(caches = realloc(caches, (i + 2) * sizeof(struct cache_s *)));
++		caches[i] = pData->cache;
++		caches[i + 1] = NULL;
++	}
++CODE_STD_FINALIZERnewActInst
++	if(pvals != NULL)
++		cnfparamvalsDestruct(pvals, &actpblk);
++	free(rxstr);
++	free(srcMetadataPath);
++ENDnewActInst
++
++
++/* legacy config format is not supported */
++BEGINparseSelectorAct
++CODESTARTparseSelectorAct
++CODE_STD_STRING_REQUESTparseSelectorAct(1)
++	if(strncmp((char *) p, ":mmkubernetes:", sizeof(":mmkubernetes:") - 1)) {
++		errmsg.LogError(0, RS_RET_LEGA_ACT_NOT_SUPPORTED,
++			"mmkubernetes supports only v6+ config format, use: "
++			"action(type=\"mmkubernetes\" ...)");
++	}
++	ABORT_FINALIZE(RS_RET_CONFLINE_UNPROCESSED);
++CODE_STD_FINALIZERparseSelectorAct
++ENDparseSelectorAct
++
++
++BEGINendCnfLoad
++CODESTARTendCnfLoad
++ENDendCnfLoad
++
++
++BEGINcheckCnf
++CODESTARTcheckCnf
++ENDcheckCnf
++
++
++BEGINactivateCnf
++CODESTARTactivateCnf
++	runModConf = pModConf;
++ENDactivateCnf
++
++
++BEGINfreeCnf
++CODESTARTfreeCnf
++	int i;
++
++	free(pModConf->kubernetesUrl);
++	free(pModConf->srcMetadataPath);
++	free(pModConf->dstMetadataPath);
++	free(pModConf->caCertFile);
++	free(pModConf->token);
++	free(pModConf->tokenFile);
++	free(pModConf->de_dot_separator);
++	free(pModConf->fnRules);
++	free(pModConf->fnRulebase);
++	free(pModConf->contRules);
++	free(pModConf->contRulebase);
++	free_annotationmatch(&pModConf->annotation_match);
++	for(i = 0; caches[i] != NULL; i++)
++		cacheFree(caches[i]);
++	free(caches);
++ENDfreeCnf
++
++
++BEGINdbgPrintInstInfo
++CODESTARTdbgPrintInstInfo
++	dbgprintf("mmkubernetes\n");
++	dbgprintf("\tkubernetesUrl='%s'\n", pData->kubernetesUrl);
++	dbgprintf("\tsrcMetadataPath='%s'\n", pData->srcMetadataDescr->name);
++	dbgprintf("\tdstMetadataPath='%s'\n", pData->dstMetadataPath);
++	dbgprintf("\ttls.cacert='%s'\n", pData->caCertFile);
++	dbgprintf("\tallowUnsignedCerts='%d'\n", pData->allowUnsignedCerts);
++	dbgprintf("\ttoken='%s'\n", pData->token);
++	dbgprintf("\ttokenFile='%s'\n", pData->tokenFile);
++	dbgprintf("\tde_dot='%d'\n", pData->de_dot);
++	dbgprintf("\tde_dot_separator='%s'\n", pData->de_dot_separator);
++	dbgprintf("\tfilenamerulebase='%s'\n", pData->fnRulebase);
++	dbgprintf("\tcontainerrulebase='%s'\n", pData->contRulebase);
++#if HAVE_LOADSAMPLESFROMSTRING == 1
++	dbgprintf("\tfilenamerules='%s'\n", pData->fnRules);
++	dbgprintf("\tcontainerrules='%s'\n", pData->contRules);
++#endif
++ENDdbgPrintInstInfo
++
++
++BEGINtryResume
++CODESTARTtryResume
++ENDtryResume
++
++static rsRetVal
++extractMsgMetadata(smsg_t *pMsg, instanceData *pData, struct json_object **json)
++{
++	DEFiRet;
++	uchar *filename = NULL, *container_name = NULL, *container_id_full = NULL;
++	rs_size_t fnLen, container_name_len, container_id_full_len;
++	unsigned short freeFn = 0, free_container_name = 0, free_container_id_full = 0;
++	int lnret;
++	struct json_object *cnid = NULL;
++
++	if (!json)
++		FINALIZE;
++	*json = NULL;
++	/* extract metadata from the CONTAINER_NAME field and see if CONTAINER_ID_FULL is present */
++	container_name = MsgGetProp(pMsg, NULL, pData->contNameDescr,
++				    &container_name_len, &free_container_name, NULL);
++	container_id_full = MsgGetProp(
++		pMsg, NULL, pData->contIdFullDescr, &container_id_full_len, &free_container_id_full, NULL);
++
++	if (container_name && container_id_full && container_name_len && container_id_full_len) {
++		dbgprintf("mmkubernetes: CONTAINER_NAME: '%s'  CONTAINER_ID_FULL: '%s'.\n",
++			  container_name, container_id_full);
++		if ((lnret = ln_normalize(pData->contCtxln, (char*)container_name,
++					  container_name_len, json))) {
++			if (LN_WRONGPARSER != lnret) {
++				LogMsg(0, RS_RET_ERR, LOG_ERR,
++					"mmkubernetes: error parsing container_name [%s]: [%d]",
++					container_name, lnret);
++
++				ABORT_FINALIZE(RS_RET_ERR);
++			}
++			/* else assume parser didn't find a match and fall through */
++		} else if (fjson_object_object_get_ex(*json, "pod_name", NULL) &&
++			fjson_object_object_get_ex(*json, "namespace_name", NULL) &&
++			fjson_object_object_get_ex(*json, "container_name", NULL)) {
++			/* if we have fields for pod name, namespace name, container name,
++			 * and container id, we are good to go */
++			/* add field for container id */
++			json_object_object_add(*json, "container_id",
++				json_object_new_string_len((const char *)container_id_full,
++							   container_id_full_len));
++			ABORT_FINALIZE(RS_RET_OK);
++		}
++	}
++
++	/* extract metadata from the file name */
++	filename = MsgGetProp(pMsg, NULL, pData->srcMetadataDescr, &fnLen, &freeFn, NULL);
++	if((filename == NULL) || (fnLen == 0))
++		ABORT_FINALIZE(RS_RET_NOT_FOUND);
++
++	dbgprintf("mmkubernetes: filename: '%s' len %d.\n", filename, fnLen);
++	if ((lnret = ln_normalize(pData->fnCtxln, (char*)filename, fnLen, json))) {
++		if (LN_WRONGPARSER != lnret) {
++			LogMsg(0, RS_RET_ERR, LOG_ERR,
++				"mmkubernetes: error parsing filename [%s]: [%d]",
++				filename, lnret);
++
++			ABORT_FINALIZE(RS_RET_ERR);
++		} else {
++			/* no match */
++			ABORT_FINALIZE(RS_RET_NOT_FOUND);
++		}
++	}
++	/* if we have fields for pod name, namespace name, container name,
++	 * and container id, we are good to go */
++	if (fjson_object_object_get_ex(*json, "pod_name", NULL) &&
++		fjson_object_object_get_ex(*json, "namespace_name", NULL) &&
++		fjson_object_object_get_ex(*json, "container_name_and_id", &cnid)) {
++		/* parse container_name_and_id into container_name and container_id */
++		const char *container_name_and_id = json_object_get_string(cnid);
++		const char *last_dash = NULL;
++		if (container_name_and_id && (last_dash = strrchr(container_name_and_id, '-')) &&
++			*(last_dash + 1) && (last_dash != container_name_and_id)) {
++			json_object_object_add(*json, "container_name",
++				json_object_new_string_len(container_name_and_id,
++							   (int)(last_dash-container_name_and_id)));
++			json_object_object_add(*json, "container_id",
++					json_object_new_string(last_dash + 1));
++			ABORT_FINALIZE(RS_RET_OK);
++		}
++	}
++	ABORT_FINALIZE(RS_RET_NOT_FOUND);
++finalize_it:
++	if(freeFn)
++		free(filename);
++	if (free_container_name)
++		free(container_name);
++	if (free_container_id_full)
++		free(container_id_full);
++	if (iRet != RS_RET_OK) {
++		json_object_put(*json);
++		*json = NULL;
++	}
++	RETiRet;
++}
++
++
++static rsRetVal
++queryKB(wrkrInstanceData_t *pWrkrData, char *url, struct json_object **rply)
++{
++	DEFiRet;
++	CURLcode ccode;
++	struct json_tokener *jt = NULL;
++	struct json_object *jo;
++	long resp_code = 400;
++
++	/* query kubernetes for pod info */
++	ccode = curl_easy_setopt(pWrkrData->curlCtx, CURLOPT_URL, url);
++	if(ccode != CURLE_OK)
++		ABORT_FINALIZE(RS_RET_ERR);
++	if(CURLE_OK != (ccode = curl_easy_perform(pWrkrData->curlCtx))) {
++		errmsg.LogMsg(0, RS_RET_ERR, LOG_ERR,
++			      "mmkubernetes: failed to connect to [%s] - %d:%s\n",
++			      url, ccode, curl_easy_strerror(ccode));
++		ABORT_FINALIZE(RS_RET_ERR);
++	}
++	if(CURLE_OK != (ccode = curl_easy_getinfo(pWrkrData->curlCtx,
++					CURLINFO_RESPONSE_CODE, &resp_code))) {
++		errmsg.LogMsg(0, RS_RET_ERR, LOG_ERR,
++			      "mmkubernetes: could not get response code from query to [%s] - %d:%s\n",
++			      url, ccode, curl_easy_strerror(ccode));
++		ABORT_FINALIZE(RS_RET_ERR);
++	}
++	if(resp_code == 401) {
++		errmsg.LogMsg(0, RS_RET_ERR, LOG_ERR,
++			      "mmkubernetes: Unauthorized: not allowed to view url - "
++			      "check token/auth credentials [%s]\n",
++			      url);
++		ABORT_FINALIZE(RS_RET_ERR);
++	}
++	if(resp_code == 403) {
++		errmsg.LogMsg(0, RS_RET_ERR, LOG_ERR,
++			      "mmkubernetes: Forbidden: no access - "
++			      "check permissions to view url [%s]\n",
++			      url);
++		ABORT_FINALIZE(RS_RET_ERR);
++	}
++	if(resp_code == 404) {
++		errmsg.LogMsg(0, RS_RET_ERR, LOG_ERR,
++			      "mmkubernetes: Not Found: the resource does not exist at url [%s]\n",
++			      url);
++		ABORT_FINALIZE(RS_RET_ERR);
++	}
++	if(resp_code == 429) {
++		errmsg.LogMsg(0, RS_RET_ERR, LOG_ERR,
++			      "mmkubernetes: Too Many Requests: the server is too heavily loaded "
++			      "to provide the data for the requested url [%s]\n",
++			      url);
++		ABORT_FINALIZE(RS_RET_ERR);
++	}
++	if(resp_code != 200) {
++		errmsg.LogMsg(0, RS_RET_ERR, LOG_ERR,
++			      "mmkubernetes: server returned unexpected code [%ld] for url [%s]\n",
++			      resp_code, url);
++		ABORT_FINALIZE(RS_RET_ERR);
++	}
++	/* parse retrieved data */
++	jt = json_tokener_new();
++	json_tokener_reset(jt);
++	jo = json_tokener_parse_ex(jt, pWrkrData->curlRply, pWrkrData->curlRplyLen);
++	json_tokener_free(jt);
++	if(!json_object_is_type(jo, json_type_object)) {
++		json_object_put(jo);
++		jo = NULL;
++		errmsg.LogMsg(0, RS_RET_JSON_PARSE_ERR, LOG_INFO,
++			      "mmkubernetes: unable to parse string as JSON:[%.*s]\n",
++			      (int)pWrkrData->curlRplyLen, pWrkrData->curlRply);
++		ABORT_FINALIZE(RS_RET_JSON_PARSE_ERR);
++	}
++
++	dbgprintf("mmkubernetes: queryKB reply:\n%s\n",
++		json_object_to_json_string_ext(jo, JSON_C_TO_STRING_PRETTY));
++
++	*rply = jo;
++
++finalize_it:
++	if(pWrkrData->curlRply != NULL) {
++		free(pWrkrData->curlRply);
++		pWrkrData->curlRply = NULL;
++		pWrkrData->curlRplyLen = 0;
++	}
++	RETiRet;
++}
++
++
++/* versions < 8.16.0 don't support BEGINdoAction_NoStrings */
++#if defined(BEGINdoAction_NoStrings)
++BEGINdoAction_NoStrings
++	smsg_t **ppMsg = (smsg_t **) pMsgData;
++	smsg_t *pMsg = ppMsg[0];
++#else
++BEGINdoAction
++	smsg_t *pMsg = (smsg_t*) ppString[0];
++#endif
++	const char *podName = NULL, *ns = NULL, *containerName = NULL,
++		*containerID = NULL;
++	char *mdKey = NULL;
++	struct json_object *jMetadata = NULL, *jMetadataCopy = NULL, *jMsgMeta = NULL,
++			*jo = NULL;
++	int add_ns_metadata = 0;
++CODESTARTdoAction
++	CHKiRet_Hdlr(extractMsgMetadata(pMsg, pWrkrData->pData, &jMsgMeta)) {
++		ABORT_FINALIZE((iRet == RS_RET_NOT_FOUND) ? RS_RET_OK : iRet);
++	}
++
++	if (fjson_object_object_get_ex(jMsgMeta, "pod_name", &jo))
++		podName = json_object_get_string(jo);
++	if (fjson_object_object_get_ex(jMsgMeta, "namespace_name", &jo))
++		ns = json_object_get_string(jo);
++	if (fjson_object_object_get_ex(jMsgMeta, "container_name", &jo))
++		containerName = json_object_get_string(jo);
++	if (fjson_object_object_get_ex(jMsgMeta, "container_id", &jo))
++		containerID = json_object_get_string(jo);
++	assert(podName != NULL);
++	assert(ns != NULL);
++	assert(containerName != NULL);
++	assert(containerID != NULL);
++
++	dbgprintf("mmkubernetes:\n  podName: '%s'\n  namespace: '%s'\n  containerName: '%s'\n"
++		"  containerID: '%s'\n", podName, ns, containerName, containerID);
++
++	/* check cache for metadata */
++	if ((-1 == asprintf(&mdKey, "%s_%s_%s", ns, podName, containerName)) ||
++		(!mdKey)) {
++		ABORT_FINALIZE(RS_RET_OUT_OF_MEMORY);
++	}
++	pthread_mutex_lock(pWrkrData->pData->cache->cacheMtx);
++	jMetadata = hashtable_search(pWrkrData->pData->cache->mdHt, mdKey);
++
++	if(jMetadata == NULL) {
++		char *url = NULL;
++		struct json_object *jReply = NULL, *jo2 = NULL, *jNsMeta = NULL, *jPodData = NULL;
++
++		/* check cache for namespace metadata */
++		jNsMeta = hashtable_search(pWrkrData->pData->cache->nsHt, (char *)ns);
++
++		if(jNsMeta == NULL) {
++			/* query kubernetes for namespace info */
++			/* todo: move url definitions elsewhere */
++			if ((-1 == asprintf(&url, "%s/api/v1/namespaces/%s",
++				 (char *) pWrkrData->pData->kubernetesUrl, ns)) ||
++				(!url)) {
++				pthread_mutex_unlock(pWrkrData->pData->cache->cacheMtx);
++				ABORT_FINALIZE(RS_RET_OUT_OF_MEMORY);
++			}
++			iRet = queryKB(pWrkrData, url, &jReply);
++			free(url);
++			/* todo: implement support for the .orphaned namespace */
++			if (iRet != RS_RET_OK) {
++				json_object_put(jReply);
++				jReply = NULL;
++				pthread_mutex_unlock(pWrkrData->pData->cache->cacheMtx);
++				FINALIZE;
++			}
++
++			if(fjson_object_object_get_ex(jReply, "metadata", &jNsMeta)) {
++				jNsMeta = json_object_get(jNsMeta);
++				parse_labels_annotations(jNsMeta, &pWrkrData->pData->annotation_match,
++					pWrkrData->pData->de_dot,
++					(const char *)pWrkrData->pData->de_dot_separator,
++					pWrkrData->pData->de_dot_separator_len);
++				add_ns_metadata = 1;
++			} else {
++				/* namespace with no metadata??? */
++				errmsg.LogMsg(0, RS_RET_ERR, LOG_INFO,
++					      "mmkubernetes: namespace [%s] has no metadata!\n", ns);
++				jNsMeta = NULL;
++			}
++
++			json_object_put(jReply);
++			jReply = NULL;
++		}
++
++		if ((-1 == asprintf(&url, "%s/api/v1/namespaces/%s/pods/%s",
++			 (char *) pWrkrData->pData->kubernetesUrl, ns, podName)) ||
++			(!url)) {
++			pthread_mutex_unlock(pWrkrData->pData->cache->cacheMtx);
++			ABORT_FINALIZE(RS_RET_OUT_OF_MEMORY);
++		}
++		iRet = queryKB(pWrkrData, url, &jReply);
++		free(url);
++		if(iRet != RS_RET_OK) {
++			if(jNsMeta && add_ns_metadata) {
++				hashtable_insert(pWrkrData->pData->cache->nsHt, strdup(ns), jNsMeta);
++			}
++			json_object_put(jReply);
++			jReply = NULL;
++			pthread_mutex_unlock(pWrkrData->pData->cache->cacheMtx);
++			FINALIZE;
++		}
++
++		jo = json_object_new_object();
++		if(jNsMeta && fjson_object_object_get_ex(jNsMeta, "uid", &jo2))
++			json_object_object_add(jo, "namespace_id", json_object_get(jo2));
++		if(jNsMeta && fjson_object_object_get_ex(jNsMeta, "labels", &jo2))
++			json_object_object_add(jo, "namespace_labels", json_object_get(jo2));
++		if(jNsMeta && fjson_object_object_get_ex(jNsMeta, "annotations", &jo2))
++			json_object_object_add(jo, "namespace_annotations", json_object_get(jo2));
++		if(jNsMeta && fjson_object_object_get_ex(jNsMeta, "creationTimestamp", &jo2))
++			json_object_object_add(jo, "creation_timestamp", json_object_get(jo2));
++		if(fjson_object_object_get_ex(jReply, "metadata", &jPodData)) {
++			if(fjson_object_object_get_ex(jPodData, "uid", &jo2))
++				json_object_object_add(jo, "pod_id", json_object_get(jo2));
++			parse_labels_annotations(jPodData, &pWrkrData->pData->annotation_match,
++				pWrkrData->pData->de_dot,
++				(const char *)pWrkrData->pData->de_dot_separator,
++				pWrkrData->pData->de_dot_separator_len);
++			if(fjson_object_object_get_ex(jPodData, "annotations", &jo2))
++				json_object_object_add(jo, "annotations", json_object_get(jo2));
++			if(fjson_object_object_get_ex(jPodData, "labels", &jo2))
++				json_object_object_add(jo, "labels", json_object_get(jo2));
++		}
++		if(fjson_object_object_get_ex(jReply, "spec", &jPodData)) {
++			if(fjson_object_object_get_ex(jPodData, "nodeName", &jo2)) {
++				json_object_object_add(jo, "host", json_object_get(jo2));
++			}
++		}
++		json_object_put(jReply);
++		jReply = NULL;
++
++		if (fjson_object_object_get_ex(jMsgMeta, "pod_name", &jo2))
++			json_object_object_add(jo, "pod_name", json_object_get(jo2));
++		if (fjson_object_object_get_ex(jMsgMeta, "namespace_name", &jo2))
++			json_object_object_add(jo, "namespace_name", json_object_get(jo2));
++		if (fjson_object_object_get_ex(jMsgMeta, "container_name", &jo2))
++			json_object_object_add(jo, "container_name", json_object_get(jo2));
++		json_object_object_add(jo, "master_url",
++			json_object_new_string((const char *)pWrkrData->pData->kubernetesUrl));
++		jMetadata = json_object_new_object();
++		json_object_object_add(jMetadata, "kubernetes", jo);
++		jo = json_object_new_object();
++		if (fjson_object_object_get_ex(jMsgMeta, "container_id", &jo2))
++			json_object_object_add(jo, "container_id", json_object_get(jo2));
++		json_object_object_add(jMetadata, "docker", jo);
++
++		hashtable_insert(pWrkrData->pData->cache->mdHt, mdKey, jMetadata);
++		mdKey = NULL;
++		if(jNsMeta && add_ns_metadata) {
++			hashtable_insert(pWrkrData->pData->cache->nsHt, strdup(ns), jNsMeta);
++			ns = NULL;
++		}
++	}
++
++	/* make a copy of the metadata for the msg to own */
++	/* todo: use json_object_deep_copy when implementation available in libfastjson */
++	/* yes, this is expensive - but there is no other way to make this thread safe - we
++	 * can't allow the msg to have a shared pointer to an element inside the cache,
++	 * outside of the cache lock
++	 */
++	jMetadataCopy = json_tokener_parse(json_object_get_string(jMetadata));
++	pthread_mutex_unlock(pWrkrData->pData->cache->cacheMtx);
++	/* the +1 is there to skip the leading '$' */
++	msgAddJSON(pMsg, (uchar *) pWrkrData->pData->dstMetadataPath + 1, jMetadataCopy, 0, 0);
++
++finalize_it:
++	json_object_put(jMsgMeta);
++	free(mdKey);
++ENDdoAction
++
++
++BEGINisCompatibleWithFeature
++CODESTARTisCompatibleWithFeature
++ENDisCompatibleWithFeature
++
++
++/* all the macros below have to be in a specific order */
++BEGINmodExit
++CODESTARTmodExit
++	curl_global_cleanup();
++
++	objRelease(regexp, LM_REGEXP_FILENAME);
++	objRelease(errmsg, CORE_COMPONENT);
++ENDmodExit
++
++
++BEGINqueryEtryPt
++CODESTARTqueryEtryPt
++CODEqueryEtryPt_STD_OMOD_QUERIES
++CODEqueryEtryPt_STD_OMOD8_QUERIES
++CODEqueryEtryPt_STD_CONF2_QUERIES
++CODEqueryEtryPt_STD_CONF2_setModCnf_QUERIES
++CODEqueryEtryPt_STD_CONF2_OMOD_QUERIES
++ENDqueryEtryPt
++
++
++BEGINmodInit()
++CODESTARTmodInit
++	*ipIFVersProvided = CURR_MOD_IF_VERSION; /* we only support the current interface specification */
++CODEmodInit_QueryRegCFSLineHdlr
++	DBGPRINTF("mmkubernetes: module compiled with rsyslog version %s.\n", VERSION);
++	CHKiRet(objUse(errmsg, CORE_COMPONENT));
++	CHKiRet(objUse(regexp, LM_REGEXP_FILENAME));
++
++	/* CURL_GLOBAL_ALL initializes more than is needed but the
++	 * libcurl documentation discourages use of other values
++	 */
++	curl_global_init(CURL_GLOBAL_ALL);
++ENDmodInit
+diff --git a/contrib/mmkubernetes/sample.conf b/contrib/mmkubernetes/sample.conf
+new file mode 100644
+index 000000000..4c400ed51
+--- /dev/null
++++ b/contrib/mmkubernetes/sample.conf
+@@ -0,0 +1,7 @@
++module(load="mmkubernetes") # see docs for all module and action parameters
++
++# $!metadata!filename added by imfile using addmetadata="on"
++# e.g. input(type="imfile" file="/var/log/containers/*.log" tag="kubernetes" addmetadata="on")
++# $!CONTAINER_NAME and $!CONTAINER_ID_FULL added by imjournal
++
++action(type="mmkubernetes")
+-- 
+2.14.4
+
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1559408-async-writer.patch b/SOURCES/rsyslog-8.24.0-rhbz1559408-async-writer.patch
new file mode 100644
index 0000000..53563de
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1559408-async-writer.patch
@@ -0,0 +1,14 @@
+diff -up ./runtime/stream.c.fix ./runtime/stream.c
+--- ./runtime/stream.c.fix	2018-06-25 17:39:39.223082288 +0200
++++ ./runtime/stream.c	2018-06-25 17:40:26.549846798 +0200
+@@ -1427,10 +1427,8 @@ asyncWriterThread(void *pPtr)
+ 			}
+ 			if(bTimedOut && pThis->iBufPtr > 0) {
+ 				/* if we timed out, we need to flush pending data */
+-				d_pthread_mutex_unlock(&pThis->mut);
+ 				strmFlushInternal(pThis, 1);
+ 				bTimedOut = 0;
+-				d_pthread_mutex_lock(&pThis->mut); 
+ 				continue;
+ 			}
+ 			bTimedOut = 0;
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1565214-omelasticsearch-replace-cJSON-with-libfastjson.patch b/SOURCES/rsyslog-8.24.0-rhbz1565214-omelasticsearch-replace-cJSON-with-libfastjson.patch
new file mode 100644
index 0000000..37c7a95
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1565214-omelasticsearch-replace-cJSON-with-libfastjson.patch
@@ -0,0 +1,1105 @@
+From 6267b5a57c432a3be68f362c571beb062d47b3a7 Mon Sep 17 00:00:00 2001
+From: PascalWithopf <pwithopf@adiscon.com>
+Date: Tue, 23 May 2017 15:32:34 +0200
+Subject: [PATCH 10/11] omelasticsearch: replace cJSON with libfastjson
+
+(cherry picked from commit 7982f50675471220c5ba035371a8f7537a50442b)
+(cherry picked from commit 0b09c29db0cec5a215a95d03cfc37a27e486811c)
+---
+ plugins/omelasticsearch/Makefile.am       |   3 +-
+ plugins/omelasticsearch/cJSON/cjson.c     | 525 ------------------------------
+ plugins/omelasticsearch/cJSON/cjson.h     | 130 --------
+ plugins/omelasticsearch/omelasticsearch.c | 171 +++++-----
+ 12 files changed, 84 insertions(+), 1323 deletions(-)
+ delete mode 100644 plugins/omelasticsearch/cJSON/cjson.c
+ delete mode 100644 plugins/omelasticsearch/cJSON/cjson.h
+
+diff --git a/plugins/omelasticsearch/Makefile.am b/plugins/omelasticsearch/Makefile.am
+index ba85a896d..2fadb74dc 100644
+--- a/plugins/omelasticsearch/Makefile.am
++++ b/plugins/omelasticsearch/Makefile.am
+@@ -1,7 +1,6 @@
+ pkglib_LTLIBRARIES = omelasticsearch.la
+ 
+-# TODO: replace cJSON
+-omelasticsearch_la_SOURCES = omelasticsearch.c cJSON/cjson.c  cJSON/cjson.h
++omelasticsearch_la_SOURCES = omelasticsearch.c
+ omelasticsearch_la_CPPFLAGS =  $(RSRT_CFLAGS) $(PTHREADS_CFLAGS)
+ omelasticsearch_la_LDFLAGS = -module -avoid-version
+ omelasticsearch_la_LIBADD =  $(CURL_LIBS) $(LIBM)
+diff --git a/plugins/omelasticsearch/cJSON/cjson.c b/plugins/omelasticsearch/cJSON/cjson.c
+deleted file mode 100644
+index 6f7d43a23..000000000
+--- a/plugins/omelasticsearch/cJSON/cjson.c
++++ /dev/null
+@@ -1,525 +0,0 @@
+-/*
+-  Copyright (c) 2009 Dave Gamble
+-
+-  Permission is hereby granted, free of charge, to any person obtaining a copy
+-  of this software and associated documentation files (the "Software"), to deal
+-  in the Software without restriction, including without limitation the rights
+-  to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+-  copies of the Software, and to permit persons to whom the Software is
+-  furnished to do so, subject to the following conditions:
+-
+-  The above copyright notice and this permission notice shall be included in
+-  all copies or substantial portions of the Software.
+-
+-  THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+-  IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+-  FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+-  AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+-  LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+-  OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+-  THE SOFTWARE.
+-*/
+-
+-/* this code has several warnings, but we ignore them because
+- * this seems to work and we do not want to engage in that code body. If
+- * we really run into troubles, it is better to change to libfastjson, which
+- * we should do in the medium to long term anyhow...
+- */
+-#pragma GCC diagnostic ignored "-Wmissing-prototypes"
+-#pragma GCC diagnostic ignored "-Wredundant-decls"
+-#pragma GCC diagnostic ignored "-Wstrict-prototypes"
+-#pragma GCC diagnostic ignored "-Wswitch-default"
+-#pragma GCC diagnostic ignored "-Wold-style-definition"
+-
+-/* cJSON */
+-/* JSON parser in C. */
+-
+-#include <string.h>
+-#include <stdio.h>
+-#include <math.h>
+-#include <stdlib.h>
+-#include <float.h>
+-#include <limits.h>
+-#include <ctype.h>
+-#include "cjson.h"
+-
+-static const char *ep;
+-
+-const char *cJSON_GetErrorPtr() {return ep;}
+-
+-static int cJSON_strcasecmp(const char *s1,const char *s2)
+-{
+-	if (!s1) return (s1==s2)?0:1;if (!s2) return 1;
+-	for(; tolower(*s1) == tolower(*s2); ++s1, ++s2)	if(*s1 == 0)	return 0;
+-	return tolower(*(const unsigned char *)s1) - tolower(*(const unsigned char *)s2);
+-}
+-
+-static void *(*cJSON_malloc)(size_t sz) = malloc;
+-static void (*cJSON_free)(void *ptr) = free;
+-
+-static char* cJSON_strdup(const char* str)
+-{
+-      size_t len;
+-      char* copy;
+-
+-      len = strlen(str) + 1;
+-      if (!(copy = (char*)cJSON_malloc(len))) return 0;
+-      memcpy(copy,str,len);
+-      return copy;
+-}
+-
+-void cJSON_InitHooks(cJSON_Hooks* hooks)
+-{
+-    if (!hooks) { /* Reset hooks */
+-        cJSON_malloc = malloc;
+-        cJSON_free = free;
+-        return;
+-    }
+-
+-	cJSON_malloc = (hooks->malloc_fn)?hooks->malloc_fn:malloc;
+-	cJSON_free	 = (hooks->free_fn)?hooks->free_fn:free;
+-}
+-
+-/* Internal constructor. */
+-static cJSON *cJSON_New_Item()
+-{
+-	cJSON* node = (cJSON*)cJSON_malloc(sizeof(cJSON));
+-	if (node) memset(node,0,sizeof(cJSON));
+-	return node;
+-}
+-
+-/* Delete a cJSON structure. */
+-void cJSON_Delete(cJSON *c)
+-{
+-	cJSON *next;
+-	while (c)
+-	{
+-		next=c->next;
+-		if (!(c->type&cJSON_IsReference) && c->child) cJSON_Delete(c->child);
+-		if (!(c->type&cJSON_IsReference) && c->valuestring) cJSON_free(c->valuestring);
+-		if (c->string) cJSON_free(c->string);
+-		cJSON_free(c);
+-		c=next;
+-	}
+-}
+-
+-/* Parse the input text to generate a number, and populate the result into item. */
+-static const char *parse_number(cJSON *item,const char *num)
+-{
+-	double n=0,sign=1,scale=0;int subscale=0,signsubscale=1;
+-
+-	/* Could use sscanf for this? */
+-	if (*num=='-') sign=-1,num++;	/* Has sign? */
+-	if (*num=='0') num++;			/* is zero */
+-	if (*num>='1' && *num<='9')	do	n=(n*10.0)+(*num++ -'0');	while (*num>='0' && *num<='9');	/* Number? */
+-	if (*num=='.' && num[1]>='0' && num[1]<='9') {num++;		do	n=(n*10.0)+(*num++ -'0'),scale--; while (*num>='0' && *num<='9');}	/* Fractional part? */
+-	if (*num=='e' || *num=='E')		/* Exponent? */
+-	{	num++;if (*num=='+') num++;	else if (*num=='-') signsubscale=-1,num++;		/* With sign? */
+-		while (*num>='0' && *num<='9') subscale=(subscale*10)+(*num++ - '0');	/* Number? */
+-	}
+-
+-	n=sign*n*pow(10.0,(scale+subscale*signsubscale));	/* number = +/- number.fraction * 10^+/- exponent */
+-	
+-	item->valuedouble=n;
+-	item->valueint=(int)n;
+-	item->type=cJSON_Number;
+-	return num;
+-}
+-
+-/* Render the number nicely from the given item into a string. */
+-char *cJSON_print_number(cJSON *item)
+-{
+-	char *str;
+-	double d=item->valuedouble;
+-	if (fabs(((double)item->valueint)-d)<=DBL_EPSILON && d<=INT_MAX && d>=INT_MIN)
+-	{
+-		str=(char*)cJSON_malloc(21);	/* 2^64+1 can be represented in 21 chars. */
+-		if (str) sprintf(str,"%d",item->valueint);
+-	}
+-	else
+-	{
+-		str=(char*)cJSON_malloc(64);	/* This is a nice tradeoff. */
+-		if (str)
+-		{
+-			if (fabs(floor(d)-d)<=DBL_EPSILON)			sprintf(str,"%.0f",d);
+-			else if (fabs(d)<1.0e-6 || fabs(d)>1.0e9)	sprintf(str,"%e",d);
+-			else										sprintf(str,"%f",d);
+-		}
+-	}
+-	return str;
+-}
+-
+-/* Parse the input text into an unescaped cstring, and populate item. */
+-static const unsigned char firstByteMark[7] = { 0x00, 0x00, 0xC0, 0xE0, 0xF0, 0xF8, 0xFC };
+-static const char *parse_string(cJSON *item,const char *str)
+-{
+-	const char *ptr=str+1;char *ptr2;char *out;int len=0;unsigned uc,uc2;
+-	if (*str!='\"') {ep=str;return 0;}	/* not a string! */
+-	
+-	while (*ptr!='\"' && *ptr && ++len) if (*ptr++ == '\\') ptr++;	/* Skip escaped quotes. */
+-	
+-	out=(char*)cJSON_malloc(len+1);	/* This is how long we need for the string, roughly. */
+-	if (!out) return 0;
+-	
+-	ptr=str+1;ptr2=out;
+-	while (*ptr!='\"' && *ptr)
+-	{
+-		if (*ptr!='\\') *ptr2++=*ptr++;
+-		else
+-		{
+-			ptr++;
+-			switch (*ptr)
+-			{
+-				case 'b': *ptr2++='\b';	break;
+-				case 'f': *ptr2++='\f';	break;
+-				case 'n': *ptr2++='\n';	break;
+-				case 'r': *ptr2++='\r';	break;
+-				case 't': *ptr2++='\t';	break;
+-				case 'u':	 /* transcode utf16 to utf8. */
+-					sscanf(ptr+1,"%4x",&uc);ptr+=4;	/* get the unicode char. */
+-
+-					if ((uc>=0xDC00 && uc<=0xDFFF) || uc==0)	break;	// check for invalid.
+-
+-					if (uc>=0xD800 && uc<=0xDBFF)	// UTF16 surrogate pairs.
+-					{
+-						if (ptr[1]!='\\' || ptr[2]!='u')	break;	// missing second-half of surrogate.
+-						sscanf(ptr+3,"%4x",&uc2);ptr+=6;
+-						if (uc2<0xDC00 || uc2>0xDFFF)		break;	// invalid second-half of surrogate.
+-						uc=0x10000 | ((uc&0x3FF)<<10) | (uc2&0x3FF);
+-					}
+-
+-					len=4;if (uc<0x80) len=1;else if (uc<0x800) len=2;else if (uc<0x10000) len=3; ptr2+=len;
+-					
+-					switch (len) {
+-						case 4: *--ptr2 =((uc | 0x80) & 0xBF); uc >>= 6;
+-						case 3: *--ptr2 =((uc | 0x80) & 0xBF); uc >>= 6;
+-						case 2: *--ptr2 =((uc | 0x80) & 0xBF); uc >>= 6;
+-						case 1: *--ptr2 =(uc | firstByteMark[len]);
+-					}
+-					ptr2+=len;
+-					break;
+-				default:  *ptr2++=*ptr; break;
+-			}
+-			ptr++;
+-		}
+-	}
+-	*ptr2=0;
+-	if (*ptr=='\"') ptr++;
+-	item->valuestring=out;
+-	item->type=cJSON_String;
+-	return ptr;
+-}
+-
+-/* Render the cstring provided to an escaped version that can be printed. */
+-static char *print_string_ptr(const char *str)
+-{
+-	const char *ptr;char *ptr2,*out;int len=0;unsigned char token;
+-	
+-	if (!str) return cJSON_strdup("");
+-	ptr=str;while ((token=*ptr) && ++len) {if (strchr("\"\\\b\f\n\r\t",token)) len++; else if (token<32) len+=5;ptr++;}
+-	
+-	out=(char*)cJSON_malloc(len+3);
+-	if (!out) return 0;
+-
+-	ptr2=out;ptr=str;
+-	*ptr2++='\"';
+-	while (*ptr)
+-	{
+-		if ((unsigned char)*ptr>31 && *ptr!='\"' && *ptr!='\\') *ptr2++=*ptr++;
+-		else
+-		{
+-			*ptr2++='\\';
+-			switch (token=*ptr++)
+-			{
+-				case '\\':	*ptr2++='\\';	break;
+-				case '\"':	*ptr2++='\"';	break;
+-				case '\b':	*ptr2++='b';	break;
+-				case '\f':	*ptr2++='f';	break;
+-				case '\n':	*ptr2++='n';	break;
+-				case '\r':	*ptr2++='r';	break;
+-				case '\t':	*ptr2++='t';	break;
+-				default: sprintf(ptr2,"u%04x",token);ptr2+=5;	break;	/* escape and print */
+-			}
+-		}
+-	}
+-	*ptr2++='\"';*ptr2++=0;
+-	return out;
+-}
+-/* Invote print_string_ptr (which is useful) on an item. */
+-static char *print_string(cJSON *item)	{return print_string_ptr(item->valuestring);}
+-
+-/* Predeclare these prototypes. */
+-static const char *parse_value(cJSON *item,const char *value);
+-static char *print_value(cJSON *item,int depth,int fmt);
+-static const char *parse_array(cJSON *item,const char *value);
+-static char *print_array(cJSON *item,int depth,int fmt);
+-static const char *parse_object(cJSON *item,const char *value);
+-static char *print_object(cJSON *item,int depth,int fmt);
+-
+-/* Utility to jump whitespace and cr/lf */
+-static const char *skip(const char *in) {while (in && *in && (unsigned char)*in<=32) in++; return in;}
+-
+-/* Parse an object - create a new root, and populate. */
+-cJSON *cJSON_Parse(const char *value)
+-{
+-	cJSON *c=cJSON_New_Item();
+-	ep=0;
+-	if (!c) return 0;       /* memory fail */
+-
+-	if (!parse_value(c,skip(value))) {cJSON_Delete(c);return 0;}
+-	return c;
+-}
+-
+-/* Render a cJSON item/entity/structure to text. */
+-char *cJSON_Print(cJSON *item)				{return print_value(item,0,1);}
+-char *cJSON_PrintUnformatted(cJSON *item)	{return print_value(item,0,0);}
+-
+-/* Parser core - when encountering text, process appropriately. */
+-static const char *parse_value(cJSON *item,const char *value)
+-{
+-	if (!value)						return 0;	/* Fail on null. */
+-	if (!strncmp(value,"null",4))	{ item->type=cJSON_NULL;  return value+4; }
+-	if (!strncmp(value,"false",5))	{ item->type=cJSON_False; return value+5; }
+-	if (!strncmp(value,"true",4))	{ item->type=cJSON_True; item->valueint=1;	return value+4; }
+-	if (*value=='\"')				{ return parse_string(item,value); }
+-	if (*value=='-' || (*value>='0' && *value<='9'))	{ return parse_number(item,value); }
+-	if (*value=='[')				{ return parse_array(item,value); }
+-	if (*value=='{')				{ return parse_object(item,value); }
+-
+-	ep=value;return 0;	/* failure. */
+-}
+-
+-/* Render a value to text. */
+-static char *print_value(cJSON *item,int depth,int fmt)
+-{
+-	char *out=0;
+-	if (!item) return 0;
+-	switch ((item->type)&255)
+-	{
+-		case cJSON_NULL:	out=cJSON_strdup("null");	break;
+-		case cJSON_False:	out=cJSON_strdup("false");break;
+-		case cJSON_True:	out=cJSON_strdup("true"); break;
+-		case cJSON_Number:	out=cJSON_print_number(item);break;
+-		case cJSON_String:	out=print_string(item);break;
+-		case cJSON_Array:	out=print_array(item,depth,fmt);break;
+-		case cJSON_Object:	out=print_object(item,depth,fmt);break;
+-	}
+-	return out;
+-}
+-
+-/* Build an array from input text. */
+-static const char *parse_array(cJSON *item,const char *value)
+-{
+-	cJSON *child;
+-	if (*value!='[')	{ep=value;return 0;}	/* not an array! */
+-
+-	item->type=cJSON_Array;
+-	value=skip(value+1);
+-	if (*value==']') return value+1;	/* empty array. */
+-
+-	item->child=child=cJSON_New_Item();
+-	if (!item->child) return 0;		 /* memory fail */
+-	value=skip(parse_value(child,skip(value)));	/* skip any spacing, get the value. */
+-	if (!value) return 0;
+-
+-	while (*value==',')
+-	{
+-		cJSON *new_item;
+-		if (!(new_item=cJSON_New_Item())) return 0; 	/* memory fail */
+-		child->next=new_item;new_item->prev=child;child=new_item;
+-		value=skip(parse_value(child,skip(value+1)));
+-		if (!value) return 0;	/* memory fail */
+-	}
+-
+-	if (*value==']') return value+1;	/* end of array */
+-	ep=value;return 0;	/* malformed. */
+-}
+-
+-/* Render an array to text */
+-static char *print_array(cJSON *item,int depth,int fmt)
+-{
+-	char **entries;
+-	char *out=0,*ptr,*ret;int len=5;
+-	cJSON *child=item->child;
+-	int numentries=0,i=0,fail=0;
+-	
+-	/* How many entries in the array? */
+-	while (child) numentries++,child=child->next;
+-	/* Allocate an array to hold the values for each */
+-	entries=(char**)cJSON_malloc(numentries*sizeof(char*));
+-	if (!entries) return 0;
+-	memset(entries,0,numentries*sizeof(char*));
+-	/* Retrieve all the results: */
+-	child=item->child;
+-	while (child && !fail)
+-	{
+-		ret=print_value(child,depth+1,fmt);
+-		entries[i++]=ret;
+-		if (ret) len+=strlen(ret)+2+(fmt?1:0); else fail=1;
+-		child=child->next;
+-	}
+-	
+-	/* If we didn't fail, try to malloc the output string */
+-	if (!fail) out=(char*)cJSON_malloc(len);
+-	/* If that fails, we fail. */
+-	if (!out) fail=1;
+-
+-	/* Handle failure. */
+-	if (fail)
+-	{
+-		for (i=0;i<numentries;i++) if (entries[i]) cJSON_free(entries[i]);
+-		cJSON_free(entries);
+-		return 0;
+-	}
+-	
+-	/* Compose the output array. */
+-	*out='[';
+-	ptr=out+1;*ptr=0;
+-	for (i=0;i<numentries;i++)
+-	{
+-		strcpy(ptr,entries[i]);ptr+=strlen(entries[i]);
+-		if (i!=numentries-1) {*ptr++=',';if(fmt)*ptr++=' ';*ptr=0;}
+-		cJSON_free(entries[i]);
+-	}
+-	cJSON_free(entries);
+-	*ptr++=']';*ptr++=0;
+-	return out;	
+-}
+-
+-/* Build an object from the text. */
+-static const char *parse_object(cJSON *item,const char *value)
+-{
+-	cJSON *child;
+-	if (*value!='{')	{ep=value;return 0;}	/* not an object! */
+-	
+-	item->type=cJSON_Object;
+-	value=skip(value+1);
+-	if (*value=='}') return value+1;	/* empty array. */
+-	
+-	item->child=child=cJSON_New_Item();
+-	if (!item->child) return 0;
+-	value=skip(parse_string(child,skip(value)));
+-	if (!value) return 0;
+-	child->string=child->valuestring;child->valuestring=0;
+-	if (*value!=':') {ep=value;return 0;}	/* fail! */
+-	value=skip(parse_value(child,skip(value+1)));	/* skip any spacing, get the value. */
+-	if (!value) return 0;
+-	
+-	while (*value==',')
+-	{
+-		cJSON *new_item;
+-		if (!(new_item=cJSON_New_Item()))	return 0; /* memory fail */
+-		child->next=new_item;new_item->prev=child;child=new_item;
+-		value=skip(parse_string(child,skip(value+1)));
+-		if (!value) return 0;
+-		child->string=child->valuestring;child->valuestring=0;
+-		if (*value!=':') {ep=value;return 0;}	/* fail! */
+-		value=skip(parse_value(child,skip(value+1)));	/* skip any spacing, get the value. */
+-		if (!value) return 0;
+-	}
+-	
+-	if (*value=='}') return value+1;	/* end of array */
+-	ep=value;return 0;	/* malformed. */
+-}
+-
+-/* Render an object to text. */
+-static char *print_object(cJSON *item,int depth,int fmt)
+-{
+-	char **entries=0,**names=0;
+-	char *out=0,*ptr,*ret,*str;int len=7,i=0,j;
+-	cJSON *child=item->child;
+-	int numentries=0,fail=0;
+-	/* Count the number of entries. */
+-	while (child) numentries++,child=child->next;
+-	/* Allocate space for the names and the objects */
+-	entries=(char**)cJSON_malloc(numentries*sizeof(char*));
+-	if (!entries) return 0;
+-	names=(char**)cJSON_malloc(numentries*sizeof(char*));
+-	if (!names) {cJSON_free(entries);return 0;}
+-	memset(entries,0,sizeof(char*)*numentries);
+-	memset(names,0,sizeof(char*)*numentries);
+-
+-	/* Collect all the results into our arrays: */
+-	child=item->child;depth++;if (fmt) len+=depth;
+-	while (child)
+-	{
+-		names[i]=str=print_string_ptr(child->string);
+-		entries[i++]=ret=print_value(child,depth,fmt);
+-		if (str && ret) len+=strlen(ret)+strlen(str)+2+(fmt?2+depth:0); else fail=1;
+-		child=child->next;
+-	}
+-	
+-	/* Try to allocate the output string */
+-	if (!fail) out=(char*)cJSON_malloc(len);
+-	if (!out) fail=1;
+-
+-	/* Handle failure */
+-	if (fail)
+-	{
+-		for (i=0;i<numentries;i++) {if (names[i]) cJSON_free(names[i]);if (entries[i]) cJSON_free(entries[i]);}
+-		cJSON_free(names);cJSON_free(entries);
+-		return 0;
+-	}
+-	
+-	/* Compose the output: */
+-	*out='{';ptr=out+1;if (fmt)*ptr++='\n';*ptr=0;
+-	for (i=0;i<numentries;i++)
+-	{
+-		if (fmt) for (j=0;j<depth;j++) *ptr++='\t';
+-		strcpy(ptr,names[i]);ptr+=strlen(names[i]);
+-		*ptr++=':';if (fmt) *ptr++='\t';
+-		strcpy(ptr,entries[i]);ptr+=strlen(entries[i]);
+-		if (i!=numentries-1) *ptr++=',';
+-		if (fmt) *ptr++='\n';*ptr=0;
+-		cJSON_free(names[i]);cJSON_free(entries[i]);
+-	}
+-	
+-	cJSON_free(names);cJSON_free(entries);
+-	if (fmt) for (i=0;i<depth-1;i++) *ptr++='\t';
+-	*ptr++='}';*ptr++=0;
+-	return out;	
+-}
+-
+-/* Get Array size/item / object item. */
+-int    cJSON_GetArraySize(cJSON *array)							{cJSON *c=array->child;int i=0;while(c)i++,c=c->next;return i;}
+-cJSON *cJSON_GetArrayItem(cJSON *array,int item)				{cJSON *c=array->child;  while (c && item>0) item--,c=c->next; return c;}
+-cJSON *cJSON_GetObjectItem(cJSON *object,const char *string)	{cJSON *c=object->child; while (c && cJSON_strcasecmp(c->string,string)) c=c->next; return c;}
+-
+-/* Utility for array list handling. */
+-static void suffix_object(cJSON *prev,cJSON *item) {prev->next=item;item->prev=prev;}
+-/* Utility for handling references. */
+-static cJSON *create_reference(cJSON *item) {cJSON *ref=cJSON_New_Item();if (!ref) return 0;memcpy(ref,item,sizeof(cJSON));ref->string=0;ref->type|=cJSON_IsReference;ref->next=ref->prev=0;return ref;}
+-
+-/* Add item to array/object. */
+-void   cJSON_AddItemToArray(cJSON *array, cJSON *item)						{cJSON *c=array->child;if (!item) return; if (!c) {array->child=item;} else {while (c && c->next) c=c->next; suffix_object(c,item);}}
+-void   cJSON_AddItemToObject(cJSON *object,const char *string,cJSON *item)	{if (!item) return; if (item->string) cJSON_free(item->string);item->string=cJSON_strdup(string);cJSON_AddItemToArray(object,item);}
+-void	cJSON_AddItemReferenceToArray(cJSON *array, cJSON *item)						{cJSON_AddItemToArray(array,create_reference(item));}
+-void	cJSON_AddItemReferenceToObject(cJSON *object,const char *string,cJSON *item)	{cJSON_AddItemToObject(object,string,create_reference(item));}
+-
+-cJSON *cJSON_DetachItemFromArray(cJSON *array,int which)			{cJSON *c=array->child;while (c && which>0) c=c->next,which--;if (!c) return 0;
+-	if (c->prev) c->prev->next=c->next;if (c->next) c->next->prev=c->prev;if (c==array->child) array->child=c->next;c->prev=c->next=0;return c;}
+-void   cJSON_DeleteItemFromArray(cJSON *array,int which)			{cJSON_Delete(cJSON_DetachItemFromArray(array,which));}
+-cJSON *cJSON_DetachItemFromObject(cJSON *object,const char *string) {int i=0;cJSON *c=object->child;while (c && cJSON_strcasecmp(c->string,string)) i++,c=c->next;if (c) return cJSON_DetachItemFromArray(object,i);return 0;}
+-void   cJSON_DeleteItemFromObject(cJSON *object,const char *string) {cJSON_Delete(cJSON_DetachItemFromObject(object,string));}
+-
+-/* Replace array/object items with new ones. */
+-void   cJSON_ReplaceItemInArray(cJSON *array,int which,cJSON *newitem)		{cJSON *c=array->child;while (c && which>0) c=c->next,which--;if (!c) return;
+-	newitem->next=c->next;newitem->prev=c->prev;if (newitem->next) newitem->next->prev=newitem;
+-	if (c==array->child) array->child=newitem; else newitem->prev->next=newitem;c->next=c->prev=0;cJSON_Delete(c);}
+-void   cJSON_ReplaceItemInObject(cJSON *object,const char *string,cJSON *newitem){int i=0;cJSON *c=object->child;while(c && cJSON_strcasecmp(c->string,string))i++,c=c->next;if(c){newitem->string=cJSON_strdup(string);cJSON_ReplaceItemInArray(object,i,newitem);}}
+-
+-/* Create basic types: */
+-cJSON *cJSON_CreateNull()						{cJSON *item=cJSON_New_Item();if(item)item->type=cJSON_NULL;return item;}
+-cJSON *cJSON_CreateTrue()						{cJSON *item=cJSON_New_Item();if(item)item->type=cJSON_True;return item;}
+-cJSON *cJSON_CreateFalse()						{cJSON *item=cJSON_New_Item();if(item)item->type=cJSON_False;return item;}
+-cJSON *cJSON_CreateBool(int b)					{cJSON *item=cJSON_New_Item();if(item)item->type=b?cJSON_True:cJSON_False;return item;}
+-cJSON *cJSON_CreateNumber(double num)			{cJSON *item=cJSON_New_Item();if(item){item->type=cJSON_Number;item->valuedouble=num;item->valueint=(int)num;}return item;}
+-cJSON *cJSON_CreateString(const char *string)	{cJSON *item=cJSON_New_Item();if(item){item->type=cJSON_String;item->valuestring=cJSON_strdup(string);}return item;}
+-cJSON *cJSON_CreateArray()						{cJSON *item=cJSON_New_Item();if(item)item->type=cJSON_Array;return item;}
+-cJSON *cJSON_CreateObject()						{cJSON *item=cJSON_New_Item();if(item)item->type=cJSON_Object;return item;}
+-
+-/* Create Arrays: */
+-cJSON *cJSON_CreateIntArray(int *numbers,int count)				{int i;cJSON *n=0,*p=0,*a=cJSON_CreateArray();for(i=0;a && i<count;i++){n=cJSON_CreateNumber(numbers[i]);if(!i)a->child=n;else suffix_object(p,n);p=n;}return a;}
+-cJSON *cJSON_CreateFloatArray(float *numbers,int count)			{int i;cJSON *n=0,*p=0,*a=cJSON_CreateArray();for(i=0;a && i<count;i++){n=cJSON_CreateNumber(numbers[i]);if(!i)a->child=n;else suffix_object(p,n);p=n;}return a;}
+-cJSON *cJSON_CreateDoubleArray(double *numbers,int count)		{int i;cJSON *n=0,*p=0,*a=cJSON_CreateArray();for(i=0;a && i<count;i++){n=cJSON_CreateNumber(numbers[i]);if(!i)a->child=n;else suffix_object(p,n);p=n;}return a;}
+-cJSON *cJSON_CreateStringArray(const char **strings,int count)	{int i;cJSON *n=0,*p=0,*a=cJSON_CreateArray();for(i=0;a && i<count;i++){n=cJSON_CreateString(strings[i]);if(!i)a->child=n;else suffix_object(p,n);p=n;}return a;}
+diff --git a/plugins/omelasticsearch/cJSON/cjson.h b/plugins/omelasticsearch/cJSON/cjson.h
+deleted file mode 100644
+index a621720ce..000000000
+--- a/plugins/omelasticsearch/cJSON/cjson.h
++++ /dev/null
+@@ -1,130 +0,0 @@
+-/*
+-  Copyright (c) 2009 Dave Gamble
+- 
+-  Permission is hereby granted, free of charge, to any person obtaining a copy
+-  of this software and associated documentation files (the "Software"), to deal
+-  in the Software without restriction, including without limitation the rights
+-  to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+-  copies of the Software, and to permit persons to whom the Software is
+-  furnished to do so, subject to the following conditions:
+- 
+-  The above copyright notice and this permission notice shall be included in
+-  all copies or substantial portions of the Software.
+- 
+-  THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+-  IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+-  FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+-  AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+-  LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+-  OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+-  THE SOFTWARE.
+-*/
+-
+-#ifndef cJSON__h
+-#define cJSON__h
+-
+-#ifdef __cplusplus
+-extern "C"
+-{
+-#endif
+-
+-/* cJSON Types: */
+-#define cJSON_False 0
+-#define cJSON_True 1
+-#define cJSON_NULL 2
+-#define cJSON_Number 3
+-#define cJSON_String 4
+-#define cJSON_Array 5
+-#define cJSON_Object 6
+-	
+-#define cJSON_IsReference 256
+-
+-/* The cJSON structure: */
+-typedef struct cJSON {
+-	struct cJSON *next,*prev;	/* next/prev allow you to walk array/object chains. Alternatively, use GetArraySize/GetArrayItem/GetObjectItem */
+-	struct cJSON *child;		/* An array or object item will have a child pointer pointing to a chain of the items in the array/object. */
+-
+-	int type;					/* The type of the item, as above. */
+-
+-	char *valuestring;			/* The item's string, if type==cJSON_String */
+-	int valueint;				/* The item's number, if type==cJSON_Number */
+-	double valuedouble;			/* The item's number, if type==cJSON_Number */
+-
+-	char *string;				/* The item's name string, if this item is the child of, or is in the list of subitems of an object. */
+-} cJSON;
+-
+-typedef struct cJSON_Hooks {
+-      void *(*malloc_fn)(size_t sz);
+-      void (*free_fn)(void *ptr);
+-} cJSON_Hooks;
+-
+-/* Supply malloc, realloc and free functions to cJSON */
+-extern void cJSON_InitHooks(cJSON_Hooks* hooks);
+-
+-
+-/* Supply a block of JSON, and this returns a cJSON object you can interrogate. Call cJSON_Delete when finished. */
+-extern cJSON *cJSON_Parse(const char *value);
+-/* Render a cJSON entity to text for transfer/storage. Free the char* when finished. */
+-extern char  *cJSON_Print(cJSON *item);
+-/* Render a cJSON entity to text for transfer/storage without any formatting. Free the char* when finished. */
+-extern char  *cJSON_PrintUnformatted(cJSON *item);
+-/* Delete a cJSON entity and all subentities. */
+-extern void   cJSON_Delete(cJSON *c);
+-
+-/* Returns the number of items in an array (or object). */
+-extern int	  cJSON_GetArraySize(cJSON *array);
+-/* Retrieve item number "item" from array "array". Returns NULL if unsuccessful. */
+-extern cJSON *cJSON_GetArrayItem(cJSON *array,int item);
+-/* Get item "string" from object. Case insensitive. */
+-extern cJSON *cJSON_GetObjectItem(cJSON *object,const char *string);
+-
+-/* For analysing failed parses. This returns a pointer to the parse error. You'll probably need to look a few chars back to make sense of it. Defined when cJSON_Parse() returns 0. 0 when cJSON_Parse() succeeds. */
+-extern const char *cJSON_GetErrorPtr();
+-	
+-/* These calls create a cJSON item of the appropriate type. */
+-extern cJSON *cJSON_CreateNull();
+-extern cJSON *cJSON_CreateTrue();
+-extern cJSON *cJSON_CreateFalse();
+-extern cJSON *cJSON_CreateBool(int b);
+-extern cJSON *cJSON_CreateNumber(double num);
+-extern cJSON *cJSON_CreateString(const char *string);
+-extern cJSON *cJSON_CreateArray();
+-extern cJSON *cJSON_CreateObject();
+-
+-/* These utilities create an Array of count items. */
+-extern cJSON *cJSON_CreateIntArray(int *numbers,int count);
+-extern cJSON *cJSON_CreateFloatArray(float *numbers,int count);
+-extern cJSON *cJSON_CreateDoubleArray(double *numbers,int count);
+-extern cJSON *cJSON_CreateStringArray(const char **strings,int count);
+-
+-/* Append item to the specified array/object. */
+-extern void cJSON_AddItemToArray(cJSON *array, cJSON *item);
+-extern void	cJSON_AddItemToObject(cJSON *object,const char *string,cJSON *item);
+-/* Append reference to item to the specified array/object. Use this when you want to add an existing cJSON to a new cJSON, but don't want to corrupt your existing cJSON. */
+-extern void cJSON_AddItemReferenceToArray(cJSON *array, cJSON *item);
+-extern void	cJSON_AddItemReferenceToObject(cJSON *object,const char *string,cJSON *item);
+-
+-/* Remove/Detatch items from Arrays/Objects. */
+-extern cJSON *cJSON_DetachItemFromArray(cJSON *array,int which);
+-extern void   cJSON_DeleteItemFromArray(cJSON *array,int which);
+-extern cJSON *cJSON_DetachItemFromObject(cJSON *object,const char *string);
+-extern void   cJSON_DeleteItemFromObject(cJSON *object,const char *string);
+-	
+-/* Update array items. */
+-extern void cJSON_ReplaceItemInArray(cJSON *array,int which,cJSON *newitem);
+-extern void cJSON_ReplaceItemInObject(cJSON *object,const char *string,cJSON *newitem);
+-
+-/* rger: added helpers */
+-
+-char *cJSON_print_number(cJSON *item);
+-#define cJSON_AddNullToObject(object,name)	cJSON_AddItemToObject(object, name, cJSON_CreateNull())
+-#define cJSON_AddTrueToObject(object,name)	cJSON_AddItemToObject(object, name, cJSON_CreateTrue())
+-#define cJSON_AddFalseToObject(object,name)		cJSON_AddItemToObject(object, name, cJSON_CreateFalse())
+-#define cJSON_AddNumberToObject(object,name,n)	cJSON_AddItemToObject(object, name, cJSON_CreateNumber(n))
+-#define cJSON_AddStringToObject(object,name,s)	cJSON_AddItemToObject(object, name, cJSON_CreateString(s))
+-
+-#ifdef __cplusplus
+-}
+-#endif
+-
+-#endif
+diff --git a/plugins/omelasticsearch/omelasticsearch.c b/plugins/omelasticsearch/omelasticsearch.c
+index 88bd5e16c..ed2b47535 100644
+--- a/plugins/omelasticsearch/omelasticsearch.c
++++ b/plugins/omelasticsearch/omelasticsearch.c
+@@ -41,7 +41,7 @@
+ #if defined(__FreeBSD__)
+ #include <unistd.h>
+ #endif
+-#include "cJSON/cjson.h"
++#include <json.h>
+ #include "conf.h"
+ #include "syslogd-types.h"
+ #include "srUtils.h"
+@@ -626,29 +626,29 @@ finalize_it:
+  * Dumps entire bulk request and response in error log
+  */
+ static rsRetVal
+-getDataErrorDefault(wrkrInstanceData_t *pWrkrData,cJSON **pReplyRoot,uchar *reqmsg,char **rendered)
++getDataErrorDefault(wrkrInstanceData_t *pWrkrData,fjson_object **pReplyRoot,uchar *reqmsg,char **rendered)
+ {
+ 	DEFiRet;
+-	cJSON *req=0;
+-	cJSON *errRoot=0;
+-	cJSON *replyRoot = *pReplyRoot;
++	fjson_object *req=NULL;
++	fjson_object *errRoot=NULL;
++	fjson_object *replyRoot = *pReplyRoot;
+ 
+-	if((req=cJSON_CreateObject()) == NULL) ABORT_FINALIZE(RS_RET_ERR);
+-	cJSON_AddItemToObject(req, "url", cJSON_CreateString((char*)pWrkrData->restURL));
+-	cJSON_AddItemToObject(req, "postdata", cJSON_CreateString((char*)reqmsg));
++	if((req=fjson_object_new_object()) == NULL) ABORT_FINALIZE(RS_RET_ERR);
++	fjson_object_object_add(req, "url", fjson_object_new_string((char*)pWrkrData->restURL));
++	fjson_object_object_add(req, "postdata", fjson_object_new_string((char*)reqmsg));
+ 
+-	if((errRoot=cJSON_CreateObject()) == NULL) ABORT_FINALIZE(RS_RET_ERR);
+-	cJSON_AddItemToObject(errRoot, "request", req);
+-	cJSON_AddItemToObject(errRoot, "reply", replyRoot);
+-	*rendered = cJSON_Print(errRoot);
++	if((errRoot=fjson_object_new_object()) == NULL) ABORT_FINALIZE(RS_RET_ERR);
++	fjson_object_object_add(errRoot, "request", req);
++	fjson_object_object_add(errRoot, "reply", replyRoot);
++	*rendered = strdup((char*)fjson_object_to_json_string(errRoot));
+ 
+-	req=0;
+-	cJSON_Delete(errRoot);
++	req=NULL;
++	fjson_object_put(errRoot);
+ 
+ 	*pReplyRoot = NULL; /* tell caller not to delete once again! */
+ 
+ 	finalize_it:
+-		cJSON_Delete(req);
++		fjson_object_put(req);
+ 		RETiRet;
+ }
+ 
+@@ -703,8 +703,8 @@ finalize_it:
+ /*
+  * check the status of response from ES
+  */
+-static int checkReplyStatus(cJSON* ok) {
+-	return (ok == NULL || ok->type != cJSON_Number || ok->valueint < 0 || ok->valueint > 299);
++static int checkReplyStatus(fjson_object* ok) {
++	return (ok == NULL || !fjson_object_is_type(ok, fjson_type_int) || fjson_object_get_int(ok) < 0 || fjson_object_get_int(ok) > 299);
+ }
+ 
+ /*
+@@ -712,7 +712,7 @@ static int checkReplyStatus(cJSON* ok) {
+  */
+ typedef struct exeContext{
+ 	int statusCheckOnly;
+-	cJSON *errRoot;
++	fjson_object *errRoot;
+ 	rsRetVal (*prepareErrorFileContent)(struct exeContext *ctx,int itemStatus,char *request,char *response);
+ 
+ 
+@@ -722,25 +722,24 @@ typedef struct exeContext{
+  * get content to be written in error file using context passed
+  */
+ static rsRetVal
+-parseRequestAndResponseForContext(wrkrInstanceData_t *pWrkrData,cJSON **pReplyRoot,uchar *reqmsg,context *ctx)
++parseRequestAndResponseForContext(wrkrInstanceData_t *pWrkrData,fjson_object **pReplyRoot,uchar *reqmsg,context *ctx)
+ {
+ 	DEFiRet;
+-	cJSON *replyRoot = *pReplyRoot;
++	fjson_object *replyRoot = *pReplyRoot;
+ 	int i;
+ 	int numitems;
+-	cJSON *items=0;
++	fjson_object *items=NULL;
+ 
+ 
+ 	/*iterate over items*/
+-	items = cJSON_GetObjectItem(replyRoot, "items");
+-	if(items == NULL || items->type != cJSON_Array) {
++	if(!fjson_object_object_get_ex(replyRoot, "items", &items)) {
+ 		DBGPRINTF("omelasticsearch: error in elasticsearch reply: "
+ 			  "bulkmode insert does not return array, reply is: %s\n",
+ 			  pWrkrData->reply);
+ 		ABORT_FINALIZE(RS_RET_DATAFAIL);
+ 	}
+ 
+-	numitems = cJSON_GetArraySize(items);
++	numitems = fjson_object_array_length(items);
+ 
+ 	DBGPRINTF("omelasticsearch: Entire request %s\n",reqmsg);
+ 	const char *lastReqRead= (char*)reqmsg;
+@@ -748,32 +747,32 @@ parseRequestAndResponseForContext(wrkrInstanceData_t *pWrkrData,cJSON **pReplyRo
+ 	DBGPRINTF("omelasticsearch: %d items in reply\n", numitems);
+ 	for(i = 0 ; i < numitems ; ++i) {
+ 
+-		cJSON *item=0;
+-		cJSON *result=0;
+-		cJSON *ok=0;
++		fjson_object *item=NULL;
++		fjson_object *result=NULL;
++		fjson_object *ok=NULL;
+ 		int itemStatus=0;
+-		item = cJSON_GetArrayItem(items, i);
++		item = fjson_object_array_get_idx(items, i);
+ 		if(item == NULL)  {
+ 			DBGPRINTF("omelasticsearch: error in elasticsearch reply: "
+ 				  "cannot obtain reply array item %d\n", i);
+ 			ABORT_FINALIZE(RS_RET_DATAFAIL);
+ 		}
+-		result = item->child;
+-		if(result == NULL || result->type != cJSON_Object) {
++		fjson_object_object_get_ex(item, "create", &result);
++		if(result == NULL || !fjson_object_is_type(result, fjson_type_object)) {
+ 			DBGPRINTF("omelasticsearch: error in elasticsearch reply: "
+ 				  "cannot obtain 'result' item for #%d\n", i);
+ 			ABORT_FINALIZE(RS_RET_DATAFAIL);
+ 		}
+ 
+-		ok = cJSON_GetObjectItem(result, "status");
++		fjson_object_object_get_ex(result, "status", &ok);
+ 		itemStatus = checkReplyStatus(ok);
+-
++		
+ 		char *request =0;
+ 		char *response =0;
+ 		if(ctx->statusCheckOnly)
+ 		{
+ 			if(itemStatus) {
+-				DBGPRINTF("omelasticsearch: error in elasticsearch reply: item %d, status is %d\n", i, ok->valueint);
++				DBGPRINTF("omelasticsearch: error in elasticsearch reply: item %d, status is %d\n", i, fjson_object_get_int(ok));
+ 				DBGPRINTF("omelasticsearch: status check found error.\n");
+ 				ABORT_FINALIZE(RS_RET_DATAFAIL);
+ 			}
+@@ -786,13 +785,12 @@ parseRequestAndResponseForContext(wrkrInstanceData_t *pWrkrData,cJSON **pReplyRo
+ 				DBGPRINTF("omelasticsearch: Couldn't get post request\n");
+ 				ABORT_FINALIZE(RS_RET_ERR);
+ 			}
+-
+-			response = cJSON_PrintUnformatted(result);
++			response = (char*)fjson_object_to_json_string_ext(result, FJSON_TO_STRING_PLAIN);
+ 
+ 			if(response==NULL)
+ 			{
+ 				free(request);/*as its has been assigned.*/
+-				DBGPRINTF("omelasticsearch: Error getting cJSON_PrintUnformatted. Cannot continue\n");
++				DBGPRINTF("omelasticsearch: Error getting fjson_object_to_string_ext. Cannot continue\n");
+ 				ABORT_FINALIZE(RS_RET_ERR);
+ 			}
+ 
+@@ -801,7 +799,6 @@ parseRequestAndResponseForContext(wrkrInstanceData_t *pWrkrData,cJSON **pReplyRo
+ 
+ 			/*free memory in any case*/
+ 			free(request);
+-			free(response);
+ 
+ 			if(ret != RS_RET_OK)
+ 			{
+@@ -826,23 +823,23 @@ getDataErrorOnly(context *ctx,int itemStatus,char *request,char *response)
+ 	DEFiRet;
+ 	if(itemStatus)
+ 	{
+-		cJSON *onlyErrorResponses =0;
+-		cJSON *onlyErrorRequests=0;
++		fjson_object *onlyErrorResponses =NULL;
++		fjson_object *onlyErrorRequests=NULL;
+ 
+-		if((onlyErrorResponses=cJSON_GetObjectItem(ctx->errRoot, "reply")) == NULL)
++		if(!fjson_object_object_get_ex(ctx->errRoot, "reply", &onlyErrorResponses))
+ 		{
+ 			DBGPRINTF("omelasticsearch: Failed to get reply json array. Invalid context. Cannot continue\n");
+ 			ABORT_FINALIZE(RS_RET_ERR);
+ 		}
+-		cJSON_AddItemToArray(onlyErrorResponses, cJSON_CreateString(response));
++		fjson_object_array_add(onlyErrorResponses, fjson_object_new_string(response));
+ 
+-		if((onlyErrorRequests=cJSON_GetObjectItem(ctx->errRoot, "request")) == NULL)
++		if(!fjson_object_object_get_ex(ctx->errRoot, "request", &onlyErrorRequests))
+ 		{
+ 			DBGPRINTF("omelasticsearch: Failed to get request json array. Invalid context. Cannot continue\n");
+ 			ABORT_FINALIZE(RS_RET_ERR);
+ 		}
+ 
+-		cJSON_AddItemToArray(onlyErrorRequests, cJSON_CreateString(request));
++		fjson_object_array_add(onlyErrorRequests, fjson_object_new_string(request));
+ 
+ 	}
+ 
+@@ -861,24 +858,24 @@ getDataInterleaved(context *ctx,
+ 	char *response)
+ {
+ 	DEFiRet;
+-	cJSON *interleaved =0;
+-	if((interleaved=cJSON_GetObjectItem(ctx->errRoot, "response")) == NULL)
++	fjson_object *interleaved =NULL;
++	if(!fjson_object_object_get_ex(ctx->errRoot, "response", &interleaved))
+ 	{
+ 		DBGPRINTF("omelasticsearch: Failed to get response json array. Invalid context. Cannot continue\n");
+ 		ABORT_FINALIZE(RS_RET_ERR);
+ 	}
+ 
+-	cJSON *interleavedNode=0;
++	fjson_object *interleavedNode=NULL;
+ 	/*create interleaved node that has req and response json data*/
+-	if((interleavedNode=cJSON_CreateObject()) == NULL)
++	if((interleavedNode=fjson_object_new_object()) == NULL)
+ 	{
+ 		DBGPRINTF("omelasticsearch: Failed to create interleaved node. Cann't continue\n");
+ 		ABORT_FINALIZE(RS_RET_ERR);
+ 	}
+-	cJSON_AddItemToObject(interleavedNode,"request", cJSON_CreateString(request));
+-	cJSON_AddItemToObject(interleavedNode,"reply", cJSON_CreateString(response));
++	fjson_object_object_add(interleavedNode,"request", fjson_object_new_string(request));
++	fjson_object_object_add(interleavedNode,"reply", fjson_object_new_string(response));
+ 
+-	cJSON_AddItemToArray(interleaved, interleavedNode);
++	fjson_object_array_add(interleaved, interleavedNode);
+ 
+ 
+ 
+@@ -912,24 +909,24 @@ static rsRetVal
+ initializeErrorOnlyConext(wrkrInstanceData_t *pWrkrData,context *ctx){
+ 	DEFiRet;
+ 	ctx->statusCheckOnly=0;
+-	cJSON *errRoot=0;
+-	cJSON *onlyErrorResponses =0;
+-	cJSON *onlyErrorRequests=0;
+-	if((errRoot=cJSON_CreateObject()) == NULL) ABORT_FINALIZE(RS_RET_ERR);
++	fjson_object *errRoot=NULL;
++	fjson_object *onlyErrorResponses =NULL;
++	fjson_object *onlyErrorRequests=NULL;
++	if((errRoot=fjson_object_new_object()) == NULL) ABORT_FINALIZE(RS_RET_ERR);
+ 
+-	if((onlyErrorResponses=cJSON_CreateArray()) == NULL) {
+-		cJSON_Delete(errRoot);
++	if((onlyErrorResponses=fjson_object_new_array()) == NULL) {
++		fjson_object_put(errRoot);
+ 		ABORT_FINALIZE(RS_RET_ERR);
+ 	}
+-	if((onlyErrorRequests=cJSON_CreateArray()) == NULL) {
+-		cJSON_Delete(errRoot);
+-		cJSON_Delete(onlyErrorResponses);
++	if((onlyErrorRequests=fjson_object_new_array()) == NULL) {
++		fjson_object_put(errRoot);
++		fjson_object_put(onlyErrorResponses);
+ 		ABORT_FINALIZE(RS_RET_ERR);
+ 	}
+ 
+-	cJSON_AddItemToObject(errRoot, "url", cJSON_CreateString((char*)pWrkrData->restURL));
+-	cJSON_AddItemToObject(errRoot,"request",onlyErrorRequests);
+-	cJSON_AddItemToObject(errRoot, "reply", onlyErrorResponses);
++	fjson_object_object_add(errRoot, "url", fjson_object_new_string((char*)pWrkrData->restURL));
++	fjson_object_object_add(errRoot,"request",onlyErrorRequests);
++	fjson_object_object_add(errRoot, "reply", onlyErrorResponses);
+ 	ctx->errRoot = errRoot;
+ 	ctx->prepareErrorFileContent= &getDataErrorOnly;
+ 	finalize_it:
+@@ -943,17 +940,17 @@ static rsRetVal
+ initializeInterleavedConext(wrkrInstanceData_t *pWrkrData,context *ctx){
+ 	DEFiRet;
+ 	ctx->statusCheckOnly=0;
+-	cJSON *errRoot=0;
+-	cJSON *interleaved =0;
+-	if((errRoot=cJSON_CreateObject()) == NULL) ABORT_FINALIZE(RS_RET_ERR);
+-	if((interleaved=cJSON_CreateArray()) == NULL) {
+-		cJSON_Delete(errRoot);
++	fjson_object *errRoot=NULL;
++	fjson_object *interleaved =NULL;
++	if((errRoot=fjson_object_new_object()) == NULL) ABORT_FINALIZE(RS_RET_ERR);
++	if((interleaved=fjson_object_new_array()) == NULL) {
++		fjson_object_put(errRoot);
+ 		ABORT_FINALIZE(RS_RET_ERR);
+ 	}
+ 
+ 
+-	cJSON_AddItemToObject(errRoot, "url", cJSON_CreateString((char*)pWrkrData->restURL));
+-	cJSON_AddItemToObject(errRoot,"response",interleaved);
++	fjson_object_object_add(errRoot, "url", fjson_object_new_string((char*)pWrkrData->restURL));
++	fjson_object_object_add(errRoot,"response",interleaved);
+ 	ctx->errRoot = errRoot;
+ 	ctx->prepareErrorFileContent= &getDataInterleaved;
+ 	finalize_it:
+@@ -965,17 +962,17 @@ static rsRetVal
+ initializeErrorInterleavedConext(wrkrInstanceData_t *pWrkrData,context *ctx){
+ 	DEFiRet;
+ 	ctx->statusCheckOnly=0;
+-	cJSON *errRoot=0;
+-	cJSON *interleaved =0;
+-	if((errRoot=cJSON_CreateObject()) == NULL) ABORT_FINALIZE(RS_RET_ERR);
+-	if((interleaved=cJSON_CreateArray()) == NULL) {
+-		cJSON_Delete(errRoot);
++	fjson_object *errRoot=NULL;
++	fjson_object *interleaved =NULL;
++	if((errRoot=fjson_object_new_object()) == NULL) ABORT_FINALIZE(RS_RET_ERR);
++	if((interleaved=fjson_object_new_array()) == NULL) {
++		fjson_object_put(errRoot);
+ 		ABORT_FINALIZE(RS_RET_ERR);
+ 	}
+ 
+ 
+-	cJSON_AddItemToObject(errRoot, "url", cJSON_CreateString((char*)pWrkrData->restURL));
+-	cJSON_AddItemToObject(errRoot,"response",interleaved);
++	fjson_object_object_add(errRoot, "url", fjson_object_new_string((char*)pWrkrData->restURL));
++	fjson_object_object_add(errRoot,"response",interleaved);
+ 	ctx->errRoot = errRoot;
+ 	ctx->prepareErrorFileContent= &getDataErrorOnlyInterleaved;
+ 	finalize_it:
+@@ -988,7 +985,7 @@ initializeErrorInterleavedConext(wrkrInstanceData_t *pWrkrData,context *ctx){
+  * needs to be closed, HUP must be sent.
+  */
+ static rsRetVal
+-writeDataError(wrkrInstanceData_t *pWrkrData, instanceData *pData, cJSON **pReplyRoot, uchar *reqmsg)
++writeDataError(wrkrInstanceData_t *pWrkrData, instanceData *pData, fjson_object **pReplyRoot, uchar *reqmsg)
+ {
+ 	char *rendered = NULL;
+ 	size_t toWrite;
+@@ -1054,7 +1051,7 @@ writeDataError(wrkrInstanceData_t *pWrkrData, instanceData *pData, cJSON **pRepl
+ 			DBGPRINTF("omelasticsearch: error creating file content.\n");
+ 			ABORT_FINALIZE(RS_RET_ERR);
+ 		}
+-		rendered = cJSON_Print(ctx.errRoot);
++		rendered = (char*)fjson_object_to_json_string(ctx.errRoot);
+ 	}
+ 
+ 
+@@ -1084,14 +1081,13 @@ writeDataError(wrkrInstanceData_t *pWrkrData, instanceData *pData, cJSON **pRepl
+ finalize_it:
+ 	if(bMutLocked)
+ 		pthread_mutex_unlock(&pData->mutErrFile);
+-	cJSON_Delete(ctx.errRoot);
+-	free(rendered);
++	fjson_object_put(ctx.errRoot);
+ 	RETiRet;
+ }
+ 
+ 
+ static rsRetVal
+-checkResultBulkmode(wrkrInstanceData_t *pWrkrData, cJSON *root)
++checkResultBulkmode(wrkrInstanceData_t *pWrkrData, fjson_object *root)
+ {
+ 	DEFiRet;
+ 	context ctx;
+@@ -1111,11 +1107,11 @@ checkResultBulkmode(wrkrInstanceData_t *pWrkrData, cJSON *root)
+ static rsRetVal
+ checkResult(wrkrInstanceData_t *pWrkrData, uchar *reqmsg)
+ {
+-	cJSON *root;
+-	cJSON *status;
++	fjson_object *root;
++	fjson_object *status;
+ 	DEFiRet;
+ 
+-	root = cJSON_Parse(pWrkrData->reply);
++	root = fjson_tokener_parse(pWrkrData->reply);
+ 	if(root == NULL) {
+ 		DBGPRINTF("omelasticsearch: could not parse JSON result \n");
+ 		ABORT_FINALIZE(RS_RET_ERR);
+@@ -1124,10 +1120,7 @@ checkResult(wrkrInstanceData_t *pWrkrData, uchar *reqmsg)
+ 	if(pWrkrData->pData->bulkmode) {
+ 		iRet = checkResultBulkmode(pWrkrData, root);
+ 	} else {
+-		status = cJSON_GetObjectItem(root, "status");
+-		/* as far as we know, no "status" means all went well */
+-		if(status != NULL &&
+-		   (status->type == cJSON_Number || status->valueint >= 0 || status->valueint <= 299)) {
++		if(fjson_object_object_get_ex(root, "status", &status)) {
+ 			iRet = RS_RET_DATAFAIL;
+ 		}
+ 	}
+@@ -1143,7 +1136,7 @@ checkResult(wrkrInstanceData_t *pWrkrData, uchar *reqmsg)
+ 
+ finalize_it:
+ 	if(root != NULL)
+-		cJSON_Delete(root);
++		fjson_object_put(root);
+ 	if(iRet != RS_RET_OK) {
+ 		STATSCOUNTER_INC(indexESFail, mutIndexESFail);
+ 	}
+diff --git a/tests/es-bulk-errfile-empty.sh b/tests/es-bulk-errfile-empty.sh
+index 1f27f62fe..95883cb3d 100755
+--- a/tests/es-bulk-errfile-empty.sh
++++ b/tests/es-bulk-errfile-empty.sh
+@@ -12,6 +12,7 @@ echo \[es-bulk-errfile-empty\]: basic test for elasticsearch functionality
+ if [ -f rsyslog.errorfile ]
+ then
+     echo "error: error file exists!"
++    cat rsyslog.errorfile
+     exit 1
+ fi
+ . $srcdir/diag.sh seq-check  0 9999
+-- 
+2.14.4
+
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1565214-omelasticsearch-write-op-types-bulk-rejection-retries.patch b/SOURCES/rsyslog-8.24.0-rhbz1565214-omelasticsearch-write-op-types-bulk-rejection-retries.patch
new file mode 100644
index 0000000..14c7bf5
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1565214-omelasticsearch-write-op-types-bulk-rejection-retries.patch
@@ -0,0 +1,738 @@
+From 989be897340eb458b00efedfd5e082bb362db79a Mon Sep 17 00:00:00 2001
+From: Rich Megginson <rmeggins@redhat.com>
+Date: Tue, 15 May 2018 16:03:25 -0600
+Subject: [PATCH 11/11] omelasticsearch: write op types; bulk rejection retries
+
+Add support for a 'create' write operation type in addition to
+the default 'index'.  Using create allows specifying a unique id
+for each record, and allows duplicate document detection.
+
+Add support for checking each record returned in a bulk index
+request response.  Allow specifying a ruleset to send each failed
+record to.  Add a local variable `omes` which contains the
+information in the error response, so that users can control how
+to handle responses e.g. retry, or send to an error file.
+
+Add support for response stats - count successes, duplicates, and
+different types of failures.
+
+Add testing for bulk index rejections.
+
+(cherry picked from commit 57dd368a2a915d79c94a8dc0de30c93a0bbdc8fe)
+(cherry picked from commit 30a15621e1e7e393b2153e9fe5c13f724dea25b5)
+---
+ plugins/omelasticsearch/omelasticsearch.c | 441 ++++++++++++++++++++++++++++--
+ 1 file changed, 415 insertions(+), 26 deletions(-)
+
+diff --git a/plugins/omelasticsearch/omelasticsearch.c b/plugins/omelasticsearch/omelasticsearch.c
+index ed2b47535..ca61ae28f 100644
+--- a/plugins/omelasticsearch/omelasticsearch.c
++++ b/plugins/omelasticsearch/omelasticsearch.c
+@@ -51,6 +51,8 @@
+ #include "statsobj.h"
+ #include "cfsysline.h"
+ #include "unicode-helper.h"
++#include "ratelimit.h"
++#include "ruleset.h"
+ 
+ #ifndef O_LARGEFILE
+ #  define O_LARGEFILE 0
+@@ -64,6 +66,8 @@ MODULE_CNFNAME("omelasticsearch")
+ DEF_OMOD_STATIC_DATA
+ DEFobjCurrIf(errmsg)
+ DEFobjCurrIf(statsobj)
++DEFobjCurrIf(prop)
++DEFobjCurrIf(ruleset)
+ 
+ statsobj_t *indexStats;
+ STATSCOUNTER_DEF(indexSubmit, mutIndexSubmit)
+@@ -71,19 +75,35 @@ STATSCOUNTER_DEF(indexHTTPFail, mutIndexHTTPFail)
+ STATSCOUNTER_DEF(indexHTTPReqFail, mutIndexHTTPReqFail)
+ STATSCOUNTER_DEF(checkConnFail, mutCheckConnFail)
+ STATSCOUNTER_DEF(indexESFail, mutIndexESFail)
++STATSCOUNTER_DEF(indexSuccess, mutIndexSuccess)
++STATSCOUNTER_DEF(indexBadResponse, mutIndexBadResponse)
++STATSCOUNTER_DEF(indexDuplicate, mutIndexDuplicate)
++STATSCOUNTER_DEF(indexBadArgument, mutIndexBadArgument)
++STATSCOUNTER_DEF(indexBulkRejection, mutIndexBulkRejection)
++STATSCOUNTER_DEF(indexOtherResponse, mutIndexOtherResponse)
+ 
++static prop_t *pInputName = NULL;
+ 
+ #	define META_STRT "{\"index\":{\"_index\": \""
++#	define META_STRT_CREATE "{\"create\":{\"_index\": \""
+ #	define META_TYPE "\",\"_type\":\""
+ #	define META_PARENT "\",\"_parent\":\""
+ #	define META_ID "\", \"_id\":\""
+ #	define META_END  "\"}}\n"
+ 
++typedef enum {
++	ES_WRITE_INDEX,
++	ES_WRITE_CREATE,
++	ES_WRITE_UPDATE, /* not supported */
++	ES_WRITE_UPSERT /* not supported */
++} es_write_ops_t;
++
+ /* REST API for elasticsearch hits this URL:
+  * http://<hostName>:<restPort>/<searchIndex>/<searchType>
+  */
++/* bulk API uses /_bulk */
+ typedef struct curl_slist HEADER;
+-typedef struct _instanceData {
++typedef struct instanceConf_s {
+ 	int defaultPort;
+ 	int fdErrFile;		/* error file fd or -1 if not open */
+ 	pthread_mutex_t mutErrFile;
+@@ -113,8 +133,25 @@ typedef struct _instanceData {
+ 	uchar *caCertFile;
+ 	uchar *myCertFile;
+ 	uchar *myPrivKeyFile;
++	es_write_ops_t writeOperation;
++	sbool retryFailures;
++	int ratelimitInterval;
++	int ratelimitBurst;
++	/* for retries */
++	ratelimit_t *ratelimiter;
++	uchar *retryRulesetName;
++	ruleset_t *retryRuleset;
++	struct instanceConf_s *next;
+ } instanceData;
+ 
++typedef instanceConf_t instanceData;
++
++struct modConfData_s {
++	rsconf_t *pConf;		/* our overall config object */
++	instanceConf_t *root, *tail;
++};
++static modConfData_t *loadModConf = NULL;	/* modConf ptr to use for the current load process */
++
+ typedef struct wrkrInstanceData {
+ 	instanceData *pData;
+ 	int serverIndex;
+@@ -160,7 +197,12 @@ static struct cnfparamdescr actpdescr[] = {
+ 	{ "allowunsignedcerts", eCmdHdlrBinary, 0 },
+ 	{ "tls.cacert", eCmdHdlrString, 0 },
+ 	{ "tls.mycert", eCmdHdlrString, 0 },
+-	{ "tls.myprivkey", eCmdHdlrString, 0 }
++	{ "tls.myprivkey", eCmdHdlrString, 0 },
++	{ "writeoperation", eCmdHdlrGetWord, 0 },
++	{ "retryfailures", eCmdHdlrBinary, 0 },
++	{ "ratelimit.interval", eCmdHdlrInt, 0 },
++	{ "ratelimit.burst", eCmdHdlrInt, 0 },
++	{ "retryruleset", eCmdHdlrString, 0 }
+ };
+ static struct cnfparamblk actpblk =
+ 	{ CNFPARAMBLK_VERSION,
+@@ -177,6 +219,9 @@ CODESTARTcreateInstance
+ 	pData->caCertFile = NULL;
+ 	pData->myCertFile = NULL;
+ 	pData->myPrivKeyFile = NULL;
++	pData->ratelimiter = NULL;
++	pData->retryRulesetName = NULL;
++	pData->retryRuleset = NULL;
+ ENDcreateInstance
+ 
+ BEGINcreateWrkrInstance
+@@ -228,6 +273,9 @@ CODESTARTfreeInstance
+ 	free(pData->caCertFile);
+ 	free(pData->myCertFile);
+ 	free(pData->myPrivKeyFile);
++	free(pData->retryRulesetName);
++	if (pData->ratelimiter != NULL)
++		ratelimitDestruct(pData->ratelimiter);
+ ENDfreeInstance
+ 
+ BEGINfreeWrkrInstance
+@@ -285,6 +333,10 @@ CODESTARTdbgPrintInstInfo
+ 	dbgprintf("\ttls.cacert='%s'\n", pData->caCertFile);
+ 	dbgprintf("\ttls.mycert='%s'\n", pData->myCertFile);
+ 	dbgprintf("\ttls.myprivkey='%s'\n", pData->myPrivKeyFile);
++	dbgprintf("\twriteoperation='%d'\n", pData->writeOperation);
++	dbgprintf("\tretryfailures='%d'\n", pData->retryFailures);
++	dbgprintf("\tratelimit.interval='%d'\n", pData->ratelimitInterval);
++	dbgprintf("\tratelimit.burst='%d'\n", pData->ratelimitBurst);
+ ENDdbgPrintInstInfo
+ 
+ 
+@@ -557,7 +609,11 @@ finalize_it:
+ static size_t
+ computeMessageSize(wrkrInstanceData_t *pWrkrData, uchar *message, uchar **tpls)
+ {
+-	size_t r = sizeof(META_STRT)-1 + sizeof(META_TYPE)-1 + sizeof(META_END)-1 + sizeof("\n")-1;
++	size_t r = sizeof(META_TYPE)-1 + sizeof(META_END)-1 + sizeof("\n")-1;
++	if (pWrkrData->pData->writeOperation == ES_WRITE_CREATE)
++		r += sizeof(META_STRT_CREATE)-1;
++	else
++		r += sizeof(META_STRT)-1;
+ 
+ 	uchar *searchIndex = 0;
+ 	uchar *searchType;
+@@ -594,7 +650,10 @@ buildBatch(wrkrInstanceData_t *pWrkrData, uchar *message, uchar **tpls)
+ 	DEFiRet;
+ 
+ 	getIndexTypeAndParent(pWrkrData->pData, tpls, &searchIndex, &searchType, &parent, &bulkId);
+-	r = es_addBuf(&pWrkrData->batch.data, META_STRT, sizeof(META_STRT)-1);
++	if (pWrkrData->pData->writeOperation == ES_WRITE_CREATE)
++		r = es_addBuf(&pWrkrData->batch.data, META_STRT_CREATE, sizeof(META_STRT_CREATE)-1);
++	else
++		r = es_addBuf(&pWrkrData->batch.data, META_STRT, sizeof(META_STRT)-1);
+ 	if(r == 0) r = es_addBuf(&pWrkrData->batch.data, (char*)searchIndex,
+ 				 ustrlen(searchIndex));
+ 	if(r == 0) r = es_addBuf(&pWrkrData->batch.data, META_TYPE, sizeof(META_TYPE)-1);
+@@ -709,13 +768,20 @@ static int checkReplyStatus(fjson_object* ok) {
+ 
+ /*
+  * Context object for error file content creation or status check
++ * response_item - the full {"create":{"_index":"idxname",.....}}
++ * response_body - the inner hash of the response_item - {"_index":"idxname",...}
++ * status - the "status" field from the inner hash - "status":500
++ *          should be able to use fjson_object_get_int(status) to get the http result code
+  */
+ typedef struct exeContext{
+ 	int statusCheckOnly;
+ 	fjson_object *errRoot;
+-	rsRetVal (*prepareErrorFileContent)(struct exeContext *ctx,int itemStatus,char *request,char *response);
+-
+-
++	rsRetVal (*prepareErrorFileContent)(struct exeContext *ctx,int itemStatus,char *request,char *response,
++			fjson_object *response_item, fjson_object *response_body, fjson_object *status);
++	es_write_ops_t writeOperation;
++	ratelimit_t *ratelimiter;
++	ruleset_t *retryRuleset;
++	struct json_tokener *jTokener;
+ } context;
+ 
+ /*
+@@ -728,8 +794,15 @@ parseRequestAndResponseForContext(wrkrInstanceData_t *pWrkrData,fjson_object **p
+ 	fjson_object *replyRoot = *pReplyRoot;
+ 	int i;
+ 	int numitems;
+-	fjson_object *items=NULL;
++	fjson_object *items=NULL, *jo_errors = NULL;
++	int errors = 0;
+ 
++	if(fjson_object_object_get_ex(replyRoot, "errors", &jo_errors)) {
++		errors = fjson_object_get_boolean(jo_errors);
++		if (!errors && pWrkrData->pData->retryFailures) {
++			return RS_RET_OK;
++		}
++	}
+ 
+ 	/*iterate over items*/
+ 	if(!fjson_object_object_get_ex(replyRoot, "items", &items)) {
+@@ -741,7 +814,11 @@ parseRequestAndResponseForContext(wrkrInstanceData_t *pWrkrData,fjson_object **p
+ 
+ 	numitems = fjson_object_array_length(items);
+ 
+-	DBGPRINTF("omelasticsearch: Entire request %s\n",reqmsg);
++	if (reqmsg) {
++		DBGPRINTF("omelasticsearch: Entire request %s\n", reqmsg);
++	} else {
++		DBGPRINTF("omelasticsearch: Empty request\n");
++	}
+ 	const char *lastReqRead= (char*)reqmsg;
+ 
+ 	DBGPRINTF("omelasticsearch: %d items in reply\n", numitems);
+@@ -769,8 +846,7 @@ parseRequestAndResponseForContext(wrkrInstanceData_t *pWrkrData,fjson_object **p
+ 		
+ 		char *request =0;
+ 		char *response =0;
+-		if(ctx->statusCheckOnly)
+-		{
++		if(ctx->statusCheckOnly || (NULL == lastReqRead)) {
+ 			if(itemStatus) {
+ 				DBGPRINTF("omelasticsearch: error in elasticsearch reply: item %d, status is %d\n", i, fjson_object_get_int(ok));
+ 				DBGPRINTF("omelasticsearch: status check found error.\n");
+@@ -795,7 +871,8 @@ parseRequestAndResponseForContext(wrkrInstanceData_t *pWrkrData,fjson_object **p
+ 			}
+ 
+ 			/*call the context*/
+-			rsRetVal ret = ctx->prepareErrorFileContent(ctx, itemStatus, request,response);
++			rsRetVal ret = ctx->prepareErrorFileContent(ctx, itemStatus, request,
++					response, item, result, ok);
+ 
+ 			/*free memory in any case*/
+ 			free(request);
+@@ -818,11 +895,14 @@ parseRequestAndResponseForContext(wrkrInstanceData_t *pWrkrData,fjson_object **p
+  * Dumps only failed requests of bulk insert
+  */
+ static rsRetVal
+-getDataErrorOnly(context *ctx,int itemStatus,char *request,char *response)
++getDataErrorOnly(context *ctx,int itemStatus,char *request,char *response,
++		fjson_object *response_item, fjson_object *response_body, fjson_object *status)
+ {
+ 	DEFiRet;
+-	if(itemStatus)
+-	{
++	(void)response_item; /* unused */
++	(void)response_body; /* unused */
++	(void)status; /* unused */
++	if(itemStatus) {
+ 		fjson_object *onlyErrorResponses =NULL;
+ 		fjson_object *onlyErrorRequests=NULL;
+ 
+@@ -855,9 +935,16 @@ static rsRetVal
+ getDataInterleaved(context *ctx,
+ 	int __attribute__((unused)) itemStatus,
+ 	char *request,
+-	char *response)
++	char *response,
++	fjson_object *response_item,
++	fjson_object *response_body,
++	fjson_object *status
++)
+ {
+ 	DEFiRet;
++	(void)response_item; /* unused */
++	(void)response_body; /* unused */
++	(void)status; /* unused */
+ 	fjson_object *interleaved =NULL;
+ 	if(!fjson_object_object_get_ex(ctx->errRoot, "response", &interleaved))
+ 	{
+@@ -889,11 +976,13 @@ getDataInterleaved(context *ctx,
+  */
+ 
+ static rsRetVal
+-getDataErrorOnlyInterleaved(context *ctx,int itemStatus,char *request,char *response)
++getDataErrorOnlyInterleaved(context *ctx,int itemStatus,char *request,char *response,
++		fjson_object *response_item, fjson_object *response_body, fjson_object *status)
+ {
+ 	DEFiRet;
+ 	if (itemStatus) {
+-		if(getDataInterleaved(ctx, itemStatus,request,response)!= RS_RET_OK) {
++		if(getDataInterleaved(ctx, itemStatus,request,response,
++				response_item, response_body, status)!= RS_RET_OK) {
+ 			ABORT_FINALIZE(RS_RET_ERR);
+ 		}
+ 	}
+@@ -902,6 +991,141 @@ getDataErrorOnlyInterleaved(context *ctx,int itemStatus,char *request,char *resp
+ 		RETiRet;
+ }
+ 
++/* request string looks like this:
++ * "{\"create\":{\"_index\": \"rsyslog_testbench\",\"_type\":\"test-type\",
++ *   \"_id\":\"FAEAFC0D17C847DA8BD6F47BC5B3800A\"}}\n
++ * {\"msgnum\":\"x00000000\",\"viaq_msg_id\":\"FAEAFC0D17C847DA8BD6F47BC5B3800A\"}\n"
++ * we don't want the meta header, only the data part
++ * start = first \n + 1
++ * end = last \n
++ */
++static rsRetVal
++createMsgFromRequest(const char *request, context *ctx, smsg_t **msg)
++{
++	DEFiRet;
++	fjson_object *jo_msg = NULL;
++	const char *datastart, *dataend;
++	size_t datalen;
++	enum json_tokener_error json_error;
++
++	*msg = NULL;
++	if (!(datastart = strchr(request, '\n')) || (datastart[1] != '{')) {
++		LogError(0, RS_RET_ERR,
++			"omelasticsearch: malformed original request - "
++			"could not find start of original data [%s]",
++			request);
++		ABORT_FINALIZE(RS_RET_ERR);
++	}
++	datastart++; /* advance to { */
++	if (!(dataend = strchr(datastart, '\n')) || (dataend[1] != '\0')) {
++		LogError(0, RS_RET_ERR,
++			"omelasticsearch: malformed original request - "
++			"could not find end of original data [%s]",
++			request);
++		ABORT_FINALIZE(RS_RET_ERR);
++	}
++	datalen = dataend - datastart;
++	json_tokener_reset(ctx->jTokener);
++	fjson_object *jo_request = json_tokener_parse_ex(ctx->jTokener, datastart, datalen);
++	json_error = fjson_tokener_get_error(ctx->jTokener);
++	if (!jo_request || (json_error != fjson_tokener_success)) {
++		LogError(0, RS_RET_ERR,
++			"omelasticsearch: parse error [%s] - could not convert original "
++			"request JSON back into JSON object [%s]",
++			fjson_tokener_error_desc(json_error), request);
++		ABORT_FINALIZE(RS_RET_ERR);
++	}
++
++	CHKiRet(msgConstruct(msg));
++	MsgSetFlowControlType(*msg, eFLOWCTL_FULL_DELAY);
++	MsgSetInputName(*msg, pInputName);
++	if (fjson_object_object_get_ex(jo_request, "message", &jo_msg)) {
++		const char *rawmsg = json_object_get_string(jo_msg);
++		const size_t msgLen = (size_t)json_object_get_string_len(jo_msg);
++		MsgSetRawMsg(*msg, rawmsg, msgLen);
++	} else {
++		MsgSetRawMsg(*msg, request, strlen(request));
++	}
++	MsgSetMSGoffs(*msg, 0);	/* we do not have a header... */
++	CHKiRet(msgAddJSON(*msg, (uchar*)"!", jo_request, 0, 0));
++
++	finalize_it:
++		RETiRet;
++
++}
++
++
++static rsRetVal
++getDataRetryFailures(context *ctx,int itemStatus,char *request,char *response,
++		fjson_object *response_item, fjson_object *response_body, fjson_object *status)
++{
++	DEFiRet;
++	fjson_object *omes = NULL, *jo = NULL;
++	int istatus = fjson_object_get_int(status);
++	int iscreateop = 0;
++	struct json_object_iterator it = json_object_iter_begin(response_item);
++	struct json_object_iterator itEnd = json_object_iter_end(response_item);
++	const char *optype = NULL;
++	smsg_t *msg = NULL;
++
++	(void)response;
++	(void)itemStatus;
++	CHKiRet(createMsgFromRequest(request, ctx, &msg));
++	CHKmalloc(msg);
++	/* add status as local variables */
++	omes = json_object_new_object();
++	if (!json_object_iter_equal(&it, &itEnd))
++		optype = json_object_iter_peek_name(&it);
++	if (optype && !strcmp("create", optype))
++		iscreateop = 1;
++	if (optype && !strcmp("index", optype) && (ctx->writeOperation == ES_WRITE_INDEX))
++		iscreateop = 1;
++	if (optype) {
++		jo = json_object_new_string(optype);
++	} else {
++		jo = json_object_new_string("unknown");
++	}
++	json_object_object_add(omes, "writeoperation", jo);
++
++	if (!optype) {
++		STATSCOUNTER_INC(indexBadResponse, mutIndexBadResponse);
++	} else if ((istatus == 200) || (istatus == 201)) {
++		STATSCOUNTER_INC(indexSuccess, mutIndexSuccess);
++	} else if ((istatus == 409) && iscreateop) {
++		STATSCOUNTER_INC(indexDuplicate, mutIndexDuplicate);
++	} else if (istatus == 400 || (istatus < 200)) {
++		STATSCOUNTER_INC(indexBadArgument, mutIndexBadArgument);
++	} else {
++		fjson_object *error = NULL, *errtype = NULL;
++		if(fjson_object_object_get_ex(response_body, "error", &error) &&
++		   fjson_object_object_get_ex(error, "type", &errtype)) {
++			if (istatus == 429) {
++				STATSCOUNTER_INC(indexBulkRejection, mutIndexBulkRejection);
++			} else {
++				STATSCOUNTER_INC(indexOtherResponse, mutIndexOtherResponse);
++			}
++		} else {
++			STATSCOUNTER_INC(indexBadResponse, mutIndexBadResponse);
++		}
++	}
++	/* add response_body fields to local var omes */
++	it = json_object_iter_begin(response_body);
++	itEnd = json_object_iter_end(response_body);
++	while (!json_object_iter_equal(&it, &itEnd)) {
++		json_object_object_add(omes, json_object_iter_peek_name(&it),
++			json_object_get(json_object_iter_peek_value(&it)));
++		json_object_iter_next(&it);
++	}
++	CHKiRet(msgAddJSON(msg, (uchar*)".omes", omes, 0, 0));
++	omes = NULL;
++	MsgSetRuleset(msg, ctx->retryRuleset);
++	CHKiRet(ratelimitAddMsg(ctx->ratelimiter, NULL, msg));
++finalize_it:
++	if (omes)
++		json_object_put(omes);
++	RETiRet;
++}
++
+ /*
+  * get erroronly context
+  */
+@@ -979,6 +1203,23 @@ initializeErrorInterleavedConext(wrkrInstanceData_t *pWrkrData,context *ctx){
+ 		RETiRet;
+ }
+ 
++/*get retry failures context*/
++static rsRetVal
++initializeRetryFailuresContext(wrkrInstanceData_t *pWrkrData,context *ctx){
++	DEFiRet;
++	ctx->statusCheckOnly=0;
++	fjson_object *errRoot=NULL;
++	if((errRoot=fjson_object_new_object()) == NULL) ABORT_FINALIZE(RS_RET_ERR);
++
++
++	fjson_object_object_add(errRoot, "url", fjson_object_new_string((char*)pWrkrData->restURL));
++	ctx->errRoot = errRoot;
++	ctx->prepareErrorFileContent= &getDataRetryFailures;
++	CHKmalloc(ctx->jTokener = json_tokener_new());
++	finalize_it:
++		RETiRet;
++}
++
+ 
+ /* write data error request/replies to separate error file
+  * Note: we open the file but never close it before exit. If it
+@@ -994,6 +1235,10 @@ writeDataError(wrkrInstanceData_t *pWrkrData, instanceData *pData, fjson_object
+ 	char errStr[1024];
+ 	context ctx;
+ 	ctx.errRoot=0;
++	ctx.writeOperation = pWrkrData->pData->writeOperation;
++	ctx.ratelimiter = pWrkrData->pData->ratelimiter;
++	ctx.retryRuleset = pWrkrData->pData->retryRuleset;
++	ctx.jTokener = NULL;
+ 	DEFiRet;
+ 
+ 	if(pData->errorFile == NULL) {
+@@ -1039,9 +1284,12 @@ writeDataError(wrkrInstanceData_t *pWrkrData, instanceData *pData, fjson_object
+ 				DBGPRINTF("omelasticsearch: error initializing error interleaved context.\n");
+ 				ABORT_FINALIZE(RS_RET_ERR);
+ 			}
+-		}
+-		else
+-		{
++		} else if(pData->retryFailures) {
++			if(initializeRetryFailuresContext(pWrkrData, &ctx) != RS_RET_OK) {
++				DBGPRINTF("omelasticsearch: error initializing retry failures context.\n");
++				ABORT_FINALIZE(RS_RET_ERR);
++			}
++		} else {
+ 			DBGPRINTF("omelasticsearch: None of the modes match file write. No data to write.\n");
+ 			ABORT_FINALIZE(RS_RET_ERR);
+ 		}
+@@ -1082,25 +1330,38 @@ finalize_it:
+ 	if(bMutLocked)
+ 		pthread_mutex_unlock(&pData->mutErrFile);
+ 	fjson_object_put(ctx.errRoot);
++	if (ctx.jTokener)
++		json_tokener_free(ctx.jTokener);
++	free(rendered);
+ 	RETiRet;
+ }
+ 
+ 
+ static rsRetVal
+-checkResultBulkmode(wrkrInstanceData_t *pWrkrData, fjson_object *root)
++checkResultBulkmode(wrkrInstanceData_t *pWrkrData, fjson_object *root, uchar *reqmsg)
+ {
+ 	DEFiRet;
+ 	context ctx;
+-	ctx.statusCheckOnly=1;
+ 	ctx.errRoot = 0;
+-	if(parseRequestAndResponseForContext(pWrkrData,&root,0,&ctx)!= RS_RET_OK)
+-	{
++	ctx.writeOperation = pWrkrData->pData->writeOperation;
++	ctx.ratelimiter = pWrkrData->pData->ratelimiter;
++	ctx.retryRuleset = pWrkrData->pData->retryRuleset;
++	ctx.statusCheckOnly=1;
++	ctx.jTokener = NULL;
++	if (pWrkrData->pData->retryFailures) {
++		ctx.statusCheckOnly=0;
++		CHKiRet(initializeRetryFailuresContext(pWrkrData, &ctx));
++	}
++	if(parseRequestAndResponseForContext(pWrkrData,&root,reqmsg,&ctx)!= RS_RET_OK) {
+ 		DBGPRINTF("omelasticsearch: error found in elasticsearch reply\n");
+ 		ABORT_FINALIZE(RS_RET_DATAFAIL);
+ 	}
+ 
+-	finalize_it:
+-		RETiRet;
++finalize_it:
++	fjson_object_put(ctx.errRoot);
++	if (ctx.jTokener)
++		json_tokener_free(ctx.jTokener);
++	RETiRet;
+ }
+ 
+ 
+@@ -1118,7 +1378,7 @@ checkResult(wrkrInstanceData_t *pWrkrData, uchar *reqmsg)
+ 	}
+ 
+ 	if(pWrkrData->pData->bulkmode) {
+-		iRet = checkResultBulkmode(pWrkrData, root);
++		iRet = checkResultBulkmode(pWrkrData, root, reqmsg);
+ 	} else {
+ 		if(fjson_object_object_get_ex(root, "status", &status)) {
+ 			iRet = RS_RET_DATAFAIL;
+@@ -1397,6 +1657,13 @@ setInstParamDefaults(instanceData *pData)
+ 	pData->caCertFile = NULL;
+ 	pData->myCertFile = NULL;
+ 	pData->myPrivKeyFile = NULL;
++	pData->writeOperation = ES_WRITE_INDEX;
++	pData->retryFailures = 0;
++	pData->ratelimitBurst = 20000;
++	pData->ratelimitInterval = 600;
++	pData->ratelimiter = NULL;
++	pData->retryRulesetName = NULL;
++	pData->retryRuleset = NULL;
+ }
+ 
+ BEGINnewActInst
+@@ -1495,6 +1762,27 @@ CODESTARTnewActInst
+ 			} else {
+ 				fclose(fp);
+ 			}
++		} else if(!strcmp(actpblk.descr[i].name, "writeoperation")) {
++			char *writeop = es_str2cstr(pvals[i].val.d.estr, NULL);
++			if (writeop && !strcmp(writeop, "create")) {
++				pData->writeOperation = ES_WRITE_CREATE;
++			} else if (writeop && !strcmp(writeop, "index")) {
++				pData->writeOperation = ES_WRITE_INDEX;
++			} else if (writeop) {
++				errmsg.LogError(0, RS_RET_CONFIG_ERROR,
++					"omelasticsearch: invalid value '%s' for writeoperation: "
++					"must be one of 'index' or 'create' - using default value 'index'", writeop);
++				pData->writeOperation = ES_WRITE_INDEX;
++			}
++			free(writeop);
++		} else if(!strcmp(actpblk.descr[i].name, "retryfailures")) {
++			pData->retryFailures = pvals[i].val.d.n;
++		} else if(!strcmp(actpblk.descr[i].name, "ratelimit.burst")) {
++			pData->ratelimitBurst = (int) pvals[i].val.d.n;
++		} else if(!strcmp(actpblk.descr[i].name, "ratelimit.interval")) {
++			pData->ratelimitInterval = (int) pvals[i].val.d.n;
++		} else if(!strcmp(actpblk.descr[i].name, "retryruleset")) {
++			pData->retryRulesetName = (uchar*)es_str2cstr(pvals[i].val.d.estr, NULL);
+ 		} else {
+ 			dbgprintf("omelasticsearch: program error, non-handled "
+ 			  "param '%s'\n", actpblk.descr[i].name);
+@@ -1661,6 +1949,27 @@ CODESTARTnewActInst
+ 		pData->searchIndex = (uchar*) strdup("system");
+ 	if(pData->searchType == NULL)
+ 		pData->searchType = (uchar*) strdup("events");
++
++	if ((pData->writeOperation != ES_WRITE_INDEX) && (pData->bulkId == NULL)) {
++		errmsg.LogError(0, RS_RET_CONFIG_ERROR,
++			"omelasticsearch: writeoperation '%d' requires bulkid", pData->writeOperation);
++		ABORT_FINALIZE(RS_RET_CONFIG_ERROR);
++	}
++
++	if (pData->retryFailures) {
++		CHKiRet(ratelimitNew(&pData->ratelimiter, "omelasticsearch", NULL));
++		ratelimitSetLinuxLike(pData->ratelimiter, pData->ratelimitInterval, pData->ratelimitBurst);
++		ratelimitSetNoTimeCache(pData->ratelimiter);
++	}
++
++	/* node created, let's add to list of instance configs for the module */
++	if(loadModConf->tail == NULL) {
++		loadModConf->tail = loadModConf->root = pData;
++	} else {
++		loadModConf->tail->next = pData;
++		loadModConf->tail = pData;
++	}
++
+ CODE_STD_FINALIZERnewActInst
+ 	cnfparamvalsDestruct(pvals, &actpblk);
+ 	if (serverParam)
+@@ -1680,6 +1989,51 @@ CODE_STD_STRING_REQUESTparseSelectorAct(1)
+ CODE_STD_FINALIZERparseSelectorAct
+ ENDparseSelectorAct
+ 
++
++BEGINbeginCnfLoad
++CODESTARTbeginCnfLoad
++	loadModConf = pModConf;
++	pModConf->pConf = pConf;
++	pModConf->root = pModConf->tail = NULL;
++ENDbeginCnfLoad
++
++
++BEGINendCnfLoad
++CODESTARTendCnfLoad
++	loadModConf = NULL; /* done loading */
++ENDendCnfLoad
++
++
++BEGINcheckCnf
++	instanceConf_t *inst;
++CODESTARTcheckCnf
++	for(inst = pModConf->root ; inst != NULL ; inst = inst->next) {
++		ruleset_t *pRuleset;
++		rsRetVal localRet;
++
++		if (inst->retryRulesetName) {
++			localRet = ruleset.GetRuleset(pModConf->pConf, &pRuleset, inst->retryRulesetName);
++			if(localRet == RS_RET_NOT_FOUND) {
++				errmsg.LogError(0, localRet, "omelasticsearch: retryruleset '%s' not found - "
++						"no retry ruleset will be used", inst->retryRulesetName);
++			} else {
++				inst->retryRuleset = pRuleset;
++			}
++		}
++	}
++ENDcheckCnf
++
++
++BEGINactivateCnf
++CODESTARTactivateCnf
++ENDactivateCnf
++
++
++BEGINfreeCnf
++CODESTARTfreeCnf
++ENDfreeCnf
++
++
+ BEGINdoHUP
+ CODESTARTdoHUP
+ 	if(pData->fdErrFile != -1) {
+@@ -1691,10 +2045,14 @@ ENDdoHUP
+ 
+ BEGINmodExit
+ CODESTARTmodExit
++	if(pInputName != NULL)
++		prop.Destruct(&pInputName);
+ 	curl_global_cleanup();
+ 	statsobj.Destruct(&indexStats);
+ 	objRelease(errmsg, CORE_COMPONENT);
+-        objRelease(statsobj, CORE_COMPONENT);
++	objRelease(statsobj, CORE_COMPONENT);
++	objRelease(prop, CORE_COMPONENT);
++	objRelease(ruleset, CORE_COMPONENT);
+ ENDmodExit
+ 
+ BEGINqueryEtryPt
+@@ -1705,6 +2063,7 @@ CODEqueryEtryPt_IsCompatibleWithFeature_IF_OMOD_QUERIES
+ CODEqueryEtryPt_STD_CONF2_OMOD_QUERIES
+ CODEqueryEtryPt_doHUP
+ CODEqueryEtryPt_TXIF_OMOD_QUERIES /* we support the transactional interface! */
++CODEqueryEtryPt_STD_CONF2_QUERIES
+ ENDqueryEtryPt
+ 
+ 
+@@ -1714,6 +2073,8 @@ CODESTARTmodInit
+ CODEmodInit_QueryRegCFSLineHdlr
+ 	CHKiRet(objUse(errmsg, CORE_COMPONENT));
+ 	CHKiRet(objUse(statsobj, CORE_COMPONENT));
++	CHKiRet(objUse(prop, CORE_COMPONENT));
++	CHKiRet(objUse(ruleset, CORE_COMPONENT));
+ 
+ 	if (curl_global_init(CURL_GLOBAL_ALL) != 0) {
+ 		errmsg.LogError(0, RS_RET_OBJ_CREATION_FAILED, "CURL fail. -elasticsearch indexing disabled");
+@@ -1739,7 +2100,28 @@ CODEmodInit_QueryRegCFSLineHdlr
+ 	STATSCOUNTER_INIT(indexESFail, mutIndexESFail);
+ 	CHKiRet(statsobj.AddCounter(indexStats, (uchar *)"failed.es",
+ 		ctrType_IntCtr, CTR_FLAG_RESETTABLE, &indexESFail));
++	STATSCOUNTER_INIT(indexSuccess, mutIndexSuccess);
++	CHKiRet(statsobj.AddCounter(indexStats, (uchar *)"response.success",
++		ctrType_IntCtr, CTR_FLAG_RESETTABLE, &indexSuccess));
++	STATSCOUNTER_INIT(indexBadResponse, mutIndexBadResponse);
++	CHKiRet(statsobj.AddCounter(indexStats, (uchar *)"response.bad",
++		ctrType_IntCtr, CTR_FLAG_RESETTABLE, &indexBadResponse));
++	STATSCOUNTER_INIT(indexDuplicate, mutIndexDuplicate);
++	CHKiRet(statsobj.AddCounter(indexStats, (uchar *)"response.duplicate",
++		ctrType_IntCtr, CTR_FLAG_RESETTABLE, &indexDuplicate));
++	STATSCOUNTER_INIT(indexBadArgument, mutIndexBadArgument);
++	CHKiRet(statsobj.AddCounter(indexStats, (uchar *)"response.badargument",
++		ctrType_IntCtr, CTR_FLAG_RESETTABLE, &indexBadArgument));
++	STATSCOUNTER_INIT(indexBulkRejection, mutIndexBulkRejection);
++	CHKiRet(statsobj.AddCounter(indexStats, (uchar *)"response.bulkrejection",
++		ctrType_IntCtr, CTR_FLAG_RESETTABLE, &indexBulkRejection));
++	STATSCOUNTER_INIT(indexOtherResponse, mutIndexOtherResponse);
++	CHKiRet(statsobj.AddCounter(indexStats, (uchar *)"response.other",
++		ctrType_IntCtr, CTR_FLAG_RESETTABLE, &indexOtherResponse));
+ 	CHKiRet(statsobj.ConstructFinalize(indexStats));
++	CHKiRet(prop.Construct(&pInputName));
++	CHKiRet(prop.SetString(pInputName, UCHAR_CONSTANT("omelasticsearch"), sizeof("omelasticsearch") - 1));
++	CHKiRet(prop.ConstructFinalize(pInputName));
+ ENDmodInit
+ 
+ /* vi:set ai:
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1582517-buffer-overflow-memcpy-in-parser.patch b/SOURCES/rsyslog-8.24.0-rhbz1582517-buffer-overflow-memcpy-in-parser.patch
new file mode 100644
index 0000000..96d0695
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1582517-buffer-overflow-memcpy-in-parser.patch
@@ -0,0 +1,46 @@
+From cc3098b63174b8aa875d1f2e9c6ea94407b211b8 Mon Sep 17 00:00:00 2001
+From: Rainer Gerhards <rgerhards@adiscon.com>
+Date: Thu, 16 Feb 2017 19:02:36 +0100
+Subject: [PATCH 04/11] Bug 1582517 - rsyslog: Buffer overflow in memcpy() in parser.c
+
+core: fix potential misaddressing in parser message sanitizer
+
+misaddressing could happen when an oversize message made it to the
+sanitizer AND contained a control character in the oversize part
+of the message. Note that it is an error in itself that such an
+oversize message enters the system, but we harden the sanitizer
+to handle this gracefully (it will truncate the message).
+
+Note that truncation may still - as previously - happen if the
+number of escape characters makes the string grow above the max
+message size.
+
+(cherry picked from commit 20f8237870eb5e971fa068e4dd4d296f1dbef329)
+---
+ runtime/parser.c | 8 +++++++-
+ 1 file changed, 7 insertions(+), 1 deletion(-)
+
+diff --git a/runtime/parser.c b/runtime/parser.c
+index 0574d982a..9645baa40 100644
+--- a/runtime/parser.c
++++ b/runtime/parser.c
+@@ -464,9 +464,15 @@ SanitizeMsg(smsg_t *pMsg)
+ 	if(maxDest < sizeof(szSanBuf))
+ 		pDst = szSanBuf;
+ 	else 
+-		CHKmalloc(pDst = MALLOC(iMaxLine + 1));
++		CHKmalloc(pDst = MALLOC(maxDest + 1));
+ 	if(iSrc > 0) {
+ 		iSrc--; /* go back to where everything is OK */
++		if(iSrc > maxDest) {
++			DBGPRINTF("parser.Sanitize: have oversize index %zd, "
++				"max %zd - corrected, but should not happen\n",
++				iSrc, maxDest);
++			iSrc = maxDest;
++		}
+ 		memcpy(pDst, pszMsg, iSrc); /* fast copy known good */
+ 	}
+ 	iDst = iSrc;
+-- 
+2.14.4
+
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1591819-msg-loss-shutdown.patch b/SOURCES/rsyslog-8.24.0-rhbz1591819-msg-loss-shutdown.patch
new file mode 100644
index 0000000..472823b
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1591819-msg-loss-shutdown.patch
@@ -0,0 +1,63 @@
+From 59627f23bee26f3acec19d491d5884bcd1fb672e Mon Sep 17 00:00:00 2001
+From: Rainer Gerhards <rgerhards@adiscon.com>
+Date: Wed, 6 Jun 2018 17:30:21 +0200
+Subject: [PATCH] core: fix message loss on target unavailability during
+ shutdown
+
+Triggering condition:
+- action queue in disk mode (or DA)
+- batch is being processed by failed action in retry mode
+- rsyslog is shut down without resuming action
+
+In these cases messages may be lost by not properly writing them
+back to the disk queue.
+
+closes https://github.com/rsyslog/rsyslog/issues/2760
+---
+ action.c        | 11 +++++++++--
+ runtime/queue.c |  3 +++
+ 2 files changed, 12 insertions(+), 2 deletions(-)
+
+diff --git a/action.c b/action.c
+index a9f886a43..39fcb1c19 100644
+--- a/action.c
++++ b/action.c
+@@ -1554,8 +1554,15 @@ processBatchMain(void *__restrict__ const pVoid,
+ 			/* we do not check error state below, because aborting would be
+ 			 * more harmful than continuing.
+ 			 */
+-			processMsgMain(pAction, pWti, pBatch->pElem[i].pMsg, &ttNow);
+-			batchSetElemState(pBatch, i, BATCH_STATE_COMM);
++			rsRetVal localRet = processMsgMain(pAction, pWti, pBatch->pElem[i].pMsg, &ttNow);
++			DBGPRINTF("processBatchMain: i %d, processMsgMain iRet %d\n", i, localRet);
++			if(   localRet == RS_RET_OK
++			   || localRet == RS_RET_DEFER_COMMIT
++			   || localRet == RS_RET_ACTION_FAILED
++			   || localRet == RS_RET_PREVIOUS_COMMITTED ) {
++				batchSetElemState(pBatch, i, BATCH_STATE_COMM);
++				DBGPRINTF("processBatchMain: i %d, COMM state set\n", i);
++			}
+ 		}
+ 	}
+ 
+diff --git a/runtime/queue.c b/runtime/queue.c
+index 74cc217d1..fd163a49f 100644
+--- a/runtime/queue.c
++++ b/runtime/queue.c
+@@ -1666,6 +1666,7 @@ DeleteProcessedBatch(qqueue_t *pThis, batch_t *pBatch)
+ 
+ 	for(i = 0 ; i < pBatch->nElem ; ++i) {
+ 		pMsg = pBatch->pElem[i].pMsg;
++		DBGPRINTF("DeleteProcessedBatch: etry %d state %d\n", i, pBatch->eltState[i]);
+ 		if(   pBatch->eltState[i] == BATCH_STATE_RDY
+ 		   || pBatch->eltState[i] == BATCH_STATE_SUB) {
+ 			localRet = doEnqSingleObj(pThis, eFLOWCTL_NO_DELAY, MsgAddRef(pMsg));
+@@ -1778,6 +1779,8 @@ DequeueConsumableElements(qqueue_t *pThis, wti_t *pWti, int *piRemainingQueueSiz
+ 	/* it is sufficient to persist only when the bulk of work is done */
+ 	qqueueChkPersist(pThis, nDequeued+nDiscarded+nDeleted);
+ 
++	DBGOPRINT((obj_t*) pThis, "dequeued %d consumable elements, szlog %d sz phys %d\n",
++		nDequeued, getLogicalQueueSize(pThis), getPhysicalQueueSize(pThis));
+ 	pWti->batch.nElem = nDequeued;
+ 	pWti->batch.nElemDeq = nDequeued + nDiscarded;
+ 	pWti->batch.deqID = getNextDeqID(pThis);
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1597264-man-page-fix.patch b/SOURCES/rsyslog-8.24.0-rhbz1597264-man-page-fix.patch
new file mode 100644
index 0000000..74d3395
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1597264-man-page-fix.patch
@@ -0,0 +1,30 @@
+diff --git a/tools/rsyslogd.8 b/tools/rsyslogd.8
+index 77d0f97..d9a2e32 100644
+--- a/tools/rsyslogd.8
++++ b/tools/rsyslogd.8
+@@ -127,14 +127,14 @@ reacts to a set of signals.  You may easily send a signal to
+ using the following:
+ .IP
+ .nf
+-kill -SIGNAL $(cat /var/run/rsyslogd.pid)
++kill -SIGNAL $(cat /var/run/syslogd.pid)
+ .fi
+ .PP
+ Note that -SIGNAL must be replaced with the actual signal
+ you are trying to send, e.g. with HUP. So it then becomes:
+ .IP
+ .nf
+-kill -HUP $(cat /var/run/rsyslogd.pid)
++kill -HUP $(cat /var/run/syslogd.pid)
+ .fi
+ .PP
+ .TP
+@@ -215,7 +215,7 @@ for exact information.
+ .I /dev/log
+ The Unix domain socket to from where local syslog messages are read.
+ .TP
+-.I /var/run/rsyslogd.pid
++.I /var/run/syslogd.pid
+ The file containing the process id of 
+ .BR rsyslogd .
+ .TP
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1600462-wrktable-realloc-null.patch b/SOURCES/rsyslog-8.24.0-rhbz1600462-wrktable-realloc-null.patch
new file mode 100644
index 0000000..81f58ee
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1600462-wrktable-realloc-null.patch
@@ -0,0 +1,64 @@
+From f2f67932a37539080b7ad3403dd073df3511a410 Mon Sep 17 00:00:00 2001
+From: Rainer Gerhards <rgerhards@adiscon.com>
+Date: Fri, 27 Oct 2017 08:36:19 +0200
+Subject: [PATCH] core/action: fix NULL pointer access under OOM condition
+
+If a new worker was started while the system ran out of memory
+a NULL pointer access could happen. The patch handles this more
+gracefully.
+
+Detected by Coverity Scan, CID 185342.
+---
+ action.c | 17 +++++++++++++----
+ 1 file changed, 13 insertions(+), 4 deletions(-)
+
+diff --git a/action.c b/action.c
+index 986501074..4e467ee8b 100644
+--- a/action.c
++++ b/action.c
+@@ -822,8 +822,9 @@ actionDoRetry(action_t * const pThis, wti_t * const pWti)
+ 
+ 
+ static rsRetVal
+-actionCheckAndCreateWrkrInstance(action_t * const pThis, wti_t * const pWti)
++actionCheckAndCreateWrkrInstance(action_t * const pThis, const wti_t *const pWti)
+ {
++	int locked = 0;
+ 	DEFiRet;
+ 	if(pWti->actWrkrInfo[pThis->iActionNbr].actWrkrData == NULL) {
+ 		DBGPRINTF("wti %p: we need to create a new action worker instance for "
+@@ -836,23 +837,31 @@ actionCheckAndCreateWrkrInstance(action_t * const pThis, wti_t * const pWti)
+ 		/* maintain worker data table -- only needed if wrkrHUP is requested! */
+ 
+ 		pthread_mutex_lock(&pThis->mutWrkrDataTable);
++		locked = 1;
+ 		int freeSpot;
+ 		for(freeSpot = 0 ; freeSpot < pThis->wrkrDataTableSize ; ++freeSpot)
+ 			if(pThis->wrkrDataTable[freeSpot] == NULL)
+ 				break;
+ 		if(pThis->nWrkr == pThis->wrkrDataTableSize) {
+-			// TODO: check realloc, fall back to old table if it fails. Better than nothing...
+-			pThis->wrkrDataTable = realloc(pThis->wrkrDataTable,
++			void *const newTable = realloc(pThis->wrkrDataTable,
+ 				(pThis->wrkrDataTableSize + 1) * sizeof(void*));
++			if(newTable == NULL) {
++				DBGPRINTF("actionCheckAndCreateWrkrInstance: out of "
++					"memory realloc wrkrDataTable\n")
++				ABORT_FINALIZE(RS_RET_OUT_OF_MEMORY);
++			}
++			pThis->wrkrDataTable = newTable;
+ 			pThis->wrkrDataTableSize++;
+ 		}
+ 		pThis->wrkrDataTable[freeSpot] = pWti->actWrkrInfo[pThis->iActionNbr].actWrkrData;
+ 		pThis->nWrkr++;
+-		pthread_mutex_unlock(&pThis->mutWrkrDataTable);
+ 		DBGPRINTF("wti %p: created action worker instance %d for "
+ 			  "action %d\n", pWti, pThis->nWrkr, pThis->iActionNbr);
+ 	}
+ finalize_it:
++	if(locked) {
++		pthread_mutex_unlock(&pThis->mutWrkrDataTable);
++	}
+ 	RETiRet;
+ }
+ 
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1622767-mmkubernetes-stop-on-pod-delete.patch b/SOURCES/rsyslog-8.24.0-rhbz1622767-mmkubernetes-stop-on-pod-delete.patch
new file mode 100644
index 0000000..956e27e
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1622767-mmkubernetes-stop-on-pod-delete.patch
@@ -0,0 +1,764 @@
+From 3987cd929d859f900318b393133c3bdde8dfffd5 Mon Sep 17 00:00:00 2001
+From: Rich Megginson <rmeggins@redhat.com>
+Date: Tue, 28 Aug 2018 12:44:23 -0600
+Subject: [PATCH] mmkubernetes: action fails preparation cycle if kubernetes
+ API destroys resource during bootup sequence
+
+The plugin was not handling 404 Not Found correctly when looking
+up pods and namespaces.  In this case, we assume the pod/namespace
+was deleted, annotate the record with whatever metadata we have,
+and cache the fact that the pod/namespace is missing so we don't
+attempt to look it up again.
+In addition, the plugin was not handling the 429 Too Many Requests error
+correctly.
+In that case, it should also annotate the record with whatever
+metadata it has, and _not_ cache anything.  By default the plugin
+will retry every 5 seconds to connect to Kubernetes.  This
+behavior is controlled by the new config param `busyretryinterval`.
+This commit also adds impstats counters so that admins can
+view the state of the plugin to see if the lookups are working
+or are returning errors.  The stats are reported per-instance
+or per-action to facilitate using multiple different actions
+for different Kubernetes servers.
+This commit also adds support for client cert auth to
+Kubernetes via the two new config params `tls.mycert` and
+`tls.myprivkey`.
+---
+ contrib/mmkubernetes/mmkubernetes.c | 296 ++++++++++++++++++++++++----
+ 1 file changed, 272 insertions(+), 24 deletions(-)
+
+diff --git a/contrib/mmkubernetes/mmkubernetes.c b/contrib/mmkubernetes/mmkubernetes.c
+index 422cb2577..5bf5b049d 100644
+--- a/contrib/mmkubernetes/mmkubernetes.c
++++ b/contrib/mmkubernetes/mmkubernetes.c
+@@ -52,9 +52,12 @@
+ #include "syslogd-types.h"
+ #include "module-template.h"
+ #include "errmsg.h"
++#include "statsobj.h"
+ #include "regexp.h"
+ #include "hashtable.h"
+ #include "srUtils.h"
++#include "unicode-helper.h"
++#include "datetime.h"
+ 
+ /* static data */
+ MODULE_TYPE_OUTPUT /* this is technically an output plugin */
+@@ -62,6 +65,8 @@ MODULE_TYPE_KEEP /* releasing the module would cause a leak through libcurl */
+ DEF_OMOD_STATIC_DATA
+ DEFobjCurrIf(errmsg)
+ DEFobjCurrIf(regexp)
++DEFobjCurrIf(statsobj)
++DEFobjCurrIf(datetime)
+ 
+ #define HAVE_LOADSAMPLESFROMSTRING 1
+ #if defined(NO_LOADSAMPLESFROMSTRING)
+@@ -95,12 +100,14 @@ DEFobjCurrIf(regexp)
+ #define DFLT_CONTAINER_NAME "$!CONTAINER_NAME" /* name of variable holding CONTAINER_NAME value */
+ #define DFLT_CONTAINER_ID_FULL "$!CONTAINER_ID_FULL" /* name of variable holding CONTAINER_ID_FULL value */
+ #define DFLT_KUBERNETES_URL "https://kubernetes.default.svc.cluster.local:443"
++#define DFLT_BUSY_RETRY_INTERVAL 5 /* retry every 5 seconds */
+ 
+ static struct cache_s {
+ 	const uchar *kbUrl;
+ 	struct hashtable *mdHt;
+ 	struct hashtable *nsHt;
+ 	pthread_mutex_t *cacheMtx;
++	int lastBusyTime;
+ } **caches;
+ 
+ typedef struct {
+@@ -116,6 +123,8 @@ struct modConfData_s {
+ 	uchar *srcMetadataPath;	/* where to get data for kubernetes queries */
+ 	uchar *dstMetadataPath;	/* where to put metadata obtained from kubernetes */
+ 	uchar *caCertFile; /* File holding the CA cert (+optional chain) of CA that issued the Kubernetes server cert */
++	uchar *myCertFile; /* File holding cert corresponding to private key used for client cert auth */
++	uchar *myPrivKeyFile; /* File holding private key corresponding to cert used for client cert auth */
+ 	sbool allowUnsignedCerts; /* For testing/debugging - do not check for CA certs (CURLOPT_SSL_VERIFYPEER FALSE) */
+ 	uchar *token; /* The token value to use to authenticate to Kubernetes - takes precedence over tokenFile */
+ 	uchar *tokenFile; /* The file whose contents is the token value to use to authenticate to Kubernetes */
+@@ -127,6 +136,7 @@ struct modConfData_s {
+ 	uchar *fnRulebase; /* lognorm rulebase filename for container log filename match */
+ 	char *contRules; /* lognorm rules for CONTAINER_NAME value match */
+ 	uchar *contRulebase; /* lognorm rulebase filename for CONTAINER_NAME value match */
++	int busyRetryInterval; /* how to handle 429 response - 0 means error, non-zero means retry every N seconds */
+ };
+ 
+ /* action (instance) configuration data */
+@@ -135,6 +145,8 @@ typedef struct _instanceData {
+ 	msgPropDescr_t *srcMetadataDescr;	/* where to get data for kubernetes queries */
+ 	uchar *dstMetadataPath;	/* where to put metadata obtained from kubernetes */
+ 	uchar *caCertFile; /* File holding the CA cert (+optional chain) of CA that issued the Kubernetes server cert */
++	uchar *myCertFile; /* File holding cert corresponding to private key used for client cert auth */
++	uchar *myPrivKeyFile; /* File holding private key corresponding to cert used for client cert auth */
+ 	sbool allowUnsignedCerts; /* For testing/debugging - do not check for CA certs (CURLOPT_SSL_VERIFYPEER FALSE) */
+ 	uchar *token; /* The token value to use to authenticate to Kubernetes - takes precedence over tokenFile */
+ 	uchar *tokenFile; /* The file whose contents is the token value to use to authenticate to Kubernetes */
+@@ -151,6 +163,7 @@ typedef struct _instanceData {
+ 	msgPropDescr_t *contNameDescr; /* CONTAINER_NAME field */
+ 	msgPropDescr_t *contIdFullDescr; /* CONTAINER_ID_FULL field */
+ 	struct cache_s *cache;
++	int busyRetryInterval; /* how to handle 429 response - 0 means error, non-zero means retry every N seconds */
+ } instanceData;
+ 
+ typedef struct wrkrInstanceData {
+@@ -159,6 +172,16 @@ typedef struct wrkrInstanceData {
+ 	struct curl_slist *curlHdr;
+ 	char *curlRply;
+ 	size_t curlRplyLen;
++	statsobj_t *stats; /* stats for this instance */
++	STATSCOUNTER_DEF(k8sRecordSeen, mutK8sRecordSeen)
++	STATSCOUNTER_DEF(namespaceMetadataSuccess, mutNamespaceMetadataSuccess)
++	STATSCOUNTER_DEF(namespaceMetadataNotFound, mutNamespaceMetadataNotFound)
++	STATSCOUNTER_DEF(namespaceMetadataBusy, mutNamespaceMetadataBusy)
++	STATSCOUNTER_DEF(namespaceMetadataError, mutNamespaceMetadataError)
++	STATSCOUNTER_DEF(podMetadataSuccess, mutPodMetadataSuccess)
++	STATSCOUNTER_DEF(podMetadataNotFound, mutPodMetadataNotFound)
++	STATSCOUNTER_DEF(podMetadataBusy, mutPodMetadataBusy)
++	STATSCOUNTER_DEF(podMetadataError, mutPodMetadataError)
+ } wrkrInstanceData_t;
+ 
+ /* module parameters (v6 config format) */
+@@ -167,6 +190,8 @@ static struct cnfparamdescr modpdescr[] = {
+ 	{ "srcmetadatapath", eCmdHdlrString, 0 },
+ 	{ "dstmetadatapath", eCmdHdlrString, 0 },
+ 	{ "tls.cacert", eCmdHdlrString, 0 },
++	{ "tls.mycert", eCmdHdlrString, 0 },
++	{ "tls.myprivkey", eCmdHdlrString, 0 },
+ 	{ "allowunsignedcerts", eCmdHdlrBinary, 0 },
+ 	{ "token", eCmdHdlrString, 0 },
+ 	{ "tokenfile", eCmdHdlrString, 0 },
+@@ -174,7 +199,8 @@ static struct cnfparamdescr modpdescr[] = {
+ 	{ "de_dot", eCmdHdlrBinary, 0 },
+ 	{ "de_dot_separator", eCmdHdlrString, 0 },
+ 	{ "filenamerulebase", eCmdHdlrString, 0 },
+-	{ "containerrulebase", eCmdHdlrString, 0 }
++	{ "containerrulebase", eCmdHdlrString, 0 },
++	{ "busyretryinterval", eCmdHdlrInt, 0 }
+ #if HAVE_LOADSAMPLESFROMSTRING == 1
+ 	,
+ 	{ "filenamerules", eCmdHdlrArray, 0 },
+@@ -193,6 +219,8 @@ static struct cnfparamdescr actpdescr[] = {
+ 	{ "srcmetadatapath", eCmdHdlrString, 0 },
+ 	{ "dstmetadatapath", eCmdHdlrString, 0 },
+ 	{ "tls.cacert", eCmdHdlrString, 0 },
++	{ "tls.mycert", eCmdHdlrString, 0 },
++	{ "tls.myprivkey", eCmdHdlrString, 0 },
+ 	{ "allowunsignedcerts", eCmdHdlrBinary, 0 },
+ 	{ "token", eCmdHdlrString, 0 },
+ 	{ "tokenfile", eCmdHdlrString, 0 },
+@@ -200,7 +228,8 @@ static struct cnfparamdescr actpdescr[] = {
+ 	{ "de_dot", eCmdHdlrBinary, 0 },
+ 	{ "de_dot_separator", eCmdHdlrString, 0 },
+ 	{ "filenamerulebase", eCmdHdlrString, 0 },
+-	{ "containerrulebase", eCmdHdlrString, 0 }
++	{ "containerrulebase", eCmdHdlrString, 0 },
++	{ "busyretryinterval", eCmdHdlrInt, 0 }
+ #if HAVE_LOADSAMPLESFROMSTRING == 1
+ 	,
+ 	{ "filenamerules", eCmdHdlrArray, 0 },
+@@ -493,8 +522,9 @@ ENDbeginCnfLoad
+ BEGINsetModCnf
+ 	struct cnfparamvals *pvals = NULL;
+ 	int i;
+-	FILE *fp;
++	FILE *fp = NULL;
+ 	int ret;
++	char errStr[1024];
+ CODESTARTsetModCnf
+ 	pvals = nvlstGetParams(lst, &modpblk, NULL);
+ 	if(pvals == NULL) {
+@@ -509,6 +539,7 @@ CODESTARTsetModCnf
+ 	}
+ 
+ 	loadModConf->de_dot = DFLT_DE_DOT;
++	loadModConf->busyRetryInterval = DFLT_BUSY_RETRY_INTERVAL;
+ 	for(i = 0 ; i < modpblk.nParams ; ++i) {
+ 		if(!pvals[i].bUsed) {
+ 			continue;
+@@ -528,11 +559,39 @@ CODESTARTsetModCnf
+ 				rs_strerror_r(errno, errStr, sizeof(errStr));
+ 				iRet = RS_RET_NO_FILE_ACCESS;
+ 				errmsg.LogError(0, iRet,
+-						"error: certificate file %s couldn't be accessed: %s\n",
++						"error: 'tls.cacert' file %s couldn't be accessed: %s\n",
+ 						loadModConf->caCertFile, errStr);
+ 				ABORT_FINALIZE(iRet);
+ 			} else {
+ 				fclose(fp);
++				fp = NULL;
++			}
++		} else if(!strcmp(modpblk.descr[i].name, "tls.mycert")) {
++			free(loadModConf->myCertFile);
++			loadModConf->myCertFile = (uchar*)es_str2cstr(pvals[i].val.d.estr, NULL);
++			fp = fopen((const char*)loadModConf->myCertFile, "r");
++			if(fp == NULL) {
++				rs_strerror_r(errno, errStr, sizeof(errStr));
++				iRet = RS_RET_NO_FILE_ACCESS;
++				LogError(0, iRet,
++						"error: 'tls.mycert' file %s couldn't be accessed: %s\n",
++						loadModConf->myCertFile, errStr);
++			} else {
++				fclose(fp);
++				fp = NULL;
++			}
++		} else if(!strcmp(modpblk.descr[i].name, "tls.myprivkey")) {
++			loadModConf->myPrivKeyFile = (uchar*)es_str2cstr(pvals[i].val.d.estr, NULL);
++			fp = fopen((const char*)loadModConf->myPrivKeyFile, "r");
++			if(fp == NULL) {
++				rs_strerror_r(errno, errStr, sizeof(errStr));
++				iRet = RS_RET_NO_FILE_ACCESS;
++				LogError(0, iRet,
++						"error: 'tls.myprivkey' file %s couldn't be accessed: %s\n",
++						loadModConf->myPrivKeyFile, errStr);
++			} else {
++				fclose(fp);
++				fp = NULL;
+ 			}
+ 		} else if(!strcmp(modpblk.descr[i].name, "allowunsignedcerts")) {
+ 			loadModConf->allowUnsignedCerts = pvals[i].val.d.n;
+@@ -557,6 +614,7 @@ CODESTARTsetModCnf
+ 				ABORT_FINALIZE(iRet);
+ 			} else {
+ 				fclose(fp);
++				fp = NULL;
+ 			}
+ 		} else if(!strcmp(modpblk.descr[i].name, "annotation_match")) {
+ 			free_annotationmatch(&loadModConf->annotation_match);
+@@ -586,6 +643,7 @@ CODESTARTsetModCnf
+ 				ABORT_FINALIZE(iRet);
+ 			} else {
+ 				fclose(fp);
++				fp = NULL;
+ 			}
+ #if HAVE_LOADSAMPLESFROMSTRING == 1
+ 		} else if(!strcmp(modpblk.descr[i].name, "containerrules")) {
+@@ -606,7 +663,10 @@ CODESTARTsetModCnf
+ 				ABORT_FINALIZE(iRet);
+ 			} else {
+ 				fclose(fp);
++				fp = NULL;
+ 			}
++		} else if(!strcmp(modpblk.descr[i].name, "busyretryinterval")) {
++			loadModConf->busyRetryInterval = pvals[i].val.d.n;
+ 		} else {
+ 			dbgprintf("mmkubernetes: program error, non-handled "
+ 				"param '%s' in module() block\n", modpblk.descr[i].name);
+@@ -650,6 +710,8 @@ CODESTARTsetModCnf
+ 	caches = calloc(1, sizeof(struct cache_s *));
+ 
+ finalize_it:
++	if (fp)
++		fclose(fp);
+ 	if(pvals != NULL)
+ 		cnfparamvalsDestruct(pvals, &modpblk);
+ ENDsetModCnf
+@@ -667,6 +729,8 @@ CODESTARTfreeInstance
+ 	free(pData->srcMetadataDescr);
+ 	free(pData->dstMetadataPath);
+ 	free(pData->caCertFile);
++	free(pData->myCertFile);
++	free(pData->myPrivKeyFile);
+ 	free(pData->token);
+ 	free(pData->tokenFile);
+ 	free(pData->fnRules);
+@@ -710,6 +774,45 @@ CODESTARTcreateWrkrInstance
+ 	char *tokenHdr = NULL;
+ 	FILE *fp = NULL;
+ 	char *token = NULL;
++	char *statsName = NULL;
++
++	CHKiRet(statsobj.Construct(&(pWrkrData->stats)));
++	if ((-1 == asprintf(&statsName, "mmkubernetes(%s)", pWrkrData->pData->kubernetesUrl)) ||
++		(!statsName)) {
++		ABORT_FINALIZE(RS_RET_OUT_OF_MEMORY);
++	}
++	CHKiRet(statsobj.SetName(pWrkrData->stats, (uchar *)statsName));
++	free(statsName);
++	statsName = NULL;
++	CHKiRet(statsobj.SetOrigin(pWrkrData->stats, UCHAR_CONSTANT("mmkubernetes")));
++	STATSCOUNTER_INIT(pWrkrData->k8sRecordSeen, pWrkrData->mutK8sRecordSeen);
++	CHKiRet(statsobj.AddCounter(pWrkrData->stats, UCHAR_CONSTANT("recordseen"),
++		ctrType_IntCtr, CTR_FLAG_RESETTABLE, &(pWrkrData->k8sRecordSeen)));
++	STATSCOUNTER_INIT(pWrkrData->namespaceMetadataSuccess, pWrkrData->mutNamespaceMetadataSuccess);
++	CHKiRet(statsobj.AddCounter(pWrkrData->stats, UCHAR_CONSTANT("namespacemetadatasuccess"),
++		ctrType_IntCtr, CTR_FLAG_RESETTABLE, &(pWrkrData->namespaceMetadataSuccess)));
++	STATSCOUNTER_INIT(pWrkrData->namespaceMetadataNotFound, pWrkrData->mutNamespaceMetadataNotFound);
++	CHKiRet(statsobj.AddCounter(pWrkrData->stats, UCHAR_CONSTANT("namespacemetadatanotfound"),
++		ctrType_IntCtr, CTR_FLAG_RESETTABLE, &(pWrkrData->namespaceMetadataNotFound)));
++	STATSCOUNTER_INIT(pWrkrData->namespaceMetadataBusy, pWrkrData->mutNamespaceMetadataBusy);
++	CHKiRet(statsobj.AddCounter(pWrkrData->stats, UCHAR_CONSTANT("namespacemetadatabusy"),
++		ctrType_IntCtr, CTR_FLAG_RESETTABLE, &(pWrkrData->namespaceMetadataBusy)));
++	STATSCOUNTER_INIT(pWrkrData->namespaceMetadataError, pWrkrData->mutNamespaceMetadataError);
++	CHKiRet(statsobj.AddCounter(pWrkrData->stats, UCHAR_CONSTANT("namespacemetadataerror"),
++		ctrType_IntCtr, CTR_FLAG_RESETTABLE, &(pWrkrData->namespaceMetadataError)));
++	STATSCOUNTER_INIT(pWrkrData->podMetadataSuccess, pWrkrData->mutPodMetadataSuccess);
++	CHKiRet(statsobj.AddCounter(pWrkrData->stats, UCHAR_CONSTANT("podmetadatasuccess"),
++		ctrType_IntCtr, CTR_FLAG_RESETTABLE, &(pWrkrData->podMetadataSuccess)));
++	STATSCOUNTER_INIT(pWrkrData->podMetadataNotFound, pWrkrData->mutPodMetadataNotFound);
++	CHKiRet(statsobj.AddCounter(pWrkrData->stats, UCHAR_CONSTANT("podmetadatanotfound"),
++		ctrType_IntCtr, CTR_FLAG_RESETTABLE, &(pWrkrData->podMetadataNotFound)));
++	STATSCOUNTER_INIT(pWrkrData->podMetadataBusy, pWrkrData->mutPodMetadataBusy);
++	CHKiRet(statsobj.AddCounter(pWrkrData->stats, UCHAR_CONSTANT("podmetadatabusy"),
++		ctrType_IntCtr, CTR_FLAG_RESETTABLE, &(pWrkrData->podMetadataBusy)));
++	STATSCOUNTER_INIT(pWrkrData->podMetadataError, pWrkrData->mutPodMetadataError);
++	CHKiRet(statsobj.AddCounter(pWrkrData->stats, UCHAR_CONSTANT("podmetadataerror"),
++		ctrType_IntCtr, CTR_FLAG_RESETTABLE, &(pWrkrData->podMetadataError)));
++	CHKiRet(statsobj.ConstructFinalize(pWrkrData->stats));
+ 
+ 	hdr = curl_slist_append(hdr, "Content-Type: text/json; charset=utf-8");
+ 	if (pWrkrData->pData->token) {
+@@ -749,12 +852,20 @@ CODESTARTcreateWrkrInstance
+ 	curl_easy_setopt(ctx, CURLOPT_WRITEDATA, pWrkrData);
+ 	if(pWrkrData->pData->caCertFile)
+ 		curl_easy_setopt(ctx, CURLOPT_CAINFO, pWrkrData->pData->caCertFile);
++	if(pWrkrData->pData->myCertFile)
++		curl_easy_setopt(ctx, CURLOPT_SSLCERT, pWrkrData->pData->myCertFile);
++	if(pWrkrData->pData->myPrivKeyFile)
++		curl_easy_setopt(ctx, CURLOPT_SSLKEY, pWrkrData->pData->myPrivKeyFile);
+ 	if(pWrkrData->pData->allowUnsignedCerts)
+ 		curl_easy_setopt(ctx, CURLOPT_SSL_VERIFYPEER, 0);
+ 
+ 	pWrkrData->curlCtx = ctx;
+ finalize_it:
+ 	free(token);
++	free(statsName);
++	if ((iRet != RS_RET_OK) && pWrkrData->stats) {
++		statsobj.Destruct(&(pWrkrData->stats));
++	}
+ 	if (fp) {
+ 		fclose(fp);
+ 	}
+@@ -765,6 +876,7 @@ BEGINfreeWrkrInstance
+ CODESTARTfreeWrkrInstance
+ 	curl_easy_cleanup(pWrkrData->curlCtx);
+ 	curl_slist_free_all(pWrkrData->curlHdr);
++	statsobj.Destruct(&(pWrkrData->stats));
+ ENDfreeWrkrInstance
+ 
+ 
+@@ -790,6 +902,8 @@ cacheNew(const uchar *const url)
+ 		key_equals_string, (void (*)(void *)) json_object_put);
+ 	cache->nsHt = create_hashtable(100, hash_from_string,
+ 		key_equals_string, (void (*)(void *)) json_object_put);
++	dbgprintf("mmkubernetes: created cache mdht [%p] nsht [%p]\n",
++			cache->mdHt, cache->nsHt);
+ 	cache->cacheMtx = malloc(sizeof(pthread_mutex_t));
+ 	if (!cache->mdHt || !cache->nsHt || !cache->cacheMtx) {
+ 		free (cache);
+@@ -797,6 +911,7 @@ cacheNew(const uchar *const url)
+ 		FINALIZE;
+ 	}
+ 	pthread_mutex_init(cache->cacheMtx, NULL);
++	cache->lastBusyTime = 0;
+ 
+ finalize_it:
+ 	return cache;
+@@ -816,9 +931,10 @@ static void cacheFree(struct cache_s *cache)
+ BEGINnewActInst
+ 	struct cnfparamvals *pvals = NULL;
+ 	int i;
+-	FILE *fp;
++	FILE *fp = NULL;
+ 	char *rxstr = NULL;
+ 	char *srcMetadataPath = NULL;
++	char errStr[1024];
+ CODESTARTnewActInst
+ 	DBGPRINTF("newActInst (mmkubernetes)\n");
+ 
+@@ -840,6 +956,7 @@ CODESTARTnewActInst
+ 
+ 	pData->de_dot = loadModConf->de_dot;
+ 	pData->allowUnsignedCerts = loadModConf->allowUnsignedCerts;
++	pData->busyRetryInterval = loadModConf->busyRetryInterval;
+ 	for(i = 0 ; i < actpblk.nParams ; ++i) {
+ 		if(!pvals[i].bUsed) {
+ 			continue;
+@@ -872,6 +988,33 @@ CODESTARTnewActInst
+ 				ABORT_FINALIZE(iRet);
+ 			} else {
+ 				fclose(fp);
++				fp = NULL;
++			}
++		} else if(!strcmp(actpblk.descr[i].name, "tls.mycert")) {
++			pData->myCertFile = (uchar*)es_str2cstr(pvals[i].val.d.estr, NULL);
++			fp = fopen((const char*)pData->myCertFile, "r");
++			if(fp == NULL) {
++				rs_strerror_r(errno, errStr, sizeof(errStr));
++				iRet = RS_RET_NO_FILE_ACCESS;
++				LogError(0, iRet,
++						"error: 'tls.mycert' file %s couldn't be accessed: %s\n",
++						pData->myCertFile, errStr);
++			} else {
++				fclose(fp);
++				fp = NULL;
++			}
++		} else if(!strcmp(actpblk.descr[i].name, "tls.myprivkey")) {
++			pData->myPrivKeyFile = (uchar*)es_str2cstr(pvals[i].val.d.estr, NULL);
++			fp = fopen((const char*)pData->myPrivKeyFile, "r");
++			if(fp == NULL) {
++				rs_strerror_r(errno, errStr, sizeof(errStr));
++				iRet = RS_RET_NO_FILE_ACCESS;
++				LogError(0, iRet,
++						"error: 'tls.myprivkey' file %s couldn't be accessed: %s\n",
++						pData->myPrivKeyFile, errStr);
++			} else {
++				fclose(fp);
++				fp = NULL;
+ 			}
+ 		} else if(!strcmp(actpblk.descr[i].name, "allowunsignedcerts")) {
+ 			pData->allowUnsignedCerts = pvals[i].val.d.n;
+@@ -892,6 +1034,7 @@ CODESTARTnewActInst
+ 				ABORT_FINALIZE(iRet);
+ 			} else {
+ 				fclose(fp);
++				fp = NULL;
+ 			}
+ 		} else if(!strcmp(actpblk.descr[i].name, "annotation_match")) {
+ 			free_annotationmatch(&pData->annotation_match);
+@@ -921,6 +1063,7 @@ CODESTARTnewActInst
+ 				ABORT_FINALIZE(iRet);
+ 			} else {
+ 				fclose(fp);
++				fp = NULL;
+ 			}
+ #if HAVE_LOADSAMPLESFROMSTRING == 1
+ 		} else if(!strcmp(modpblk.descr[i].name, "containerrules")) {
+@@ -941,7 +1083,10 @@ CODESTARTnewActInst
+ 				ABORT_FINALIZE(iRet);
+ 			} else {
+ 				fclose(fp);
++				fp = NULL;
+ 			}
++		} else if(!strcmp(actpblk.descr[i].name, "busyretryinterval")) {
++			pData->busyRetryInterval = pvals[i].val.d.n;
+ 		} else {
+ 			dbgprintf("mmkubernetes: program error, non-handled "
+ 				"param '%s' in action() block\n", actpblk.descr[i].name);
+@@ -982,6 +1127,10 @@ CODESTARTnewActInst
+ 		pData->dstMetadataPath = (uchar *) strdup((char *) loadModConf->dstMetadataPath);
+ 	if(pData->caCertFile == NULL && loadModConf->caCertFile)
+ 		pData->caCertFile = (uchar *) strdup((char *) loadModConf->caCertFile);
++	if(pData->myCertFile == NULL && loadModConf->myCertFile)
++		pData->myCertFile = (uchar *) strdup((char *) loadModConf->myCertFile);
++	if(pData->myPrivKeyFile == NULL && loadModConf->myPrivKeyFile)
++		pData->myPrivKeyFile = (uchar *) strdup((char *) loadModConf->myPrivKeyFile);
+ 	if(pData->token == NULL && loadModConf->token)
+ 		pData->token = (uchar *) strdup((char *) loadModConf->token);
+ 	if(pData->tokenFile == NULL && loadModConf->tokenFile)
+@@ -1018,6 +1167,8 @@ CODESTARTnewActInst
+ CODE_STD_FINALIZERnewActInst
+ 	if(pvals != NULL)
+ 		cnfparamvalsDestruct(pvals, &actpblk);
++	if(fp)
++		fclose(fp);
+ 	free(rxstr);
+ 	free(srcMetadataPath);
+ ENDnewActInst
+@@ -1061,6 +1212,8 @@ CODESTARTfreeCnf
+ 	free(pModConf->srcMetadataPath);
+ 	free(pModConf->dstMetadataPath);
+ 	free(pModConf->caCertFile);
++	free(pModConf->myCertFile);
++	free(pModConf->myPrivKeyFile);
+ 	free(pModConf->token);
+ 	free(pModConf->tokenFile);
+ 	free(pModConf->de_dot_separator);
+@@ -1069,8 +1222,11 @@ CODESTARTfreeCnf
+ 	free(pModConf->contRules);
+ 	free(pModConf->contRulebase);
+ 	free_annotationmatch(&pModConf->annotation_match);
+-	for(i = 0; caches[i] != NULL; i++)
++	for(i = 0; caches[i] != NULL; i++) {
++		dbgprintf("mmkubernetes: freeing cache [%d] mdht [%p] nsht [%p]\n",
++				i, caches[i]->mdHt, caches[i]->nsHt);
+ 		cacheFree(caches[i]);
++	}
+ 	free(caches);
+ ENDfreeCnf
+ 
+@@ -1082,6 +1238,8 @@ CODESTARTdbgPrintInstInfo
+ 	dbgprintf("\tsrcMetadataPath='%s'\n", pData->srcMetadataDescr->name);
+ 	dbgprintf("\tdstMetadataPath='%s'\n", pData->dstMetadataPath);
+ 	dbgprintf("\ttls.cacert='%s'\n", pData->caCertFile);
++	dbgprintf("\ttls.mycert='%s'\n", pData->myCertFile);
++	dbgprintf("\ttls.myprivkey='%s'\n", pData->myPrivKeyFile);
+ 	dbgprintf("\tallowUnsignedCerts='%d'\n", pData->allowUnsignedCerts);
+ 	dbgprintf("\ttoken='%s'\n", pData->token);
+ 	dbgprintf("\ttokenFile='%s'\n", pData->tokenFile);
+@@ -1093,6 +1251,7 @@ CODESTARTdbgPrintInstInfo
+ 	dbgprintf("\tfilenamerules='%s'\n", pData->fnRules);
+ 	dbgprintf("\tcontainerrules='%s'\n", pData->contRules);
+ #endif
++	dbgprintf("\tbusyretryinterval='%d'\n", pData->busyRetryInterval);
+ ENDdbgPrintInstInfo
+ 
+ 
+@@ -1206,6 +1365,24 @@ queryKB(wrkrInstanceData_t *pWrkrData, char *url, struct json_object **rply)
+ 	struct json_object *jo;
+ 	long resp_code = 400;
+ 
++	if (pWrkrData->pData->cache->lastBusyTime) {
++		time_t now;
++		datetime.GetTime(&now);
++		now -= pWrkrData->pData->cache->lastBusyTime;
++		if (now < pWrkrData->pData->busyRetryInterval) {
++			LogMsg(0, RS_RET_RETRY, LOG_DEBUG,
++				"mmkubernetes: Waited [%ld] of [%d] seconds for the requested url [%s]\n",
++				now, pWrkrData->pData->busyRetryInterval, url);
++			ABORT_FINALIZE(RS_RET_RETRY);
++		} else {
++			LogMsg(0, RS_RET_OK, LOG_DEBUG,
++				"mmkubernetes: Cleared busy status after [%d] seconds - "
++				"will retry the requested url [%s]\n",
++				pWrkrData->pData->busyRetryInterval, url);
++			pWrkrData->pData->cache->lastBusyTime = 0;
++		}
++	}
++
+ 	/* query kubernetes for pod info */
+ 	ccode = curl_easy_setopt(pWrkrData->curlCtx, CURLOPT_URL, url);
+ 	if(ccode != CURLE_OK)
+@@ -1411,17 +1411,23 @@ queryKB(wrkrInstanceData_t *pWrkrData, char *url, struct json_object **rply)
+ 		ABORT_FINALIZE(RS_RET_ERR);
+ 	}
+ 	if(resp_code == 404) {
+-		errmsg.LogMsg(0, RS_RET_ERR, LOG_ERR,
++		errmsg.LogMsg(0, RS_RET_NOT_FOUND, LOG_INFO,
+ 			      "mmkubernetes: Not Found: the resource does not exist at url [%s]\n",
+ 			      url);
+-		ABORT_FINALIZE(RS_RET_ERR);
++		ABORT_FINALIZE(RS_RET_NOT_FOUND);
+ 	}
+ 	if(resp_code == 429) {
+-		errmsg.LogMsg(0, RS_RET_ERR, LOG_ERR,
++		if (pWrkrData->pData->busyRetryInterval) {
++			time_t now;
++			datetime.GetTime(&now);
++			pWrkrData->pData->cache->lastBusyTime = now;
++		}
++
++		errmsg.LogMsg(0, RS_RET_RETRY, LOG_INFO,
+ 			      "mmkubernetes: Too Many Requests: the server is too heavily loaded "
+ 			      "to provide the data for the requested url [%s]\n",
+ 			      url);
+-		ABORT_FINALIZE(RS_RET_ERR);
++		ABORT_FINALIZE(RS_RET_RETRY);
+ 	}
+ 	if(resp_code != 200) {
+ 		errmsg.LogMsg(0, RS_RET_ERR, LOG_ERR,
+@@ -1299,12 +1482,14 @@ BEGINdoAction
+ 	char *mdKey = NULL;
+ 	struct json_object *jMetadata = NULL, *jMetadataCopy = NULL, *jMsgMeta = NULL,
+ 			*jo = NULL;
+-	int add_ns_metadata = 0;
++	int add_pod_metadata = 1;
+ CODESTARTdoAction
+ 	CHKiRet_Hdlr(extractMsgMetadata(pMsg, pWrkrData->pData, &jMsgMeta)) {
+ 		ABORT_FINALIZE((iRet == RS_RET_NOT_FOUND) ? RS_RET_OK : iRet);
+ 	}
+ 
++	STATSCOUNTER_INC(pWrkrData->k8sRecordSeen, pWrkrData->mutK8sRecordSeen);
++
+ 	if (fjson_object_object_get_ex(jMsgMeta, "pod_name", &jo))
+ 		podName = json_object_get_string(jo);
+ 	if (fjson_object_object_get_ex(jMsgMeta, "namespace_name", &jo))
+@@ -1347,28 +1532,49 @@ CODESTARTdoAction
+ 			}
+ 			iRet = queryKB(pWrkrData, url, &jReply);
+ 			free(url);
+-			/* todo: implement support for the .orphaned namespace */
+-			if (iRet != RS_RET_OK) {
++			if (iRet == RS_RET_NOT_FOUND) {
++				/* negative cache namespace - make a dummy empty namespace metadata object */
++				jNsMeta = json_object_new_object();
++				STATSCOUNTER_INC(pWrkrData->namespaceMetadataNotFound,
++						 pWrkrData->mutNamespaceMetadataNotFound);
++			} else if (iRet == RS_RET_RETRY) {
++				/* server is busy - retry or error */
++				STATSCOUNTER_INC(pWrkrData->namespaceMetadataBusy,
++						 pWrkrData->mutNamespaceMetadataBusy);
++				if (0 == pWrkrData->pData->busyRetryInterval) {
++					pthread_mutex_unlock(pWrkrData->pData->cache->cacheMtx);
++					ABORT_FINALIZE(RS_RET_ERR);
++				}
++				add_pod_metadata = 0; /* don't cache pod metadata either - retry both */
++			} else if (iRet != RS_RET_OK) {
++				/* hard error - something the admin needs to fix e.g. network, config, auth */
+ 				json_object_put(jReply);
+ 				jReply = NULL;
++				STATSCOUNTER_INC(pWrkrData->namespaceMetadataError,
++						 pWrkrData->mutNamespaceMetadataError);
+ 				pthread_mutex_unlock(pWrkrData->pData->cache->cacheMtx);
+ 				FINALIZE;
+-			}
+-
+-			if(fjson_object_object_get_ex(jReply, "metadata", &jNsMeta)) {
++			} else if (fjson_object_object_get_ex(jReply, "metadata", &jNsMeta)) {
+ 				jNsMeta = json_object_get(jNsMeta);
+ 				parse_labels_annotations(jNsMeta, &pWrkrData->pData->annotation_match,
+ 					pWrkrData->pData->de_dot,
+ 					(const char *)pWrkrData->pData->de_dot_separator,
+ 					pWrkrData->pData->de_dot_separator_len);
+-				add_ns_metadata = 1;
++				STATSCOUNTER_INC(pWrkrData->namespaceMetadataSuccess,
++						 pWrkrData->mutNamespaceMetadataSuccess);
+ 			} else {
+ 				/* namespace with no metadata??? */
+ 				errmsg.LogMsg(0, RS_RET_ERR, LOG_INFO,
+ 					      "mmkubernetes: namespace [%s] has no metadata!\n", ns);
+-				jNsMeta = NULL;
++				/* negative cache namespace - make a dummy empty namespace metadata object */
++				jNsMeta = json_object_new_object();
++				STATSCOUNTER_INC(pWrkrData->namespaceMetadataSuccess,
++						 pWrkrData->mutNamespaceMetadataSuccess);
+ 			}
+ 
++			if(jNsMeta) {
++				hashtable_insert(pWrkrData->pData->cache->nsHt, strdup(ns), jNsMeta);
++			}
+ 			json_object_put(jReply);
+ 			jReply = NULL;
+ 		}
+@@ -1381,14 +1587,28 @@ CODESTARTdoAction
+ 		}
+ 		iRet = queryKB(pWrkrData, url, &jReply);
+ 		free(url);
+-		if(iRet != RS_RET_OK) {
+-			if(jNsMeta && add_ns_metadata) {
+-				hashtable_insert(pWrkrData->pData->cache->nsHt, strdup(ns), jNsMeta);
++		if (iRet == RS_RET_NOT_FOUND) {
++			/* negative cache pod - make a dummy empty pod metadata object */
++			iRet = RS_RET_OK;
++			STATSCOUNTER_INC(pWrkrData->podMetadataNotFound, pWrkrData->mutPodMetadataNotFound);
++		} else if (iRet == RS_RET_RETRY) {
++			/* server is busy - retry or error */
++			STATSCOUNTER_INC(pWrkrData->podMetadataBusy, pWrkrData->mutPodMetadataBusy);
++			if (0 == pWrkrData->pData->busyRetryInterval) {
++				pthread_mutex_unlock(pWrkrData->pData->cache->cacheMtx);
++				ABORT_FINALIZE(RS_RET_ERR);
+ 			}
++			add_pod_metadata = 0; /* do not cache so that we can retry */
++			iRet = RS_RET_OK;
++		} else if(iRet != RS_RET_OK) {
++			/* hard error - something the admin needs to fix e.g. network, config, auth */
+ 			json_object_put(jReply);
+ 			jReply = NULL;
++			STATSCOUNTER_INC(pWrkrData->podMetadataError, pWrkrData->mutPodMetadataError);
+ 			pthread_mutex_unlock(pWrkrData->pData->cache->cacheMtx);
+ 			FINALIZE;
++		} else {
++			STATSCOUNTER_INC(pWrkrData->podMetadataSuccess, pWrkrData->mutPodMetadataSuccess);
+ 		}
+ 
+ 		jo = json_object_new_object();
+@@ -1435,11 +1655,9 @@ CODESTARTdoAction
+ 			json_object_object_add(jo, "container_id", json_object_get(jo2));
+ 		json_object_object_add(jMetadata, "docker", jo);
+ 
+-		hashtable_insert(pWrkrData->pData->cache->mdHt, mdKey, jMetadata);
+-		mdKey = NULL;
+-		if(jNsMeta && add_ns_metadata) {
+-			hashtable_insert(pWrkrData->pData->cache->nsHt, strdup(ns), jNsMeta);
+-			ns = NULL;
++		if (add_pod_metadata) {
++			hashtable_insert(pWrkrData->pData->cache->mdHt, mdKey, jMetadata);
++			mdKey = NULL;
+ 		}
+ 	}
+ 
+@@ -1450,6 +1668,11 @@ CODESTARTdoAction
+ 	 * outside of the cache lock
+ 	 */
+ 	jMetadataCopy = json_tokener_parse(json_object_get_string(jMetadata));
++	if (!add_pod_metadata) {
++		/* jMetadata object was created from scratch and not cached */
++		json_object_put(jMetadata);
++		jMetadata = NULL;
++	}
+ 	pthread_mutex_unlock(pWrkrData->pData->cache->cacheMtx);
+ 	/* the +1 is there to skip the leading '$' */
+ 	msgAddJSON(pMsg, (uchar *) pWrkrData->pData->dstMetadataPath + 1, jMetadataCopy, 0, 0);
+@@ -1691,6 +1693,8 @@ BEGINmodExit
+ 
+ 	objRelease(regexp, LM_REGEXP_FILENAME);
+ 	objRelease(errmsg, CORE_COMPONENT);
++	objRelease(datetime, CORE_COMPONENT);
++	objRelease(statsobj, CORE_COMPONENT);
+ ENDmodExit
+ 
+ 
+@@ -1705,6 +1711,8 @@ CODEmodInit_QueryRegCFSLineHdlr
+ 	DBGPRINTF("mmkubernetes: module compiled with rsyslog version %s.\n", VERSION);
+ 	CHKiRet(objUse(errmsg, CORE_COMPONENT));
+ 	CHKiRet(objUse(regexp, LM_REGEXP_FILENAME));
++	CHKiRet(objUse(datetime, CORE_COMPONENT));
++	CHKiRet(objUse(statsobj, CORE_COMPONENT));
+ 
+ 	/* CURL_GLOBAL_ALL initializes more than is needed but the
+ 	 * libcurl documentation discourages use of other values
+--- a/contrib/mmkubernetes/mmkubernetes.c
++++ b/contrib/mmkubernetes/mmkubernetes.c
+@@ -560,7 +560,6 @@
+ 			loadModConf->caCertFile = (uchar *) es_str2cstr(pvals[i].val.d.estr, NULL);
+ 			fp = fopen((const char*)loadModConf->caCertFile, "r");
+ 			if(fp == NULL) {
+-				char errStr[1024];
+ 				rs_strerror_r(errno, errStr, sizeof(errStr));
+ 				iRet = RS_RET_NO_FILE_ACCESS;
+ 				errmsg.LogError(0, iRet,
+@@ -608,7 +607,6 @@
+ 			loadModConf->tokenFile = (uchar *) es_str2cstr(pvals[i].val.d.estr, NULL);
+ 			fp = fopen((const char*)loadModConf->tokenFile, "r");
+ 			if(fp == NULL) {
+-				char errStr[1024];
+ 				rs_strerror_r(errno, errStr, sizeof(errStr));
+ 				iRet = RS_RET_NO_FILE_ACCESS;
+ 				errmsg.LogError(0, iRet,
+@@ -638,7 +636,6 @@
+ 			loadModConf->fnRulebase = (uchar *) es_str2cstr(pvals[i].val.d.estr, NULL);
+ 			fp = fopen((const char*)loadModConf->fnRulebase, "r");
+ 			if(fp == NULL) {
+-				char errStr[1024];
+ 				rs_strerror_r(errno, errStr, sizeof(errStr));
+ 				iRet = RS_RET_NO_FILE_ACCESS;
+ 				errmsg.LogError(0, iRet,
+@@ -659,7 +656,6 @@
+ 			loadModConf->contRulebase = (uchar *) es_str2cstr(pvals[i].val.d.estr, NULL);
+ 			fp = fopen((const char*)loadModConf->contRulebase, "r");
+ 			if(fp == NULL) {
+-				char errStr[1024];
+ 				rs_strerror_r(errno, errStr, sizeof(errStr));
+ 				iRet = RS_RET_NO_FILE_ACCESS;
+ 				errmsg.LogError(0, iRet,
+@@ -975,7 +971,6 @@
+ 			pData->caCertFile = (uchar *) es_str2cstr(pvals[i].val.d.estr, NULL);
+ 			fp = fopen((const char*)pData->caCertFile, "r");
+ 			if(fp == NULL) {
+-				char errStr[1024];
+ 				rs_strerror_r(errno, errStr, sizeof(errStr));
+ 				iRet = RS_RET_NO_FILE_ACCESS;
+ 				errmsg.LogError(0, iRet,
+@@ -1022,7 +1017,6 @@
+ 			pData->tokenFile = (uchar *) es_str2cstr(pvals[i].val.d.estr, NULL);
+ 			fp = fopen((const char*)pData->tokenFile, "r");
+ 			if(fp == NULL) {
+-				char errStr[1024];
+ 				rs_strerror_r(errno, errStr, sizeof(errStr));
+ 				iRet = RS_RET_NO_FILE_ACCESS;
+ 				errmsg.LogError(0, iRet,
+@@ -1052,7 +1046,6 @@
+ 			pData->fnRulebase = (uchar *) es_str2cstr(pvals[i].val.d.estr, NULL);
+ 			fp = fopen((const char*)pData->fnRulebase, "r");
+ 			if(fp == NULL) {
+-				char errStr[1024];
+ 				rs_strerror_r(errno, errStr, sizeof(errStr));
+ 				iRet = RS_RET_NO_FILE_ACCESS;
+ 				errmsg.LogError(0, iRet,
+@@ -1073,7 +1066,6 @@
+ 			pData->contRulebase = (uchar *) es_str2cstr(pvals[i].val.d.estr, NULL);
+ 			fp = fopen((const char*)pData->contRulebase, "r");
+ 			if(fp == NULL) {
+-				char errStr[1024];
+ 				rs_strerror_r(errno, errStr, sizeof(errStr));
+ 				iRet = RS_RET_NO_FILE_ACCESS;
+ 				errmsg.LogError(0, iRet,
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1627799-cert-chains.patch b/SOURCES/rsyslog-8.24.0-rhbz1627799-cert-chains.patch
new file mode 100644
index 0000000..b90850e
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1627799-cert-chains.patch
@@ -0,0 +1,622 @@
+diff -up rsyslog-8.24.0/plugins/imtcp/imtcp.c.v36tls rsyslog-8.24.0/plugins/imtcp/imtcp.c
+--- rsyslog-8.24.0/plugins/imtcp/imtcp.c.v36tls	2019-08-19 17:37:07.872166694 +0100
++++ rsyslog-8.24.0/plugins/imtcp/imtcp.c	2019-08-19 17:37:07.876166693 +0100
+@@ -100,6 +100,7 @@ static struct configSettings_s {
+ 	int bDisableLFDelim;
+ 	int bUseFlowControl;
+ 	int bPreserveCase;
++	uchar *gnutlsPriorityString;
+ 	uchar *pszStrmDrvrAuthMode;
+ 	uchar *pszInputName;
+ 	uchar *pszBindRuleset;
+@@ -136,6 +137,7 @@ struct modConfData_s {
+ 	int iKeepAliveProbes;
+ 	int iKeepAliveTime;
+ 	sbool bEmitMsgOnClose; /* emit an informational message on close by remote peer */
++	uchar *gnutlsPriorityString;
+ 	uchar *pszStrmDrvrName; /* stream driver to use */
+ 	uchar *pszStrmDrvrAuthMode; /* authentication mode to use */
+ 	struct cnfarray *permittedPeers;
+@@ -164,7 +166,8 @@ static struct cnfparamdescr modpdescr[]
+ 	{ "keepalive.probes", eCmdHdlrPositiveInt, 0 },
+ 	{ "keepalive.time", eCmdHdlrPositiveInt, 0 },
+ 	{ "keepalive.interval", eCmdHdlrPositiveInt, 0 },
+-	{ "preservecase", eCmdHdlrBinary, 0 }
++	{ "preservecase", eCmdHdlrBinary, 0 },
++	{ "gnutlsprioritystring", eCmdHdlrString, 0 }
+ };
+ static struct cnfparamblk modpblk =
+ 	{ CNFPARAMBLK_VERSION,
+@@ -354,6 +357,7 @@ addListner(modConfData_t *modConf, insta
+ 		CHKiRet(tcpsrv.SetKeepAliveIntvl(pOurTcpsrv, modConf->iKeepAliveIntvl));
+ 		CHKiRet(tcpsrv.SetKeepAliveProbes(pOurTcpsrv, modConf->iKeepAliveProbes));
+ 		CHKiRet(tcpsrv.SetKeepAliveTime(pOurTcpsrv, modConf->iKeepAliveTime));
++		CHKiRet(tcpsrv.SetGnutlsPriorityString(pOurTcpsrv, modConf->gnutlsPriorityString));
+ 		CHKiRet(tcpsrv.SetSessMax(pOurTcpsrv, modConf->iTCPSessMax));
+ 		CHKiRet(tcpsrv.SetLstnMax(pOurTcpsrv, modConf->iTCPLstnMax));
+ 		CHKiRet(tcpsrv.SetDrvrMode(pOurTcpsrv, modConf->iStrmDrvrMode));
+@@ -463,6 +467,7 @@ CODESTARTbeginCnfLoad
+ 	loadModConf->bEmitMsgOnClose = 0;
+ 	loadModConf->iAddtlFrameDelim = TCPSRV_NO_ADDTL_DELIMITER;
+ 	loadModConf->bDisableLFDelim = 0;
++	loadModConf->gnutlsPriorityString = NULL;
+ 	loadModConf->pszStrmDrvrName = NULL;
+ 	loadModConf->pszStrmDrvrAuthMode = NULL;
+ 	loadModConf->permittedPeers = NULL;
+@@ -517,6 +522,8 @@ CODESTARTsetModCnf
+ 			loadModConf->iKeepAliveTime = (int) pvals[i].val.d.n;
+ 		} else if(!strcmp(modpblk.descr[i].name, "keepalive.interval")) {
+ 			loadModConf->iKeepAliveIntvl = (int) pvals[i].val.d.n;
++		} else if(!strcmp(modpblk.descr[i].name, "gnutlsprioritystring")) {
++			loadModConf->gnutlsPriorityString = (uchar*)es_str2cstr(pvals[i].val.d.estr, NULL);
+ 		} else if(!strcmp(modpblk.descr[i].name, "streamdriver.mode")) {
+ 			loadModConf->iStrmDrvrMode = (int) pvals[i].val.d.n;
+ 		} else if(!strcmp(modpblk.descr[i].name, "streamdriver.authmode")) {
+diff -up rsyslog-8.24.0/runtime/netstrm.c.v36tls rsyslog-8.24.0/runtime/netstrm.c
+--- rsyslog-8.24.0/runtime/netstrm.c.v36tls	2017-01-10 09:00:04.000000000 +0000
++++ rsyslog-8.24.0/runtime/netstrm.c	2019-08-19 17:37:07.876166693 +0100
+@@ -280,6 +280,16 @@ SetKeepAliveIntvl(netstrm_t *pThis, int
+ 	RETiRet;
+ }
+ 
++/* gnutls priority string */
++static rsRetVal
++SetGnutlsPriorityString(netstrm_t *pThis, uchar *gnutlsPriorityString)
++{
++	DEFiRet;
++	ISOBJ_TYPE_assert(pThis, netstrm);
++	iRet = pThis->Drvr.SetGnutlsPriorityString(pThis->pDrvrData, gnutlsPriorityString);
++	RETiRet;
++}
++
+ /* check connection - slim wrapper for NSD driver function */
+ static rsRetVal
+ CheckConnection(netstrm_t *pThis)
+@@ -387,6 +397,7 @@ CODESTARTobjQueryInterface(netstrm)
+ 	pIf->SetKeepAliveProbes = SetKeepAliveProbes;
+ 	pIf->SetKeepAliveTime = SetKeepAliveTime;
+ 	pIf->SetKeepAliveIntvl = SetKeepAliveIntvl;
++	pIf->SetGnutlsPriorityString = SetGnutlsPriorityString;
+ finalize_it:
+ ENDobjQueryInterface(netstrm)
+ 
+diff -up rsyslog-8.24.0/runtime/netstrm.h.v36tls rsyslog-8.24.0/runtime/netstrm.h
+--- rsyslog-8.24.0/runtime/netstrm.h.v36tls	2017-01-10 09:00:04.000000000 +0000
++++ rsyslog-8.24.0/runtime/netstrm.h	2019-08-19 17:37:07.876166693 +0100
+@@ -75,14 +75,16 @@ BEGINinterface(netstrm) /* name must als
+ 	rsRetVal (*SetKeepAliveProbes)(netstrm_t *pThis, int keepAliveProbes);
+ 	rsRetVal (*SetKeepAliveTime)(netstrm_t *pThis, int keepAliveTime);
+ 	rsRetVal (*SetKeepAliveIntvl)(netstrm_t *pThis, int keepAliveIntvl);
++	rsRetVal (*SetGnutlsPriorityString)(netstrm_t *pThis, uchar *priorityString);
+ ENDinterface(netstrm)
+-#define netstrmCURR_IF_VERSION 8 /* increment whenever you change the interface structure! */
++#define netstrmCURR_IF_VERSION 9 /* increment whenever you change the interface structure! */
+ /* interface version 3 added GetRemAddr()
+  * interface version 4 added EnableKeepAlive() -- rgerhards, 2009-06-02
+  * interface version 5 changed return of CheckConnection from void to rsRetVal -- alorbach, 2012-09-06
+  * interface version 6 changed signature of GetRemoteIP() -- rgerhards, 2013-01-21
+  * interface version 7 added KeepAlive parameter set functions
+  * interface version 8 changed signature of Connect() -- dsa, 2016-11-14
++ * interface version 9 added SetGnutlsPriorityString -- PascalWithopf, 2017-08-08
+  * */
+ 
+ /* prototypes */
+diff -up rsyslog-8.24.0/runtime/netstrms.c.v36tls rsyslog-8.24.0/runtime/netstrms.c
+--- rsyslog-8.24.0/runtime/netstrms.c.v36tls	2016-12-03 17:41:03.000000000 +0000
++++ rsyslog-8.24.0/runtime/netstrms.c	2019-08-19 17:37:07.876166693 +0100
+@@ -113,6 +113,10 @@ CODESTARTobjDestruct(netstrms)
+ 		free(pThis->pBaseDrvrName);
+ 		pThis->pBaseDrvrName = NULL;
+ 	}
++	if(pThis->gnutlsPriorityString != NULL) {
++		free(pThis->gnutlsPriorityString);
++		pThis->gnutlsPriorityString = NULL;
++	}
+ ENDobjDestruct(netstrms)
+ 
+ 
+@@ -196,6 +200,31 @@ GetDrvrAuthMode(netstrms_t *pThis)
+ }
+ 
+ 
++/* Set the priorityString for GnuTLS
++ * PascalWithopf 2017-08-16
++ */
++static rsRetVal
++SetDrvrGnutlsPriorityString(netstrms_t *pThis, uchar *iVal)
++{
++	DEFiRet;
++	ISOBJ_TYPE_assert(pThis, netstrms);
++	CHKmalloc(pThis->gnutlsPriorityString = (uchar*)strdup((char*)iVal));
++finalize_it:
++	RETiRet;
++}
++
++
++/* return the priorityString for GnuTLS
++ * PascalWithopf, 2017-08-16
++ */
++static uchar*
++GetDrvrGnutlsPriorityString(netstrms_t *pThis)
++{
++	ISOBJ_TYPE_assert(pThis, netstrms);
++	return pThis->gnutlsPriorityString;
++}
++
++
+ /* set the driver mode -- rgerhards, 2008-04-30 */
+ static rsRetVal
+ SetDrvrMode(netstrms_t *pThis, int iMode)
+@@ -272,6 +301,8 @@ CODESTARTobjQueryInterface(netstrms)
+ 	pIf->GetDrvrMode = GetDrvrMode;
+ 	pIf->SetDrvrAuthMode = SetDrvrAuthMode;
+ 	pIf->GetDrvrAuthMode = GetDrvrAuthMode;
++	pIf->SetDrvrGnutlsPriorityString = SetDrvrGnutlsPriorityString;
++	pIf->GetDrvrGnutlsPriorityString = GetDrvrGnutlsPriorityString;
+ 	pIf->SetDrvrPermPeers = SetDrvrPermPeers;
+ 	pIf->GetDrvrPermPeers = GetDrvrPermPeers;
+ finalize_it:
+diff -up rsyslog-8.24.0/runtime/netstrms.h.v36tls rsyslog-8.24.0/runtime/netstrms.h
+--- rsyslog-8.24.0/runtime/netstrms.h.v36tls	2016-12-03 17:41:03.000000000 +0000
++++ rsyslog-8.24.0/runtime/netstrms.h	2019-08-19 17:37:07.876166693 +0100
+@@ -33,6 +33,7 @@ struct netstrms_s {
+ 	uchar *pDrvrName;	/**< full base driver name (set when driver is loaded) */
+ 	int iDrvrMode;		/**< current default driver mode */
+ 	uchar *pszDrvrAuthMode;	/**< current driver authentication mode */
++	uchar *gnutlsPriorityString; /**< priorityString for connection */
+ 	permittedPeers_t *pPermPeers;/**< current driver's permitted peers */
+ 
+ 	nsd_if_t Drvr;		/**< our stream driver */
+@@ -52,6 +53,8 @@ BEGINinterface(netstrms) /* name must al
+ 	int      (*GetDrvrMode)(netstrms_t *pThis);
+ 	uchar*   (*GetDrvrAuthMode)(netstrms_t *pThis);
+ 	permittedPeers_t* (*GetDrvrPermPeers)(netstrms_t *pThis);
++	rsRetVal (*SetDrvrGnutlsPriorityString)(netstrms_t *pThis, uchar*);
++	uchar*   (*GetDrvrGnutlsPriorityString)(netstrms_t *pThis);
+ ENDinterface(netstrms)
+ #define netstrmsCURR_IF_VERSION 1 /* increment whenever you change the interface structure! */
+ 
+diff -up rsyslog-8.24.0/runtime/nsd_gtls.c.v36tls rsyslog-8.24.0/runtime/nsd_gtls.c
+--- rsyslog-8.24.0/runtime/nsd_gtls.c.v36tls	2017-01-10 09:00:04.000000000 +0000
++++ rsyslog-8.24.0/runtime/nsd_gtls.c	2019-08-19 17:39:30.576158227 +0100
+@@ -73,8 +73,20 @@ DEFobjCurrIf(nsd_ptcp)
+ 
+ static int bGlblSrvrInitDone = 0;	/**< 0 - server global init not yet done, 1 - already done */
+ 
+-static pthread_mutex_t mutGtlsStrerror; /**< a mutex protecting the potentially non-reentrant gtlStrerror() function */
++static pthread_mutex_t mutGtlsStrerror;
++/*< a mutex protecting the potentially non-reentrant gtlStrerror() function */
+ 
++/* a macro to abort if GnuTLS error is not acceptable. We split this off from
++ * CHKgnutls() to avoid some Coverity report in cases where we know GnuTLS
++ * failed. Note: gnuRet must already be set accordingly!
++ */
++#define ABORTgnutls { \
++		uchar *pErr = gtlsStrerror(gnuRet); \
++		LogError(0, RS_RET_GNUTLS_ERR, "unexpected GnuTLS error %d in %s:%d: %s\n", \
++	gnuRet, __FILE__, __LINE__, pErr); \
++		free(pErr); \
++		ABORT_FINALIZE(RS_RET_GNUTLS_ERR); \
++}
+ /* a macro to check GnuTLS calls against unexpected errors */
+ #define CHKgnutls(x) { \
+ 	gnuRet = (x); \
+@@ -82,10 +94,7 @@ static pthread_mutex_t mutGtlsStrerror;
+ 		errmsg.LogError(0, RS_RET_GNUTLS_ERR, "error reading file - a common cause is that the file  does not exist"); \
+ 		ABORT_FINALIZE(RS_RET_GNUTLS_ERR); \
+ 	} else if(gnuRet != 0) { \
+-		uchar *pErr = gtlsStrerror(gnuRet); \
+-		errmsg.LogError(0, RS_RET_GNUTLS_ERR, "unexpected GnuTLS error %d in %s:%d: %s\n", gnuRet, __FILE__, __LINE__, pErr); \
+-		free(pErr); \
+-		ABORT_FINALIZE(RS_RET_GNUTLS_ERR); \
++		ABORTgnutls; \
+ 	} \
+ }
+ 
+@@ -192,9 +201,12 @@ gtlsLoadOurCertKey(nsd_gtls_t *pThis)
+ 
+ 	/* try load certificate */
+ 	CHKiRet(readFile(certFile, &data));
+-	CHKgnutls(gnutls_x509_crt_init(&pThis->ourCert));
+-	pThis->bOurCertIsInit = 1;
+-	CHKgnutls(gnutls_x509_crt_import(pThis->ourCert, &data, GNUTLS_X509_FMT_PEM));
++	pThis->nOurCerts = sizeof(pThis->pOurCerts) / sizeof(gnutls_x509_crt_t);
++	gnuRet = gnutls_x509_crt_list_import(pThis->pOurCerts, &pThis->nOurCerts,
++		&data, GNUTLS_X509_FMT_PEM,  GNUTLS_X509_CRT_LIST_IMPORT_FAIL_IF_EXCEED);
++	if(gnuRet < 0) {
++		ABORTgnutls;
++	}
+ 	free(data.data);
+ 	data.data = NULL;
+ 
+@@ -210,7 +222,9 @@ finalize_it:
+ 		if(data.data != NULL)
+ 			free(data.data);
+ 		if(pThis->bOurCertIsInit) {
+-			gnutls_x509_crt_deinit(pThis->ourCert);
++			for(unsigned i=0; i<pThis->nOurCerts; ++i) {
++				gnutls_x509_crt_deinit(pThis->pOurCerts[i]);
++			}
+ 			pThis->bOurCertIsInit = 0;
+ 		}
+ 		if(pThis->bOurKeyIsInit) {
+@@ -255,8 +269,8 @@ gtlsClientCertCallback(gnutls_session_t
+ #else
+ 	st->type = GNUTLS_CRT_X509;
+ #endif
+-	st->ncerts = 1;
+-	st->cert.x509 = &pThis->ourCert;
++	st->ncerts = pThis->nOurCerts;
++	st->cert.x509 = pThis->pOurCerts;
+ 	st->key.x509 = pThis->ourKey;
+ 	st->deinit_all = 0;
+ 
+@@ -532,8 +546,8 @@ gtlsRecordRecv(nsd_gtls_t *pThis)
+ 		dbgprintf("GnuTLS receive requires a retry (this most probably is OK and no error condition)\n");
+ 		ABORT_FINALIZE(RS_RET_RETRY);
+ 	} else {
+-		int gnuRet; /* TODO: build a specific function for GnuTLS error reporting */
+-		CHKgnutls(lenRcvd); /* this will abort the function */
++		int gnuRet = lenRcvd;
++		ABORTgnutls;
+ 	}
+ 
+ finalize_it:
+@@ -646,7 +660,7 @@ gtlsInitSession(nsd_gtls_t *pThis)
+ 	pThis->bIsInitiator = 0;
+ 
+ 	/* avoid calling all the priority functions, since the defaults are adequate. */
+-	CHKgnutls(gnutls_set_default_priority(session));
++
+ 	CHKgnutls(gnutls_credentials_set(session, GNUTLS_CRD_CERTIFICATE, xcred));
+ 
+ 	/* request client certificate if any.  */
+@@ -1204,7 +1218,9 @@ CODESTARTobjDestruct(nsd_gtls)
+ 	}
+ 
+ 	if(pThis->bOurCertIsInit)
+-		gnutls_x509_crt_deinit(pThis->ourCert);
++                for(unsigned i=0; i<pThis->nOurCerts; ++i) {
++			gnutls_x509_crt_deinit(pThis->pOurCerts[i]);
++                }
+ 	if(pThis->bOurKeyIsInit)
+ 		gnutls_x509_privkey_deinit(pThis->ourKey);
+ 	if(pThis->bHaveSess)
+@@ -1299,6 +1315,21 @@ finalize_it:
+ }
+ 
+ 
++/* gnutls priority string
++ * PascalWithopf 2017-08-16
++ */
++static rsRetVal
++SetGnutlsPriorityString(nsd_t *pNsd, uchar *gnutlsPriorityString)
++{
++	DEFiRet;
++	nsd_gtls_t *pThis = (nsd_gtls_t*) pNsd;
++
++	ISOBJ_TYPE_assert((pThis), nsd_gtls);
++	pThis->gnutlsPriorityString = gnutlsPriorityString;
++	RETiRet;
++}
++
++
+ /* Provide access to the underlying OS socket. This is primarily
+  * useful for other drivers (like nsd_gtls) who utilize ourselfs
+  * for some of their functionality. -- rgerhards, 2008-04-18
+@@ -1476,6 +1507,7 @@ AcceptConnReq(nsd_t *pNsd, nsd_t **ppNew
+ 	int gnuRet;
+ 	nsd_gtls_t *pNew = NULL;
+ 	nsd_gtls_t *pThis = (nsd_gtls_t*) pNsd;
++	const char *error_position;
+ 
+ 	ISOBJ_TYPE_assert((pThis), nsd_gtls);
+ 	CHKiRet(nsd_gtlsConstruct(&pNew)); // TODO: prevent construct/destruct!
+@@ -1493,6 +1525,19 @@ AcceptConnReq(nsd_t *pNsd, nsd_t **ppNew
+ 	gtlsSetTransportPtr(pNew, ((nsd_ptcp_t*) (pNew->pTcp))->sock);
+ 	pNew->authMode = pThis->authMode;
+ 	pNew->pPermPeers = pThis->pPermPeers;
++	pNew->gnutlsPriorityString = pThis->gnutlsPriorityString;
++	/* here is the priorityString set */
++	if(pNew->gnutlsPriorityString != NULL) {
++		if(gnutls_priority_set_direct(pNew->sess,
++					(const char*) pNew->gnutlsPriorityString,
++					&error_position)==GNUTLS_E_INVALID_REQUEST) {
++			LogError(0, RS_RET_GNUTLS_ERR, "Syntax Error in"
++					" Priority String: \"%s\"\n", error_position);
++		}
++	} else {
++		/* Use default priorities */
++		CHKgnutls(gnutls_set_default_priority(pNew->sess));
++	}
+ 
+ 	/* we now do the handshake. This is a bit complicated, because we are 
+ 	 * on non-blocking sockets. Usually, the handshake will not complete
+@@ -1673,6 +1718,31 @@ EnableKeepAlive(nsd_t *pNsd)
+ 	return nsd_ptcp.EnableKeepAlive(pThis->pTcp);
+ }
+ 
++/*
++ * SNI should not be used if the hostname is a bare IP address
++ */
++static int
++SetServerNameIfPresent(nsd_gtls_t *pThis, uchar *host) {
++	struct sockaddr_in sa;
++	struct sockaddr_in6 sa6;
++
++	int inet_pton_ret = inet_pton(AF_INET, CHAR_CONVERT(host), &(sa.sin_addr));
++
++	if (inet_pton_ret == 0) { // host wasn't a bare IPv4 address: try IPv6
++		inet_pton_ret = inet_pton(AF_INET6, CHAR_CONVERT(host), &(sa6.sin6_addr));
++	}
++
++	switch(inet_pton_ret) {
++		case 1: // host is a valid IP address: don't use SNI
++			return 0;
++		case 0: // host isn't a valid IP address: assume it's a domain name, use SNI
++			return gnutls_server_name_set(pThis->sess, GNUTLS_NAME_DNS, host, ustrlen(host));
++		default: // unexpected error
++			return -1;
++	}
++
++}
++
+ /* open a connection to a remote host (server). With GnuTLS, we always
+  * open a plain tcp socket and then, if in TLS mode, do a handshake on it.
+  * rgerhards, 2008-03-19
+@@ -1685,6 +1755,7 @@ Connect(nsd_t *pNsd, int family, uchar *
+ 	nsd_gtls_t *pThis = (nsd_gtls_t*) pNsd;
+ 	int sock;
+ 	int gnuRet;
++	const char *error_position;
+ #	ifdef HAVE_GNUTLS_CERTIFICATE_TYPE_SET_PRIORITY
+ 	static const int cert_type_priority[2] = { GNUTLS_CRT_X509, 0 };
+ #	endif
+@@ -1704,6 +1775,8 @@ Connect(nsd_t *pNsd, int family, uchar *
+ 	pThis->bHaveSess = 1;
+ 	pThis->bIsInitiator = 1;
+ 
++	CHKgnutls(SetServerNameIfPresent(pThis, host));
++
+ 	/* in the client case, we need to set a callback that ensures our certificate
+ 	 * will be presented to the server even if it is not signed by one of the server's
+ 	 * trusted roots. This is necessary to support fingerprint authentication.
+@@ -1721,8 +1794,19 @@ Connect(nsd_t *pNsd, int family, uchar *
+ 		FINALIZE; /* we have an error case! */
+ 	}
+ 
+-	/* Use default priorities */
+-	CHKgnutls(gnutls_set_default_priority(pThis->sess));
++	/* set priority string */
++	if(pThis->gnutlsPriorityString != NULL) {
++		if(gnutls_priority_set_direct(pThis->sess,
++					(const char*) pThis->gnutlsPriorityString,
++					&error_position)==GNUTLS_E_INVALID_REQUEST) {
++			LogError(0, RS_RET_GNUTLS_ERR, "Syntax Error in"
++					" Priority String: \"%s\"\n", error_position);
++		}
++	} else {
++		/* Use default priorities */
++		CHKgnutls(gnutls_set_default_priority(pThis->sess));
++	}
++
+ #	ifdef HAVE_GNUTLS_CERTIFICATE_TYPE_SET_PRIORITY
+ 	/* The gnutls_certificate_type_set_priority function is deprecated
+ 	 * and not available in recent GnuTLS versions. However, there is no
+@@ -1806,6 +1890,7 @@ CODESTARTobjQueryInterface(nsd_gtls)
+ 	pIf->SetKeepAliveIntvl = SetKeepAliveIntvl;
+ 	pIf->SetKeepAliveProbes = SetKeepAliveProbes;
+ 	pIf->SetKeepAliveTime = SetKeepAliveTime;
++	pIf->SetGnutlsPriorityString = SetGnutlsPriorityString;
+ finalize_it:
+ ENDobjQueryInterface(nsd_gtls)
+ 
+diff -up rsyslog-8.24.0/runtime/nsd_gtls.h.v36tls rsyslog-8.24.0/runtime/nsd_gtls.h
+--- rsyslog-8.24.0/runtime/nsd_gtls.h.v36tls	2016-12-03 17:41:03.000000000 +0000
++++ rsyslog-8.24.0/runtime/nsd_gtls.h	2019-08-19 17:37:07.878166693 +0100
+@@ -25,6 +25,7 @@
+ #include "nsd.h"
+ 
+ #define NSD_GTLS_MAX_RCVBUF 8 * 1024 /* max size of buffer for message reception */
++#define NSD_GTLS_MAX_CERT 10 /* max number of certs in our chain */
+ 
+ typedef enum {
+ 	gtlsRtry_None = 0,	/**< no call needs to be retried */
+@@ -56,7 +57,10 @@ struct nsd_gtls_s {
+ 				 * set to 1 and changed to 0 after the first report. It is changed back to 1 after
+ 				 * one successful authentication. */
+ 	permittedPeers_t *pPermPeers; /* permitted peers */
+-	gnutls_x509_crt_t ourCert;	/**< our certificate, if in client mode (unused in server mode) */
++	uchar *gnutlsPriorityString;	/* gnutls priority string */
++	gnutls_x509_crt_t pOurCerts[NSD_GTLS_MAX_CERT];	/**< our certificate, if in client mode
++							(unused in server mode) */
++	unsigned int nOurCerts;  /* number of certificates in our chain */
+ 	gnutls_x509_privkey_t ourKey;	/**< our private key, if in client mode (unused in server mode) */
+ 	short	bOurCertIsInit;	/**< 1 if our certificate is initialized and must be deinit on destruction */
+ 	short	bOurKeyIsInit;	/**< 1 if our private key is initialized and must be deinit on destruction */
+diff -up rsyslog-8.24.0/runtime/nsd.h.v36tls rsyslog-8.24.0/runtime/nsd.h
+--- rsyslog-8.24.0/runtime/nsd.h.v36tls	2017-01-10 09:00:04.000000000 +0000
++++ rsyslog-8.24.0/runtime/nsd.h	2019-08-19 17:37:07.878166693 +0100
+@@ -83,14 +83,17 @@ BEGINinterface(nsd) /* name must also be
+ 	rsRetVal (*SetKeepAliveIntvl)(nsd_t *pThis, int keepAliveIntvl);
+ 	rsRetVal (*SetKeepAliveProbes)(nsd_t *pThis, int keepAliveProbes);
+ 	rsRetVal (*SetKeepAliveTime)(nsd_t *pThis, int keepAliveTime);
++	/* v10 */
++	rsRetVal (*SetGnutlsPriorityString)(nsd_t *pThis, uchar *gnutlsPriorityString);
+ ENDinterface(nsd)
+-#define nsdCURR_IF_VERSION 9 /* increment whenever you change the interface structure! */
++#define nsdCURR_IF_VERSION 10 /* increment whenever you change the interface structure! */
+ /* interface version 4 added GetRemAddr()
+  * interface version 5 added EnableKeepAlive() -- rgerhards, 2009-06-02
+  * interface version 6 changed return of CheckConnection from void to rsRetVal -- alorbach, 2012-09-06
+  * interface version 7 changed signature ofGetRempoteIP() -- rgerhards, 2013-01-21
+  * interface version 8 added keep alive parameter set functions
+  * interface version 9 changed signature of Connect() -- dsa, 2016-11-14
++ * interface version 10 added SetGnutlsPriorityString() -- PascalWithopf, 2017-08-08
+  */
+ 
+ /* interface  for the select call */
+diff -up rsyslog-8.24.0/runtime/nsd_ptcp.c.v36tls rsyslog-8.24.0/runtime/nsd_ptcp.c
+--- rsyslog-8.24.0/runtime/nsd_ptcp.c.v36tls	2017-01-10 09:00:04.000000000 +0000
++++ rsyslog-8.24.0/runtime/nsd_ptcp.c	2019-08-19 17:37:07.879166693 +0100
+@@ -176,6 +176,23 @@ finalize_it:
+ }
+ 
+ 
++/* Set priorityString
++ * PascalWithopf 2017-08-18 */
++static rsRetVal
++SetGnutlsPriorityString(nsd_t __attribute__((unused)) *pNsd, uchar *iVal)
++{
++	DEFiRet;
++	if(iVal != NULL) {
++		LogError(0, RS_RET_VALUE_NOT_SUPPORTED, "error: "
++		"gnutlsPriorityString '%s' not supported by ptcp netstream "
++		"driver", iVal);
++		ABORT_FINALIZE(RS_RET_VALUE_NOT_SUPPORTED);
++	}
++finalize_it:
++	RETiRet;
++}
++
++
+ /* Set the permitted peers. This is a dummy, always returning an
+  * error because we do not support fingerprint authentication.
+  * rgerhards, 2008-05-17
+@@ -535,6 +552,7 @@ LstnInit(netstrms_t *pNS, void *pUsr, rs
+ 		CHKiRet(pNS->Drvr.SetMode(pNewNsd, netstrms.GetDrvrMode(pNS)));
+ 		CHKiRet(pNS->Drvr.SetAuthMode(pNewNsd, netstrms.GetDrvrAuthMode(pNS)));
+ 		CHKiRet(pNS->Drvr.SetPermPeers(pNewNsd, netstrms.GetDrvrPermPeers(pNS)));
++		CHKiRet(pNS->Drvr.SetGnutlsPriorityString(pNewNsd, netstrms.GetDrvrGnutlsPriorityString(pNS)));
+ 		CHKiRet(netstrms.CreateStrm(pNS, &pNewStrm));
+ 		pNewStrm->pDrvrData = (nsd_t*) pNewNsd;
+ 		pNewNsd = NULL;
+@@ -854,6 +872,7 @@ CODESTARTobjQueryInterface(nsd_ptcp)
+ 	pIf->SetSock = SetSock;
+ 	pIf->SetMode = SetMode;
+ 	pIf->SetAuthMode = SetAuthMode;
++	pIf->SetGnutlsPriorityString = SetGnutlsPriorityString;
+ 	pIf->SetPermPeers = SetPermPeers;
+ 	pIf->Rcv = Rcv;
+ 	pIf->Send = Send;
+diff -up rsyslog-8.24.0/runtime/tcpsrv.c.v36tls rsyslog-8.24.0/runtime/tcpsrv.c
+--- rsyslog-8.24.0/runtime/tcpsrv.c.v36tls	2019-08-19 17:37:07.874166693 +0100
++++ rsyslog-8.24.0/runtime/tcpsrv.c	2019-08-19 17:37:07.880166693 +0100
+@@ -470,6 +470,9 @@ SessAccept(tcpsrv_t *pThis, tcpLstnPortL
+ 	}
+ 
+ 	/* we found a free spot and can construct our session object */
++	if(pThis->gnutlsPriorityString != NULL) {
++		CHKiRet(netstrm.SetGnutlsPriorityString(pNewStrm, pThis->gnutlsPriorityString));
++	}
+ 	CHKiRet(tcps_sess.Construct(&pSess));
+ 	CHKiRet(tcps_sess.SetTcpsrv(pSess, pThis));
+ 	CHKiRet(tcps_sess.SetLstnInfo(pSess, pLstnInfo));
+@@ -1001,6 +1004,8 @@ tcpsrvConstructFinalize(tcpsrv_t *pThis)
+ 		CHKiRet(netstrms.SetDrvrAuthMode(pThis->pNS, pThis->pszDrvrAuthMode));
+ 	if(pThis->pPermPeers != NULL)
+ 		CHKiRet(netstrms.SetDrvrPermPeers(pThis->pNS, pThis->pPermPeers));
++	if(pThis->gnutlsPriorityString != NULL)
++		CHKiRet(netstrms.SetDrvrGnutlsPriorityString(pThis->pNS, pThis->gnutlsPriorityString));
+ 	CHKiRet(netstrms.ConstructFinalize(pThis->pNS));
+ 
+ 	/* set up listeners */
+@@ -1173,6 +1178,16 @@ SetKeepAliveTime(tcpsrv_t *pThis, int iV
+ }
+ 
+ static rsRetVal
++SetGnutlsPriorityString(tcpsrv_t *pThis, uchar *iVal)
++{
++	DEFiRet;
++	DBGPRINTF("tcpsrv: gnutlsPriorityString set to %s\n",
++		(iVal == NULL) ? "(null)" : (const char*) iVal);
++	pThis->gnutlsPriorityString = iVal;
++	RETiRet;
++}
++
++static rsRetVal
+ SetOnMsgReceive(tcpsrv_t *pThis, rsRetVal (*OnMsgReceive)(tcps_sess_t*, uchar*, int))
+ {
+ 	DEFiRet;
+@@ -1414,6 +1429,7 @@ CODESTARTobjQueryInterface(tcpsrv)
+ 	pIf->SetKeepAliveIntvl = SetKeepAliveIntvl;
+ 	pIf->SetKeepAliveProbes = SetKeepAliveProbes;
+ 	pIf->SetKeepAliveTime = SetKeepAliveTime;
++	pIf->SetGnutlsPriorityString = SetGnutlsPriorityString;
+ 	pIf->SetUsrP = SetUsrP;
+ 	pIf->SetInputName = SetInputName;
+ 	pIf->SetOrigin = SetOrigin;
+diff -up rsyslog-8.24.0/runtime/tcpsrv.h.v36tls rsyslog-8.24.0/runtime/tcpsrv.h
+--- rsyslog-8.24.0/runtime/tcpsrv.h.v36tls	2019-08-19 17:37:07.874166693 +0100
++++ rsyslog-8.24.0/runtime/tcpsrv.h	2019-08-19 17:37:07.880166693 +0100
+@@ -61,6 +61,7 @@ struct tcpsrv_s {
+ 	int iKeepAliveTime;	/**< socket layer KEEPALIVE timeout */
+ 	netstrms_t *pNS;	/**< pointer to network stream subsystem */
+ 	int iDrvrMode;		/**< mode of the stream driver to use */
++	uchar *gnutlsPriorityString;	/**< priority string for gnutls */
+ 	uchar *pszDrvrAuthMode;	/**< auth mode of the stream driver to use */
+ 	uchar *pszDrvrName;	/**< name of stream driver to use */
+ 	uchar *pszInputName;	/**< value to be used as input name */
+@@ -169,6 +170,8 @@ BEGINinterface(tcpsrv) /* name must also
+ 	rsRetVal (*SetKeepAliveTime)(tcpsrv_t*, int);
+ 	/* added v18 */
+ 	rsRetVal (*SetbSPFramingFix)(tcpsrv_t*, sbool);
++	/* added v19 -- PascalWithopf, 2017-08-08 */
++	rsRetVal (*SetGnutlsPriorityString)(tcpsrv_t*, uchar*);
+ 	/* added v21 -- Preserve case in fromhost, 2018-08-16 */
+ 	rsRetVal (*SetPreserveCase)(tcpsrv_t *pThis, int bPreserveCase);
+ ENDinterface(tcpsrv)
+diff -up rsyslog-8.24.0/tools/omfwd.c.v36tls rsyslog-8.24.0/tools/omfwd.c
+--- rsyslog-8.24.0/tools/omfwd.c.v36tls	2019-08-19 17:37:07.848166695 +0100
++++ rsyslog-8.24.0/tools/omfwd.c	2019-08-19 17:37:07.881166693 +0100
+@@ -91,6 +91,7 @@ typedef struct _instanceData {
+ 	int iKeepAliveIntvl;
+ 	int iKeepAliveProbes;
+ 	int iKeepAliveTime;
++	uchar *gnutlsPriorityString;
+ 
+ #	define	FORW_UDP 0
+ #	define	FORW_TCP 1
+@@ -138,6 +139,7 @@ typedef struct configSettings_s {
+ 	int iKeepAliveIntvl;
+ 	int iKeepAliveProbes;
+ 	int iKeepAliveTime;
++	uchar *gnutlsPriorityString;
+ 	permittedPeers_t *pPermPeers;
+ } configSettings_t;
+ static configSettings_t cs;
+@@ -169,6 +171,7 @@ static struct cnfparamdescr actpdescr[]
+ 	{ "keepalive.probes", eCmdHdlrPositiveInt, 0 },
+ 	{ "keepalive.time", eCmdHdlrPositiveInt, 0 },
+ 	{ "keepalive.interval", eCmdHdlrPositiveInt, 0 },
++	{ "gnutlsprioritystring", eCmdHdlrString, 0 },
+ 	{ "streamdriver", eCmdHdlrGetWord, 0 },
+ 	{ "streamdrivermode", eCmdHdlrInt, 0 },
+ 	{ "streamdriverauthmode", eCmdHdlrGetWord, 0 },
+@@ -717,6 +720,9 @@ static rsRetVal TCPSendInit(void *pvData
+ 			CHKiRet(netstrm.SetDrvrPermPeers(pWrkrData->pNetstrm, pData->pPermPeers));
+ 		}
+ 		/* params set, now connect */
++		if(pData->gnutlsPriorityString != NULL) {
++			CHKiRet(netstrm.SetGnutlsPriorityString(pWrkrData->pNetstrm, pData->gnutlsPriorityString));
++		}
+ 		CHKiRet(netstrm.Connect(pWrkrData->pNetstrm, glbl.GetDefPFFamily(),
+ 			(uchar*)pData->port, (uchar*)pData->target, pData->device));
+ 
+@@ -960,6 +966,7 @@ setInstParamDefaults(instanceData *pData
+ 	pData->iKeepAliveProbes = 0;
+ 	pData->iKeepAliveIntvl = 0;
+ 	pData->iKeepAliveTime = 0;
++	pData->gnutlsPriorityString = NULL;
+ 	pData->bResendLastOnRecon = 0; 
+ 	pData->bSendToAll = -1;  /* unspecified */
+ 	pData->iUDPSendDelay = 0;
+@@ -1046,6 +1053,8 @@ CODESTARTnewActInst
+ 			pData->iKeepAliveIntvl = (int) pvals[i].val.d.n;
+ 		} else if(!strcmp(actpblk.descr[i].name, "keepalive.time")) {
+ 			pData->iKeepAliveTime = (int) pvals[i].val.d.n;
++		} else if(!strcmp(actpblk.descr[i].name, "gnutlsprioritystring")) {
++			pData->gnutlsPriorityString = (uchar*)es_str2cstr(pvals[i].val.d.estr, NULL);
+ 		} else if(!strcmp(actpblk.descr[i].name, "streamdriver")) {
+ 			pData->pszStrmDrvr = (uchar*)es_str2cstr(pvals[i].val.d.estr, NULL);
+ 		} else if(!strcmp(actpblk.descr[i].name, "streamdrivermode")) {
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1632211-journal-cursor-fix.patch b/SOURCES/rsyslog-8.24.0-rhbz1632211-journal-cursor-fix.patch
new file mode 100644
index 0000000..a5f35c0
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1632211-journal-cursor-fix.patch
@@ -0,0 +1,104 @@
+From: Jiri Vymazal <jvymazal@redhat.com>
+Date: Thu, 14 Mar 2019 10:58:03 +0100
+Subject: [PATCH] Journal cursor related fixes
+
+Added missing free() calls of received journal cursor
+In one case there was possibility of free()'d value of journal
+cursor not being reset, causing double-free and crash later on.
+Not trying to get and save position of invalid journal
+---
+ plugins/imjournal/imjournal.c | 28 +++++----
+ 1 file changed, 15 insertions(+), 13 deletions(-)
+
+diff --git a/plugins/imjournal/imjournal.c b/plugins/imjournal/imjournal.c
+index a85e52100..f5c2be4b6 100644
+--- a/plugins/imjournal/imjournal.c
++++ b/plugins/imjournal/imjournal.c
+@@ -121,7 +121,7 @@
+ 
+ #define J_PROCESS_PERIOD 1024  /* Call sd_journal_process() every 1,024 records */
+ 
+-static rsRetVal persistJournalState(void);
++static rsRetVal persistJournalState(int trySave);
+ static rsRetVal loadJournalState(void);
+ 
+ static rsRetVal openJournal(sd_journal** jj) {
+@@ -140,10 +140,10 @@
+ 	RETiRet;
+ }
+ 
+-static void closeJournal(sd_journal** jj) {
++static void closeJournal(sd_journal** jj, int trySave) {
+ 
+ 	if (cs.stateFile) { /* can't persist without a state file */
+-		persistJournalState();
++		persistJournalState(trySave);
+ 	}
+ 	sd_journal_close(*jj);
+ 	j_inotify_fd = 0;
+@@ -433,7 +434,7 @@
+ /* This function gets journal cursor and saves it into state file
+  */
+ static rsRetVal
+-persistJournalState (void)
++persistJournalState(int trySave)
+ {
+ 	DEFiRet;
+ 	FILE *sf; /* state file */
+@@ -443,12 +444,13 @@
+ 	if (cs.bWorkAroundJournalBug) {
+ 		if (!last_cursor)
+ 			ABORT_FINALIZE(RS_RET_OK);
+-
+-	} else if ((ret = sd_journal_get_cursor(j, &last_cursor)) < 0) {
+-		char errStr[256];
+-		rs_strerror_r(-(ret), errStr, sizeof(errStr));
+-		errmsg.LogError(0, RS_RET_ERR, "sd_journal_get_cursor() failed: '%s'\n", errStr);
+-		ABORT_FINALIZE(RS_RET_ERR);
++	} else if (trySave) {
++		if ((ret = sd_journal_get_cursor(j, &last_cursor))) {
++			LogError(-ret, RS_RET_ERR, "imjournal: sd_journal_get_cursor() failed");
++			ABORT_FINALIZE(RS_RET_ERR);
++		}
++	} else { /* not trying to get cursor out of invalid journal state */
++		ABORT_FINALIZE(RS_RET_OK);
+ 	}
+ 	/* we create a temporary name by adding a ".tmp"
+ 	 * suffix to the end of our state file's name
+@@ -501,7 +506,7 @@
+ 	r = sd_journal_wait(j, POLL_TIMEOUT);
+ 
+ 	if (r == SD_JOURNAL_INVALIDATE) {
+-		closeJournal(&j);
++		closeJournal(&j, 0);
+ 
+ 		iRet = openJournal(&j);
+ 		if (iRet != RS_RET_OK)
+@@ -628,7 +634,7 @@
+ tryRecover(void) {
+ 	errmsg.LogMsg(0, RS_RET_OK, LOG_INFO, "imjournal: trying to recover from unexpected "
+ 		"journal error");
+-	closeJournal(&j);
++	closeJournal(&j, 1);
+ 	srSleep(10, 0);	// do not hammer machine with too-frequent retries
+ 	openJournal(&j);
+ }
+@@ -708,7 +708,7 @@
+ 		if (cs.stateFile) { /* can't persist without a state file */
+ 			/* TODO: This could use some finer metric. */
+ 			if ((count % cs.iPersistStateInterval) == 0) {
+-				persistJournalState();
++				persistJournalState(1);
+ 			}
+ 		}
+ 	}
+@@ -764,7 +764,7 @@
+ /* close journal */
+ BEGINafterRun
+ CODESTARTafterRun
+-	closeJournal(&j);
++	closeJournal(&j, 1);
+ 	ratelimitDestruct(ratelimiter);
+ ENDafterRun
+ 
+
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1632659-omfwd-mem-corruption.patch b/SOURCES/rsyslog-8.24.0-rhbz1632659-omfwd-mem-corruption.patch
new file mode 100644
index 0000000..61e67e0
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1632659-omfwd-mem-corruption.patch
@@ -0,0 +1,51 @@
+From 5bbd0a4b3c212425ace54bf8a8ede5b832776209 Mon Sep 17 00:00:00 2001
+From: Rainer Gerhards <rgerhards@adiscon.com>
+Date: Wed, 6 Sep 2017 13:16:42 +0200
+Subject: [PATCH] core: memory corruption during configuration parsing
+
+when omfwd is used with the $streamdriverpermittedpeers legacy
+parameter, a memory corruption can occur. This depends on the
+length of the provided strings and probably the malloc subsystem.
+
+Once config parsing succeeds, no problem can happen.
+
+Thanks to Brent Douglas for initially reporting this issue and
+providing great analysis.
+Thanks to github user bwdoll for analyzing this bug and providing
+a suggested fix (which is almost what this commit includes).
+
+closes https://github.com/rsyslog/rsyslog/issues/1408
+closes https://github.com/rsyslog/rsyslog/issues/1474
+---
+ tools/omfwd.c | 4 ----
+ 1 file changed, 4 deletions(-)
+
+diff --git a/tools/omfwd.c b/tools/omfwd.c
+index 3bffbb3cc..8d51fbb51 100644
+--- a/tools/omfwd.c
++++ b/tools/omfwd.c
+@@ -1157,7 +1157,6 @@ CODESTARTnewActInst
+ 			pData->pszStrmDrvrAuthMode = (uchar*)es_str2cstr(pvals[i].val.d.estr, NULL);
+ 		} else if(!strcmp(actpblk.descr[i].name, "streamdriverpermittedpeers")) {
+ 			uchar *start, *str;
+-			uchar save;
+ 			uchar *p;
+ 			int lenStr;
+ 			str = (uchar*)es_str2cstr(pvals[i].val.d.estr, NULL);
+@@ -1170,8 +1169,6 @@ CODESTARTnewActInst
+ 				if(*p == ',') {
+ 					*p = '\0';
+ 				}
+-				save = *(p+1); /* we always have this, at least the \0 byte at EOS */
+-				*(p+1) = '\0';
+ 				if(*start == '\0') {
+ 					DBGPRINTF("omfwd: ignoring empty permitted peer\n");
+ 				} else {
+@@ -1181,7 +1178,6 @@ CODESTARTnewActInst
+ 				start = p+1;
+ 				if(lenStr)
+ 					--lenStr;
+-				*(p+1) = save;
+ 			}
+ 			free(str);
+ 		} else if(!strcmp(actpblk.descr[i].name, "ziplevel")) {
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1649250-imfile-rotation.patch b/SOURCES/rsyslog-8.24.0-rhbz1649250-imfile-rotation.patch
new file mode 100644
index 0000000..c52abc1
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1649250-imfile-rotation.patch
@@ -0,0 +1,306 @@
+From 31350bc0b935920f9924317b4cb3602602420f83 Mon Sep 17 00:00:00 2001
+From: Jiri Vymazal <jvymazal@redhat.com>
+Date: Fri, 16 Nov 2018 13:16:13 +0100
+Subject: [PATCH] bugfix imfile: file change was not reliably detected
+
+A change in the inode was not detected under all circumstances,
+most importantly not in some logrotate cases.
+
+Previously, truncation was only detected at end of file. Especially with
+busy files, that could cause data loss and possibly also stall imfile
+reading. The new code now also checks during each read. Obviously, there
+is some additional overhead associated with that, but this is unavoidable.
+
+It still is highly recommended NOT to turn on "reopenOnTruncate" in imfile.
+Note that there are also inherent reliability issues. There is no way to
+"fix" these, as they are caused by races between the process(es) that truncate
+and rsyslog reading the file. But with the new code, the "problem window"
+should be much smaller and, more importantly, imfile should not stall.
+---
+ plugins/imfile/imfile.c                       |  13 ++++++++++++-
+ runtime/rsyslog.h                             |   1 +
+ runtime/stream.c                              | 116 ++++++++-
+ runtime/stream.h                              |   7 +++++++
+ 4 files changed, 125 insertions(+), 11 deletions(-)
+
+diff --git a/plugins/imfile/imfile.c b/plugins/imfile/imfile.c
+index f4a4ef9b7..6be8b2999 100644
+--- a/plugins/imfile/imfile.c
++++ b/plugins/imfile/imfile.c
+@@ -740,8 +740,19 @@ detect_updates(fs_edge_t *const edge)
+ 			act_obj_unlink(act);
+ 			restart = 1;
+ 			break;
++		} else if(fileInfo.st_ino != act->ino) {
++			DBGPRINTF("file '%s' inode changed from %llu to %llu, unlinking from "
++				"internal lists\n", act->name, (long long unsigned) act->ino,
++				(long long unsigned) fileInfo.st_ino);
++			if(act->pStrm != NULL) {
++				/* we do no need to re-set later, as act_obj_unlink
++				 * will destroy the strm obj */
++				strmSet_checkRotation(act->pStrm, STRM_ROTATION_DO_NOT_CHECK);
++			}
++			act_obj_unlink(act);
++			restart = 1;
++			break;
+ 		}
+-		// TODO: add inode check for change notification!
+ 
+ 		/* Note: active nodes may get deleted, so we need to do the
+ 		 * pointer advancement at the end of the for loop!
+diff --git a/runtime/rsyslog.h b/runtime/rsyslog.h
+index 61d0af623..22a1c46d1 100644
+--- a/runtime/rsyslog.h
++++ b/runtime/rsyslog.h
+@@ -183,6 +183,7 @@ enum rsRetVal_                          /** return value. All methods return this if not specified otherwise */
+ 	RS_RET_NOT_IMPLEMENTED = -7,	/**< implementation is missing (probably internal error or lazyness ;)) */
+ 	RS_RET_OUT_OF_MEMORY = -6,	/**< memory allocation failed */
+ 	RS_RET_PROVIDED_BUFFER_TOO_SMALL = -50,/**< the caller provided a buffer, but the called function sees the size of this buffer is too small - operation not carried out */
++	RS_RET_FILE_TRUNCATED = -51,    /**< (input) file was truncated, not an error but a status */
+ 	RS_RET_TRUE = -3,		/**< to indicate a true state (can be used as TRUE, legacy) */
+ 	RS_RET_FALSE = -2,		/**< to indicate a false state (can be used as FALSE, legacy) */
+ 	RS_RET_NO_IRET = -8,	/**< This is a trick for the debuging system - it means no iRet is provided  */
+diff --git a/runtime/stream.c b/runtime/stream.c
+index 2d494c612..5b52591ef 100644
+--- a/runtime/stream.c
++++ b/runtime/stream.c
+@@ -400,6 +400,7 @@ static rsRetVal strmOpenFile(strm_t *pThis)
+ 	CHKiRet(doPhysOpen(pThis));
+ 
+ 	pThis->iCurrOffs = 0;
++	pThis->iBufPtrMax = 0;
+ 	CHKiRet(getFileSize(pThis->pszCurrFName, &offset));
+ 	if(pThis->tOperationsMode == STREAMMODE_WRITE_APPEND) {
+ 		pThis->iCurrOffs = offset;
+@@ -574,7 +574,7 @@ strmNextFile(strm_t *pThis)
+  * a file change is detected only if the inode changes. -- rgerhards, 2011-01-10
+  */
+ static rsRetVal
+-strmHandleEOFMonitor(strm_t *pThis)
++strmHandleEOFMonitor(strm_t *const pThis)
+ {
+ 	DEFiRet;
+ 	struct stat statName;
+@@ -611,7 +611,7 @@ strmHandleEOFMonitor(strm_t *pThis)
+  * rgerhards, 2008-02-13
+  */
+ static rsRetVal
+-strmHandleEOF(strm_t *pThis)
++strmHandleEOF(strm_t *const pThis)
+ {
+ 	DEFiRet;
+ 
+@@ -629,7 +629,13 @@ strmHandleEOF(strm_t *pThis)
+ 			CHKiRet(strmNextFile(pThis));
+ 			break;
+ 		case STREAMTYPE_FILE_MONITOR:
+-			CHKiRet(strmHandleEOFMonitor(pThis));
++			DBGOPRINT((obj_t*) pThis, "file '%s' (%d) EOF, rotationCheck %d\n",
++				pThis->pszCurrFName, pThis->fd, pThis->rotationCheck);
++			if(pThis->rotationCheck == STRM_ROTATION_DO_CHECK) {
++				CHKiRet(strmHandleEOFMonitor(pThis));
++			} else {
++				ABORT_FINALIZE(RS_RET_EOF);
++			}
+ 			break;
+ 	}
+ 
+@@ -636,6 +637,75 @@ strmHandleEOF(strm_t *pThis)
+ 	RETiRet;
+ }
+ 
++
++/* helper to checkTruncation */
++static rsRetVal
++rereadTruncated(strm_t *const pThis, const char *const reason)
++{
++	DEFiRet;
++
++	LogMsg(errno, RS_RET_FILE_TRUNCATED, LOG_WARNING, "file '%s': truncation detected, "
++		"(%s) - re-start reading from beginning",
++		pThis->pszCurrFName, reason);
++	DBGPRINTF("checkTruncation, file %s last buffer CHANGED\n", pThis->pszCurrFName);
++	CHKiRet(strmCloseFile(pThis));
++	CHKiRet(strmOpenFile(pThis));
++	iRet = RS_RET_FILE_TRUNCATED;
++
++finalize_it:
++	RETiRet;
++}
++/* helper to read:
++ * Check if file has been truncated since last read and, if so, re-set reading
++ * to begin of file. To detect truncation, we try to re-read the last block.
++ * If that does not succeed or different data than from the original read is
++ * returned, truncation is assumed.
++ * NOTE: this function must be called only if truncation is enabled AND
++ * when the previous read buffer still is valid (aka "before the next read").
++ * It is ok to call with a 0-size buffer, which we then assume as begin of
++ * reading. In that case, no truncation will be detected.
++ * rgerhards, 2018-09-20
++ */
++static rsRetVal
++checkTruncation(strm_t *const pThis)
++{
++	DEFiRet;
++	off64_t ret;
++	off64_t backseek;
++	assert(pThis->bReopenOnTruncate);
++
++	DBGPRINTF("checkTruncation, file %s, iBufPtrMax %zd\n", pThis->pszCurrFName, pThis->iBufPtrMax);
++	if(pThis->iBufPtrMax == 0) {
++		FINALIZE;
++	}
++
++	int currpos = lseek64(pThis->fd, 0, SEEK_CUR);
++	backseek = -1 * (off64_t) pThis->iBufPtrMax;
++	dbgprintf("checkTruncation in actual processing, currpos %d, backseek is %d\n", (int)currpos, (int) backseek);
++	ret = lseek64(pThis->fd, backseek, SEEK_CUR);
++	if(ret < 0) {
++		iRet = rereadTruncated(pThis, "cannot seek backward to begin of last block");
++		FINALIZE;
++	}
++
++	const ssize_t lenRead = read(pThis->fd, pThis->pIOBuf_truncation, pThis->iBufPtrMax);
++	dbgprintf("checkTruncation proof-read: %d bytes\n", (int) lenRead);
++	if(lenRead < 0) {
++		iRet = rereadTruncated(pThis, "last block could not be re-read");
++		FINALIZE;
++	}
++
++	if(!memcmp(pThis->pIOBuf_truncation, pThis->pIOBuf, pThis->iBufPtrMax)) {
++		DBGPRINTF("checkTruncation, file %s last buffer unchanged\n", pThis->pszCurrFName);
++	} else {
++		iRet = rereadTruncated(pThis, "last block data different");
++	}
++
++finalize_it:
++	RETiRet;
++}
++
++
+ /* read the next buffer from disk
+  * rgerhards, 2008-02-13
+  */
+@@ -668,6 +741,13 @@ strmReadBuf(strm_t *pThis, int *padBytes)
+ 				toRead = (size_t) bytesLeft;
+ 			}
+ 		}
++		if(pThis->bReopenOnTruncate) {
++			rsRetVal localRet = checkTruncation(pThis);
++			if(localRet == RS_RET_FILE_TRUNCATED) {
++				continue;
++			}
++			CHKiRet(localRet);
++		}
+ 		iLenRead = read(pThis->fd, pThis->pIOBuf, toRead);
+ 		DBGOPRINT((obj_t*) pThis, "file %d read %ld bytes\n", pThis->fd, iLenRead);
+ 		/* end crypto */
+@@ -854,7 +854,7 @@
+  * a line, but following lines that are indented are part of the same log entry
+  */
+ static rsRetVal
+-strmReadLine(strm_t *pThis, cstr_t **ppCStr, uint8_t mode, sbool bEscapeLF,
++strmReadLine(strm_t *const pThis, cstr_t **ppCStr, uint8_t mode, sbool bEscapeLF,
+ 	uint32_t trimLineOverBytes, int64 *const strtOffs)
+ {
+         uchar c;
+@@ -1184,6 +1264,7 @@ static rsRetVal strmConstructFinalize(strm_t *pThis)
+ 	} else {
+ 		/* we work synchronously, so we need to alloc a fixed pIOBuf */
+ 		CHKmalloc(pThis->pIOBuf = (uchar*) MALLOC(pThis->sIOBufSize));
++		CHKmalloc(pThis->pIOBuf_truncation = (char*) MALLOC(pThis->sIOBufSize));
+ 	}
+ 
+ finalize_it:
+@@ -1231,6 +1312,7 @@ CODESTARTobjDestruct(strm)
+ 		}
+ 	} else {
+ 		free(pThis->pIOBuf);
++		free(pThis->pIOBuf_truncation);
+ 	}
+ 
+ 	/* Finally, we can free the resources.
+@@ -2147,11 +2150,22 @@ DEFpropSetMeth(strm, cryprov, cryprov_if_t*)
+ void
+ strmSetReadTimeout(strm_t *const __restrict__ pThis, const int val)
+ {
++	ISOBJ_TYPE_assert(pThis, strm);
+ 	pThis->readTimeout = val;
+ }
+ 
+-static rsRetVal strmSetbDeleteOnClose(strm_t *pThis, int val)
++void
++strmSet_checkRotation(strm_t *const pThis, const int val) {
++	ISOBJ_TYPE_assert(pThis, strm);
++	assert(val == STRM_ROTATION_DO_CHECK || val == STRM_ROTATION_DO_NOT_CHECK);
++	pThis->rotationCheck = val;
++}
++
++
++static rsRetVal
++strmSetbDeleteOnClose(strm_t *const pThis, const int val)
+ {
++	ISOBJ_TYPE_assert(pThis, strm);
+ 	pThis->bDeleteOnClose = val;
+ 	if(pThis->cryprov != NULL) {
+ 		pThis->cryprov->SetDeleteOnClose(pThis->cryprovFileData, pThis->bDeleteOnClose);
+@@ -2162,15 +2176,19 @@ static rsRetVal strmSetbDeleteOnClose(strm_t *pThis, int val)
+ 	return RS_RET_OK;
+ }
+ 
+-static rsRetVal strmSetiMaxFiles(strm_t *pThis, int iNewVal)
++static rsRetVal
++strmSetiMaxFiles(strm_t *const pThis, const int iNewVal)
+ {
++	ISOBJ_TYPE_assert(pThis, strm);
+ 	pThis->iMaxFiles = iNewVal;
+ 	pThis->iFileNumDigits = getNumberDigits(iNewVal);
+ 	return RS_RET_OK;
+ }
+ 
+-static rsRetVal strmSetFileNotFoundError(strm_t *pThis, int pFileNotFoundError)
++static rsRetVal 
++strmSetFileNotFoundError(strm_t *const pThis, const int pFileNotFoundError)
+ {
++	ISOBJ_TYPE_assert(pThis, strm);
+ 	pThis->fileNotFoundError = pFileNotFoundError;
+ 	return RS_RET_OK;
+ }
+diff --git a/runtime/stream.h b/runtime/stream.h
+index e3d6c2372..f6f48378a 100644
+--- a/runtime/stream.h
++++ b/runtime/stream.h
+@@ -91,6 +91,10 @@ typedef enum {				/* when extending, do NOT change existing modes! */
+ 	STREAMMODE_WRITE_APPEND = 4
+ } strmMode_t;
+ 
++/* settings for stream rotation (applies not to all processing modes!) */
++#define	STRM_ROTATION_DO_CHECK		0
++#define	STRM_ROTATION_DO_NOT_CHECK	1
++
+ #define STREAM_ASYNC_NUMBUFS 2 /* must be a power of 2 -- TODO: make configurable */
+ /* The strm_t data structure */
+ typedef struct strm_s {
+@@ -114,6 +118,7 @@ typedef struct strm_s {
+ 	sbool bDisabled; /* should file no longer be written to? (currently set only if omfile file size limit fails) */
+ 	sbool bSync;	/* sync this file after every write? */
+ 	sbool bReopenOnTruncate;
++	int rotationCheck; /* rotation check mode */
+ 	size_t sIOBufSize;/* size of IO buffer */
+ 	uchar *pszDir; /* Directory */
+ 	int lenDir;
+@@ -124,6 +124,7 @@ typedef struct strm_s {
+ 	ino_t inode;	/* current inode for files being monitored (undefined else) */
+ 	uchar *pszCurrFName; /* name of current file (if open) */
+ 	uchar *pIOBuf;	/* the iobuffer currently in use to gather data */
++	char *pIOBuf_truncation; /* iobuffer used during truncation detection block re-reads */
+ 	size_t iBufPtrMax;	/* current max Ptr in Buffer (if partial read!) */
+ 	size_t iBufPtr;	/* pointer into current buffer */
+ 	int iUngetC;	/* char set via UngetChar() call or -1 if none set */
+@@ -238,5 +238,6 @@
+ const uchar * strmGetPrevLineSegment(strm_t *const pThis);
+ const uchar * strmGetPrevMsgSegment(strm_t *const pThis);
+ int strmGetPrevWasNL(const strm_t *const pThis);
++void strmSet_checkRotation(strm_t *const pThis, const int val);
+ 
+ #endif /* #ifndef STREAM_H_INCLUDED */
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1656860-imfile-buffer-overflow.patch b/SOURCES/rsyslog-8.24.0-rhbz1656860-imfile-buffer-overflow.patch
new file mode 100644
index 0000000..f7398d3
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1656860-imfile-buffer-overflow.patch
@@ -0,0 +1,40 @@
+From d5bcd5b89b2f88611e73ea193ce35178b1e89b32 Mon Sep 17 00:00:00 2001
+From: Rainer Gerhards <rgerhards@adiscon.com>
+Date: Tue, 19 Dec 2017 10:15:46 +0100
+Subject: [PATCH] core bugfix: MAXFNAME was set too low
+
+it just permitted 200 chars, while almost all systems permit more.
+I tried to find a more portable way to determine the actual maximum,
+but that turned out horribly. The next option would have been
+dynamic allocation, but that is often overkill. So I now settle for just
+increasing the value to 4KiB. This is the Linux limit, and it is by
+far the highest I could find. This should be good for quite
+some while, and it should not put too much stress on the stack allocation.
+
+closes https://github.com/rsyslog/rsyslog/issues/2228
+---
+ runtime/syslogd-types.h | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/runtime/syslogd-types.h b/runtime/syslogd-types.h
+index 2cbe8039a..360f6d557 100644
+--- a/runtime/syslogd-types.h
++++ b/runtime/syslogd-types.h
+@@ -4,7 +4,7 @@
+  *
+  * File begun on 2007-07-13 by RGerhards (extracted from syslogd.c)
+  *
+- * Copyright 2007-2014 Adiscon GmbH.
++ * Copyright 2007-2017 Adiscon GmbH.
+  *
+  * This file is part of the rsyslog runtime library.
+  *
+@@ -38,7 +38,7 @@
+ # define UNAMESZ	8	/* length of a login name */
+ #endif
+ #define MAXUNAMES	20	/* maximum number of user names */
+-#define MAXFNAME	200	/* max file pathname length */
++#define MAXFNAME	4096	/* max file pathname length */
+ 
+ #define	_DB_MAXDBLEN	128	/* maximum number of db */
+ #define _DB_MAXUNAMELEN	128	/* maximum number of user name */
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1658288-imptcp-octet-segfault.patch b/SOURCES/rsyslog-8.24.0-rhbz1658288-imptcp-octet-segfault.patch
new file mode 100644
index 0000000..cbccd21
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1658288-imptcp-octet-segfault.patch
@@ -0,0 +1,50 @@
+From 0381a0de64a5a048c3d48b79055bd9848d0c7fc2 Mon Sep 17 00:00:00 2001
+From: PascalWithopf <pwithopf@adiscon.com>
+Date: Wed, 19 Apr 2017 13:06:30 +0200
+Subject: [PATCH] imptcp: fix Segmentation Fault when octet count is too high
+
+---
+ plugins/imptcp/imptcp.c                   | 14 ++++++-
+ 1 file changed, 12 insertions(+), 2 deletions(-)
+
+diff --git a/plugins/imptcp/imptcp.c b/plugins/imptcp/imptcp.c
+index acf0dcd25..b9a4e2fdf 100644
+--- a/plugins/imptcp/imptcp.c
++++ b/plugins/imptcp/imptcp.c
+@@ -902,7 +902,16 @@ processDataRcvd(ptcpsess_t *const __restrict__ pThis,
+ 
+ 	if(pThis->inputState == eInOctetCnt) {
+ 		if(isdigit(c)) {
+-			pThis->iOctetsRemain = pThis->iOctetsRemain * 10 + c - '0';
++			if(pThis->iOctetsRemain <= 200000000) {
++				pThis->iOctetsRemain = pThis->iOctetsRemain * 10 + c - '0';
++			} else {
++				errmsg.LogError(0, NO_ERRCODE, "Framing Error in received TCP message: "
++						"frame too large (at least %d%c), change to octet stuffing",
++						pThis->iOctetsRemain, c);
++				pThis->eFraming = TCP_FRAMING_OCTET_STUFFING;
++				pThis->inputState = eInMsg;
++			}
++			*(pThis->pMsg + pThis->iMsg++) = c;
+ 		} else { /* done with the octet count, so this must be the SP terminator */
+ 			DBGPRINTF("TCP Message with octet-counter, size %d.\n", pThis->iOctetsRemain);
+ 			if(c != ' ') {
+@@ -911,9 +920,9 @@ processDataRcvd(ptcpsess_t *const __restrict__ pThis,
+ 			}
+ 			if(pThis->iOctetsRemain < 1) {
+ 				/* TODO: handle the case where the octet count is 0! */
+-				DBGPRINTF("Framing Error: invalid octet count\n");
+ 				errmsg.LogError(0, NO_ERRCODE, "Framing Error in received TCP message: "
+ 					    "invalid octet count %d.", pThis->iOctetsRemain);
++				pThis->eFraming = TCP_FRAMING_OCTET_STUFFING;
+ 			} else if(pThis->iOctetsRemain > iMaxLine) {
+ 				/* while we can not do anything against it, we can at least log an indication
+ 				 * that something went wrong) -- rgerhards, 2008-03-14
+@@ -924,6 +933,7 @@ processDataRcvd(ptcpsess_t *const __restrict__ pThis,
+ 					        "max msg size is %d, truncating...", pThis->iOctetsRemain, iMaxLine);
+ 			}
+ 			pThis->inputState = eInMsg;
++			pThis->iMsg = 0;
+ 		}
+ 	} else {
+ 		assert(pThis->inputState == eInMsg);
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1666365-internal-messages-memory-leak.patch b/SOURCES/rsyslog-8.24.0-rhbz1666365-internal-messages-memory-leak.patch
new file mode 100644
index 0000000..51ef1d2
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1666365-internal-messages-memory-leak.patch
@@ -0,0 +1,39 @@
+From 890e3bb0d83719350ace0ee00b1d2d471333778d Mon Sep 17 00:00:00 2001
+From: Rainer Gerhards <rgerhards@adiscon.com>
+Date: Wed, 10 May 2017 11:46:45 +0200
+Subject: [PATCH] core bugfix: memory leak when internal messages not processed
+ internally
+
+In this case, the message object is not destructed, resulting in
+a memory leak. Usually, this is no problem due to the low number
+of internal messages, but it can become an issue if a large number
+of messages is emitted.
+
+closes https://github.com/rsyslog/rsyslog/issues/1548
+closes https://github.com/rsyslog/rsyslog/issues/1531
+---
+ tools/rsyslogd.c | 4 +++-
+ 1 file changed, 3 insertions(+), 1 deletion(-)
+
+diff --git a/tools/rsyslogd.c b/tools/rsyslogd.c
+index 417804e18..d2e5361a6 100644
+--- a/tools/rsyslogd.c
++++ b/tools/rsyslogd.c
+@@ -840,7 +840,7 @@ submitMsgWithDfltRatelimiter(smsg_t *pMsg)
+ 
+ 
+ static void
+-logmsgInternal_doWrite(smsg_t *const __restrict__ pMsg)
++logmsgInternal_doWrite(smsg_t *pMsg)
+ {
+ 	if(bProcessInternalMessages) {
+ 		ratelimitAddMsg(internalMsg_ratelimiter, NULL, pMsg);
+@@ -852,6 +852,8 @@ logmsgInternal_doWrite(smsg_t *const __restrict__ pMsg)
+ #		else
+ 		syslog(pri, "%s", msg);
+ #		endif
++		/* we have emitted the message and must destruct it */
++		msgDestruct(&pMsg);
+ 	}
+ }
+ 
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1684236-omelastic-sigsegv.patch b/SOURCES/rsyslog-8.24.0-rhbz1684236-omelastic-sigsegv.patch
new file mode 100644
index 0000000..2f839a6
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1684236-omelastic-sigsegv.patch
@@ -0,0 +1,65 @@
+From 48dd54fd6d1edeb5dcdde95935a3ca9d2a6ab52e Mon Sep 17 00:00:00 2001
+From: Rich Megginson <rmeggins@redhat.com>
+Date: Wed, 6 Mar 2019 10:23:28 -0700
+Subject: [PATCH] Bug 1684236 - rsyslog-8.24.0-34.el7.x86_64 SIGSEGV when using
+ rsyslog-elasticsearch-8.24.0-34
+
+https://bugzilla.redhat.com/show_bug.cgi?id=1684236
+
+Cause: When omelasticsearch has a problem sending data to
+Elasticsearch because the connection was broken, the curl api
+returns an error code that was not being checked.  The
+omelasticsearch code also assumed that the reply field would
+always be allocated, but it is not in this error case.
+
+Consequence: rsyslog crashes when the connection to Elasticsearch
+is lost while attempting to send data to Elasticsearch.
+
+Fix: Check for the correct error code (CURLE_GOT_NOTHING), and
+also check that the reply field was allocated.
+
+Result: rsyslog does not crash when the connection to Elasticsearch
+is lost while attempting to send data to Elasticsearch.
+---
+ plugins/omelasticsearch/omelasticsearch.c | 12 +++++++++---
+ 1 file changed, 9 insertions(+), 3 deletions(-)
+
+diff --git a/plugins/omelasticsearch/omelasticsearch.c b/plugins/omelasticsearch/omelasticsearch.c
+index 248a369d2..d0c3f91d5 100644
+--- a/plugins/omelasticsearch/omelasticsearch.c
++++ b/plugins/omelasticsearch/omelasticsearch.c
+@@ -1431,6 +1431,7 @@ curlPost(wrkrInstanceData_t *pWrkrData, uchar *message, int msglen, uchar **tpls
+ 	    || code == CURLE_COULDNT_RESOLVE_PROXY
+ 	    || code == CURLE_COULDNT_CONNECT
+ 	    || code == CURLE_WRITE_ERROR
++	    || code == CURLE_GOT_NOTHING
+ 	   ) {
+ 		STATSCOUNTER_INC(indexHTTPReqFail, mutIndexHTTPReqFail);
+ 		indexHTTPFail += nmsgs;
+@@ -1441,15 +1442,20 @@ curlPost(wrkrInstanceData_t *pWrkrData, uchar *message, int msglen, uchar **tpls
+ 	}
+ 
+ 	DBGPRINTF("omelasticsearch: pWrkrData replyLen = '%d'\n", pWrkrData->replyLen);
+-	if(pWrkrData->replyLen > 0) {
++	if(NULL != pWrkrData->reply) {
++	    if(pWrkrData->replyLen > 0) {
+ 		pWrkrData->reply[pWrkrData->replyLen] = '\0'; /* Append 0 Byte if replyLen is above 0 - byte has been reserved in malloc */
++	    }
++	    CHKiRet(checkResult(pWrkrData, message));
++	    DBGPRINTF("omelasticsearch: pWrkrData reply: '%s'\n", pWrkrData->reply);
++	} else {
++	    DBGPRINTF("omelasticsearch: pWrkrData reply is NULL\n");
+ 	}
+-	DBGPRINTF("omelasticsearch: pWrkrData reply: '%s'\n", pWrkrData->reply);
+ 
+-	CHKiRet(checkResult(pWrkrData, message));
+ finalize_it:
+ 	incrementServerIndex(pWrkrData);
+ 	free(pWrkrData->reply);
++	pWrkrData->reply = NULL;
+ 	RETiRet;
+ }
+ 
+-- 
+2.20.1
+
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1685901-symlink-error-flood.patch b/SOURCES/rsyslog-8.24.0-rhbz1685901-symlink-error-flood.patch
new file mode 100644
index 0000000..2b658fb
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1685901-symlink-error-flood.patch
@@ -0,0 +1,25 @@
+From 31350bc0b935920f9924317b4cb3602602420f83 Mon Sep 17 00:00:00 2001
+From: Jiri Vymazal <jvymazal@redhat.com>
+Date: Fri, 16 Nov 2018 13:16:13 +0100
+Subject: [PATCH] disable file vs directory error on symlinks
+
+The file/directory node-object alignment now ignores symlinks.
+Previously, it reported an error for each directory symlink, spamming
+the user's error logs.
+---
+ plugins/imfile/imfile.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/plugins/imfile/imfile.c b/plugins/imfile/imfile.c
+index f42670ca2..1618d151c 100644
+--- a/plugins/imfile/imfile.c
++++ b/plugins/imfile/imfile.c
+@@ -869,7 +869,7 @@ poll_tree(fs_edge_t *const chld)
+ 					"directory - ignored", file);
+ 				continue;
+ 			}
+-			if(chld->is_file != is_file) {
++			if(!issymlink && (chld->is_file != is_file)) {
+ 				LogMsg(0, RS_RET_ERR, LOG_WARNING,
+ 					"imfile: '%s' is %s but %s expected - ignored",
+ 					file, (is_file) ? "FILE" : "DIRECTORY",
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1696686-imjournal-fsync.patch b/SOURCES/rsyslog-8.24.0-rhbz1696686-imjournal-fsync.patch
new file mode 100644
index 0000000..0f30b02
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1696686-imjournal-fsync.patch
@@ -0,0 +1,104 @@
+From 628d791b26062945fad4afa5985f6b84f46f16d4 Mon Sep 17 00:00:00 2001
+From: Jiri Vymazal <jvymazal@redhat.com>
+Date: Tue, 23 Jul 2019 11:28:31 +0200
+Subject: [PATCH] Add "fsync" option for imjournal
+
+The new option makes it possible to force a physical write of the stateFile
+to persistent storage, ensuring we do not lose/duplicate messages
+in case of a hard crash or power loss.
+---
+ plugins/imjournal/imjournal.c | 38 +++++++++++++++----
+ 1 file changed, 30 insertions(+), 8 deletions(-)
+
+diff --git a/plugins/imjournal/imjournal.c b/plugins/imjournal/imjournal.c
+index 5739bf408c..6c8829243a 100644
+--- a/plugins/imjournal/imjournal.c
++++ b/plugins/imjournal/imjournal.c
+@@ -24,6 +24,7 @@
+ #include "config.h"
+ #include "rsyslog.h"
+ #include <stdio.h>
++#include <dirent.h>
+ #include <assert.h>
+ #include <string.h>
+ #include <stdarg.h>
+@@ -81,6 +82,7 @@ static struct configSettings_s {
+ 	int bUseJnlPID;
+ 	char *dfltTag;
+ 	int bWorkAroundJournalBug;
++	int bFsync;
+ } cs;
+ 
+ static rsRetVal facilityHdlr(uchar **pp, void *pVal);
+@@ -437,7 +437,7 @@ persistJournalState(int trySave)
+ persistJournalState(int trySave)
+ {
+ 	DEFiRet;
+-	FILE *sf; /* state file */
++	FILE *sf = NULL; /* state file */
+ 	char tmp_sf[MAXFNAME];
+ 	int ret = 0;
+ 
+@@ -468,13 +468,6 @@ persistJournalState(int trySave)
+ 	ret = fputs(last_cursor, sf);
+ 	if (ret < 0) {
+ 		errmsg.LogError(errno, RS_RET_IO_ERROR, "imjournal: failed to save cursor to: '%s'", tmp_sf);
+-		ret = fclose(sf);
+-		ABORT_FINALIZE(RS_RET_IO_ERROR);
+-	}
+-
+-	ret = fclose(sf);
+-	if (ret < 0) {
+-		errmsg.LogError(errno, RS_RET_IO_ERROR, "imjournal: fclose() failed for path: '%s'", tmp_sf);
+ 		ABORT_FINALIZE(RS_RET_IO_ERROR);
+ 	}
+ 
+@@ -484,7 +477,30 @@ persistJournalState(int trySave)
+ 		ABORT_FINALIZE(RS_RET_IO_ERROR);
+ 	}
+ 
++	if (cs.bFsync) {
++		if (fsync(fileno(sf)) != 0) {
++			LogError(errno, RS_RET_IO_ERROR, "imjournal: fsync on '%s' failed", cs.stateFile);
++			ABORT_FINALIZE(RS_RET_IO_ERROR);
++		}
++		/* In order to guarantee physical write we need to force parent sync as well */
++		DIR *wd;
++		if (!(wd = opendir((char *)glbl.GetWorkDir()))) {
++			LogError(errno, RS_RET_IO_ERROR, "imjournal: failed to open '%s' directory", glbl.GetWorkDir());
++			ABORT_FINALIZE(RS_RET_IO_ERROR);
++		}
++		if (fsync(dirfd(wd)) != 0) {
++			LogError(errno, RS_RET_IO_ERROR, "imjournal: fsync on '%s' failed", glbl.GetWorkDir());
++			ABORT_FINALIZE(RS_RET_IO_ERROR);
++		}
++	}
++
+ finalize_it:
++	if (sf != NULL) {
++		if (fclose(sf) == EOF) {
++			LogError(errno, RS_RET_IO_ERROR, "imjournal: fclose() failed for path: '%s'", tmp_sf);
++			iRet = RS_RET_IO_ERROR;
++		}
++	}
+ 	RETiRet;
+ }
+ 
+@@ -746,6 +747,8 @@ CODESTARTbeginCnfLoad
+ 	cs.iDfltFacility = DFLT_FACILITY;
+ 	cs.dfltTag = NULL;
+ 	cs.bUseJnlPID = 0;
++	cs.bWorkAroundJournalBug = 0;
++	cs.bFsync = 0;
+ ENDbeginCnfLoad
+ 
+ 
+@@ -943,6 +963,8 @@ CODESTARTsetModCnf
+ 			cs.dfltTag = (char *)es_str2cstr(pvals[i].val.d.estr, NULL);
+ 		} else if (!strcmp(modpblk.descr[i].name, "workaroundjournalbug")) {
+ 			cs.bWorkAroundJournalBug = (int) pvals[i].val.d.n;
++		} else if (!strcmp(modpblk.descr[i].name, "fsync")) {
++			cs.bFsync = (int) pvals[i].val.d.n;
+ 		} else {
+ 			dbgprintf("imjournal: program error, non-handled "
+ 				"param '%s' in beginCnfLoad\n", modpblk.descr[i].name);
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1725067-imjournal-memleak.patch b/SOURCES/rsyslog-8.24.0-rhbz1725067-imjournal-memleak.patch
new file mode 100644
index 0000000..b44b030
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1725067-imjournal-memleak.patch
@@ -0,0 +1,49 @@
+From 920c28ff705aac74f389b4613815b14b9482e497 Mon Sep 17 00:00:00 2001
+From: Jiri Vymazal <jvymazal@redhat.com>
+Date: Mon, 21 Jan 2019 10:58:03 +0100
+Subject: [PATCH] Added missing free() calls of received journal cursor
+
+---
+ plugins/imjournal/imjournal.c | 7 +++++--
+ 1 file changed, 5 insertions(+), 2 deletions(-)
+
+diff --git a/plugins/imjournal/imjournal.c b/plugins/imjournal/imjournal.c
+index a85e521003..f5c2be4b6e 100644
+--- a/plugins/imjournal/imjournal.c
++++ b/plugins/imjournal/imjournal.c
+@@ -411,8 +411,7 @@ readjournal(void)
+ 
+ 	if (cs.bWorkAroundJournalBug) {
+ 		/* save journal cursor (at this point we can be sure it is valid) */
+-		sd_journal_get_cursor(j, &c);
+-		if (c) {
++		if (!sd_journal_get_cursor(j, &c)) {
+ 			free(last_cursor);
+ 			last_cursor = c;
+ 		}
+@@ -444,7 +443,9 @@ persistJournalState(void)
+ 		if (!last_cursor)
+ 			ABORT_FINALIZE(RS_RET_OK);
+ 	} else if (trySave) {
++		free(last_cursor);
+ 		if ((ret = sd_journal_get_cursor(j, &last_cursor))) {
++			last_cursor = NULL;
+ 			LogError(-ret, RS_RET_ERR, "imjournal: sd_journal_get_cursor() failed");
+ 			ABORT_FINALIZE(RS_RET_ERR);
+ 		}
+@@ -592,6 +593,7 @@ loadJournalState(void)
+ 							iRet = RS_RET_ERR;
+ 						}
+ 					} 
++					free(tmp_cursor);
+ 				}
+ 			} else {
+ 				errmsg.LogError(0, RS_RET_IO_ERROR, "imjournal: "
+@@ -748,6 +750,7 @@ BEGINfreeCnf
+ CODESTARTfreeCnf
+ 	free(cs.stateFile);
+ 	free(cs.dfltTag);
++	free(last_cursor);
+ ENDfreeCnf
+ 
+ /* open journal */
diff --git a/SOURCES/rsyslog-8.24.0-rhbz1744682-ratelimiter-segfault.patch b/SOURCES/rsyslog-8.24.0-rhbz1744682-ratelimiter-segfault.patch
new file mode 100644
index 0000000..6089930
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-rhbz1744682-ratelimiter-segfault.patch
@@ -0,0 +1,41 @@
+From b54769b4d8371ce1d60e3c43172a445336ec79b6 Mon Sep 17 00:00:00 2001
+From: Rainer Gerhards <rgerhards@adiscon.com>
+Date: Mon, 24 Sep 2018 13:27:26 +0200
+Subject: [PATCH] bugfix imfile: segfault in ratelimiter
+
+imfile crashes inside rate limit processing, often when log
+files are rotated. However, this could occur in any case where
+the monitored file was closed by imfile; rotation is just
+the most probable cause (moving the file to another
+directory or deleting it can also trigger the same issue, for
+example). The root cause was an invalid sequence of operations.
+
+closes https://github.com/rsyslog/rsyslog/issues/3021
+---
+ plugins/imfile/imfile.c | 8 +++---
+ 1 file changed, 3 insertions(+), 3 deletions(-)
+
+diff --git a/plugins/imfile/imfile.c b/plugins/imfile/imfile.c
+index e710f7c44c..f4a4ef9b72 100644
+--- a/plugins/imfile/imfile.c
++++ b/plugins/imfile/imfile.c
+@@ -915,9 +915,6 @@ act_obj_destroy(act_obj_t *const act, const int is_deleted)
+ 			}
+ 		}
+ 	}
+-	if(act->ratelimiter != NULL) {
+-		ratelimitDestruct(act->ratelimiter);
+-	}
+ 	if(act->pStrm != NULL) {
+ 		const instanceConf_t *const inst = act->edge->instarr[0];// TODO: same file, multiple instances?
+ 		pollFile(act); /* get any left-over data */
+@@ -934,6 +931,9 @@ act_obj_destroy(act_obj_t *const act, const int is_deleted)
+ 			unlink((char*)statefn);
+ 		}
+ 	}
++	if(act->ratelimiter != NULL) {
++		ratelimitDestruct(act->ratelimiter);
++	}
+ 	#ifdef HAVE_INOTIFY_INIT
+ 	if(act->wd != -1) {
+ 		wdmapDel(act->wd);
diff --git a/SOURCES/rsyslog-8.24.0-sd-service.patch b/SOURCES/rsyslog-8.24.0-sd-service.patch
new file mode 100644
index 0000000..3018ada
--- /dev/null
+++ b/SOURCES/rsyslog-8.24.0-sd-service.patch
@@ -0,0 +1,30 @@
+From fc47fd36a8549fae46ab7dbff31d542c829c1004 Mon Sep 17 00:00:00 2001
+From: Radovan Sroka <rsroka@redhat.com>
+Date: Mon, 21 Nov 2016 16:49:48 +0100
+Subject: [PATCH 1/4] Rebased from: Patch0: rsyslog-7.4.1-sd-service.patch
+
+Resolves:
+	no bugzilla
+---
+ rsyslog.service.in | 5 ++++-
+ 1 file changed, 4 insertions(+), 1 deletion(-)
+
+diff --git a/rsyslog.service.in b/rsyslog.service.in
+index cb629ee..74d2149 100644
+--- a/rsyslog.service.in
++++ b/rsyslog.service.in
+@@ -6,7 +6,10 @@ Documentation=http://www.rsyslog.com/doc/
+ 
+ [Service]
+ Type=notify
+-ExecStart=@sbindir@/rsyslogd -n
++EnvironmentFile=-/etc/sysconfig/rsyslog
++ExecStart=@sbindir@/rsyslogd -n $SYSLOGD_OPTIONS
++Restart=on-failure
++UMask=0066
+ StandardOutput=null
+ Restart=on-failure
+ 
+-- 
+2.7.4
+
diff --git a/SOURCES/rsyslog.conf b/SOURCES/rsyslog.conf
new file mode 100644
index 0000000..735472d
--- /dev/null
+++ b/SOURCES/rsyslog.conf
@@ -0,0 +1,91 @@
+# rsyslog configuration file
+
+# For more information see /usr/share/doc/rsyslog-*/rsyslog_conf.html
+# If you experience problems, see http://www.rsyslog.com/doc/troubleshoot.html
+
+#### MODULES ####
+
+# The imjournal module below is now used as a message source instead of imuxsock.
+$ModLoad imuxsock # provides support for local system logging (e.g. via logger command)
+$ModLoad imjournal # provides access to the systemd journal
+#$ModLoad imklog # reads kernel messages (the same are read from journald)
+#$ModLoad immark  # provides --MARK-- message capability
+
+# Provides UDP syslog reception
+#$ModLoad imudp
+#$UDPServerRun 514
+
+# Provides TCP syslog reception
+#$ModLoad imtcp
+#$InputTCPServerRun 514
+
+
+#### GLOBAL DIRECTIVES ####
+
+# Where to place auxiliary files
+$WorkDirectory /var/lib/rsyslog
+
+# Use default timestamp format
+$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
+
+# File syncing capability is disabled by default. This feature is usually not required,
+# not useful and an extreme performance hit
+#$ActionFileEnableSync on
+
+# Include all config files in /etc/rsyslog.d/
+$IncludeConfig /etc/rsyslog.d/*.conf
+
+# Turn off message reception via local log socket;
+# local messages are retrieved through imjournal now.
+$OmitLocalLogging on
+
+# File to store the position in the journal
+$IMJournalStateFile imjournal.state
+
+
+#### RULES ####
+
+# Log all kernel messages to the console.
+# Logging much else clutters up the screen.
+#kern.*                                                 /dev/console
+
+# Log anything (except mail) of level info or higher.
+# Don't log private authentication messages!
+*.info;mail.none;authpriv.none;cron.none                /var/log/messages
+
+# The authpriv file has restricted access.
+authpriv.*                                              /var/log/secure
+
+# Log all the mail messages in one place.
+mail.*                                                  -/var/log/maillog
+
+
+# Log cron stuff
+cron.*                                                  /var/log/cron
+
+# Everybody gets emergency messages
+*.emerg                                                 :omusrmsg:*
+
+# Save news errors of level crit and higher in a special file.
+uucp,news.crit                                          /var/log/spooler
+
+# Save boot messages also to boot.log
+local7.*                                                /var/log/boot.log
+
+
+# ### begin forwarding rule ###
+# The statements between the begin ... end markers define a SINGLE
+# forwarding rule. They belong together; do NOT split them. If you create
+# multiple forwarding rules, duplicate the whole block!
+# Remote Logging (we use TCP for reliable delivery)
+#
+# An on-disk queue is created for this action. If the remote host is
+# down, messages are spooled to disk and sent when it is up again.
+#$ActionQueueFileName fwdRule1 # unique name prefix for spool files
+#$ActionQueueMaxDiskSpace 1g   # 1gb space limit (use as much as possible)
+#$ActionQueueSaveOnShutdown on # save messages to disk on shutdown
+#$ActionQueueType LinkedList   # run asynchronously
+#$ActionResumeRetryCount -1    # infinite retries if host is down
+# remote host is: name/ip:port, e.g. 192.168.0.1:514, port optional
+#*.* @@remote-host:514
+# ### end of the forwarding rule ###
diff --git a/SOURCES/rsyslog.log b/SOURCES/rsyslog.log
new file mode 100644
index 0000000..e4b15af
--- /dev/null
+++ b/SOURCES/rsyslog.log
@@ -0,0 +1,12 @@
+/var/log/cron
+/var/log/maillog
+/var/log/messages
+/var/log/secure
+/var/log/spooler
+{
+    missingok
+    sharedscripts
+    postrotate
+	/bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true
+    endscript
+}
diff --git a/SOURCES/rsyslog.sysconfig b/SOURCES/rsyslog.sysconfig
new file mode 100644
index 0000000..bc65731
--- /dev/null
+++ b/SOURCES/rsyslog.sysconfig
@@ -0,0 +1,5 @@
+# Options for rsyslogd
+# Syslogd options are deprecated since rsyslog v3.
+# If you want to use them, switch to compatibility mode 2 via "-c 2".
+# See rsyslogd(8) for more details
+SYSLOGD_OPTIONS=""
diff --git a/SPECS/rsyslog.spec b/SPECS/rsyslog.spec
new file mode 100644
index 0000000..3274359
--- /dev/null
+++ b/SPECS/rsyslog.spec
@@ -0,0 +1,1666 @@
+%define rsyslog_statedir %{_sharedstatedir}/rsyslog
+%define rsyslog_pkidir %{_sysconfdir}/pki/rsyslog
+%define rsyslog_docdir %{_docdir}/%{name}-%{version}
+%if 0%{?rhel} >= 7
+%global want_hiredis 0
+%global want_mongodb 0
+%global want_rabbitmq 0
+%else
+%global want_hiredis 1
+%global want_mongodb 1
+%global want_rabbitmq 1
+%endif
+
+Summary: Enhanced system logging and kernel message trapping daemon
+Name: rsyslog
+Version: 8.24.0
+Release: 47%{?dist}
+License: (GPLv3+ and ASL 2.0)
+Group: System Environment/Daemons
+URL: http://www.rsyslog.com/
+Source0: http://www.rsyslog.com/files/download/rsyslog/%{name}-%{version}.tar.gz
+Source1: http://www.rsyslog.com/files/download/rsyslog/%{name}-doc-%{version}.tar.gz
+Source2: rsyslog.conf
+Source3: rsyslog.sysconfig
+Source4: rsyslog.log
+
+BuildRequires: automake
+BuildRequires: autoconf
+BuildRequires: libtool
+BuildRequires: bison
+BuildRequires: flex
+BuildRequires: libfastjson-devel
+BuildRequires: libestr-devel >= 0.1.9
+BuildRequires: libuuid-devel
+BuildRequires: pkgconfig
+BuildRequires: python-docutils
+BuildRequires: python-sphinx
+# it depends on rhbz#1419228
+BuildRequires: systemd-devel >= 219-39
+BuildRequires: zlib-devel
+
+Requires: logrotate >= 3.5.2
+Requires: bash >= 2.0
+Requires: libestr >= 0.1.9
+Requires(post): systemd
+Requires(preun): systemd
+Requires(postun): systemd
+
+Provides: syslog
+Obsoletes: sysklogd < 1.5-11
+
+# tweak the upstream service file to honour configuration from /etc/sysconfig/rsyslog
+Patch0: rsyslog-8.24.0-sd-service.patch
+Patch1: rsyslog-8.24.0-msg_c_nonoverwrite_merge.patch
+#Patch2: rsyslog-8.24.0-rhbz1188503-imjournal-default-tag.patch
+
+Patch3: rsyslog-8.24.0-rhbz1303617-imfile-wildcards.patch
+Patch4: rsyslog-8.24.0-doc-polling-by-default.patch
+Patch5: rsyslog-8.24.0-rhbz1399569-flushontxend.patch
+Patch6: rsyslog-8.24.0-rhbz1400594-tls-config.patch
+Patch7: rsyslog-8.24.0-rhbz1401870-watermark.patch
+
+Patch8: rsyslog-8.24.0-rhbz1403831-missing-cmd-line-switches.patch
+Patch9: rsyslog-8.24.0-rhbz1245194-imjournal-ste-file.patch
+Patch10: rsyslog-8.24.0-doc-rhbz1507028-recover_qi.patch
+Patch11: rsyslog-8.24.0-rhbz1088021-systemd-time-backwards.patch
+Patch12: rsyslog-8.24.0-rhbz1403907-imudp-deprecated-parameter.patch
+Patch13: rsyslog-8.24.0-rhbz1196230-ratelimit-add-source.patch
+Patch14: rsyslog-8.24.0-rhbz1422789-missing-chdir-w-chroot.patch
+Patch15: rsyslog-8.24.0-rhbz1422414-glbDoneLoadCnf-segfault.patch
+Patch16: rsyslog-8.24.0-rhbz1427828-set-unset-not-checking-varName.patch
+
+Patch17: rsyslog-8.24.0-rhbz1427821-backport-num2ipv4.patch
+Patch18: rsyslog-8.24.0-rhbz1427821-str2num-emty-string-handle.patch
+
+Patch19: rsyslog-8.24.0-rhbz1165236-snmp-mib.patch
+Patch20: rsyslog-8.24.0-rhbz1419228-journal-switch-persistent.patch
+Patch21: rsyslog-8.24.0-rhbz1431616-pmrfc3164sd-backport.patch
+
+Patch22: rsyslog-8.24.0-rhbz1056548-getaddrinfo.patch
+
+Patch23: rsyslog-8.24.0-rhbz1401456-sd-service-network.patch
+Patch24: rsyslog-8.24.0-doc-rhbz1459896-queues-defaults.patch
+Patch25: rsyslog-8.24.0-rhbz1497985-journal-reloaded-message.patch
+Patch26: rsyslog-8.24.0-rhbz1462160-set.statement-crash.patch
+Patch27: rsyslog-8.24.0-rhbz1488186-fixed-nullptr-check.patch
+Patch28: rsyslog-8.24.0-rhbz1505103-omrelp-rebindinterval.patch
+
+Patch29: rsyslog-8.24.0-rhbz1538372-imjournal-duplicates.patch
+Patch30: rsyslog-8.24.0-rhbz1511485-deserialize-property-name.patch
+
+Patch31: rsyslog-8.24.0-rhbz1512551-caching-sockaddr.patch
+Patch32: rsyslog-8.24.0-rhbz1531295-imfile-rewrite-with-symlink.patch
+Patch33: rsyslog-8.24.0-rhbz1582517-buffer-overflow-memcpy-in-parser.patch
+Patch34: rsyslog-8.24.0-rhbz1591819-msg-loss-shutdown.patch
+Patch35: rsyslog-8.24.0-rhbz1539193-mmkubernetes-new-plugin.patch
+Patch36: rsyslog-8.24.0-rhbz1507145-omelastic-client-cert.patch
+Patch37: rsyslog-8.24.0-doc-rhbz1507145-omelastic-client-cert-and-config.patch
+Patch38: rsyslog-8.24.0-rhbz1565214-omelasticsearch-replace-cJSON-with-libfastjson.patch
+Patch39: rsyslog-8.24.0-rhbz1565214-omelasticsearch-write-op-types-bulk-rejection-retries.patch
+Patch40: rsyslog-8.24.0-doc-rhbz1539193-mmkubernetes-new-plugin.patch
+Patch41: rsyslog-8.24.0-doc-rhbz1538372-imjournal-duplicates.patch
+Patch42: rsyslog-8.24.0-rhbz1597264-man-page-fix.patch
+Patch43: rsyslog-8.24.0-rhbz1559408-async-writer.patch
+Patch44: rsyslog-8.24.0-rhbz1600462-wrktable-realloc-null.patch
+
+Patch45: rsyslog-8.24.0-rhbz1632659-omfwd-mem-corruption.patch
+Patch46: rsyslog-8.24.0-rhbz1649250-imfile-rotation.patch
+Patch47: rsyslog-8.24.0-rhbz1658288-imptcp-octet-segfault.patch
+Patch48: rsyslog-8.24.0-rhbz1622767-mmkubernetes-stop-on-pod-delete.patch
+Patch49: rsyslog-8.24.0-rhbz1685901-symlink-error-flood.patch
+Patch50: rsyslog-8.24.0-rhbz1632211-journal-cursor-fix.patch
+Patch51: rsyslog-8.24.0-rhbz1666365-internal-messages-memory-leak.patch
+Patch52: rsyslog-8.24.0-doc-rhbz1625935-mmkubernetes-CRI-O.patch
+Patch53: rsyslog-8.24.0-rhbz1656860-imfile-buffer-overflow.patch
+Patch54: rsyslog-8.24.0-rhbz1725067-imjournal-memleak.patch
+
+Patch55: rsyslog-8.24.0-doc-rhbz1696686-imjournal-fsync.patch
+Patch56: rsyslog-8.24.0-rhbz1696686-imjournal-fsync.patch
+Patch57: rsyslog-8.24.0-rhbz1684236-omelastic-sigsegv.patch
+Patch58: rsyslog-8.24.0-doc-rhbz1309698-imudp-case-sensitive-option.patch
+Patch59: rsyslog-8.24.0-rhbz1309698-imudp-case-sensitive-option.patch
+Patch60: rsyslog-8.24.0-rhbz1627799-cert-chains.patch
+Patch61: rsyslog-8.24.0-rhbz1744682-ratelimiter-segfault.patch
+
+%package crypto
+Summary: Encryption support
+Group: System Environment/Daemons
+Requires: %name = %version-%release
+BuildRequires: libgcrypt-devel
+
+%package doc
+Summary: HTML Documentation for rsyslog
+Group: Documentation
+#no reason to have arched documentation
+BuildArch: noarch
+
+%package elasticsearch
+Summary: ElasticSearch output module for rsyslog
+Group: System Environment/Daemons
+Requires: %name = %version-%release
+BuildRequires: libcurl-devel
+
+%if %{want_hiredis}
+%package hiredis
+Summary: Redis support for rsyslog
+Group: System Environment/Daemons
+Requires: %name = %version-%release
+BuildRequires: hiredis-devel
+%endif
+
+%package mmjsonparse
+Summary: JSON enhanced logging support
+Group: System Environment/Daemons
+Requires: %name = %version-%release
+
+%package mmnormalize
+Summary: Log normalization support for rsyslog
+Group: System Environment/Daemons
+Requires: %name = %version-%release
+BuildRequires: libee-devel
+BuildRequires: liblognorm-devel
+
+%package mmaudit
+Summary: Message modification module supporting Linux audit format
+Group: System Environment/Daemons
+Requires: %name = %version-%release
+
+%package mmsnmptrapd
+Summary: Message modification module for snmptrapd generated messages
+Group: System Environment/Daemons
+Requires: %name = %version-%release
+
+%package libdbi
+Summary: Libdbi database support for rsyslog
+Group: System Environment/Daemons
+Requires: %name = %version-%release
+BuildRequires: libdbi-devel
+
+%package mysql
+Summary: MySQL support for rsyslog
+Group: System Environment/Daemons
+Requires: %name = %version-%release
+BuildRequires: mysql >= 4.0
+BuildRequires: mysql-devel >= 4.0
+
+%if %{want_mongodb}
+%package mongodb
+Summary: MongoDB support for rsyslog
+Group: System Environment/Daemons
+Requires: %name = %version-%release
+BuildRequires: libmongo-client-devel
+%endif
+
+%package pgsql
+Summary: PostgreSQL support for rsyslog
+Group: System Environment/Daemons
+Requires: %name = %version-%release
+BuildRequires: postgresql-devel
+
+%if %{want_rabbitmq}
+%package rabbitmq
+Summary: RabbitMQ support for rsyslog
+Group: System Environment/Daemons
+Requires: %name = %version-%release
+BuildRequires: librabbitmq-devel >= 0.2
+%endif
+
+%package gssapi
+Summary: GSSAPI authentication and encryption support for rsyslog
+Group: System Environment/Daemons
+Requires: %name = %version-%release
+BuildRequires: krb5-devel
+
+%package relp
+Summary: RELP protocol support for rsyslog
+Group: System Environment/Daemons
+Requires: %name = %version-%release
+Requires: librelp >= 1.0.3
+BuildRequires: librelp-devel >= 1.0.3
+
+%package gnutls
+Summary: TLS protocol support for rsyslog
+Group: System Environment/Daemons
+Requires: %name = %version-%release
+BuildRequires: gnutls-devel
+
+%package snmp
+Summary: SNMP protocol support for rsyslog
+Group: System Environment/Daemons
+Requires: %name = %version-%release
+BuildRequires: net-snmp-devel
+
+%package udpspoof
+Summary: Provides the omudpspoof module
+Group: System Environment/Daemons
+Requires: %name = %version-%release
+BuildRequires: libnet-devel
+
+%package kafka
+Summary: Provides kafka support for rsyslog
+Group: System Environment/Daemons
+Requires: %name = %version-%release
+BuildRequires: librdkafka-devel
+
+%package mmkubernetes
+Summary: Provides the mmkubernetes module
+Group: System Environment/Daemons
+Requires: %name = %version-%release
+BuildRequires: libcurl-devel
+
+%description
+Rsyslog is an enhanced, multi-threaded syslog daemon. It supports MySQL,
+syslog/TCP, RFC 3195, permitted sender lists, filtering on any message part,
+and fine grain output format control. It is compatible with stock sysklogd
+and can be used as a drop-in replacement. Rsyslog is simple to set up, with
+advanced features suitable for enterprise-class, encryption-protected syslog
+relay chains.
+
+%description crypto
+This package contains a module providing log file encryption and a
+command line tool to process encrypted logs.
+
+%description doc
+This subpackage contains documentation for rsyslog.
+
+%description elasticsearch
+This module provides the capability for rsyslog to feed logs directly into
+Elasticsearch.
+
+%if %{want_hiredis}
+%description hiredis
+This module provides output to Redis.
+%endif
+
+%description mmjsonparse
+This module provides the capability to recognize and parse JSON enhanced
+syslog messages.
+
+%description mmnormalize
+This module provides the capability to normalize log messages via liblognorm.
+
+%description mmaudit
+This module provides message modification supporting Linux audit format
+in various settings.
+
+%description mmsnmptrapd
+This message modification module takes messages generated from snmptrapd and
+modifies them so that they look like they originated from the real originator.
+
+%description libdbi
+This module supports a large number of database systems via
+libdbi. Libdbi abstracts the database layer and provides drivers for
+many systems. Drivers are available via the libdbi-drivers project.
+
+%description mysql
+The rsyslog-mysql package contains a dynamic shared object that will add
+MySQL database support to rsyslog.
+
+%if %{want_mongodb}
+%description mongodb
+The rsyslog-mongodb package contains a dynamic shared object that will add
+MongoDB database support to rsyslog.
+%endif
+
+%description pgsql
+The rsyslog-pgsql package contains a dynamic shared object that will add
+PostgreSQL database support to rsyslog.
+
+%if %{want_rabbitmq}
+%description rabbitmq
+This module allows rsyslog to send messages to a RabbitMQ server.
+%endif
+
+%description gssapi
+The rsyslog-gssapi package contains the rsyslog plugins which support GSSAPI
+authentication and secure connections. GSSAPI is commonly used for Kerberos
+authentication.
+
+%description relp
+The rsyslog-relp package contains the rsyslog plugins that provide
+the ability to receive syslog messages via the reliable RELP
+protocol.
+
+%description gnutls
+The rsyslog-gnutls package contains the rsyslog plugins that provide the
+ability to receive syslog messages via upcoming syslog-transport-tls
+IETF standard protocol.
+
+%description snmp
+The rsyslog-snmp package contains the rsyslog plugin that provides the
+ability to send syslog messages as SNMPv1 and SNMPv2c traps.
+
+%description udpspoof
+This module is similar to the regular UDP forwarder, but permits
+spoofing of the sender address. Also, it can cycle through a number
+of source ports.
+
+%description kafka
+The rsyslog-kafka package provides a module for Apache Kafka output.
+
+%description mmkubernetes
+The rsyslog-mmkubernetes package provides a module for adding Kubernetes
+container metadata.
+
+%prep
+# set up rsyslog-doc sources
+%setup -q -a 1 -T -c
+%patch4 -p1 
+%patch10 -p1
+%patch24 -p1
+%patch37 -p1
+%patch40 -p1
+%patch41 -p1
+%patch52 -p1
+%patch55 -p1
+%patch58 -p1
+#regenerate the docs
+mv build/searchindex.js searchindex_backup.js
+sphinx-build -b html source build
+#clean up
+mv searchindex_backup.js build/searchindex.js
+rm -r LICENSE README.md build.sh source build/objects.inv
+mv build doc
+
+# set up rsyslog sources
+%setup -q -D
+
+%patch0 -p1 -b .service
+%patch1 -p1 -b .msg_merge
+#%patch2 is obsoleted by patch25
+%patch3 -p1 -b .wildcards
+#%patch4 is applied right after doc setup 
+
+%patch5 -p1 -b .flushontxend
+%patch6 -p1 -b .tls-config
+%patch7 -p1 -b .watermark 
+
+%patch8 -p1 -b .missg-cmd-line-switches
+%patch9 -p1 -b .ste-file
+#%patch10 is applied right after doc setup 
+%patch11 -p1 -b .systemd-time
+%patch12 -p1 -b .imudp-deprecated-parameter
+%patch13 -p1 -b .ratelimit-add-source
+%patch14 -p1 -b .missing-chdir-w-chroot
+%patch15 -p1 -b .glbDoneLoadCnf-segfault
+%patch16 -p1 -b .set-unset-check-varName
+
+%patch17 -p1 -b .num2ipv4
+%patch18 -p1 -b .str2num-handle-emty-strings
+%patch19 -p1 -b .snmp-mib
+%patch20 -p1 -b .journal-switch
+%patch21 -p1 -b .pmrfc3164sd
+%patch22 -p1 -b .getaddrinfo
+
+%patch23 -p1 -b .sd-service-network
+#%patch24 is applied right after doc setup
+%patch25 -p1 -b .journal-reloaded
+%patch26 -p1 -b .set-statement-crash
+%patch27 -p1 -b .nullptr-check
+%patch28 -p1 -b .rebindinterval
+
+%patch29 -p1 -b .imjournal-duplicates
+%patch30 -p1 -b .property-deserialize
+
+%patch31 -p1 -b .caching-sockaddr
+%patch32 -p1 -b .imfile-symlink
+%patch33 -p1 -b .buffer-overflow
+%patch34 -p1 -b .msg-loss-shutdown
+%patch35 -p1 -b .kubernetes-metadata
+%patch36 -p1 -b .omelasticsearch-cert
+#%patch37 is applied right after doc setup
+%patch38 -p1 -b .omelasticsearch-libfastjson
+%patch39 -p1 -b .omelasticsearch-bulk-rejection
+#%patch40 is applied right after doc setup
+#%patch41 is applied right after doc setup
+%patch42 -p1 -b .manpage
+%patch43 -p1 -b .async-writer
+%patch44 -p1 -b .null-realloc-chk
+
+%patch45 -p1 -b .omfwd-mem-corrupt
+%patch46 -p1 -b .imfile-rotation
+%patch47 -p1 -b .imptcp-octet-count
+%patch48 -p1 -b .mmkubernetes-stop
+%patch49 -p1 -b .symlink-err-flood
+%patch50 -p1 -b .imjournal-cursor
+%patch51 -p1 -b .internal-msg-memleak
+#%patch52 is applied right after doc setup
+%patch53 -p1 -b .imfile-buffer-overflow
+%patch54 -p1 -b .imjournal-memleak
+
+#%patch55 is applied right after doc setup
+%patch56 -p1 -b .imjournal-fsync
+%patch57 -p1 -b .elastic-sigsegv
+#%patch58 is applied right after doc setup
+%patch59 -p1 -b .udp-case-sensitive
+%patch60 -p1 -b .cert-chains
+%patch61 -p1 -b .ratelimit-crash
+
+autoreconf 
+
+%build
+%ifarch sparc64
+# sparc64 needs a big PIE
+export CFLAGS="$RPM_OPT_FLAGS -fPIE -DPATH_PIDFILE=\\\"/var/run/syslogd.pid\\\""
+export LDFLAGS="-pie -Wl,-z,relro -Wl,-z,now"
+%else
+export CFLAGS="$RPM_OPT_FLAGS -fpie -DPATH_PIDFILE=\\\"/var/run/syslogd.pid\\\""
+export LDFLAGS="-pie -Wl,-z,relro -Wl,-z,now"
+%endif
+
+%if %{want_hiredis}
+# the hiredis-devel package doesn't provide a pkg-config file
+export HIREDIS_CFLAGS=-I/usr/include/hiredis
+export HIREDIS_LIBS=-L%{_libdir}
+%endif
+sed -i 's/%{version}/%{version}-%{release}/g' configure.ac
+%configure \
+	--prefix=/usr \
+	--disable-static \
+	--disable-testbench \
+	--disable-liblogging-stdlog \
+	--enable-elasticsearch \
+	--enable-generate-man-pages \
+	--enable-gnutls \
+	--enable-gssapi-krb5 \
+	--enable-imdiag \
+	--enable-imfile \
+	--enable-imjournal \
+	--enable-impstats \
+	--enable-imptcp \
+	--enable-libdbi \
+	--enable-mail \
+	--enable-mmanon \
+	--enable-mmaudit \
+	--enable-mmcount \
+	--enable-mmjsonparse \
+	--enable-mmnormalize \
+	--enable-mmsnmptrapd \
+	--enable-mmutf8fix \
+	--enable-mmkubernetes \
+	--enable-mysql \
+%if %{want_hiredis}
+	--enable-omhiredis \
+%endif
+	--enable-omjournal \
+%if %{want_mongodb}
+	--enable-ommongodb \
+%endif
+	--enable-omprog \
+%if %{want_rabbitmq}
+	--enable-omrabbitmq \
+%endif
+	--enable-omruleset \
+	--enable-omstdout \
+	--enable-omudpspoof \
+	--enable-omuxsock \
+	--enable-omkafka \
+	--enable-pgsql \
+	--enable-pmaixforwardedfrom \
+	--enable-pmcisconames \
+	--enable-pmlastmsg \
+	--enable-pmrfc3164sd \
+	--enable-pmsnare \
+	--enable-relp \
+	--enable-snmp \
+	--enable-unlimited-select \
+	--enable-usertools \
+
+make
+
+%install
+make DESTDIR=%{buildroot} install
+
+install -d -m 755 %{buildroot}%{_sysconfdir}/sysconfig
+install -d -m 755 %{buildroot}%{_sysconfdir}/logrotate.d
+install -d -m 755 %{buildroot}%{_sysconfdir}/rsyslog.d
+install -d -m 700 %{buildroot}%{rsyslog_statedir}
+install -d -m 700 %{buildroot}%{rsyslog_pkidir}
+install -d -m 755 %{buildroot}%{rsyslog_docdir}/html
+
+install -p -m 644 %{SOURCE2} %{buildroot}%{_sysconfdir}/rsyslog.conf
+install -p -m 644 %{SOURCE3} %{buildroot}%{_sysconfdir}/sysconfig/rsyslog
+install -p -m 644 %{SOURCE4} %{buildroot}%{_sysconfdir}/logrotate.d/syslog
+install -p -m 644 plugins/ommysql/createDB.sql %{buildroot}%{rsyslog_docdir}/mysql-createDB.sql
+install -p -m 644 plugins/ompgsql/createDB.sql %{buildroot}%{rsyslog_docdir}/pgsql-createDB.sql
+# extract documentation
+cp -r doc/* %{buildroot}%{rsyslog_docdir}/html
+# get rid of libtool libraries
+rm -f %{buildroot}%{_libdir}/rsyslog/*.la
+# get rid of socket activation by default
+sed -i '/^Alias/s/^/;/;/^Requires=syslog.socket/s/^/;/' %{buildroot}%{_unitdir}/rsyslog.service
+
+# convert line endings from "\r\n" to "\n"
+cat tools/recover_qi.pl | tr -d '\r' > %{buildroot}%{_bindir}/rsyslog-recover-qi.pl
+
+%post
+for n in /var/log/{messages,secure,maillog,spooler}
+do
+	[ -f $n ] && continue
+	umask 066 && touch $n
+done
+%systemd_post rsyslog.service
+
+%preun
+%systemd_preun rsyslog.service
+
+%postun
+%systemd_postun_with_restart rsyslog.service
+
+%files
+%defattr(-,root,root,-)
+%doc AUTHORS COPYING* ChangeLog
+%exclude %{rsyslog_docdir}/html
+%exclude %{rsyslog_docdir}/mysql-createDB.sql
+%exclude %{rsyslog_docdir}/pgsql-createDB.sql
+%dir %{_libdir}/rsyslog
+%dir %{_sysconfdir}/rsyslog.d
+%dir %{rsyslog_statedir}
+%dir %{rsyslog_pkidir}
+%{_sbindir}/rsyslogd
+%attr(755,root,root) %{_bindir}/rsyslog-recover-qi.pl
+%{_mandir}/man5/rsyslog.conf.5.gz
+%{_mandir}/man8/rsyslogd.8.gz
+%{_unitdir}/rsyslog.service
+%config(noreplace) %{_sysconfdir}/rsyslog.conf
+%config(noreplace) %{_sysconfdir}/sysconfig/rsyslog
+%config(noreplace) %{_sysconfdir}/logrotate.d/syslog
+# plugins
+%{_libdir}/rsyslog/imdiag.so
+%{_libdir}/rsyslog/imfile.so
+%{_libdir}/rsyslog/imjournal.so
+%{_libdir}/rsyslog/imklog.so
+%{_libdir}/rsyslog/immark.so
+%{_libdir}/rsyslog/impstats.so
+%{_libdir}/rsyslog/imptcp.so
+%{_libdir}/rsyslog/imtcp.so
+%{_libdir}/rsyslog/imudp.so
+%{_libdir}/rsyslog/imuxsock.so
+%{_libdir}/rsyslog/lmnet.so
+%{_libdir}/rsyslog/lmnetstrms.so
+%{_libdir}/rsyslog/lmnsd_ptcp.so
+%{_libdir}/rsyslog/lmregexp.so
+%{_libdir}/rsyslog/lmstrmsrv.so
+%{_libdir}/rsyslog/lmtcpclt.so
+%{_libdir}/rsyslog/lmtcpsrv.so
+%{_libdir}/rsyslog/lmzlibw.so
+%{_libdir}/rsyslog/mmanon.so
+%{_libdir}/rsyslog/mmcount.so
+%{_libdir}/rsyslog/mmexternal.so
+%{_libdir}/rsyslog/mmutf8fix.so
+%{_libdir}/rsyslog/omjournal.so
+%{_libdir}/rsyslog/ommail.so
+%{_libdir}/rsyslog/omprog.so
+%{_libdir}/rsyslog/omruleset.so
+%{_libdir}/rsyslog/omstdout.so
+%{_libdir}/rsyslog/omtesting.so
+%{_libdir}/rsyslog/omuxsock.so
+%{_libdir}/rsyslog/pmaixforwardedfrom.so
+%{_libdir}/rsyslog/pmcisconames.so
+%{_libdir}/rsyslog/pmlastmsg.so
+%{_libdir}/rsyslog/pmrfc3164sd.so
+%{_libdir}/rsyslog/pmsnare.so
+
+%files crypto
+%defattr(-,root,root)
+%{_bindir}/rscryutil
+%{_mandir}/man1/rscryutil.1.gz
+%{_libdir}/rsyslog/lmcry_gcry.so
+
+%files doc
+%defattr(-,root,root)
+%doc %{rsyslog_docdir}/html
+
+%files elasticsearch
+%defattr(-,root,root)
+%{_libdir}/rsyslog/omelasticsearch.so
+
+%if %{want_hiredis}
+%files hiredis
+%defattr(-,root,root)
+%{_libdir}/rsyslog/omhiredis.so
+%endif
+
+%files libdbi
+%defattr(-,root,root)
+%{_libdir}/rsyslog/omlibdbi.so
+
+%files mmaudit
+%defattr(-,root,root)
+%{_libdir}/rsyslog/mmaudit.so
+
+%files mmjsonparse
+%defattr(-,root,root)
+%{_libdir}/rsyslog/mmjsonparse.so
+
+%files mmnormalize
+%defattr(-,root,root)
+%{_libdir}/rsyslog/mmnormalize.so
+
+%files mmsnmptrapd
+%defattr(-,root,root)
+%{_libdir}/rsyslog/mmsnmptrapd.so
+
+%files mysql
+%defattr(-,root,root)
+%doc %{rsyslog_docdir}/mysql-createDB.sql
+%{_libdir}/rsyslog/ommysql.so
+
+%if %{want_mongodb}
+%files mongodb
+%defattr(-,root,root)
+%{_bindir}/logctl
+%{_libdir}/rsyslog/ommongodb.so
+%endif
+
+%files pgsql
+%defattr(-,root,root)
+%doc %{rsyslog_docdir}/pgsql-createDB.sql
+%{_libdir}/rsyslog/ompgsql.so
+
+%if %{want_rabbitmq}
+%files rabbitmq
+%defattr(-,root,root)
+%{_libdir}/rsyslog/omrabbitmq.so
+%endif
+
+%files gssapi
+%defattr(-,root,root)
+%{_libdir}/rsyslog/lmgssutil.so
+%{_libdir}/rsyslog/imgssapi.so
+%{_libdir}/rsyslog/omgssapi.so
+
+%files relp
+%defattr(-,root,root)
+%{_libdir}/rsyslog/imrelp.so
+%{_libdir}/rsyslog/omrelp.so
+
+%files gnutls
+%defattr(-,root,root)
+%{_libdir}/rsyslog/lmnsd_gtls.so
+
+%files snmp
+%defattr(-,root,root)
+%{_libdir}/rsyslog/omsnmp.so
+
+%files udpspoof
+%defattr(-,root,root)
+%{_libdir}/rsyslog/omudpspoof.so
+
+%files kafka
+%{_libdir}/rsyslog/omkafka.so
+
+%files mmkubernetes
+%{_libdir}/rsyslog/mmkubernetes.so
+
+%changelog
+* Thu Sep 05 2019 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-47
+RHEL 7.8 ERRATUM
+- edited imfile truncation detection patch with regression fix
+  resolves: rhbz#1744856
+
+* Wed Aug 28 2019 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-46
+RHEL 7.8 ERRATUM
+- Support Intermediate Certificate Chains in rsyslog
+  resolves: rhbz#1627799
+- fixed WorkAroundJournalBug patch to not cause leaks
+  resolves: rhbz#1744617
+- added patch fixing possible segfault in rate-limiter
+  resolves: rhbz#1744682
+
+* Mon Aug 12 2019 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-45
+RHEL 7.8 ERRATUM
+- fixed fsync patch according to covscan results
+  resolves: rhbz#1696686
+
+* Fri Aug 09 2019 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-44
+RHEL 7.8 ERRATUM
+- added patch and doc-patch for new caseSensitive imUDP/TCP option
+  resolves: rhbz#1309698
+
+* Fri Aug 02 2019 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-41
+RHEL 7.8 ERRATUM
+- added patch and doc-patch with new "fsync" imjournal option
+  resolves: rhbz#1696686
+- added patch resolving omelasticsearch crash on "nothing" reply
+  resolves: rhbz#1684236
+
+* Mon Jul 01 2019 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-39
+RHEL 7.7.z ERRATUM
+- added patch resolving memory leaks in imjournal
+  resolves: rhbz#1725067
+
+* Mon Apr 08 2019 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-38
+RHEL 7.7 ERRATUM
+- added patch increasing max path size preventing buffer overflow
+  with too long paths
+  resolves: rhbz#1656860
+
+* Wed Mar 20 2019 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-37
+RHEL 7.7 ERRATUM
+- edited patch fixing mmkubernetes halt after pod deletion
+  (covscan found an issue in previous version)
+  resolves: rhbz#1622767
+- added patch stopping flooding logs with journald errors
+  resolves: rhbz#1632211
+- added patch stopping flooding logs with symlink false-positives
+  resolves: rhbz#1685901
+- added patch stopping memory leak when processing internal msgs
+  resolves: rhbz#1666365
+- added documentation patch with info about CRI-O to mmkubernetes
+  resolves: rhbz#1625935
+
+* Wed Feb 27 2019 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-36
+RHEL 7.7 ERRATUM
+- added patch fixing mmkubernetes halt after pod deletion
+  resolves: rhbz#1622767
+
+* Mon Jan 28 2019 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-35
+RHEL 7.7 ERRATUM
+- added patch fixing memory corruption in omfwd module
+  resolves: rhbz#1632659
+- added patch fixing imfile stopping monitoring after rotation
+  resolves: rhbz#1649250
+- added patch addressing imptcp CVE-2018-16881
+  resolves: rhbz#1658288
+
+* Tue Aug 07 2018 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-34
+RHEL 7.6 ERRATUM
+- updated imfile rewrite patch with parent name bugfix
+  resolves: rhbz#1531295
+
+* Tue Aug 07 2018 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-33
+RHEL 7.6 ERRATUM
+- updated imfile rewrite patch with extended symlink watching
+  resolves: rhbz#1531295
+- updated mmkubernetes patch to accept dots in pod name
+  resolves: rhbz#1539193
+
+* Fri Aug 03 2018 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-32
+RHEL 7.6 ERRATUM
+- updated imfile rewrite patch with no log on EACCES
+  resolves: rhbz#1531295
+- removed now needless build-deps
+
+* Mon Jul 30 2018 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-31
+RHEL 7.6 ERRATUM
+- added new patch fixing ompipe dropping messages when pipe full
+  resolves: rhbz#1591819
+- updated mmkubernetes patch to accept non-kubernetes containers
+  resolves: rhbz#1539193
+  resolves: rhbz#1609023
+- removed json-parsing patches as the bug is now fixed in liblognorm
+
+* Wed Jul 25 2018 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-30
+RHEL 7.6 ERRATUM
+- updated imfile rewrite patch with next bugfix
+  resolves: rhbz#1531295
+- updated imjournal duplicates patch making slower code optional
+  and added corresponding doc patch
+  resolves: rhbz#1538372
+
+* Mon Jul 23 2018 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-29
+RHEL 7.6 ERRATUM
+- updated imfile rewrite patch with another bugfix
+  resolves: rhbz#1531295
+
+* Fri Jul 20 2018 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-28
+RHEL 7.6 ERRATUM
+- updated imfile rewrite patch fixing next round of regressions
+  resolves: rhbz#1531295
+  resolves: rhbz#1602156
+- updated mmkubernetes patch with NULL ret-check
+  resolves: rhbz#1539193
+
+* Tue Jul 17 2018 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-27
+RHEL 7.6 ERRATUM
+- updated imfile rewrite patch fixing last update regressions
+  resolves: rhbz#1531295
+- added patch fixing deadlock in async logging
+  resolves: rhbz#1559408
+- added patch fixing NULL access in worktable create
+  resolves: rhbz#1600462
+- now putting release number into configure to have it present
+  in error messages
+
+* Mon Jul 09 2018 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-26
+RHEL 7.6 ERRATUM
+- updated imfile rewrite patch according to early testing
+  resolves: rhbz#1531295
+- added patch fixing pid file name in manpage
+  resolves: rhbz#1597264
+- updated json-parsing patch with one more bugfix
+  resolves: rhbz#1565219
+
+* Fri Jun 29 2018 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-24
+RHEL 7.6 ERRATUM
+- updated imfile rewrite patch with fixes from covscan
+  resolves: rhbz#1531295
+- updated mmkubernetes patch with fixes from covscan
+  resolves: rhbz#1539193
+- updated imjournal duplicates patch with fixes from covscan
+  resolves: rhbz#1538372
+- updated omelastic enhancement patch with fixes from covscan
+  resolves: rhbz#1565214
+
+* Wed Jun 27 2018 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-23
+RHEL 7.6 ERRATUM
+- added backport of leading $ support to json-parsing patch
+  resolves: rhbz#1565219
+- The required info is already contained in rsyslog-doc package
+  so there is no patch for this one
+  resolves: rhbz#1553700
+
+* Tue Jun 26 2018 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-22
+RHEL 7.6 ERRATUM
+- edited patch for top-level json parsing with bugfix
+  resolves: rhbz#1565219
+- renamed doc patches and added/updated new ones for mmkubernetes
+  omelasticsearch and json parsing
+- renamed patch fixing buffer overflow in parser - memcpy()
+  resolves: rhbz#1582517
+
+* Mon Jun 25 2018 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-21
+RHEL 7.6 ERRATUM
+- fixed imfile rewrite backport patch, added a few more bugfixes
+  resolves: rhbz#1531295
+- added also doc patch for omelastic client certs
+  resolves: rhbz#1507145
+- cleaned and shortened patch for omelastic error handling
+  resolves: rhbz#1565214
+- enabled patch for json top-level parsing
+  resolves: rhbz#1565219
+- merged mmkubernetes patches into one and enabled the module
+  resolves: rhbz#1539193
+  resolves: rhbz#1589924
+  resolves: rhbz#1590582
+
+* Sun Jun 24 2018 Noriko Hosoi <nhosoi@redhat.com> - 8.24.0-21
+RHEL 7.6 ERRATUM
+resolves: rhbz#1582517 - Buffer overflow in memcpy() in parser.c
+resolves: rhbz#1539193 - RFE: Support for mm kubernetes plugin
+resolves: rhbz#1589924 - RFE: Several fixes for mmkubernetes
+resolves: rhbz#1590582 - mmkubernetes - use version=2 in rulebase files to avoid memory leak
+resolves: rhbz#1507145 - RFE: omelasticsearch support client cert authentication
+resolves: rhbz#1565214 - omelasticsearch needs better handling for bulk index rejections and other errors
+Disables Patch32: rsyslog-8.24.0-rhbz1531295-imfile-rewrite-with-symlink.patch
+Disables Patch34: rsyslog-8.24.0-rhbz1565219-parse-json-into-top-level-fields-in-mess.patch; It BuildRequires/Requires: libfastjson >= 0.99.4-3
+
+* Fri Jun 01 2018 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-20
+RHEL 7.6 ERRATUM
+- added a patch backporting the imfile module rewrite and
+  adding symlink support
+  resolves: rhbz#1531295
+
+* Tue May 29 2018 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-19
+RHEL 7.6 ERRATUM
+- added new kafka sub-package with enabling of omkafka module
+  resolves: rhbz#1482819
+
+* Thu May 17 2018 Radovan Sroka <rsroka@redhat.com> - 8.24.0-18
+- caching the whole sockaddr structure instead of sin_addr causing memory leak
+  resolves: rhbz#1512551
+
+* Fri Apr 27 2018 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-17
+RHEL 7.6 ERRATUM
+- fixed imjournal duplicating messages on log rotation
+  resolves: rhbz#1538372
+- re-enabled 32-bit arches to not break dependent packages
+  resolves: rhbz#1571850
+
+* Thu Nov 09 2017 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-16
+RHEL 7.5 ERRATUM
+- edited the patch to conform to latest upstream doc
+  resolves: rhbz#1459896 (failedQA)
+- disabled 32-bit builds on all arches as they are not shipped
+  anymore in RHEL7
+
+* Tue Oct 31 2017 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-15
+RHEL 7.5 ERRATUM
+- made rsyslog-doc noarch and fixed search on doc regeneration
+  resolves: rhbz#1507028
+
+* Tue Oct 31 2017 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-14
+RHEL 7.5 ERRATUM
+- renamed patch for undocumented recover_qi script to correct bz number
+  resolves: rhbz#1507028
+- added patch ensuring relp connection is active before closing it
+  resolves: rhbz#1505103
+
+* Mon Oct 09 2017 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-13
+RHEL 7.5 ERRATUM
+- added patch to properly resolve FQDN
+  resolves: rhbz#1401456
+- added documentation patch correcting queue default values
+  resolves: rhbz#1459896
+- added patch adjusting log level of journal reloaded msg
+  resolves: rhbz#1497985 
+- added patch to prevent crash with invalid set statement
+  this also obsoletes patch2 (for 1188503) 
+  resolves: rhbz#1462160
+- added patch with nullptr check to prevent ABRT
+  resolves: rhbz#1488186
+
+* Wed May 10 2017 Radovan Sroka <rsroka@redhat.com> - 8.24.0-12
+- added BuildRequires for systemd >= 219-39; depends on rhbz#1419228
+
+* Tue May 09 2017 Radovan Sroka <rsroka@redhat.com> - 8.24.0-11
+RHEL 7.4 ERRATUM
+- added new patch that backports num2ipv4 due to rhbz#1427821
+  resolves: rhbz#1427821
+- enable pmrfc3164sd module
+  resolves: rhbz#1431616
+
+* Wed May 03 2017 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-10
+RHEL 7.4 ERRATUM
+- edited patches Patch19 and Patch21
+  resolves: rhbz#1419228 (coverity scan problems)
+  resolves: rhbz#1056548 (failed QA, coverity scan problems)
+
+* Tue May 02 2017 Jiri Vymazal <jvymazal@redhat.com> - 8.24.0-9
+RHEL 7.4 ERRATUM
+- added autoreconf call
+- added patch to replace gethostbyname with getaddrinfo call
+  resolves: rhbz#1056548 (failed QA)
+
+* Wed Apr 19 2017 Radovan Sroka <rsroka@redhat.com> - 8.24.0-8
+RHEL 7.4 ERRATUM
+- added dependency automake autoconf libtool due to yum-builddep
+- reenable omruleset module
+  resolves: rhbz#1431615
+  resolves: rhbz#1428403
+  resolves: rhbz#1427821 (fix regression, failed QA)
+  resolves: rhbz#1432069
+- resolves: rhbz#1165236
+  resolves: rhbz#1419228
+  resolves: rhbz#1431616 
+  resolves: rhbz#1196230 (failed QA)
+
+* Thu Mar 02 2017 Radovan Sroka <rsroka@redhat.com> - 8.24.0-7
+- reverted logrotate file that was added by mistake
+
+* Wed Mar 01 2017 Radovan Sroka <rsroka@redhat.com> - 8.24.0-6
+- RHEL 7.4 ERRATUM
+- rsyslog rebase to 8.24
+- added patch to prevent segfault while setting aclResolveHostname
+  config options
+  resolves: rhbz#1422414
+- added patch to check config variable names at startup
+  resolves: rhbz#1427828
+- added patch for str2num to handle empty strings
+  resolves: rhbz#1427821
+- fixed typo in added-chdir patch
+  resolves: rhbz#1422789
+- added patch to log source process when beginning rate-limiting
+  resolves: rhbz#1196230
+- added patch to chdir after chroot
+  resolves: rhbz#1422789
+- added patch to remove "inputname" imudp module parameter 
+  deprecation warnings
+  resolves: rhbz#1403907
+- added patch which resolves situation when time goes backwards
+  and statefile is invalid
+  resolves: rhbz#1088021
+- added a patch to bring back deprecated cmd-line switches and
+  remove associated warnings
+  resolves: rhbz#1403831
+- added documentation recover_qi.pl
+  resolves: rhbz#1286707
+- add another setup for doc package
+- add --enable-generate-man-pages to configure parameters;
+  the rscryutil man page isn't generated without it
+  https://github.com/rsyslog/rsyslog/pull/469
+- enable mmcount, mmexternal modules
+- remove omruleset and pmrfc3164sd modules
+
+* Thu Jul 14 2016 Tomas Heinrich <theinric@redhat.com> 7.4.7-16
+- add a patch to prevent races in libjson-c calls
+  resolves: rhbz#1222746
+
+* Sun Jul 10 2016 Tomas Heinrich <theinric@redhat.com> 7.4.7-15
+- add a patch to make state file handling in imjournal more robust
+  resolves: rhbz#1245194
+- add a patch to support wildcards in imfile
+  resolves: rhbz#1303617
+
+* Fri May 20 2016 Tomas Heinrich <theinric@redhat.com> 7.4.7-14
+- add a patch to prevent loss of partial messages
+  resolves: rhbz#1312459
+- add a patch to allow multiple rulesets in imrelp
+  resolves: rhbz#1223566
+- add a patch to fix a race condition during shutdown
+  resolves: rhbz#1295798
+- add a patch to backport the mmutf8fix plugin
+  resolves: rhbz#1146237
+- add a patch to order service startup after the network
+  resolves: rhbz#1263853
+
+* Mon May 16 2016 Tomas Heinrich <theinric@redhat.com> 7.4.7-13
+- add a patch to prevent crashes when using multiple rulesets
+  resolves: rhbz#1224336
+- add a patch to keep the imjournal state file updated
+  resolves: rhbz#1216957
+- add a patch to fix an undefined behavior caused by the maxMessageSize directive
+  resolves: rhbz#1214257
+- add a patch to prevent crashes when using rulesets with a parser
+  resolves: rhbz#1282687
+
+* Fri Aug 28 2015 Tomas Heinrich <theinric@redhat.com> 7.4.7-12
+- amend the patch for rhbz#1151041
+  resolves: rhbz#1257150
+
+* Tue Aug 18 2015 Radovan Sroka <rsroka@redhat.com> 7.4.7-11
+- add patch that resolves config.guess system-recognition on ppc64le architecture
+  resolves: rhbz#1254511
+
+* Mon Aug 03 2015 Tomas Heinrich <theinric@redhat.com> 7.4.7-10
+- add a patch to prevent field truncation in imjournal
+  resolves: rhbz#1101602
+- add a patch to enable setting a default TAG
+  resolves: rhbz#1188503
+- add a patch to fix a nonfunctional hostname setting in imuxsock
+  resolves: rhbz#1184402
+
+* Mon Jul 20 2015 Tomas Heinrich <theinric@redhat.com> 7.4.7-9
+- update the patch fixing a race condition in directory creation
+  resolves: rhbz#1202489
+- improve provided documentation
+  - move documentation from all subpackages under a single directory
+  - add missing images
+  - remove doc files without content
+  - add a patch making various corrections to the HTML documentation
+  resolves: rhbz#1238713
+- add a patch to prevent division-by-zero errors
+  resolves: rhbz#1078878
+- add a patch to clarify usage of the SysSock.Use option
+  resolves: rhbz#1143846
+- add a patch to support arbitrary number of listeners in imuxsock
+  - drop patch for rhbz#1053669 as it has been merged into this one
+  resolves: rhbz#1151041
+
+* Fri Jul 03 2015 Tomas Heinrich <theinric@redhat.com> 7.4.7-8
+- modify the service file to automatically restart rsyslog on failure
+  resolves: rhbz#1061322
+- add explicitly versioned dependencies on libraries which do not have
+  correctly versioned sonames
+  resolves: rhbz#1107839
+- make logrotate tolerate missing log files
+  resolves: rhbz#1144465
+- backport the mmcount plugin
+  resolves: rhbz#1151037
+- set the default service umask to 0066
+  resolves: rhbz#1228192
+- add a patch to make imjournal sanitize messages as imuxsock does it
+  resolves: rhbz#743890
+- add a patch to fix a bug preventing certain imuxsock directives from
+  taking effect
+  resolves: rhbz#1184410
+- add a patch to fix a race condition in directory creation
+  resolves: rhbz#1202489
+
+* Tue Oct 07 2014 Tomas Heinrich <theinric@redhat.com> 7.4.7-7
+- fix CVE-2014-3634
+  resolves: #1149153
+
+* Wed Mar 26 2014 Tomas Heinrich <theinric@redhat.com> 7.4.7-6
+- disable the imklog plugin by default
+  the patch for rhbz#1038136 caused duplication of kernel messages since the
+  messages read by the imklog plugin were now also pulled in from journald
+  resolves: #1078654
+
+* Wed Feb 19 2014 Tomas Heinrich <theinric@redhat.com> 7.4.7-5
+- move the rscryutil man page to the crypto subpackage
+  resolves: #1056565
+- add a patch to prevent message loss in imjournal
+  rsyslog-7.4.7-bz1038136-imjournal-message-loss.patch
+  resolves: #1038136
+
+* Fri Jan 24 2014 Daniel Mach <dmach@redhat.com> - 7.4.7-4
+- Mass rebuild 2014-01-24
+
+* Mon Jan 20 2014 Tomas Heinrich <theinric@redhat.com> 7.4.7-3
+- replace rsyslog-7.3.15-imuxsock-warning.patch
+  with rsyslog-7.4.7-bz1053669-imuxsock-wrn.patch
+  resolves: #1053669
+- add rsyslog-7.4.7-bz1052266-dont-link-libee.patch to prevent
+  linking the main binary with libee
+  resolves: #1052266
+- add rsyslog-7.4.7-bz1054171-omjournal-warning.patch to fix
+  a condition for issuing a warning in omjournal
+  resolves: #1054171
+- drop the "v5" string from the conf file as it's misleading
+  resolves: #1040036
+
+* Wed Jan 15 2014 Honza Horak <hhorak@redhat.com> - 7.4.7-2
+- Rebuild for mariadb-libs
+  Related: #1045013
+
+* Mon Jan 06 2014 Tomas Heinrich <theinric@redhat.com> 7.4.7-1
+- rebase to 7.4.7
+  add requirement on libestr >= 0.1.9
+  resolves: #836485
+  resolves: #1020854
+  resolves: #1040036
+- drop patch 4; not needed anymore
+  rsyslog-7.4.2-imuxsock-rfc3339.patch
+- install the rsyslog-recover-qi.pl tool
+- fix a typo in a package description
+- add missing defattr directives
+- add a patch to remove references to Google ads in the html docs
+  rsyslog-7.4.7-bz1030044-remove-ads.patch
+  Resolves: #1030043
+- add a patch to allow numeric specification of UIDs/GUIDs
+  rsyslog-7.4.7-numeric-uid.patch
+  resolves: #1032198
+- change the installation prefix to "/usr"
+  resolves: #1032223
+- fix a bad date in the changelog
+  resolves: #1043622
+- resolve a build issue with missing mysql_config by adding
+  additional BuildRequires for the mysql package
+- add a patch to resolve build issue on ppc
+  rsyslog-7.4.7-omelasticsearch-atomic-inst.patch
+
+* Fri Dec 27 2013 Daniel Mach <dmach@redhat.com> - 7.4.2-5
+- Mass rebuild 2013-12-27
+
+* Wed Nov 06 2013 Tomas Heinrich <theinric@redhat.com> 7.4.2-4
+- add a patch to fix issues with rfc 3339 timestamp parsing
+  resolves: #1020826
+
+* Fri Jul 12 2013 Jan Safranek <jsafrane@redhat.com> - 7.4.2-3
+- Rebuilt for new net-snmp
+
+* Wed Jul 10 2013 Tomas Heinrich <theinric@redhat.com> 7.4.2-2
+- make compilation of the rabbitmq plugin optional
+  resolves: #978919
+
+* Tue Jul 09 2013 Tomas Heinrich <theinric@redhat.com> 7.4.2-1
+- rebase to 7.4.2
+  most importantly, this release fixes a potential vulnerability,
+  see http://www.lsexperts.de/advisories/lse-2013-07-03.txt
+  the impact should be low as only those using the omelasticsearch
+  plugin with a specific configuration are exposed
+
+* Mon Jun 17 2013 Tomas Heinrich <theinric@redhat.com> 7.4.1-1
+- rebase to 7.4.1
+  this release adds code that somewhat mitigates damage in cases
+  where large amounts of messages are received from systemd
+  journal (see rhbz#974132)
+- regenerate patch 0
+- drop patches merged upstream: 4..8
+- add a dependency on the version of systemd which resolves the bug
+  mentioned above
+- update option name in rsyslog.conf
+
+* Wed Jun 12 2013 Tomas Heinrich <theinric@redhat.com> 7.4.0-1
+- rebase to 7.4.0
+- drop autoconf automake libtool from BuildRequires
+- depends on systemd >= 201 because of the sd_journal_get_events() api
+- add a patch to prevent a segfault in imjournal caused by a bug in
+  systemd journal
+- add a patch to prevent an endless loop in the ratelimiter
+- add a patch to prevent another endless loop in the ratelimiter
+- add a patch to prevent a segfault in imjournal for undefined state file
+- add a patch to correctly reset state in the ratelimiter
+
+* Tue Jun 04 2013 Tomas Heinrich <theinric@redhat.com> 7.3.15-1.20130604git6e72fa6
+- rebase to an upstream snapshot, effectively version 7.3.15
+  plus several more changes
+- drop patches 3, 4 - merged upstream
+- add a patch to silence warnings emitted by the imuxsock module
+- drop the imkmsg plugin
+- enable compilation of additional modules
+  imjournal, mmanon, omjournal, omrabbitmq
+- new subpackages: crypto, rabbitmq
+- add python-docutils and autoconf to global BuildRequires
+- drop the option for backwards compatibility from the
+  sysconfig file - it is no longer supported
+- call autoreconf to prepare the snapshot for building
+- switch the local message source from imuxsock to imjournal
+  the imuxsock module is left enabled so it is easy to switch back to
+  it and because systemd drops a file into /etc/rsyslog.d which only
+  imuxsock can parse
+
+* Wed Apr 10 2013 Tomas Heinrich <theinric@redhat.com> 7.3.10-1
+- rebase to 7.3.10
+- add a patch to resolve #950088 - ratelimiter segfault, merged upstream
+  rsyslog-7.3.10-ratelimit-segv.patch
+- add a patch to correct a default value, merged upstream
+  rsyslog-7.3.10-correct-def-val.patch
+- drop patch 5 - fixed upstream
+
+* Thu Apr 04 2013 Tomas Heinrich <theinric@redhat.com> 7.3.9-1
+- rebase to 7.3.9
+
+* Thu Feb 14 2013 Fedora Release Engineering <rel-eng@lists.fedoraproject.org> - 7.2.5-3
+- Rebuilt for https://fedoraproject.org/wiki/Fedora_19_Mass_Rebuild
+
+* Mon Jan 21 2013 Tomas Heinrich <theinric@redhat.com> 7.2.5-2
+- update a line in rsyslog.conf for the new syntax
+
+* Sun Jan 13 2013 Tomas Heinrich <theinric@redhat.com> 7.2.5-1
+- upgrade to upstream version 7.2.5
+- update the compatibility mode in sysconfig file
+
+* Mon Dec 17 2012 Tomas Heinrich <theinric@redhat.com> 7.2.4-2
+- add a condition to disable several subpackages
+
+* Mon Dec 10 2012 Tomas Heinrich <theinric@redhat.com> 7.2.4-1
+- upgrade to upstream version 7.2.4
+- remove trailing whitespace
+
+* Tue Nov 20 2012 Tomas Heinrich <theinric@redhat.com> 7.2.2-1
+- upgrade to upstream version 7.2.2
+  update BuildRequires
+- remove patches merged upstream
+  rsyslog-5.8.7-sysklogd-compat-1-template.patch
+  rsyslog-5.8.7-sysklogd-compat-2-option.patch
+  rsyslog-5.8.11-close-fd1-when-forking.patch
+- add patch from Milan Bartos <mbartos@redhat.com>
+  rsyslog-7.2.1-msg_c_nonoverwrite_merge.patch
+- remove the rsyslog-sysvinit package
+- clean up BuildRequires, Requires
+- remove the 'BuildRoot' tag
+- split off a doc package
+- compile additional modules (some of them in separate packages):
+  elasticsearch
+  hiredis
+  mmjsonparse
+  mmnormalize
+  mmaudit
+  mmsnmptrapd
+  mongodb
+- correct impossible timestamps in older changelog entries
+- correct typos, trailing spaces, etc
+- s/RPM_BUILD_ROOT/{buildroot}/
+- remove the 'clean' section
+- replace post* scriptlets with systemd macros
+
+* Sat Jul 21 2012 Fedora Release Engineering <rel-eng@lists.fedoraproject.org> - 5.8.11-3
+- Rebuilt for https://fedoraproject.org/wiki/Fedora_18_Mass_Rebuild
+
+* Wed Jun 20 2012 Tomas Heinrich <theinric@redhat.com> 5.8.11-2
+- update systemd patch: remove the 'ExecStartPre' option
+
+* Wed May 23 2012 Tomas Heinrich <theinric@redhat.com> 5.8.11-1
+- upgrade to new upstream stable version 5.8.11
+- add impstats and imptcp modules
+- include new license text files
+- consider lock file in 'status' action
+- add patch to update information on debugging in the man page
+- add patch to prevent debug output to stdout after forking
+- add patch to support ssl certificates with domain names longer than 128 chars
+
+* Fri Mar 30 2012 Jon Ciesla <limburgher@gmail.com> 5.8.7-2
+- libnet rebuild.
+
+* Mon Jan 23 2012 Tomas Heinrich <theinric@redhat.com> 5.8.7-1
+- upgrade to new upstream version 5.8.7
+- change license from 'GPLv3+' to '(GPLv3+ and ASL 2.0)'
+  http://blog.gerhards.net/2012/01/rsyslog-licensing-update.html
+- use a specific version for obsoleting sysklogd
+- add patches for better sysklogd compatibility (taken from upstream)
+
+* Sat Jan 14 2012 Fedora Release Engineering <rel-eng@lists.fedoraproject.org> - 5.8.6-2
+- Rebuilt for https://fedoraproject.org/wiki/Fedora_17_Mass_Rebuild
+
+* Tue Oct 25 2011 Tomas Heinrich <theinric@redhat.com> 5.8.6-1
+- upgrade to new upstream version 5.8.6
+- obsolete sysklogd
+  Resolves: #748495
+
+* Tue Oct 11 2011 Tomas Heinrich <theinric@redhat.com> 5.8.5-3
+- modify logrotate configuration to omit boot.log
+  Resolves: #745093
+
+* Tue Sep 06 2011 Tomas Heinrich <theinric@redhat.com> 5.8.5-2
+- add systemd-units to BuildRequires for the _unitdir macro definition
+
+* Mon Sep 05 2011 Tomas Heinrich <theinric@redhat.com> 5.8.5-1
+- upgrade to new upstream version (CVE-2011-3200)
+
+* Fri Jul 22 2011 Tomas Heinrich <theinric@redhat.com> 5.8.2-3
+- move the SysV init script into a subpackage
+- Resolves: #697533
+
+* Mon Jul 11 2011 Tomas Heinrich <theinric@redhat.com> 5.8.2-2
+- rebuild for net-snmp-5.7 (soname bump in libnetsnmp)
+
+* Mon Jun 27 2011 Tomas Heinrich <theinric@redhat.com> 5.8.2-1
+- upgrade to new upstream version 5.8.2
+
+* Mon Jun 13 2011 Tomas Heinrich <theinric@redhat.com> 5.8.1-2
+- scriptlet correction
+- use macro in unit file's path
+
+* Fri May 20 2011 Tomas Heinrich <theinric@redhat.com> 5.8.1-1
+- upgrade to new upstream version
+- correct systemd scriptlets (#705829)
+
+* Mon May 16 2011 Bill Nottingham <notting@redhat.com> - 5.7.9-3
+- combine triggers (as rpm will only execute one) - fixes upgrades (#699198)
+
+* Tue Apr 05 2011 Tomas Heinrich <theinric@redhat.com> 5.7.10-1
+- upgrade to new upstream version 5.7.10
+
+* Wed Mar 23 2011 Dan Horák <dan@danny.cz> - 5.7.9-2
+- rebuilt for mysql 5.5.10 (soname bump in libmysqlclient)
+
+* Fri Mar 18 2011 Tomas Heinrich <theinric@redhat.com> 5.7.9-1
+- upgrade to new upstream version 5.7.9
+- enable compilation of several new modules,
+  create new subpackages for some of them
+- integrate changes from Lennart Poettering
+  to add support for systemd
+  - add rsyslog-5.7.9-systemd.patch to tweak the upstream
+    service file to honour configuration from /etc/sysconfig/rsyslog
+
+* Fri Mar 18 2011 Dennis Gilmore <dennis@ausil.us> - 5.6.2-3
+- sparc64 needs big PIE
+
+* Wed Feb 09 2011 Fedora Release Engineering <rel-eng@lists.fedoraproject.org> - 5.6.2-2
+- Rebuilt for https://fedoraproject.org/wiki/Fedora_15_Mass_Rebuild
+
+* Mon Dec 20 2010 Tomas Heinrich <theinric@redhat.com> 5.6.2-1
+- upgrade to new upstream stable version 5.6.2
+- drop rsyslog-5.5.7-remove_include.patch; applied upstream
+- provide omsnmp module
+- use correct name for lock file (#659398)
+- enable specification of the pid file (#579411)
+- init script adjustments
+
+* Wed Oct 06 2010 Tomas Heinrich <theinric@redhat.com> 5.5.7-1
+- upgrade to upstream version 5.5.7
+- update configuration and init files for the new major version
+- add several directories for storing auxiliary data
+- add ChangeLog to documentation
+- drop unlimited-select.patch; integrated upstream
+- add rsyslog-5.5.7-remove_include.patch to fix compilation
+
+* Tue Sep 07 2010 Tomas Heinrich <theinric@redhat.com> 4.6.3-2
+- build rsyslog with PIE and RELRO
+
+* Thu Jul 15 2010 Tomas Heinrich <theinric@redhat.com> 4.6.3-1
+- upgrade to new upstream stable version 4.6.3
+
+* Wed Apr 07 2010 Tomas Heinrich <theinric@redhat.com> 4.6.2-1
+- upgrade to new upstream stable version 4.6.2
+- correct the default value of the OMFileFlushOnTXEnd directive
+
+* Thu Feb 11 2010 Tomas Heinrich <theinric@redhat.com> 4.4.2-6
+- modify rsyslog-4.4.2-unlimited-select.patch so that
+  running autoreconf is not needed
+- remove autoconf, automake, libtool from BuildRequires
+- change exec-prefix to nil
+
+* Wed Feb 10 2010 Tomas Heinrich <theinric@redhat.com> 4.4.2-5
+- remove '_smp_mflags' make argument as it seems to be
+  producing corrupted builds
+
+* Mon Feb 08 2010 Tomas Heinrich <theinric@redhat.com> 4.4.2-4
+- redefine _libdir as it doesn't use _exec_prefix
+
+* Thu Dec 17 2009 Tomas Heinrich <theinric@redhat.com> 4.4.2-3
+- change exec-prefix to /
+
+* Wed Dec 09 2009 Robert Scheck <robert@fedoraproject.org> 4.4.2-2
+- run libtoolize to avoid errors due mismatching libtool version
+
+* Thu Dec 03 2009 Tomas Heinrich <theinric@redhat.com> 4.4.2-1
+- upgrade to new upstream stable version 4.4.2
+- add support for arbitrary number of open file descriptors
+
+* Mon Sep 14 2009 Tomas Heinrich <theinric@redhat.com> 4.4.1-2
+- adjust init script according to guidelines (#522071)
+
+* Thu Sep 03 2009 Tomas Heinrich <theinric@redhat.com> 4.4.1-1
+- upgrade to new upstream stable version
+
+* Fri Aug 21 2009 Tomas Mraz <tmraz@redhat.com> - 4.2.0-3
+- rebuilt with new openssl
+
+* Sun Jul 26 2009 Fedora Release Engineering <rel-eng@lists.fedoraproject.org> - 4.2.0-2
+- Rebuilt for https://fedoraproject.org/wiki/Fedora_12_Mass_Rebuild
+
+* Tue Jul 14 2009 Tomas Heinrich <theinric@redhat.com> 4.2.0-1
+- upgrade
+
+* Mon Apr 13 2009 Tomas Heinrich <theinric@redhat.com> 3.21.11-1
+- upgrade
+
+* Tue Mar 31 2009 Lubomir Rintel <lkundrak@v3.sk> 3.21.10-4
+- Backport HUPisRestart option
+
+* Wed Mar 18 2009 Tomas Heinrich <theinric@redhat.com> 3.21.10-3
+- fix variables' type conversion in expression-based filters (#485937)
+
+* Wed Feb 25 2009 Fedora Release Engineering <rel-eng@lists.fedoraproject.org> - 3.21.10-2
+- Rebuilt for https://fedoraproject.org/wiki/Fedora_11_Mass_Rebuild
+
+* Tue Feb 10 2009 Tomas Heinrich <theinric@redhat.com> 3.21.10-1
+- upgrade
+
+* Sat Jan 24 2009 Caolán McNamara <caolanm@redhat.com> 3.21.9-3
+- rebuild for dependencies
+
+* Wed Jan 07 2009 Tomas Heinrich <theinric@redhat.com> 3.21.9-2
+- fix several legacy options handling
+- fix internal message output (#478612)
+
+* Mon Dec 15 2008 Peter Vrabec <pvrabec@redhat.com> 3.21.9-1
+- update fixes the $AllowedSender security issue
+
+* Mon Sep 15 2008 Peter Vrabec <pvrabec@redhat.com> 3.21.3-4
+- use RPM_OPT_FLAGS
+- use same pid file and logrotate file as syslog-ng (#441664)
+- mark config files as noreplace (#428155)
+
+* Mon Sep 01 2008 Tomas Heinrich <theinric@redhat.com> 3.21.3-3
+- fix a wrong module name in the rsyslog.conf manual page (#455086)
+- expand the rsyslog.conf manual page (#456030)
+
+* Thu Aug 28 2008 Tomas Heinrich <theinric@redhat.com> 3.21.3-2
+- fix clock rollback issue (#460230)
+
+* Wed Aug 20 2008 Peter Vrabec <pvrabec@redhat.com> 3.21.3-1
+- upgrade to bugfix release
+
+* Wed Jul 23 2008 Peter Vrabec <pvrabec@redhat.com> 3.21.0-1
+- upgrade
+
+* Mon Jul 14 2008 Peter Vrabec <pvrabec@redhat.com> 3.19.9-2
+- adjust default config file
+
+* Fri Jul 11 2008 Lubomir Rintel <lkundrak@v3.sk> 3.19.9-1
+- upgrade
+
+* Wed Jun 25 2008 Peter Vrabec <pvrabec@redhat.com> 3.19.7-3
+- rebuild because of new gnutls
+
+* Fri Jun 13 2008 Peter Vrabec <pvrabec@redhat.com> 3.19.7-2
+- do not translate Oopses (#450329)
+
+* Fri Jun 13 2008 Peter Vrabec <pvrabec@redhat.com> 3.19.7-1
+- upgrade
+
+* Wed May 28 2008 Peter Vrabec <pvrabec@redhat.com> 3.19.4-1
+- upgrade
+
+* Mon May 26 2008 Peter Vrabec <pvrabec@redhat.com> 3.19.3-1
+- upgrade to new upstream release
+
+* Wed May 14 2008 Tomas Heinrich <theinric@redhat.com> 3.16.1-1
+- upgrade
+
+* Tue Apr 08 2008 Peter Vrabec <pvrabec@redhat.com> 3.14.1-5
+- prevent undesired error description in legacy
+  warning messages
+
+* Tue Apr 08 2008 Peter Vrabec <pvrabec@redhat.com> 3.14.1-4
+- adjust symbol lookup method to 2.6 kernel
+
+* Tue Apr 08 2008 Peter Vrabec <pvrabec@redhat.com> 3.14.1-3
+- fix segfault of expression based filters
+
+* Mon Apr 07 2008 Peter Vrabec <pvrabec@redhat.com> 3.14.1-2
+- init script fixes (#441170,#440968)
+
+* Fri Apr 04 2008 Peter Vrabec <pvrabec@redhat.com> 3.14.1-1
+- upgrade
+
+* Tue Mar 25 2008 Peter Vrabec <pvrabec@redhat.com> 3.12.4-1
+- upgrade
+
+* Wed Mar 19 2008 Peter Vrabec <pvrabec@redhat.com> 3.12.3-1
+- upgrade
+- fix some significant memory leaks
+
+* Tue Mar 11 2008 Peter Vrabec <pvrabec@redhat.com> 3.12.1-2
+- init script fixes (#436854)
+- fix config file parsing (#436722)
+
+* Thu Mar 06 2008 Peter Vrabec <pvrabec@redhat.com> 3.12.1-1
+- upgrade
+
+* Wed Mar 05 2008 Peter Vrabec <pvrabec@redhat.com> 3.12.0-1
+- upgrade
+
+* Mon Feb 25 2008 Peter Vrabec <pvrabec@redhat.com> 3.11.5-1
+- upgrade
+
+* Fri Feb 01 2008 Peter Vrabec <pvrabec@redhat.com> 3.11.0-1
+- upgrade to the latest development release
+- provide PostgreSQL support
+- provide GSSAPI support
+
+* Mon Jan 21 2008 Peter Vrabec <pvrabec@redhat.com> 2.0.0-7
+- change from requires sysklogd to conflicts sysklogd
+
+* Fri Jan 18 2008 Peter Vrabec <pvrabec@redhat.com> 2.0.0-6
+- change logrotate file
+- use rsyslog own pid file
+
+* Thu Jan 17 2008 Peter Vrabec <pvrabec@redhat.com> 2.0.0-5
+- fixing bad descriptor (#428775)
+
+* Wed Jan 16 2008 Peter Vrabec <pvrabec@redhat.com> 2.0.0-4
+- rename logrotate file
+
+* Wed Jan 16 2008 Peter Vrabec <pvrabec@redhat.com> 2.0.0-3
+- fix post script and init file
+
+* Wed Jan 16 2008 Peter Vrabec <pvrabec@redhat.com> 2.0.0-2
+- change pid filename and use logrotate script from sysklogd
+
+* Tue Jan 15 2008 Peter Vrabec <pvrabec@redhat.com> 2.0.0-1
+- upgrade to stable release
+- spec file clean up
+
+* Wed Jan 02 2008 Peter Vrabec <pvrabec@redhat.com> 1.21.2-1
+- new upstream release
+
+* Thu Dec 06 2007 Release Engineering <rel-eng at fedoraproject dot org> - 1.19.11-2
+- Rebuild for deps
+
+* Thu Nov 29 2007 Peter Vrabec <pvrabec@redhat.com> 1.19.11-1
+- new upstream release
+- add conflicts (#400671)
+
+* Mon Nov 19 2007 Peter Vrabec <pvrabec@redhat.com> 1.19.10-1
+- new upstream release
+
+* Wed Oct 03 2007 Peter Vrabec <pvrabec@redhat.com> 1.19.6-3
+- remove NUL character from received messages
+
+* Tue Sep 25 2007 Tomas Heinrich <theinric@redhat.com> 1.19.6-2
+- fix message suppression (#303341)
+
+* Tue Sep 25 2007 Tomas Heinrich <theinric@redhat.com> 1.19.6-1
+- upstream bugfix release
+
+* Tue Aug 28 2007 Peter Vrabec <pvrabec@redhat.com> 1.19.2-1
+- upstream bugfix release
+- support for negative app selector, patch from
+  theinric@redhat.com
+
+* Fri Aug 17 2007 Peter Vrabec <pvrabec@redhat.com> 1.19.0-1
+- new upstream release with MySQL support(as plugin)
+
+* Wed Aug 08 2007 Peter Vrabec <pvrabec@redhat.com> 1.18.1-1
+- upstream bugfix release
+
+* Mon Aug 06 2007 Peter Vrabec <pvrabec@redhat.com> 1.18.0-1
+- new upstream release
+
+* Thu Aug 02 2007 Peter Vrabec <pvrabec@redhat.com> 1.17.6-1
+- upstream bugfix release
+
+* Mon Jul 30 2007 Peter Vrabec <pvrabec@redhat.com> 1.17.5-1
+- upstream bugfix release
+- fix typo in provides
+
+* Wed Jul 25 2007 Jeremy Katz <katzj@redhat.com> - 1.17.2-4
+- rebuild for toolchain bug
+
+* Tue Jul 24 2007 Peter Vrabec <pvrabec@redhat.com> 1.17.2-3
+- take care of sysklogd configuration files in %%post
+
+* Tue Jul 24 2007 Peter Vrabec <pvrabec@redhat.com> 1.17.2-2
+- use EVR in provides/obsoletes sysklogd
+
+* Mon Jul 23 2007 Peter Vrabec <pvrabec@redhat.com> 1.17.2-1
+- upstream bug fix release
+
+* Fri Jul 20 2007 Peter Vrabec <pvrabec@redhat.com> 1.17.1-1
+- upstream bug fix release
+- include html docs (#248712)
+- make "-r" option compatible with sysklogd config (#248982)
+
+* Tue Jul 17 2007 Peter Vrabec <pvrabec@redhat.com> 1.17.0-1
+- feature rich upstream release
+
+* Thu Jul 12 2007 Peter Vrabec <pvrabec@redhat.com> 1.15.1-2
+- use obsoletes and handle old config files
+
+* Wed Jul 11 2007 Peter Vrabec <pvrabec@redhat.com> 1.15.1-1
+- new upstream bugfix release
+
+* Tue Jul 10 2007 Peter Vrabec <pvrabec@redhat.com> 1.15.0-1
+- new upstream release introduce capability to generate output
+  file names based on templates
+
+* Tue Jul 03 2007 Peter Vrabec <pvrabec@redhat.com> 1.14.2-1
+- new upstream bugfix release
+
+* Mon Jul 02 2007 Peter Vrabec <pvrabec@redhat.com> 1.14.1-1
+- new upstream release with IPv6 support
+
+* Tue Jun 26 2007 Peter Vrabec <pvrabec@redhat.com> 1.13.5-3
+- add BuildRequires for zlib compression feature
+
+* Mon Jun 25 2007 Peter Vrabec <pvrabec@redhat.com> 1.13.5-2
+- some spec file adjustments.
+- fix syslog init script error codes (#245330)
+
+* Fri Jun 22 2007 Peter Vrabec <pvrabec@redhat.com> 1.13.5-1
+- new upstream release
+
+* Fri Jun 22 2007 Peter Vrabec <pvrabec@redhat.com> 1.13.4-2
+- some spec file adjustments.
+
+* Mon Jun 18 2007 Peter Vrabec <pvrabec@redhat.com> 1.13.4-1
+- upgrade to new upstream release
+
+* Wed Jun 13 2007 Peter Vrabec <pvrabec@redhat.com> 1.13.2-2
+- DB support off
+
+* Tue Jun 12 2007 Peter Vrabec <pvrabec@redhat.com> 1.13.2-1
+- new upstream release based on redhat patch
+
+* Fri Jun 08 2007 Peter Vrabec <pvrabec@redhat.com> 1.13.1-2
+- rsyslog package provides its own kernel log daemon (rklogd)
+
+* Mon Jun 04 2007 Peter Vrabec <pvrabec@redhat.com> 1.13.1-1
+- Initial rpm build