diff --git a/.github/workflows/containers.yml b/.github/workflows/containers.yml
index 9c062971e..bfef352a7 100644
--- a/.github/workflows/containers.yml
+++ b/.github/workflows/containers.yml
@@ -15,7 +15,7 @@ env:
jobs:
container:
- if: github.repository_owner == env.IMAGE_NAMESPACE
+ if: github.repository_owner == 'ovn-org'
runs-on: ubuntu-24.04
strategy:
matrix:
diff --git a/.github/workflows/ovn-fake-multinode-tests.yml b/.github/workflows/ovn-fake-multinode-tests.yml
index 795dafc22..f3f25ddf3 100644
--- a/.github/workflows/ovn-fake-multinode-tests.yml
+++ b/.github/workflows/ovn-fake-multinode-tests.yml
@@ -72,7 +72,7 @@ jobs:
multinode-tests:
runs-on: ubuntu-22.04
- timeout-minutes: 15
+ timeout-minutes: 30
needs: [build]
strategy:
fail-fast: false
diff --git a/Documentation/automake.mk b/Documentation/automake.mk
index c6cc37e49..5f7500fb7 100644
--- a/Documentation/automake.mk
+++ b/Documentation/automake.mk
@@ -56,6 +56,7 @@ DOC_SOURCE = \
Documentation/internals/security.rst \
Documentation/internals/contributing/index.rst \
Documentation/internals/contributing/backporting-patches.rst \
+ Documentation/internals/contributing/inclusive-language.rst \
Documentation/internals/contributing/coding-style.rst \
Documentation/internals/contributing/documentation-style.rst \
Documentation/internals/contributing/submitting-patches.rst \
diff --git a/Documentation/index.rst b/Documentation/index.rst
index 04e757505..9fb298c28 100644
--- a/Documentation/index.rst
+++ b/Documentation/index.rst
@@ -81,6 +81,7 @@ Learn more about the Open Virtual Network (OVN) project and about how you can co
- **Contributing:** :doc:`internals/contributing/submitting-patches` |
:doc:`internals/contributing/backporting-patches` |
+ :doc:`internals/contributing/inclusive-language` |
:doc:`internals/contributing/coding-style`
- **Maintaining:** :doc:`internals/maintainers` |
diff --git a/Documentation/internals/contributing/inclusive-language.rst b/Documentation/internals/contributing/inclusive-language.rst
new file mode 100644
index 000000000..65e9c4fbd
--- /dev/null
+++ b/Documentation/internals/contributing/inclusive-language.rst
@@ -0,0 +1,57 @@
+..
+ Licensed under the Apache License, Version 2.0 (the "License"); you may
+ not use this file except in compliance with the License. You may obtain
+ a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ License for the specific language governing permissions and limitations
+ under the License.
+
+ Convention for heading levels in OVN documentation:
+
+ ======= Heading 0 (reserved for the title in a document)
+ ------- Heading 1
+ ~~~~~~~ Heading 2
+ +++++++ Heading 3
+ ''''''' Heading 4
+
+ Avoid deeper levels because they do not render well.
+
+==================
+Inclusive Language
+==================
+
+To help foster an inclusive environment in the OVN community, we
+recognise the role of language in framing our communication with
+each other. Terms that may exclude people through racial, cultural
+or other bias should be avoided, as they can make people feel
+unwelcome.
+
+We recognise that this is subjective and, to some extent, a journey.
+But we also recognise that we cannot begin that journey without
+taking positive action. To this end, OVN is adopting an inclusive
+word list, which helps to guide the use of language within the
+project.
+
+.. _word list:
+
+Word List
+---------
+
+The intent of this document is to formally record the adoption of an
+inclusive word list by OVN. Accordingly, this document specifies the
+use of the `Inclusive Naming Word List
+<https://inclusivenaming.org/word-lists/>`__ v1.0 (the word list) for
+OVN.
+
+The word list is intended to act as a guide for developers creating
+patches to the OVN repository, including both source code and
+documentation, and to aid maintainers in their role of shepherding
+changes into the repository.
+
+Further steps to align the use of language in OVN, including clarifying
+how the word list applies to new and existing work, may follow.
diff --git a/Documentation/internals/contributing/index.rst b/Documentation/internals/contributing/index.rst
index ba6b6094e..9dab48110 100644
--- a/Documentation/internals/contributing/index.rst
+++ b/Documentation/internals/contributing/index.rst
@@ -31,6 +31,7 @@ The below guides provide information on contributing to OVN itself.
:maxdepth: 2
submitting-patches
+ inclusive-language
backporting-patches
coding-style
documentation-style
diff --git a/NEWS b/NEWS
index d8717810a..201d5d40f 100644
--- a/NEWS
+++ b/NEWS
@@ -1,4 +1,7 @@
-OVN v24.09.0 - xx xxx xxxx
+OVN v24.09.1 - xx xxx xxxx
+--------------------------
+
+OVN v24.09.0 - 13 Sep 2024
--------------------------
- Added a new logical switch port option "pkt_clone_type".
If the value is set to "mc_unknown", packets destined to the port gets
@@ -56,7 +59,7 @@ OVN v24.09.0 - xx xxx xxxx
- The NB_Global.debug_drop_domain_id configured value is now overridden by
the ID associated with the Sampling_App record created for drop sampling
(Sampling_App.type configured as "drop").
- - Add support for ACL sampling through the new Sample_Collector and Sample
+ - Add support for ACL sampling through the new Sample_Collector and Sample
tables. Sampling is supported for both traffic that creates new
connections and for traffic that is part of an existing connection.
- Add "external_ids:ovn-encap-ip-default" config for ovn-controller to
@@ -64,6 +67,10 @@ OVN v24.09.0 - xx xxx xxxx
configured.
- Added a new column in the southbound database "flow_desc" to provide
human readable context to flows.
+ - Added two new experimental logical router port options,
+ "routing-protocol-redirect" and "routing-protocols", that allow
+ redirection of routing protocol traffic received by a router port
+ to a different logical switch port.
OVN v24.03.0 - 01 Mar 2024
--------------------------
diff --git a/configure.ac b/configure.ac
index cd6a5d0c9..53c834faa 100644
--- a/configure.ac
+++ b/configure.ac
@@ -13,7 +13,7 @@
# limitations under the License.
AC_PREREQ(2.63)
-AC_INIT(ovn, 24.09.0, bugs@openvswitch.org)
+AC_INIT(ovn, 24.09.1, bugs@openvswitch.org)
AC_CONFIG_MACRO_DIR([m4])
AC_CONFIG_AUX_DIR([build-aux])
AC_CONFIG_HEADERS([config.h])
diff --git a/controller/chassis.c b/controller/chassis.c
index 2991a0af3..ee839084a 100644
--- a/controller/chassis.c
+++ b/controller/chassis.c
@@ -390,6 +390,7 @@ chassis_build_other_config(const struct ovs_chassis_cfg *ovs_cfg,
smap_replace(config, OVN_FEATURE_CT_COMMIT_TO_ZONE, "true");
smap_replace(config, OVN_FEATURE_SAMPLE_WITH_REGISTERS,
ovs_cfg->sample_with_regs ? "true" : "false");
+ smap_replace(config, OVN_FEATURE_CT_NEXT_ZONE, "true");
}
/*
@@ -549,6 +550,12 @@ chassis_other_config_changed(const struct ovs_chassis_cfg *ovs_cfg,
return true;
}
+ if (!smap_get_bool(&chassis_rec->other_config,
+ OVN_FEATURE_CT_NEXT_ZONE,
+ false)) {
+ return true;
+ }
+
return false;
}
@@ -706,6 +713,7 @@ update_supported_sset(struct sset *supported)
sset_add(supported, OVN_FEATURE_CT_COMMIT_NAT_V2);
sset_add(supported, OVN_FEATURE_CT_COMMIT_TO_ZONE);
sset_add(supported, OVN_FEATURE_SAMPLE_WITH_REGISTERS);
+ sset_add(supported, OVN_FEATURE_CT_NEXT_ZONE);
}
static void
diff --git a/controller/ct-zone.c b/controller/ct-zone.c
index 77eb16ac9..469a8fc54 100644
--- a/controller/ct-zone.c
+++ b/controller/ct-zone.c
@@ -216,12 +216,15 @@ ct_zones_update(const struct sset *local_lports,
struct shash_node *node;
SHASH_FOR_EACH_SAFE (node, &ctx->current) {
struct ct_zone *ct_zone = node->data;
- if (!sset_contains(&all_users, node->name) ||
- ct_zone->zone < min_ct_zone || ct_zone->zone > max_ct_zone) {
+ if (!sset_contains(&all_users, node->name)) {
ct_zone_remove(ctx, node->name);
} else if (!simap_find(&req_snat_zones, node->name)) {
- bitmap_set1(unreq_snat_zones_map, ct_zone->zone);
- simap_put(&unreq_snat_zones, node->name, ct_zone->zone);
+ if (ct_zone->zone < min_ct_zone || ct_zone->zone > max_ct_zone) {
+ ct_zone_remove(ctx, node->name);
+ } else {
+ bitmap_set1(unreq_snat_zones_map, ct_zone->zone);
+ simap_put(&unreq_snat_zones, node->name, ct_zone->zone);
+ }
}
}
@@ -249,10 +252,11 @@ ct_zones_update(const struct sset *local_lports,
struct ct_zone *ct_zone = shash_find_data(&ctx->current,
snat_req_node->name);
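+ /* Only flush the zone when this port had no zone assigned yet or its
+ * zone number changed; otherwise keep existing conntrack entries. */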
+ bool flush = !(ct_zone && ct_zone->zone == snat_req_node->data);
if (ct_zone && ct_zone->zone != snat_req_node->data) {
ct_zone_remove(ctx, snat_req_node->name);
}
- ct_zone_add(ctx, snat_req_node->name, snat_req_node->data, true);
+ ct_zone_add(ctx, snat_req_node->name, snat_req_node->data, flush);
}
/* xxx This is wasteful to assign a zone to each port--even if no
diff --git a/controller/if-status.c b/controller/if-status.c
index 9a7488057..ada78a18b 100644
--- a/controller/if-status.c
+++ b/controller/if-status.c
@@ -219,7 +219,8 @@ ovs_iface_create(struct if_status_mgr *, const char *iface_id,
static void add_to_ovn_uninstall_hash(struct if_status_mgr *, const char *,
const struct uuid *);
static void ovs_iface_destroy(struct if_status_mgr *, struct ovs_iface *);
-static void ovn_uninstall_hash_destroy(struct if_status_mgr *mgr, char *name);
+static void ovn_uninstall_hash_destroy(struct if_status_mgr *mgr,
+ struct shash_node *node);
static void ovs_iface_set_state(struct if_status_mgr *, struct ovs_iface *,
enum if_state);
@@ -256,7 +257,7 @@ if_status_mgr_clear(struct if_status_mgr *mgr)
ovs_assert(shash_is_empty(&mgr->ifaces));
SHASH_FOR_EACH_SAFE (node, &mgr->ovn_uninstall_hash) {
- ovn_uninstall_hash_destroy(mgr, node->data);
+ ovn_uninstall_hash_destroy(mgr, node);
}
ovs_assert(shash_is_empty(&mgr->ovn_uninstall_hash));
@@ -789,20 +790,13 @@ ovs_iface_destroy(struct if_status_mgr *mgr, struct ovs_iface *iface)
}
static void
-ovn_uninstall_hash_destroy(struct if_status_mgr *mgr, char *name)
+ovn_uninstall_hash_destroy(struct if_status_mgr *mgr, struct shash_node *node)
{
- struct shash_node *node = shash_find(&mgr->ovn_uninstall_hash, name);
- char *node_name = NULL;
- if (node) {
- free(node->data);
- VLOG_DBG("Interface name %s destroy", name);
- node_name = shash_steal(&mgr->ovn_uninstall_hash, node);
- ovn_uninstall_hash_account_mem(name, true);
- free(node_name);
- } else {
- static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 1);
- VLOG_WARN_RL(&rl, "Interface name %s not found", name);
- }
+ free(node->data);
+ VLOG_DBG("Interface name %s destroy", node->name);
+ char *node_name = shash_steal(&mgr->ovn_uninstall_hash, node);
+ ovn_uninstall_hash_account_mem(node_name, true);
+ free(node_name);
}
static void
diff --git a/controller/ofctrl.c b/controller/ofctrl.c
index e023cab9b..f9387d375 100644
--- a/controller/ofctrl.c
+++ b/controller/ofctrl.c
@@ -45,7 +45,6 @@
#include "ovn/actions.h"
#include "lib/extend-table.h"
#include "lib/lb.h"
-#include "lib/ovn-util.h"
#include "openvswitch/poll-loop.h"
#include "physical.h"
#include "openvswitch/rconn.h"
@@ -390,16 +389,9 @@ struct meter_band_entry {
static struct shash meter_bands;
-static unsigned long *ecmp_nexthop_ids;
-
static void ofctrl_meter_bands_destroy(void);
static void ofctrl_meter_bands_clear(void);
-static void ecmp_nexthop_monitor_run(
- const struct sbrec_ecmp_nexthop_table *enh_table,
- struct ovs_list *msgs);
-
-
/* MFF_* field ID for our Geneve option. In S_TLV_TABLE_MOD_SENT, this is
* the option we requested (we don't know whether we obtained it yet). In
* S_CLEAR_FLOWS or S_UPDATE_FLOWS, this is really the option we have. */
@@ -438,7 +430,6 @@ ofctrl_init(struct ovn_extend_table *group_table,
groups = group_table;
meters = meter_table;
shash_init(&meter_bands);
- ecmp_nexthop_ids = bitmap_allocate(ECMP_NEXTHOP_IDS_LEN);
}
/* S_NEW, for a new connection.
@@ -886,7 +877,6 @@ ofctrl_destroy(void)
expr_symtab_destroy(&symtab);
shash_destroy(&symtab);
ofctrl_meter_bands_destroy();
- bitmap_free(ecmp_nexthop_ids);
}
uint64_t
@@ -2316,47 +2306,6 @@ add_meter(struct ovn_extend_table_info *m_desired,
ofctrl_meter_bands_alloc(sb_meter, m_desired, msgs);
}
-static void
-ecmp_nexthop_monitor_flush_ct_entry(uint64_t id, struct ovs_list *msgs)
-{
- ovs_u128 mask = {
- /* ct_labels.label BITS[96-127] */
- .u64.hi = 0xffffffff00000000,
- };
- ovs_u128 nexthop = {
- .u64.hi = id << 32,
- };
- struct ofp_ct_match match = {
- .labels = nexthop,
- .labels_mask = mask,
- };
- struct ofpbuf *msg = ofp_ct_match_encode(&match, NULL,
- rconn_get_version(swconn));
- ovs_list_push_back(msgs, &msg->list_node);
-}
-
-static void
-ecmp_nexthop_monitor_run(const struct sbrec_ecmp_nexthop_table *enh_table,
- struct ovs_list *msgs)
-{
- unsigned long *ids = bitmap_allocate(ECMP_NEXTHOP_IDS_LEN);
-
- const struct sbrec_ecmp_nexthop *sbrec_ecmp_nexthop;
- SBREC_ECMP_NEXTHOP_TABLE_FOR_EACH (sbrec_ecmp_nexthop, enh_table) {
- bitmap_set1(ids, sbrec_ecmp_nexthop->id);
- }
-
- int id;
- BITMAP_FOR_EACH_1 (id, ECMP_NEXTHOP_IDS_LEN, ecmp_nexthop_ids) {
- if (!bitmap_is_set(ids, id)) {
- ecmp_nexthop_monitor_flush_ct_entry(id, msgs);
- }
- }
-
- bitmap_free(ecmp_nexthop_ids);
- ecmp_nexthop_ids = ids;
-}
-
static void
installed_flow_add(struct ovn_flow *d,
struct ofputil_bundle_ctrl_msg *bc,
@@ -2715,7 +2664,6 @@ ofctrl_put(struct ovn_desired_flow_table *lflow_table,
struct shash *pending_ct_zones,
struct hmap *pending_lb_tuples,
struct ovsdb_idl_index *sbrec_meter_by_name,
- const struct sbrec_ecmp_nexthop_table *enh_table,
uint64_t req_cfg,
bool lflows_changed,
bool pflows_changed)
@@ -2756,8 +2704,6 @@ ofctrl_put(struct ovn_desired_flow_table *lflow_table,
/* OpenFlow messages to send to the switch to bring it up-to-date. */
struct ovs_list msgs = OVS_LIST_INITIALIZER(&msgs);
- ecmp_nexthop_monitor_run(enh_table, &msgs);
-
/* Iterate through ct zones that need to be flushed. */
struct shash_node *iter;
SHASH_FOR_EACH(iter, pending_ct_zones) {
diff --git a/controller/ofctrl.h b/controller/ofctrl.h
index 33953a8a4..129e3b6ad 100644
--- a/controller/ofctrl.h
+++ b/controller/ofctrl.h
@@ -31,7 +31,6 @@ struct ofpbuf;
struct ovsrec_bridge;
struct ovsrec_open_vswitch_table;
struct sbrec_meter_table;
-struct sbrec_ecmp_nexthop_table;
struct shash;
struct ovn_desired_flow_table {
@@ -60,7 +59,6 @@ void ofctrl_put(struct ovn_desired_flow_table *lflow_table,
struct shash *pending_ct_zones,
struct hmap *pending_lb_tuples,
struct ovsdb_idl_index *sbrec_meter_by_name,
- const struct sbrec_ecmp_nexthop_table *enh_table,
uint64_t nb_cfg,
bool lflow_changed,
bool pflow_changed);
diff --git a/controller/ovn-controller.c b/controller/ovn-controller.c
index 27a4996a8..3b2a0d6bb 100644
--- a/controller/ovn-controller.c
+++ b/controller/ovn-controller.c
@@ -4456,22 +4456,7 @@ pflow_output_if_status_mgr_handler(struct engine_node *node,
}
if (pb->n_additional_chassis) {
/* Update flows for all ports in datapath. */
- struct sbrec_port_binding *target =
- sbrec_port_binding_index_init_row(
- p_ctx.sbrec_port_binding_by_datapath);
- sbrec_port_binding_index_set_datapath(target, pb->datapath);
-
- const struct sbrec_port_binding *binding;
- SBREC_PORT_BINDING_FOR_EACH_EQUAL (
- binding, target, p_ctx.sbrec_port_binding_by_datapath) {
- bool removed = sbrec_port_binding_is_deleted(binding);
- if (!physical_handle_flows_for_lport(binding, removed, &p_ctx,
- &pfo->flow_table)) {
- destroy_physical_ctx(&p_ctx);
- return false;
- }
- }
- sbrec_port_binding_index_destroy_row(target);
+ physical_multichassis_reprocess(pb, &p_ctx, &pfo->flow_table);
} else {
/* If any multichassis ports, update flows for the port. */
bool removed = sbrec_port_binding_is_deleted(pb);
@@ -5501,17 +5486,14 @@ main(int argc, char *argv[])
br_int_remote.probe_interval)) {
VLOG_INFO("OVS feature set changed, force recompute.");
engine_set_force_recompute(true);
- if (ovs_feature_set_discovered()) {
- uint32_t max_groups = ovs_feature_max_select_groups_get();
- uint32_t max_meters = ovs_feature_max_meters_get();
- struct ed_type_lflow_output *lflow_out_data =
- engine_get_internal_data(&en_lflow_output);
-
- ovn_extend_table_reinit(&lflow_out_data->group_table,
- max_groups);
- ovn_extend_table_reinit(&lflow_out_data->meter_table,
- max_meters);
- }
+
+ struct ed_type_lflow_output *lflow_out_data =
+ engine_get_internal_data(&en_lflow_output);
+
+ ovn_extend_table_reinit(&lflow_out_data->group_table,
+ ovs_feature_max_select_groups_get());
+ ovn_extend_table_reinit(&lflow_out_data->meter_table,
+ ovs_feature_max_meters_get());
}
if (br_int) {
@@ -5725,8 +5707,6 @@ main(int argc, char *argv[])
&ct_zones_data->ctx.pending,
&lb_data->removed_tuples,
sbrec_meter_by_name,
- sbrec_ecmp_nexthop_table_get(
- ovnsb_idl_loop.idl),
ofctrl_seqno_get_req_cfg(),
engine_node_changed(&en_lflow_output),
engine_node_changed(&en_pflow_output));
diff --git a/controller/physical.c b/controller/physical.c
index 9e04ad5f2..c6db4f376 100644
--- a/controller/physical.c
+++ b/controller/physical.c
@@ -1258,6 +1258,12 @@ reply_imcp_error_if_pkt_too_big(struct ovn_desired_flow_table *flow_table,
ofpact_put_set_field(
&inner_ofpacts, mf_from_id(MFF_LOG_FLAGS), &value, &mask);
+ /* inport <-> outport */
+ put_stack(MFF_LOG_INPORT, ofpact_put_STACK_PUSH(&inner_ofpacts));
+ put_stack(MFF_LOG_OUTPORT, ofpact_put_STACK_PUSH(&inner_ofpacts));
+ put_stack(MFF_LOG_INPORT, ofpact_put_STACK_POP(&inner_ofpacts));
+ put_stack(MFF_LOG_OUTPORT, ofpact_put_STACK_POP(&inner_ofpacts));
+
/* eth.src <-> eth.dst */
put_stack(MFF_ETH_DST, ofpact_put_STACK_PUSH(&inner_ofpacts));
put_stack(MFF_ETH_SRC, ofpact_put_STACK_PUSH(&inner_ofpacts));
@@ -1658,7 +1664,8 @@ consider_port_binding(struct ovsdb_idl_index *sbrec_port_binding_by_name,
sbrec_port_binding_by_name, binding->parent_port);
if (parent_port
- && !lport_can_bind_on_this_chassis(chassis, parent_port)) {
+ && (lport_can_bind_on_this_chassis(chassis,
+ parent_port) != CAN_BIND_AS_MAIN)) {
/* Even though there is an ofport for this container
* parent port, it is requested on different chassis ignore
* this container port.
@@ -2397,33 +2404,9 @@ physical_handle_flows_for_lport(const struct sbrec_port_binding *pb,
}
}
- if (ldp) {
- bool multichassis_state_changed = (
- !!pb->additional_chassis ==
- !!shash_find(&ldp->multichassis_ports, pb->logical_port)
- );
- if (multichassis_state_changed) {
- if (pb->additional_chassis) {
- add_local_datapath_multichassis_port(
- ldp, pb->logical_port, pb);
- } else {
- remove_local_datapath_multichassis_port(
- ldp, pb->logical_port);
- }
-
- struct sbrec_port_binding *target =
- sbrec_port_binding_index_init_row(
- p_ctx->sbrec_port_binding_by_datapath);
- sbrec_port_binding_index_set_datapath(target, ldp->datapath);
-
- const struct sbrec_port_binding *port;
- SBREC_PORT_BINDING_FOR_EACH_EQUAL (
- port, target, p_ctx->sbrec_port_binding_by_datapath) {
- ofctrl_remove_flows(flow_table, &port->header_.uuid);
- physical_eval_port_binding(p_ctx, port, flow_table);
- }
- sbrec_port_binding_index_destroy_row(target);
- }
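+ /* If the set of additional chassis changed, or the binding was
+ * removed, reevaluate flows for all ports on the datapath. */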
+ if (sbrec_port_binding_is_updated(
+ pb, SBREC_PORT_BINDING_COL_ADDITIONAL_CHASSIS) || removed) {
+ physical_multichassis_reprocess(pb, p_ctx, flow_table);
}
if (!removed) {
@@ -2440,6 +2423,25 @@ physical_handle_flows_for_lport(const struct sbrec_port_binding *pb,
return true;
}
+void
+physical_multichassis_reprocess(const struct sbrec_port_binding *pb,
+ struct physical_ctx *p_ctx,
+ struct ovn_desired_flow_table *flow_table)
+{
+ struct sbrec_port_binding *target =
+ sbrec_port_binding_index_init_row(
+ p_ctx->sbrec_port_binding_by_datapath);
+ sbrec_port_binding_index_set_datapath(target, pb->datapath);
+
+ const struct sbrec_port_binding *port;
+ SBREC_PORT_BINDING_FOR_EACH_EQUAL (port, target,
+ p_ctx->sbrec_port_binding_by_datapath) {
+ ofctrl_remove_flows(flow_table, &port->header_.uuid);
+ physical_eval_port_binding(p_ctx, port, flow_table);
+ }
+ sbrec_port_binding_index_destroy_row(target);
+}
+
void
physical_handle_mc_group_changes(struct physical_ctx *p_ctx,
struct ovn_desired_flow_table *flow_table)
diff --git a/controller/physical.h b/controller/physical.h
index dd4be7041..f0aecc852 100644
--- a/controller/physical.h
+++ b/controller/physical.h
@@ -81,4 +81,7 @@ bool physical_handle_flows_for_lport(const struct sbrec_port_binding *,
bool removed,
struct physical_ctx *,
struct ovn_desired_flow_table *);
+void physical_multichassis_reprocess(const struct sbrec_port_binding *,
+ struct physical_ctx *,
+ struct ovn_desired_flow_table *);
#endif /* controller/physical.h */
diff --git a/controller/pinctrl.c b/controller/pinctrl.c
index 7cbb0cf81..c86b4f940 100644
--- a/controller/pinctrl.c
+++ b/controller/pinctrl.c
@@ -1756,6 +1756,7 @@ pinctrl_handle_icmp(struct rconn *swconn, const struct flow *ip_flow,
if (mtu) {
put_16aligned_be32(ih->icmp6_data.be32, *mtu);
ih->icmp6_base.icmp6_type = ICMP6_PACKET_TOO_BIG;
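+ /* Per RFC 4443, the code field of a Packet Too Big message is zero. */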
+ ih->icmp6_base.icmp6_code = 0;
}
void *data = ih + 1;
diff --git a/debian/changelog b/debian/changelog
index 8168d1e83..dc867602c 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,8 +1,14 @@
+ovn (24.09.1-1) unstable; urgency=low
+ [ OVN team ]
+ * New upstream version
+
+ -- OVN team <dev@openvswitch.org> Fri, 13 Sep 2024 19:09:31 -0400
+
ovn (24.09.0-1) unstable; urgency=low
* New upstream version
- -- OVN team <dev@openvswitch.org> Fri, 09 Aug 2024 09:56:41 -0400
+ -- OVN team <dev@openvswitch.org> Fri, 13 Sep 2024 19:09:31 -0400
ovn (24.03.0-1) unstable; urgency=low
diff --git a/ic/ovn-ic.c b/ic/ovn-ic.c
index 69bac4ab2..db17630be 100644
--- a/ic/ovn-ic.c
+++ b/ic/ovn-ic.c
@@ -137,7 +137,7 @@ az_run(struct ic_context *ctx)
* "ovn-ic-sbctl destroy avail <az>". */
static char *az_name;
const struct icsbrec_availability_zone *az;
- if (az_name && strcmp(az_name, nb_global->name)) {
+ if (ctx->ovnisb_txn && az_name && strcmp(az_name, nb_global->name)) {
ICSBREC_AVAILABILITY_ZONE_FOR_EACH (az, ctx->ovnisb_idl) {
/* AZ name update locally need to update az in ISB. */
if (nb_global->name[0] && !strcmp(az->name, az_name)) {
diff --git a/include/ovn/actions.h b/include/ovn/actions.h
index c8dd66ed8..a95a0daf7 100644
--- a/include/ovn/actions.h
+++ b/include/ovn/actions.h
@@ -260,6 +260,7 @@ struct ovnact_push_pop {
/* OVNACT_CT_NEXT. */
struct ovnact_ct_next {
struct ovnact ovnact;
+ bool dnat_zone;   /* On routers, use the DNAT zone if true, else SNAT. */
uint8_t ltable; /* Logical table ID of next table. */
};
diff --git a/include/ovn/features.h b/include/ovn/features.h
index 4275f7526..3566ab60f 100644
--- a/include/ovn/features.h
+++ b/include/ovn/features.h
@@ -30,6 +30,7 @@
#define OVN_FEATURE_CT_COMMIT_NAT_V2 "ct-commit-nat-v2"
#define OVN_FEATURE_CT_COMMIT_TO_ZONE "ct-commit-to-zone"
#define OVN_FEATURE_SAMPLE_WITH_REGISTERS "ovn-sample-with-registers"
+#define OVN_FEATURE_CT_NEXT_ZONE "ct-next-zone"
/* OVS datapath supported features. Based on availability OVN might generate
* different types of openflows.
diff --git a/lib/actions.c b/lib/actions.c
index c12d087e7..2e05d4134 100644
--- a/lib/actions.c
+++ b/lib/actions.c
@@ -701,13 +701,32 @@ parse_CT_NEXT(struct action_context *ctx)
}
add_prerequisite(ctx, "ip");
- ovnact_put_CT_NEXT(ctx->ovnacts)->ltable = ctx->pp->cur_ltable + 1;
+ struct ovnact_ct_next *ct_next = ovnact_put_CT_NEXT(ctx->ovnacts);
+ ct_next->dnat_zone = true;
+ ct_next->ltable = ctx->pp->cur_ltable + 1;
+
+ if (!lexer_match(ctx->lexer, LEX_T_LPAREN)) {
+ return;
+ }
+
+ if (lexer_match_id(ctx->lexer, "dnat")) {
+ ct_next->dnat_zone = true;
+ } else if (lexer_match_id(ctx->lexer, "snat")) {
+ ct_next->dnat_zone = false;
+ } else {
+ lexer_error(ctx->lexer, "\"ct_next\" action accepts only"
+ " \"dnat\" or \"snat\" parameter.");
+ return;
+ }
+
+ lexer_force_match(ctx->lexer, LEX_T_RPAREN);
}
static void
format_CT_NEXT(const struct ovnact_ct_next *ct_next OVS_UNUSED, struct ds *s)
{
- ds_put_cstr(s, "ct_next;");
+ ds_put_cstr(s, "ct_next");
+ ds_put_cstr(s, ct_next->dnat_zone ? "(dnat);" : "(snat);");
}
static void
@@ -719,11 +738,17 @@ encode_CT_NEXT(const struct ovnact_ct_next *ct_next,
struct ofpact_conntrack *ct = ofpact_put_CT(ofpacts);
ct->recirc_table = first_ptable(ep, ep->pipeline) + ct_next->ltable;
- ct->zone_src.field = ep->is_switch ? mf_from_id(MFF_LOG_CT_ZONE)
- : mf_from_id(MFF_LOG_DNAT_ZONE);
ct->zone_src.ofs = 0;
ct->zone_src.n_bits = 16;
+ if (ep->is_switch) {
+ ct->zone_src.field = mf_from_id(MFF_LOG_CT_ZONE);
+ } else {
+ ct->zone_src.field = mf_from_id(ct_next->dnat_zone
+ ? MFF_LOG_DNAT_ZONE
+ : MFF_LOG_SNAT_ZONE);
+ }
+
ct = ofpbuf_at_assert(ofpacts, ct_offset, sizeof *ct);
ofpacts->header = ct;
ofpact_finish_CT(ofpacts, &ct);
diff --git a/lib/logical-fields.c b/lib/logical-fields.c
index 2c9d3c61b..5a8b53f2b 100644
--- a/lib/logical-fields.c
+++ b/lib/logical-fields.c
@@ -293,6 +293,9 @@ ovn_init_symtab(struct shash *symtab)
"icmp6.type == {135, 136} && icmp6.code == 0 && ip.ttl == 255");
expr_symtab_add_predicate(symtab, "nd_ns",
"icmp6.type == 135 && icmp6.code == 0 && ip.ttl == 255");
+ expr_symtab_add_predicate(symtab, "nd_ns_mcast",
+ "ip6.mcast && icmp6.type == 135 && icmp6.code == 0 && "
+ "ip.ttl == 255");
expr_symtab_add_predicate(symtab, "nd_na",
"icmp6.type == 136 && icmp6.code == 0 && ip.ttl == 255");
expr_symtab_add_predicate(symtab, "nd_rs",
diff --git a/lib/ovn-util.h b/lib/ovn-util.h
index 622fec531..7b98b9b9a 100644
--- a/lib/ovn-util.h
+++ b/lib/ovn-util.h
@@ -38,8 +38,6 @@
#define STT_TUNNEL_OVERHEAD 18
#define VXLAN_TUNNEL_OVERHEAD 30
-#define ECMP_NEXTHOP_IDS_LEN 65535
-
struct eth_addr;
struct nbrec_logical_router_port;
struct ovsrec_flow_sample_collector_set_table;
diff --git a/northd/en-global-config.c b/northd/en-global-config.c
index 0ce7f8308..fff2aaa16 100644
--- a/northd/en-global-config.c
+++ b/northd/en-global-config.c
@@ -382,6 +382,7 @@ northd_enable_all_features(struct ed_type_global_config *data)
.ct_commit_nat_v2 = true,
.ct_commit_to_zone = true,
.sample_with_reg = true,
+ .ct_next_zone = true,
};
}
@@ -452,6 +453,15 @@ build_chassis_features(const struct sbrec_chassis_table *sbrec_chassis_table,
chassis_features->sample_with_reg) {
chassis_features->sample_with_reg = false;
}
+
+ bool ct_next_zone =
+ smap_get_bool(&chassis->other_config,
+ OVN_FEATURE_CT_NEXT_ZONE,
+ false);
+ if (!ct_next_zone &&
+ chassis_features->ct_next_zone) {
+ chassis_features->ct_next_zone = false;
+ }
}
}
diff --git a/northd/en-global-config.h b/northd/en-global-config.h
index 0cf34482a..767810542 100644
--- a/northd/en-global-config.h
+++ b/northd/en-global-config.h
@@ -20,6 +20,7 @@ struct chassis_features {
bool ct_commit_nat_v2;
bool ct_commit_to_zone;
bool sample_with_reg;
+ bool ct_next_zone;
};
struct global_config_tracked_data {
diff --git a/northd/en-lflow.c b/northd/en-lflow.c
index f9d7f2459..469d7c6b5 100644
--- a/northd/en-lflow.c
+++ b/northd/en-lflow.c
@@ -42,7 +42,8 @@ lflow_get_input_data(struct engine_node *node,
struct lflow_input *lflow_input)
{
struct northd_data *northd_data = engine_get_input_data("northd", node);
- struct bfd_data *bfd_data = engine_get_input_data("bfd_sync", node);
+ struct bfd_sync_data *bfd_sync_data =
+ engine_get_input_data("bfd_sync", node);
struct static_routes_data *static_routes_data =
engine_get_input_data("static_routes", node);
struct route_policies_data *route_policies_data =
@@ -55,8 +56,6 @@ lflow_get_input_data(struct engine_node *node,
engine_get_input_data("lr_stateful", node);
struct ed_type_ls_stateful *ls_stateful_data =
engine_get_input_data("ls_stateful", node);
- struct ecmp_nexthop_data *nexthop_data =
- engine_get_input_data("ecmp_nexthop", node);
lflow_input->sbrec_logical_flow_table =
EN_OVSDB_GET(engine_get_input("SB_logical_flow", node));
@@ -82,11 +81,10 @@ lflow_get_input_data(struct engine_node *node,
lflow_input->meter_groups = &sync_meters_data->meter_groups;
lflow_input->lb_datapaths_map = &northd_data->lb_datapaths_map;
lflow_input->svc_monitor_map = &northd_data->svc_monitor_map;
- lflow_input->bfd_connections = &bfd_data->bfd_connections;
+ lflow_input->bfd_ports = &bfd_sync_data->bfd_ports;
lflow_input->parsed_routes = &static_routes_data->parsed_routes;
lflow_input->route_tables = &static_routes_data->route_tables;
lflow_input->route_policies = &route_policies_data->route_policies;
- lflow_input->nexthops_table = &nexthop_data->nexthops;
struct ed_type_global_config *global_config =
engine_get_input_data("global_config", node);
diff --git a/northd/en-northd.c b/northd/en-northd.c
index 63f93bbf4..24ed31517 100644
--- a/northd/en-northd.c
+++ b/northd/en-northd.c
@@ -392,34 +392,15 @@ en_bfd_sync_run(struct engine_node *node, void *data)
= engine_get_input_data("static_routes", node);
const struct nbrec_bfd_table *nbrec_bfd_table =
EN_OVSDB_GET(engine_get_input("NB_bfd", node));
- struct bfd_data *bfd_sync_data = data;
+ struct bfd_sync_data *bfd_sync_data = data;
- bfd_destroy(data);
- bfd_init(data);
+ bfd_sync_destroy(data);
+ bfd_sync_init(data);
bfd_table_sync(eng_ctx->ovnsb_idl_txn, nbrec_bfd_table,
&northd_data->lr_ports, &bfd_data->bfd_connections,
&route_policies_data->bfd_active_connections,
&static_routes_data->bfd_active_connections,
- &bfd_sync_data->bfd_connections);
- engine_set_node_state(node, EN_UPDATED);
-}
-
-void
-en_ecmp_nexthop_run(struct engine_node *node, void *data)
-{
- const struct engine_context *eng_ctx = engine_get_context();
- struct static_routes_data *static_routes_data =
- engine_get_input_data("static_routes", node);
- struct ecmp_nexthop_data *enh_data = data;
- const struct sbrec_ecmp_nexthop_table *sbrec_ecmp_nexthop_table =
- EN_OVSDB_GET(engine_get_input("SB_ecmp_nexthop", node));
-
- ecmp_nexthop_destroy(data);
- ecmp_nexthop_init(data);
- build_ecmp_nexthop_table(eng_ctx->ovnsb_idl_txn,
- &static_routes_data->parsed_routes,
- &enh_data->nexthops,
- sbrec_ecmp_nexthop_table);
+ &bfd_sync_data->bfd_ports);
engine_set_node_state(node, EN_UPDATED);
}
@@ -468,18 +449,8 @@ void
*en_bfd_sync_init(struct engine_node *node OVS_UNUSED,
struct engine_arg *arg OVS_UNUSED)
{
- struct bfd_data *data = xzalloc(sizeof *data);
- bfd_init(data);
- return data;
-}
-
-void
-*en_ecmp_nexthop_init(struct engine_node *node OVS_UNUSED,
- struct engine_arg *arg OVS_UNUSED)
-{
- struct ecmp_nexthop_data *data = xzalloc(sizeof *data);
-
- ecmp_nexthop_init(data);
+ struct bfd_sync_data *data = xzalloc(sizeof *data);
+ bfd_sync_init(data);
return data;
}
@@ -553,11 +524,5 @@ en_bfd_cleanup(void *data)
void
en_bfd_sync_cleanup(void *data)
{
- bfd_destroy(data);
-}
-
-void
-en_ecmp_nexthop_cleanup(void *data)
-{
- ecmp_nexthop_destroy(data);
+ bfd_sync_destroy(data);
}
diff --git a/northd/en-northd.h b/northd/en-northd.h
index 2666cc67e..631a7c17a 100644
--- a/northd/en-northd.h
+++ b/northd/en-northd.h
@@ -42,9 +42,5 @@ bool bfd_sync_northd_change_handler(struct engine_node *node,
void *data OVS_UNUSED);
void en_bfd_sync_run(struct engine_node *node, void *data);
void en_bfd_sync_cleanup(void *data OVS_UNUSED);
-void en_ecmp_nexthop_run(struct engine_node *node, void *data);
-void *en_ecmp_nexthop_init(struct engine_node *node OVS_UNUSED,
- struct engine_arg *arg OVS_UNUSED);
-void en_ecmp_nexthop_cleanup(void *data);
#endif /* EN_NORTHD_H */
diff --git a/northd/inc-proc-northd.c b/northd/inc-proc-northd.c
index cb880b439..1f79916a5 100644
--- a/northd/inc-proc-northd.c
+++ b/northd/inc-proc-northd.c
@@ -103,8 +103,7 @@ static unixctl_cb_func chassis_features_list;
SB_NODE(fdb, "fdb") \
SB_NODE(static_mac_binding, "static_mac_binding") \
SB_NODE(chassis_template_var, "chassis_template_var") \
- SB_NODE(logical_dp_group, "logical_dp_group") \
- SB_NODE(ecmp_nexthop, "ecmp_nexthop")
+ SB_NODE(logical_dp_group, "logical_dp_group")
enum sb_engine_node {
#define SB_NODE(NAME, NAME_STR) SB_##NAME,
@@ -163,7 +162,6 @@ static ENGINE_NODE(route_policies, "route_policies");
static ENGINE_NODE(static_routes, "static_routes");
static ENGINE_NODE(bfd, "bfd");
static ENGINE_NODE(bfd_sync, "bfd_sync");
-static ENGINE_NODE(ecmp_nexthop, "ecmp_nexthop");
void inc_proc_northd_init(struct ovsdb_idl_loop *nb,
struct ovsdb_idl_loop *sb)
@@ -266,9 +264,6 @@ void inc_proc_northd_init(struct ovsdb_idl_loop *nb,
engine_add_input(&en_bfd_sync, &en_route_policies, NULL);
engine_add_input(&en_bfd_sync, &en_northd, bfd_sync_northd_change_handler);
- engine_add_input(&en_ecmp_nexthop, &en_sb_ecmp_nexthop, NULL);
- engine_add_input(&en_ecmp_nexthop, &en_static_routes, NULL);
-
engine_add_input(&en_sync_meters, &en_nb_acl, NULL);
engine_add_input(&en_sync_meters, &en_nb_meter, NULL);
engine_add_input(&en_sync_meters, &en_sb_meter, NULL);
@@ -282,7 +277,6 @@ void inc_proc_northd_init(struct ovsdb_idl_loop *nb,
engine_add_input(&en_lflow, &en_bfd_sync, NULL);
engine_add_input(&en_lflow, &en_route_policies, NULL);
engine_add_input(&en_lflow, &en_static_routes, NULL);
- engine_add_input(&en_lflow, &en_ecmp_nexthop, NULL);
engine_add_input(&en_lflow, &en_global_config,
node_global_config_handler);
diff --git a/northd/northd.c b/northd/northd.c
index 5ad30d854..2c4703301 100644
--- a/northd/northd.c
+++ b/northd/northd.c
@@ -1126,7 +1126,7 @@ is_l3dgw_port(const struct ovn_port *op)
/* This function returns true if 'op' is a chassis resident
* derived port. False otherwise.
* There are 2 ways to check if 'op' is chassis resident port.
- * 1. op->sb->type is "chassisresident"
+ * 1. op->sb->type is "chassisredirect"
* 2. op->primary_port is not NULL. If op->primary_port is set,
* it means 'op' is derived from the ovn_port op->primary_port.
*
@@ -2136,7 +2136,7 @@ create_cr_port(struct ovn_port *op, struct hmap *ports,
struct ovn_port *crp = ovn_port_find(ports, redirect_name);
if (crp && crp->sb && crp->sb->datapath == op->od->sb) {
- ovn_port_set_nb(crp, NULL, op->nbrp);
+ ovn_port_set_nb(crp, op->nbsp, op->nbrp);
ovs_list_remove(&crp->list);
ovs_list_push_back(both_dbs, &crp->list);
} else {
@@ -2466,7 +2466,7 @@ join_logical_ports(const struct sbrec_port_binding_table *sbrec_pb_table,
}
- /* Create chassisresident port for the distributed gateway port's (DGP)
+ /* Create chassisredirect port for the distributed gateway port's (DGP)
* peer if
* - DGP's router has only one DGP and
* - Its peer is a logical switch port and
@@ -9633,16 +9633,21 @@ build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,
op->lflow_ref);
}
- /* For ND solicitations, we need to listen for both the
- * unicast IPv6 address and its all-nodes multicast address,
- * but always respond with the unicast IPv6 address. */
+ /* For ND solicitations:
+ * - Reply only to solicitations sent to the solicited-node
+ * multicast address(es) of the logical port's IPv6 address(es).
+ *
+ * - Do not reply to unicast ND solicitations. Let the target reply
+ * to them, so that the sender can monitor the target's liveness
+ * via unicast ND solicitations.
+ */
for (size_t j = 0; j < op->lsp_addrs[i].n_ipv6_addrs; j++) {
ds_clear(match);
- ds_put_format(match,
- "nd_ns && ip6.dst == {%s, %s} && nd.target == %s",
- op->lsp_addrs[i].ipv6_addrs[j].addr_s,
- op->lsp_addrs[i].ipv6_addrs[j].sn_addr_s,
- op->lsp_addrs[i].ipv6_addrs[j].addr_s);
+ ds_put_format(
+ match,
+ "nd_ns_mcast && ip6.dst == %s && nd.target == %s",
+ op->lsp_addrs[i].ipv6_addrs[j].sn_addr_s,
+ op->lsp_addrs[i].ipv6_addrs[j].addr_s);
ds_clear(actions);
ds_put_format(actions,
@@ -10437,18 +10442,11 @@ bfd_port_lookup(const struct hmap *bfd_map, const char *logical_port,
}
static bool
-bfd_is_port_running(const struct hmap *bfd_map, const char *port)
+bfd_is_port_running(const struct sset *bfd_ports, const char *port)
{
- struct bfd_entry *bfd_e;
- HMAP_FOR_EACH (bfd_e, hmap_node, bfd_map) {
- if (!strcmp(bfd_e->logical_port, port)) {
- return true;
- }
- }
- return false;
+ return !!sset_find(bfd_ports, port);
}
-
#define BFD_DEF_MINTX 1000 /* 1s */
#define BFD_DEF_MINRX 1000 /* 1s */
#define BFD_DEF_DETECT_MULT 5
@@ -10536,17 +10534,18 @@ bfd_table_sync(struct ovsdb_idl_txn *ovnsb_txn,
const struct hmap *bfd_connections,
const struct hmap *rp_bfd_connections,
const struct hmap *sr_bfd_connections,
- struct hmap *sync_bfd_connections)
+ struct sset *bfd_ports)
{
if (!ovnsb_txn) {
return;
}
unsigned long *bfd_src_ports = bitmap_allocate(BFD_UDP_SRC_PORT_LEN);
+ struct hmap sync_bfd_connections = HMAP_INITIALIZER(&sync_bfd_connections);
struct bfd_entry *bfd_e;
HMAP_FOR_EACH (bfd_e, hmap_node, bfd_connections) {
- struct bfd_entry *e = bfd_alloc_entry(sync_bfd_connections,
+ struct bfd_entry *e = bfd_alloc_entry(&sync_bfd_connections,
bfd_e->logical_port,
bfd_e->dst_ip, bfd_e->status);
e->nb_bt = bfd_e->nb_bt;
@@ -10561,7 +10560,7 @@ bfd_table_sync(struct ovsdb_idl_txn *ovnsb_txn,
const struct nbrec_bfd *nb_bt;
NBREC_BFD_TABLE_FOR_EACH (nb_bt, nbrec_bfd_table) {
- bfd_e = bfd_port_lookup(sync_bfd_connections, nb_bt->logical_port,
+ bfd_e = bfd_port_lookup(&sync_bfd_connections, nb_bt->logical_port,
nb_bt->dst_ip);
if (!bfd_e) {
continue;
@@ -10619,16 +10618,17 @@ bfd_table_sync(struct ovsdb_idl_txn *ovnsb_txn,
}
}
+ sset_add(bfd_ports, nb_bt->logical_port);
bfd_e->stale = false;
}
- HMAP_FOR_EACH_SAFE (bfd_e, hmap_node, sync_bfd_connections) {
+ HMAP_FOR_EACH_POP (bfd_e, hmap_node, &sync_bfd_connections) {
if (bfd_e->stale) {
- hmap_remove(sync_bfd_connections, &bfd_e->hmap_node);
sbrec_bfd_delete(bfd_e->sb_bt);
- bfd_erase_entry(bfd_e);
}
+ bfd_erase_entry(bfd_e);
}
+ hmap_destroy(&sync_bfd_connections);
bitmap_free(bfd_src_ports);
}
@@ -10665,64 +10665,6 @@ build_bfd_map(const struct nbrec_bfd_table *nbrec_bfd_table,
}
}
-void
-build_ecmp_nexthop_table(
- struct ovsdb_idl_txn *ovnsb_txn,
- struct hmap *routes,
- struct simap *nexthops,
- const struct sbrec_ecmp_nexthop_table *sbrec_ecmp_nexthop_table)
-{
- if (!ovnsb_txn) {
- return;
- }
-
- unsigned long *nexthop_ids = bitmap_allocate(ECMP_NEXTHOP_IDS_LEN);
- const struct sbrec_ecmp_nexthop *sb_ecmp_nexthop;
- SBREC_ECMP_NEXTHOP_TABLE_FOR_EACH (sb_ecmp_nexthop,
- sbrec_ecmp_nexthop_table) {
- simap_put(nexthops, sb_ecmp_nexthop->nexthop,
- sb_ecmp_nexthop->id);
- bitmap_set1(nexthop_ids, sb_ecmp_nexthop->id);
- }
-
- struct sset nb_nexthops_sset = SSET_INITIALIZER(&nb_nexthops_sset);
-
- struct parsed_route *pr;
- HMAP_FOR_EACH (pr, key_node, routes) {
- if (!pr->ecmp_symmetric_reply) {
- continue;
- }
-
- const struct nbrec_logical_router_static_route *r = pr->route;
- if (!simap_contains(nexthops, r->nexthop)) {
- int id = bitmap_scan(nexthop_ids, 0, 1, ECMP_NEXTHOP_IDS_LEN);
- if (id == ECMP_NEXTHOP_IDS_LEN) {
- static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 1);
- VLOG_WARN_RL(&rl, "nexthop id address space is exhausted");
- continue;
- }
- bitmap_set1(nexthop_ids, id);
- simap_put(nexthops, r->nexthop, id);
-
- sb_ecmp_nexthop = sbrec_ecmp_nexthop_insert(ovnsb_txn);
- sbrec_ecmp_nexthop_set_nexthop(sb_ecmp_nexthop, r->nexthop);
- sbrec_ecmp_nexthop_set_id(sb_ecmp_nexthop, id);
- }
- sset_add(&nb_nexthops_sset, r->nexthop);
- }
-
- SBREC_ECMP_NEXTHOP_TABLE_FOR_EACH_SAFE (sb_ecmp_nexthop,
- sbrec_ecmp_nexthop_table) {
- if (!sset_contains(&nb_nexthops_sset, sb_ecmp_nexthop->nexthop)) {
- simap_find_and_delete(nexthops, sb_ecmp_nexthop->nexthop);
- sbrec_ecmp_nexthop_delete(sb_ecmp_nexthop);
- }
- }
-
- sset_destroy(&nb_nexthops_sset);
- bitmap_free(nexthop_ids);
-}
-
/* Returns a string of the IP address of the router port 'op' that
* overlaps with 'ip_s". If one is not found, returns NULL.
*
@@ -11113,7 +11055,7 @@ parsed_route_lookup(struct hmap *routes, size_t hash,
static void
parsed_routes_add(struct ovn_datapath *od, const struct hmap *lr_ports,
const struct nbrec_logical_router_static_route *route,
- struct hmap *bfd_connections,
+ const struct hmap *bfd_connections,
struct hmap *routes, struct simap *route_tables,
struct hmap *bfd_active_connections)
{
@@ -11226,7 +11168,7 @@ parsed_routes_add(struct ovn_datapath *od, const struct hmap *lr_ports,
void
build_parsed_routes(struct ovn_datapath *od, const struct hmap *lr_ports,
- struct hmap *bfd_connections, struct hmap *routes,
+ const struct hmap *bfd_connections, struct hmap *routes,
struct simap *route_tables,
struct hmap *bfd_active_connections)
{
@@ -11512,8 +11454,7 @@ add_ecmp_symmetric_reply_flows(struct lflow_table *lflows,
struct ovn_port *out_port,
const struct parsed_route *route,
struct ds *route_match,
- struct lflow_ref *lflow_ref,
- struct simap *nexthops_table)
+ struct lflow_ref *lflow_ref)
{
const struct nbrec_logical_router_static_route *st_route = route->route;
struct ds match = DS_EMPTY_INITIALIZER;
@@ -11548,15 +11489,9 @@ add_ecmp_symmetric_reply_flows(struct lflow_table *lflows,
ds_put_cstr(&match, " && (ct.new || ct.est)");
ds_put_format(&actions,
"ct_commit { ct_label.ecmp_reply_eth = eth.src; "
- "ct_mark.ecmp_reply_port = %" PRId64 ";",
+ "ct_mark.ecmp_reply_port = %" PRId64 ";}; "
+ "next;",
out_port->sb->tunnel_key);
-
- struct simap_node *n = simap_find(nexthops_table, st_route->nexthop);
- if (n) {
- ds_put_format(&actions, " ct_label.label = %d;", n->data);
- }
- ds_put_cstr(&actions, " }; next;");
-
ovn_lflow_add_with_hint(lflows, od, S_ROUTER_IN_ECMP_STATEFUL, 100,
ds_cstr(&match), ds_cstr(&actions),
&st_route->header_,
@@ -11613,8 +11548,7 @@ add_ecmp_symmetric_reply_flows(struct lflow_table *lflows,
static void
build_ecmp_route_flow(struct lflow_table *lflows, struct ovn_datapath *od,
const struct hmap *lr_ports, struct ecmp_groups_node *eg,
- struct lflow_ref *lflow_ref,
- struct simap *nexthops_table)
+ struct lflow_ref *lflow_ref)
{
bool is_ipv4 = IN6_IS_ADDR_V4MAPPED(&eg->prefix);
@@ -11631,21 +11565,27 @@ build_ecmp_route_flow(struct lflow_table *lflows, struct ovn_datapath *od,
struct ds actions = DS_EMPTY_INITIALIZER;
ds_put_format(&actions, "ip.ttl--; flags.loopback = 1; %s = %"PRIu16
- "; %s = select(", REG_ECMP_GROUP_ID, eg->id,
- REG_ECMP_MEMBER_ID);
+ "; %s = ", REG_ECMP_GROUP_ID, eg->id, REG_ECMP_MEMBER_ID);
- bool is_first = true;
- LIST_FOR_EACH (er, list_node, &eg->route_list) {
- if (is_first) {
- is_first = false;
- } else {
- ds_put_cstr(&actions, ", ");
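+ /* With a single member there is no need for a select(); pick the
+ * only member directly and continue in the pipeline. */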
+ if (!ovs_list_is_singleton(&eg->route_list)) {
+ bool is_first = true;
+
+ ds_put_cstr(&actions, "select(");
+ LIST_FOR_EACH (er, list_node, &eg->route_list) {
+ if (is_first) {
+ is_first = false;
+ } else {
+ ds_put_cstr(&actions, ", ");
+ }
+ ds_put_format(&actions, "%"PRIu16, er->id);
}
- ds_put_format(&actions, "%"PRIu16, er->id);
+ ds_put_cstr(&actions, ");");
+ } else {
+ er = CONTAINER_OF(ovs_list_front(&eg->route_list),
+ struct ecmp_route_list_node, list_node);
+ ds_put_format(&actions, "%"PRIu16"; next;", er->id);
}
- ds_put_cstr(&actions, ");");
-
ovn_lflow_add(lflows, od, S_ROUTER_IN_IP_ROUTING, priority,
ds_cstr(&route_match), ds_cstr(&actions),
lflow_ref);
@@ -11671,7 +11611,7 @@ build_ecmp_route_flow(struct lflow_table *lflows, struct ovn_datapath *od,
out_port->key)) {
add_ecmp_symmetric_reply_flows(lflows, od, lrp_addr_s, out_port,
route_, &route_match,
- lflow_ref, nexthops_table);
+ lflow_ref);
}
ds_clear(&match);
ds_put_format(&match, REG_ECMP_GROUP_ID" == %"PRIu16" && "
@@ -11704,7 +11644,7 @@ add_route(struct lflow_table *lflows, struct ovn_datapath *od,
const struct ovn_port *op, const char *lrp_addr_s,
const char *network_s, int plen, const char *gateway,
bool is_src_route, const uint32_t rtb_id,
- const struct hmap *bfd_connections,
+ const struct sset *bfd_ports,
const struct ovsdb_idl_row *stage_hint, bool is_discard_route,
int ofs, struct lflow_ref *lflow_ref)
{
@@ -11753,7 +11693,7 @@ add_route(struct lflow_table *lflows, struct ovn_datapath *od,
priority, ds_cstr(&match),
ds_cstr(&actions), stage_hint,
lflow_ref);
- if (op && bfd_is_port_running(bfd_connections, op->key)) {
+ if (op && bfd_is_port_running(bfd_ports, op->key)) {
ds_put_format(&match, " && udp.dst == 3784");
ovn_lflow_add_with_hint(lflows, op->od,
S_ROUTER_IN_IP_ROUTING,
@@ -11770,7 +11710,7 @@ static void
build_static_route_flow(struct lflow_table *lflows, struct ovn_datapath *od,
const struct hmap *lr_ports,
const struct parsed_route *route_,
- const struct hmap *bfd_connections,
+ const struct sset *bfd_ports,
struct lflow_ref *lflow_ref)
{
const char *lrp_addr_s = NULL;
@@ -11795,7 +11735,7 @@ build_static_route_flow(struct lflow_table *lflows, struct ovn_datapath *od,
add_route(lflows, route_->is_discard_route ? od : out_port->od, out_port,
lrp_addr_s, prefix_s, route_->plen, route->nexthop,
route_->is_src_route, route_->route_table_id,
- bfd_connections, &route->header_, route_->is_discard_route,
+ bfd_ports, &route->header_, route_->is_discard_route,
ofs, lflow_ref);
free(prefix_s);
@@ -12617,7 +12557,7 @@ build_lrouter_port_nat_arp_nd_flow(struct ovn_port *op,
if (op->peer && op->peer->cr_port) {
/* We don't add the below flows if the router port's peer has
- * a chassisresident port. That's because routing is centralized on
+ * a chassisredirect port. That's because routing is centralized on
* the gateway chassis for the router port networks/subnets.
*/
return;
@@ -12947,10 +12887,10 @@ build_lrouter_force_snat_flows_op(struct ovn_port *op,
static void
build_lrouter_bfd_flows(struct lflow_table *lflows, struct ovn_port *op,
const struct shash *meter_groups,
- const struct hmap *bfd_connections,
+ const struct sset *bfd_ports,
struct lflow_ref *lflow_ref)
{
- if (!bfd_is_port_running(bfd_connections, op->key)) {
+ if (!bfd_is_port_running(bfd_ports, op->key)) {
return;
}
@@ -13546,7 +13486,7 @@ build_ip_routing_pre_flows_for_lrouter(struct ovn_datapath *od,
*/
static void
build_ip_routing_flows_for_lrp(struct ovn_port *op,
- const struct hmap *bfd_connections,
+ const struct sset *bfd_ports,
struct lflow_table *lflows,
struct lflow_ref *lflow_ref)
{
@@ -13555,7 +13495,7 @@ build_ip_routing_flows_for_lrp(struct ovn_port *op,
add_route(lflows, op->od, op, op->lrp_networks.ipv4_addrs[i].addr_s,
op->lrp_networks.ipv4_addrs[i].network_s,
op->lrp_networks.ipv4_addrs[i].plen, NULL, false, 0,
- bfd_connections, &op->nbrp->header_, false,
+ bfd_ports, &op->nbrp->header_, false,
ROUTE_PRIO_OFFSET_CONNECTED, lflow_ref);
}
@@ -13563,7 +13503,7 @@ build_ip_routing_flows_for_lrp(struct ovn_port *op,
add_route(lflows, op->od, op, op->lrp_networks.ipv6_addrs[i].addr_s,
op->lrp_networks.ipv6_addrs[i].network_s,
op->lrp_networks.ipv6_addrs[i].plen, NULL, false, 0,
- bfd_connections, &op->nbrp->header_, false,
+ bfd_ports, &op->nbrp->header_, false,
ROUTE_PRIO_OFFSET_CONNECTED, lflow_ref);
}
}
@@ -13572,8 +13512,8 @@ static void
build_static_route_flows_for_lrouter(
struct ovn_datapath *od, struct lflow_table *lflows,
const struct hmap *lr_ports, struct hmap *parsed_routes,
- struct simap *route_tables, const struct hmap *bfd_connections,
- struct lflow_ref *lflow_ref, struct simap *nexthops_table)
+ struct simap *route_tables, const struct sset *bfd_ports,
+ struct lflow_ref *lflow_ref)
{
ovs_assert(od->nbr);
ovn_lflow_add_default_drop(lflows, od, S_ROUTER_IN_IP_ROUTING_ECMP,
@@ -13607,6 +13547,11 @@ build_static_route_flows_for_lrouter(
if (group) {
ecmp_groups_add_route(group, route);
}
+ } else if (route->ecmp_symmetric_reply) {
+ /* Traffic for symmetric reply routes has to be conntracked
+ * even if there is only one next-hop, in case another next-hop
+ * is added later. */
+ ecmp_groups_add(&ecmp_groups, route);
} else {
unique_routes_add(&unique_routes, route);
}
@@ -13615,13 +13560,12 @@ build_static_route_flows_for_lrouter(
HMAP_FOR_EACH (group, hmap_node, &ecmp_groups) {
/* add a flow in IP_ROUTING, and one flow for each member in
* IP_ROUTING_ECMP. */
- build_ecmp_route_flow(lflows, od, lr_ports, group, lflow_ref,
- nexthops_table);
+ build_ecmp_route_flow(lflows, od, lr_ports, group, lflow_ref);
}
const struct unique_routes_node *ur;
HMAP_FOR_EACH (ur, hmap_node, &unique_routes) {
build_static_route_flow(lflows, od, lr_ports, ur->route,
- bfd_connections, lflow_ref);
+ bfd_ports, lflow_ref);
}
ecmp_groups_destroy(&ecmp_groups);
unique_routes_destroy(&unique_routes);
@@ -14002,6 +13946,234 @@ build_arp_resolve_flows_for_lrp(struct ovn_port *op,
}
}
+static void
+build_routing_protocols_redirect_rule__(
+ const char *s_addr, const char *redirect_port_name, int protocol_port,
+ const char *proto, bool is_ipv6, struct ovn_port *ls_peer,
+ struct lflow_table *lflows, struct ds *match, struct ds *actions,
+ struct lflow_ref *lflow_ref)
+{
+ int ip_ver = is_ipv6 ? 6 : 4;
+ ds_clear(actions);
+ ds_put_format(actions, "outport = \"%s\"; output;", redirect_port_name);
+
+ /* Redirect packets in the input pipeline destined for the LR's IP
+ * and the routing protocol's port to the LSP specified in the
+ * 'routing-protocol-redirect' option. */
+ ds_clear(match);
+ ds_put_format(match, "ip%d.dst == %s && %s.dst == %d", ip_ver, s_addr,
+ proto, protocol_port);
+ ovn_lflow_add(lflows, ls_peer->od, S_SWITCH_IN_L2_LKUP, 100,
+ ds_cstr(match),
+ ds_cstr(actions),
+ lflow_ref);
+
+ /* To accommodate the "peer" nature of the routing daemons, also
+ * redirect replies to the daemons' client requests. */
+ ds_clear(match);
+ ds_put_format(match, "ip%d.dst == %s && %s.src == %d", ip_ver, s_addr,
+ proto, protocol_port);
+ ovn_lflow_add(lflows, ls_peer->od, S_SWITCH_IN_L2_LKUP, 100,
+ ds_cstr(match),
+ ds_cstr(actions),
+ lflow_ref);
+}
+
+static void
+apply_routing_protocols_redirect__(
+ const char *s_addr, const char *redirect_port_name, int protocol_flags,
+ bool is_ipv6, struct ovn_port *ls_peer, struct lflow_table *lflows,
+ struct ds *match, struct ds *actions, struct lflow_ref *lflow_ref)
+{
+ if (protocol_flags & REDIRECT_BGP) {
+ build_routing_protocols_redirect_rule__(s_addr, redirect_port_name,
+ 179, "tcp", is_ipv6, ls_peer,
+ lflows, match, actions,
+ lflow_ref);
+ }
+
+ if (protocol_flags & REDIRECT_BFD) {
+ build_routing_protocols_redirect_rule__(s_addr, redirect_port_name,
+ 3784, "udp", is_ipv6, ls_peer,
+ lflows, match, actions,
+ lflow_ref);
+ }
+
+ /* Because the redirected port shares IP and MAC addresses with the LRP,
+ * special consideration needs to be given to the signaling protocols. */
+ ds_clear(actions);
+ ds_put_format(actions,
+ "clone { outport = \"%s\"; output; }; "
+ "outport = %s; output;",
+ redirect_port_name, ls_peer->json_key);
+ if (is_ipv6) {
+ /* Ensure that the redirect port receives a copy of NA messages
+ * destined to its IP. */
+ ds_clear(match);
+ ds_put_format(match, "ip6.dst == %s && nd_na", s_addr);
+ ovn_lflow_add(lflows, ls_peer->od, S_SWITCH_IN_L2_LKUP, 100,
+ ds_cstr(match),
+ ds_cstr(actions),
+ lflow_ref);
+ } else {
+ /* Ensure that the redirect port receives a copy of ARP replies
+ * destined to its IP. */
+ ds_clear(match);
+ ds_put_format(match, "arp.op == 2 && arp.tpa == %s", s_addr);
+ ovn_lflow_add(lflows, ls_peer->od, S_SWITCH_IN_L2_LKUP, 100,
+ ds_cstr(match),
+ ds_cstr(actions),
+ lflow_ref);
+ }
+}
+
+static int
+parse_redirected_routing_protocols(struct ovn_port *lrp) {
+ int redirected_protocol_flags = 0;
+ const char *redirect_protocols = smap_get(&lrp->nbrp->options,
+ "routing-protocols");
+ if (!redirect_protocols) {
+ return redirected_protocol_flags;
+ }
+
+ char *proto;
+ char *save_ptr = NULL;
+ char *tokstr = xstrdup(redirect_protocols);
+ for (proto = strtok_r(tokstr, ",", &save_ptr); proto != NULL;
+ proto = strtok_r(NULL, ",", &save_ptr)) {
+ if (!strcmp(proto, "BGP")) {
+ redirected_protocol_flags |= REDIRECT_BGP;
+ continue;
+ }
+
+ if (!strcmp(proto, "BFD")) {
+ redirected_protocol_flags |= REDIRECT_BFD;
+ continue;
+ }
+
+ static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
+ VLOG_WARN_RL(&rl, "Option 'routing-protocols' encountered unknown "
+ "value %s",
+ proto);
+ }
+ free(tokstr);
+ return redirected_protocol_flags;
+}
+
+static void
+build_lrouter_routing_protocol_redirect(
+ struct ovn_port *op, struct lflow_table *lflows, struct ds *match,
+ struct ds *actions, struct lflow_ref *lflow_ref,
+ const struct hmap *ls_ports)
+{
+ /* The LRP has to have a peer. */
+ if (!op->peer) {
+ return;
+ }
+
+ /* The LRP has to have an NB record. */
+ if (!op->nbrp) {
+ return;
+ }
+
+ /* Proceed only for LRPs that have the 'routing-protocol-redirect'
+ * option set. Its value is the name of the LSP to which the routing
+ * protocol traffic will be redirected. */
+ const char *redirect_port_name = smap_get(&op->nbrp->options,
+ "routing-protocol-redirect");
+ if (!redirect_port_name) {
+ return;
+ }
+
+ if (op->cr_port) {
+ static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
+ VLOG_WARN_RL(&rl, "Option 'routing-protocol-redirect' is not "
+ "supported on Distributed Gateway Port '%s'",
+ op->key);
+ return;
+ }
+
+ /* Ensure that the LSP to which the routing protocol traffic is
+ * redirected exists. */
+ struct ovn_port *lsp_in_peer = ovn_port_find(ls_ports,
+ redirect_port_name);
+ if (!lsp_in_peer) {
+ static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
+ VLOG_WARN_RL(&rl, "Option 'routing-protocol-redirect' set on Logical "
+ "Router Port '%s' refers to non-existent Logical "
+ "Switch Port. Routing protocol redirecting won't be "
+ "configured.",
+ op->key);
+ return;
+ }
+ if (lsp_in_peer->od != op->peer->od) {
+ static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
+ VLOG_WARN_RL(&rl, "Logical Router Port '%s' is connected to a "
+ "different switch than the Logical Switch Port "
+ "'%s' defined in its 'routing-protocol-redirect' "
+ "option. Routing protocol redirecting won't be "
+ "configured.",
+ op->key, redirect_port_name);
+ return;
+ }
+
+ int redirected_protocols = parse_redirected_routing_protocols(op);
+ if (!redirected_protocols) {
+ static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
+ VLOG_WARN_RL(&rl, "Option 'routing-protocol-redirect' is set on "
+ "Logical Router Port '%s' but no known protocols "
+ "were set via 'routing-protocols' options. This "
+ "configuration has no effect.",
+ op->key);
+ return;
+ }
+
+ /* Redirect traffic destined for the LRP's IPs and the specified
+ * routing protocol ports to the port defined in the
+ * 'routing-protocol-redirect' option. */
+ for (size_t i = 0; i < op->lrp_networks.n_ipv4_addrs; i++) {
+ const char *ip_s = op->lrp_networks.ipv4_addrs[i].addr_s;
+ apply_routing_protocols_redirect__(ip_s, redirect_port_name,
+ redirected_protocols, false,
+ op->peer, lflows, match, actions,
+ lflow_ref);
+ }
+ for (size_t i = 0; i < op->lrp_networks.n_ipv6_addrs; i++) {
+ const char *ip_s = op->lrp_networks.ipv6_addrs[i].addr_s;
+ apply_routing_protocols_redirect__(ip_s, redirect_port_name,
+ redirected_protocols, true,
+ op->peer, lflows, match, actions,
+ lflow_ref);
+ }
+
+ /* Drop ARP replies and IPv6 RA/NA packets originating from the
+ * 'routing-protocol-redirect' LSP. As this port shares IP and MAC
+ * addresses with the LRP, we don't want to create duplicates. */
+ ds_clear(match);
+ ds_put_format(match, "inport == \"%s\" && arp.op == 2",
+ redirect_port_name);
+ ovn_lflow_add(lflows, op->peer->od, S_SWITCH_IN_CHECK_PORT_SEC, 80,
+ ds_cstr(match),
+ REGBIT_PORT_SEC_DROP " = 1; next;",
+ lflow_ref);
+
+ ds_clear(match);
+ ds_put_format(match, "inport == \"%s\" && nd_na",
+ redirect_port_name);
+ ovn_lflow_add(lflows, op->peer->od, S_SWITCH_IN_CHECK_PORT_SEC, 80,
+ ds_cstr(match),
+ REGBIT_PORT_SEC_DROP " = 1; next;",
+ lflow_ref);
+
+ ds_clear(match);
+ ds_put_format(match, "inport == \"%s\" && nd_ra",
+ redirect_port_name);
+ ovn_lflow_add(lflows, op->peer->od, S_SWITCH_IN_CHECK_PORT_SEC, 80,
+ ds_cstr(match),
+ REGBIT_PORT_SEC_DROP " = 1; next;",
+ lflow_ref);
+}
+
/* This function adds ARP resolve flows related to a LSP. */
static void
build_arp_resolve_flows_for_lsp(
@@ -15095,7 +15267,7 @@ build_lrouter_ipv4_ip_input(struct ovn_port *op,
struct lflow_table *lflows,
struct ds *match, struct ds *actions,
const struct shash *meter_groups,
- const struct hmap *bfd_connections,
+ const struct sset *bfd_ports,
struct lflow_ref *lflow_ref)
{
ovs_assert(op->nbrp);
@@ -15137,8 +15309,7 @@ build_lrouter_ipv4_ip_input(struct ovn_port *op,
}
/* BFD msg handling */
- build_lrouter_bfd_flows(lflows, op, meter_groups, bfd_connections,
- lflow_ref);
+ build_lrouter_bfd_flows(lflows, op, meter_groups, bfd_ports, lflow_ref);
/* ICMP time exceeded */
struct ds ip_ds = DS_EMPTY_INITIALIZER;
@@ -15824,7 +15995,6 @@ build_lrouter_out_snat_flow(struct lflow_table *lflows,
build_lrouter_out_snat_match(lflows, od, nat, match, distributed_nat,
cidr_bits, is_v6, l3dgw_port, lflow_ref,
false);
- size_t original_match_len = match->length;
if (!od->is_gw_router && distributed_nat) {
ds_put_format(actions, "eth.src = "ETH_ADDR_FMT"; ",
@@ -15846,14 +16016,13 @@ build_lrouter_out_snat_flow(struct lflow_table *lflows,
/* For the SNAT networks, we need to make sure that connections are
* properly tracked so we can decide whether to perform SNAT on traffic
* exiting the network. */
- if (features->ct_commit_to_zone && !strcmp(nat->type, "snat") &&
- !od->is_gw_router) {
+ if (features->ct_commit_to_zone && features->ct_next_zone &&
+ !strcmp(nat->type, "snat") && !od->is_gw_router) {
/* For traffic that comes from SNAT network, initiate CT state before
* entering S_ROUTER_OUT_SNAT to allow matching on various CT states.
*/
- ds_truncate(match, original_match_len);
ovn_lflow_add(lflows, od, S_ROUTER_OUT_POST_UNDNAT, 70,
- ds_cstr(match), "ct_snat;",
+ ds_cstr(match), "ct_next(snat);",
lflow_ref);
build_lrouter_out_snat_match(lflows, od, nat, match,
@@ -16116,7 +16285,7 @@ lrouter_check_nat_entry(const struct ovn_datapath *od,
*distributed = false;
/* NAT cannnot be distributed if the DGP's peer
- * has a chassisresident port (as the routing is centralized
+ * has a chassisredirect port (as the routing is centralized
* on the gateway chassis for the DGP's networks/subnets.)
*/
struct ovn_port *l3dgw_port = *nat_l3dgw_port;
@@ -16591,7 +16760,7 @@ build_lsp_lflows_for_lbnats(struct ovn_port *lsp,
static void
build_routable_flows_for_router_port(
struct ovn_port *lrp, const struct lr_stateful_record *lr_stateful_rec,
- const struct hmap *bfd_connections,
+ const struct sset *bfd_ports,
struct lflow_table *lflows,
struct ds *match,
struct ds *actions)
@@ -16628,7 +16797,7 @@ build_routable_flows_for_router_port(
router_port->lrp_networks.ipv4_addrs[0].addr_s,
laddrs->ipv4_addrs[k].network_s,
laddrs->ipv4_addrs[k].plen, NULL, false, 0,
- bfd_connections, &router_port->nbrp->header_,
+ bfd_ports, &router_port->nbrp->header_,
false, ROUTE_PRIO_OFFSET_CONNECTED,
lrp->stateful_lflow_ref);
}
@@ -16739,7 +16908,7 @@ build_lrp_lflows_for_lbnats(struct ovn_port *op,
static void
build_lbnat_lflows_iterate_by_lrp(
struct ovn_port *op, const struct lr_stateful_table *lr_stateful_table,
- const struct shash *meter_groups, const struct hmap *bfd_connections,
+ const struct shash *meter_groups, const struct sset *bfd_ports,
struct ds *match, struct ds *actions, struct lflow_table *lflows)
{
ovs_assert(op->nbrp);
@@ -16752,7 +16921,7 @@ build_lbnat_lflows_iterate_by_lrp(
build_lrp_lflows_for_lbnats(op, lr_stateful_rec, meter_groups, match,
actions, lflows);
- build_routable_flows_for_router_port(op, lr_stateful_rec, bfd_connections,
+ build_routable_flows_for_router_port(op, lr_stateful_rec, bfd_ports,
lflows, match, actions);
}
@@ -16816,7 +16985,7 @@ struct lswitch_flow_build_info {
const struct shash *meter_groups;
const struct hmap *lb_dps_map;
const struct hmap *svc_monitor_map;
- const struct hmap *bfd_connections;
+ const struct sset *bfd_ports;
const struct chassis_features *features;
char *svc_check_match;
struct ds match;
@@ -16827,7 +16996,6 @@ struct lswitch_flow_build_info {
struct hmap *parsed_routes;
struct hmap *route_policies;
struct simap *route_tables;
- struct simap *nexthops_table;
};
/* Helper function to combine all lflow generation which is iterated by
@@ -16874,8 +17042,7 @@ build_lswitch_and_lrouter_iterate_by_lr(struct ovn_datapath *od,
build_ip_routing_pre_flows_for_lrouter(od, lsi->lflows, NULL);
build_static_route_flows_for_lrouter(od, lsi->lflows, lsi->lr_ports,
lsi->parsed_routes, lsi->route_tables,
- lsi->bfd_connections, NULL,
- lsi->nexthops_table);
+ lsi->bfd_ports, NULL);
build_mcast_lookup_flows_for_lrouter(od, lsi->lflows, &lsi->match,
&lsi->actions, NULL);
build_ingress_policy_flows_for_lrouter(od, lsi->lflows, lsi->lr_ports,
@@ -16946,7 +17113,7 @@ build_lswitch_and_lrouter_iterate_by_lrp(struct ovn_port *op,
&lsi->actions, op->lflow_ref);
build_neigh_learning_flows_for_lrouter_port(op, lsi->lflows, &lsi->match,
&lsi->actions, op->lflow_ref);
- build_ip_routing_flows_for_lrp(op, lsi->bfd_connections,
+ build_ip_routing_flows_for_lrp(op, lsi->bfd_ports,
lsi->lflows, op->lflow_ref);
build_ND_RA_flows_for_lrouter_port(op, lsi->lflows, &lsi->match,
&lsi->actions, lsi->meter_groups,
@@ -16965,10 +17132,13 @@ build_lswitch_and_lrouter_iterate_by_lrp(struct ovn_port *op,
lsi->meter_groups,
op->lflow_ref);
build_lrouter_ipv4_ip_input(op, lsi->lflows, &lsi->match, &lsi->actions,
- lsi->meter_groups, lsi->bfd_connections,
+ lsi->meter_groups, lsi->bfd_ports,
op->lflow_ref);
build_lrouter_icmp_packet_toobig_admin_flows(op, lsi->lflows, &lsi->match,
&lsi->actions, op->lflow_ref);
+ build_lrouter_routing_protocol_redirect(op, lsi->lflows, &lsi->match,
+ &lsi->actions, op->lflow_ref,
+ lsi->ls_ports);
}
static void *
@@ -17055,7 +17225,7 @@ build_lflows_thread(void *arg)
build_lswitch_and_lrouter_iterate_by_lrp(op, lsi);
build_lbnat_lflows_iterate_by_lrp(
op, lsi->lr_stateful_table, lsi->meter_groups,
- lsi->bfd_connections, &lsi->match, &lsi->actions,
+ lsi->bfd_ports, &lsi->match, &lsi->actions,
lsi->lflows);
}
}
@@ -17196,14 +17366,13 @@ build_lswitch_and_lrouter_flows(
const struct shash *meter_groups,
const struct hmap *lb_dps_map,
const struct hmap *svc_monitor_map,
- const struct hmap *bfd_connections,
+ const struct sset *bfd_ports,
const struct chassis_features *features,
const char *svc_monitor_mac,
const struct sampling_app_table *sampling_apps,
struct hmap *parsed_routes,
struct hmap *route_policies,
- struct simap *route_tables,
- struct simap *nexthops_table)
+ struct simap *route_tables)
{
char *svc_check_match = xasprintf("eth.dst == %s", svc_monitor_mac);
@@ -17232,7 +17401,7 @@ build_lswitch_and_lrouter_flows(
lsiv[index].meter_groups = meter_groups;
lsiv[index].lb_dps_map = lb_dps_map;
lsiv[index].svc_monitor_map = svc_monitor_map;
- lsiv[index].bfd_connections = bfd_connections;
+ lsiv[index].bfd_ports = bfd_ports;
lsiv[index].features = features;
lsiv[index].svc_check_match = svc_check_match;
lsiv[index].thread_lflow_counter = 0;
@@ -17241,7 +17410,6 @@ build_lswitch_and_lrouter_flows(
lsiv[index].parsed_routes = parsed_routes;
lsiv[index].route_tables = route_tables;
lsiv[index].route_policies = route_policies;
- lsiv[index].nexthops_table = nexthops_table;
ds_init(&lsiv[index].match);
ds_init(&lsiv[index].actions);
@@ -17278,7 +17446,7 @@ build_lswitch_and_lrouter_flows(
.meter_groups = meter_groups,
.lb_dps_map = lb_dps_map,
.svc_monitor_map = svc_monitor_map,
- .bfd_connections = bfd_connections,
+ .bfd_ports = bfd_ports,
.features = features,
.svc_check_match = svc_check_match,
.svc_monitor_mac = svc_monitor_mac,
@@ -17288,7 +17456,6 @@ build_lswitch_and_lrouter_flows(
.route_policies = route_policies,
.match = DS_EMPTY_INITIALIZER,
.actions = DS_EMPTY_INITIALIZER,
- .nexthops_table = nexthops_table,
};
/* Combined build - all lflow generation from lswitch and lrouter
@@ -17318,7 +17485,7 @@ build_lswitch_and_lrouter_flows(
build_lswitch_and_lrouter_iterate_by_lrp(op, &lsi);
build_lbnat_lflows_iterate_by_lrp(op, lsi.lr_stateful_table,
lsi.meter_groups,
- lsi.bfd_connections,
+ lsi.bfd_ports,
&lsi.match,
&lsi.actions,
lsi.lflows);
@@ -17449,14 +17616,13 @@ void build_lflows(struct ovsdb_idl_txn *ovnsb_txn,
input_data->meter_groups,
input_data->lb_datapaths_map,
input_data->svc_monitor_map,
- input_data->bfd_connections,
+ input_data->bfd_ports,
input_data->features,
input_data->svc_monitor_mac,
input_data->sampling_apps,
input_data->parsed_routes,
input_data->route_policies,
- input_data->route_tables,
- input_data->nexthops_table);
+ input_data->route_tables);
if (parallelization_state == STATE_INIT_HASH_SIZES) {
parallelization_state = STATE_USE_PARALLELIZATION;
@@ -17818,7 +17984,7 @@ lflow_handle_lr_stateful_changes(struct ovsdb_idl_txn *ovnsb_txn,
build_lbnat_lflows_iterate_by_lrp(op,
lflow_input->lr_stateful_table,
lflow_input->meter_groups,
- lflow_input->bfd_connections,
+ lflow_input->bfd_ports,
&match, &actions,
lflows);
@@ -18409,7 +18575,8 @@ build_static_mac_binding_table(
struct hmap *lr_ports)
{
/* Cleanup SB Static_MAC_Binding entries which do not have corresponding
- * NB Static_MAC_Binding entries. */
+ * NB Static_MAC_Binding entries, and SB Static_MAC_Binding entries for
+ * which there is no NB Logical_Router_Port of the same name. */
const struct nbrec_static_mac_binding *nb_smb;
const struct sbrec_static_mac_binding *sb_smb;
SBREC_STATIC_MAC_BINDING_TABLE_FOR_EACH_SAFE (sb_smb,
@@ -18419,6 +18586,12 @@ build_static_mac_binding_table(
sb_smb->ip);
if (!nb_smb) {
sbrec_static_mac_binding_delete(sb_smb);
+ continue;
+ }
+
+ struct ovn_port *op = ovn_port_find(lr_ports, nb_smb->logical_port);
+ if (!op || !op->nbrp || !op->od || !op->od->sb) {
+ sbrec_static_mac_binding_delete(sb_smb);
}
}
@@ -18554,9 +18727,9 @@ bfd_init(struct bfd_data *data)
}
void
-ecmp_nexthop_init(struct ecmp_nexthop_data *data)
+bfd_sync_init(struct bfd_sync_data *data)
{
- simap_init(&data->nexthops);
+ sset_init(&data->bfd_ports);
}
void
@@ -18615,6 +18788,12 @@ bfd_destroy(struct bfd_data *data)
__bfd_destroy(&data->bfd_connections);
}
+void
+bfd_sync_destroy(struct bfd_sync_data *data)
+{
+ sset_destroy(&data->bfd_ports);
+}
+
void
route_policies_destroy(struct route_policies_data *data)
{
@@ -18640,12 +18819,6 @@ static_routes_destroy(struct static_routes_data *data)
__bfd_destroy(&data->bfd_active_connections);
}
-void
-ecmp_nexthop_destroy(struct ecmp_nexthop_data *data)
-{
- simap_destroy(&data->nexthops);
-}
-
void
ovnnb_db_run(struct northd_input *input_data,
struct northd_data *data,
diff --git a/northd/northd.h b/northd/northd.h
index 6e0258ff4..8f76d642d 100644
--- a/northd/northd.h
+++ b/northd/northd.h
@@ -93,6 +93,13 @@ ovn_datapath_find_by_key(struct hmap *datapaths, uint32_t dp_key);
bool od_has_lb_vip(const struct ovn_datapath *od);
+/* List of routing and routing-related protocols which
+ * OVN is capable of redirecting from an LRP to a specific LSP. */
+enum redirected_routing_protcol_flag_type {
+ REDIRECT_BGP = (1 << 0),
+ REDIRECT_BFD = (1 << 1),
+};
+
struct tracked_ovn_ports {
/* tracked created ports.
* hmapx node data is 'struct ovn_port *' */
@@ -188,8 +195,8 @@ struct bfd_data {
struct hmap bfd_connections;
};
-struct ecmp_nexthop_data {
- struct simap nexthops;
+struct bfd_sync_data {
+ struct sset bfd_ports;
};
struct lr_nat_table;
@@ -213,7 +220,7 @@ struct lflow_input {
const struct ls_stateful_table *ls_stateful_table;
const struct shash *meter_groups;
const struct hmap *lb_datapaths_map;
- const struct hmap *bfd_connections;
+ const struct sset *bfd_ports;
const struct chassis_features *features;
const struct hmap *svc_monitor_map;
bool ovn_internal_version_changed;
@@ -222,7 +229,6 @@ struct lflow_input {
struct hmap *parsed_routes;
struct hmap *route_policies;
struct simap *route_tables;
- struct simap *nexthops_table;
};
extern int parallelization_state;
@@ -727,7 +733,7 @@ void northd_indices_create(struct northd_data *data,
void route_policies_init(struct route_policies_data *);
void route_policies_destroy(struct route_policies_data *);
void build_parsed_routes(struct ovn_datapath *, const struct hmap *,
- struct hmap *, struct hmap *, struct simap *,
+ const struct hmap *, struct hmap *, struct simap *,
struct hmap *);
uint32_t get_route_table_id(struct simap *, const char *);
void static_routes_init(struct static_routes_data *);
@@ -736,11 +742,8 @@ void static_routes_destroy(struct static_routes_data *);
void bfd_init(struct bfd_data *);
void bfd_destroy(struct bfd_data *);
-void build_ecmp_nexthop_table(struct ovsdb_idl_txn *,
- struct hmap *, struct simap *,
- const struct sbrec_ecmp_nexthop_table *);
-void ecmp_nexthop_init(struct ecmp_nexthop_data *);
-void ecmp_nexthop_destroy(struct ecmp_nexthop_data *);
+void bfd_sync_init(struct bfd_sync_data *);
+void bfd_sync_destroy(struct bfd_sync_data *);
struct lflow_table;
struct lr_stateful_tracked_data;
@@ -783,7 +786,8 @@ void build_route_policies(struct ovn_datapath *, const struct hmap *,
const struct hmap *, struct hmap *, struct hmap *);
void bfd_table_sync(struct ovsdb_idl_txn *, const struct nbrec_bfd_table *,
const struct hmap *, const struct hmap *,
- const struct hmap *, const struct hmap *, struct hmap *);
+ const struct hmap *, const struct hmap *,
+ struct sset *);
void build_bfd_map(const struct nbrec_bfd_table *,
const struct sbrec_bfd_table *, struct hmap *);
void run_update_worker_pool(int n_threads);
diff --git a/northd/ovn-northd.8.xml b/northd/ovn-northd.8.xml
index 3abd5f75b..ef5cd0c8c 100644
--- a/northd/ovn-northd.8.xml
+++ b/northd/ovn-northd.8.xml
@@ -284,6 +284,32 @@
dropped in the next stage.
</li>
+ <li>
+ <p>
+ For each logical port that is configured as a target of routing
+ protocol redirection (via the <code>routing-protocol-redirect</code>
+ option set on a Logical Router Port), a filter is put in place that
+ disallows the following traffic from exiting this port:
+ </p>
+ <ul>
+ <li>
+ ARP replies
+ </li>
+ <li>
+ IPv6 Neighbor Discovery - Router Advertisements
+ </li>
+ <li>
+ IPv6 Neighbor Discovery - Neighbor Advertisements
+ </li>
+ </ul>
+ <p>
+ Since this port shares IP and MAC addresses with the Logical Router
+ Port, we want to prevent duplicate replies and advertisements. This
+ is achieved by priority-80 rules that set
+ <code>REGBIT_PORT_SEC_DROP = 1; next;</code>.
+ </p>
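+ <p>
+ For illustration, assuming a redirect target port named
+ <code>lsp-rp</code> (an example name), the generated rules carry
+ matches similar to:
+ </p>
+ <pre>
+inport == "lsp-rp" && arp.op == 2
+inport == "lsp-rp" && nd_na
+inport == "lsp-rp" && nd_ra
+ </pre>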
+ </li>
+
<li>
For each (enabled) vtep logical port, a priority 70 flow is added which
matches on all packets and applies the action
@@ -2002,6 +2028,34 @@ output;
on the logical switch.
</li>
+ <li>
+ <p>
+ For any logical port that is configured as a target of routing
+ protocol redirection (via the <code>routing-protocol-redirect</code>
+ option set on a Logical Router Port), we redirect the traffic
+ related to the protocols specified in the
+ <code>routing-protocols</code> option. This is accomplished
+ with the following priority-100 flows:
+ </p>
+ <ul>
+ <li>
+ Traffic that matches the Logical Router Port's IPs and the
+ destination port of the routing daemon is redirected to this port,
+ allowing external peers to connect to the daemon listening on it.
+ </li>
+ <li>
+ Traffic that matches the Logical Router Port's IPs and the
+ source port of the routing daemon is redirected to this port,
+ allowing replies from the peers to reach the daemon.
+ </li>
+ </ul>
+ <p>
+ In addition, we add priority-100 rules that <code>clone</code>
+ ARP replies and IPv6 Neighbor Advertisements to this port as well.
+ These allow a proper ARP/IPv6 neighbor list to be built on
+ this port.
+ </p>
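+ <p>
+ For illustration, with BGP redirection configured towards a target
+ port named <code>lsp-rp</code> on a Logical Router Port that owns
+ <code>172.16.1.1</code> (names and addresses are examples only),
+ the redirect rules carry matches similar to:
+ </p>
+ <pre>
+ip4.dst == 172.16.1.1 && tcp.dst == 179
+ip4.dst == 172.16.1.1 && tcp.src == 179
+ </pre>
+ <p>
+ with the action <code>outport = "lsp-rp"; output;</code>, while the
+ cloning rules match e.g.
+ <code>arp.op == 2 && arp.tpa == 172.16.1.1</code> and use
+ <code>clone { outport = "lsp-rp"; output; };</code> before the
+ regular output to the router port.
+ </p>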
+ </li>
+
<li>
Priority-90 flows for transit switches that forward registered
IP multicast traffic to their corresponding multicast group , which
@@ -4274,7 +4328,18 @@ next;
ip.ttl--;
flags.loopback = 1;
reg8[0..15] = <var>GID</var>;
-select(reg8[16..31], <var>MID1</var>, <var>MID2</var>, ...);
+reg8[16..31] = select(<var>MID1</var>, <var>MID2</var>, ...);
+ </pre>
+ <p>
+ However, when there is only one route in an ECMP group, group actions
+ will be:
+ </p>
+
+ <pre>
+ip.ttl--;
+flags.loopback = 1;
+reg8[0..15] = <var>GID</var>;
+reg8[16..31] = <var>MID1</var>;
</pre>
</li>
diff --git a/ovn-nb.xml b/ovn-nb.xml
index bbda423a5..2836f58f5 100644
--- a/ovn-nb.xml
+++ b/ovn-nb.xml
@@ -3575,6 +3575,48 @@ or
</p>
</column>
+ <column name="options" key="routing-protocol-redirect"
+ type='{"type": "string"}'>
+ <p>
+ NOTE: this feature is experimental and may be subject to
+ removal/change in the future.
+ </p>
+ <p>
+ This option expects the name of a Logical Switch Port present
+ in the peer's Logical Switch. If set, it causes any traffic
+ that is destined for the Logical Router Port's IP addresses
+ (including its IPv6 LLA) and the ports associated with the routing
+ protocols defined in the <code>routing-protocols</code> option
+ to be redirected to the specified Logical Switch Port.
+
+ This allows external routing daemons to be bound to a port in OVN's
+ Logical Switch and act as if they were listening on the Logical
+ Router Port's IP addresses.
+ </p>
+ </column>
+
+ <column name="options" key="routing-protocols" type='{"type": "string"}'>
+ <p>
+ NOTE: this feature is experimental and may be subject to
+ removal/change in the future.
+ </p>
+ <p>
+ This option expects a comma-separated list of routing and
+ routing-related protocols whose control plane traffic will be
+ redirected to the port specified in the
+ <code>routing-protocol-redirect</code> option. Currently supported
+ values are:
+ </p>
+ <ul>
+ <li>
+ <code>BGP</code> (forwards TCP port 179)
+ </li>
+ <li>
+ <code>BFD</code> (forwards UDP port 3784)
+ </li>
+ </ul>
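+ <p>
+ For example, assuming a Logical Router Port named
+ <code>lr-ls</code> whose peer Logical Switch contains a port
+ named <code>lsp-bgp</code> (illustrative names), BGP and BFD
+ control plane traffic could be redirected with:
+ </p>
+ <pre>
+ovn-nbctl set logical_router_port lr-ls options:routing-protocol-redirect=lsp-bgp
+ovn-nbctl set logical_router_port lr-ls options:routing-protocols=BGP,BFD
+ </pre>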
+ </column>
+
<column name="options" key="gateway_mtu_bypass">
<p>
When configured, represents a match expression, in the same
diff --git a/ovn-sb.ovsschema b/ovn-sb.ovsschema
index 9d8e0ac46..73abf2c8d 100644
--- a/ovn-sb.ovsschema
+++ b/ovn-sb.ovsschema
@@ -1,7 +1,7 @@
{
"name": "OVN_Southbound",
- "version": "20.36.0",
- "cksum": "1845967275 32154",
+ "version": "20.37.0",
+ "cksum": "1950136776 31493",
"tables": {
"SB_Global": {
"columns": {
@@ -610,20 +610,6 @@
"refTable": "Datapath_Binding"}}}},
"indexes": [["logical_port", "ip"]],
"isRoot": true},
- "ECMP_Nexthop": {
- "columns": {
- "nexthop": {"type": "string"},
- "id": {"type": {"key": {"type": "integer",
- "minInteger": 0,
- "maxInteger": 65535}}},
- "external_ids": {
- "type": {"key": "string", "value": "string",
- "min": 0, "max": "unlimited"}},
- "options": {
- "type": {"key": "string", "value": "string",
- "min": 0, "max": "unlimited"}}},
- "indexes": [["nexthop"]],
- "isRoot": true},
"Chassis_Template_Var": {
"columns": {
"chassis": {"type": "string"},
diff --git a/ovn-sb.xml b/ovn-sb.xml
index de0bd636f..5285cae30 100644
--- a/ovn-sb.xml
+++ b/ovn-sb.xml
@@ -1146,6 +1146,7 @@
<ul>
<li><code>eth.bcast</code> expands to <code>eth.dst == ff:ff:ff:ff:ff:ff</code></li>
<li><code>eth.mcast</code> expands to <code>eth.dst[40]</code></li>
+ <li><code>eth.mcastv6</code> expands to <code>eth.dst[32..47] == 0x3333</code></li>
<li><code>vlan.present</code> expands to <code>vlan.tci[12]</code></li>
<li><code>ip4</code> expands to <code>eth.type == 0x800</code></li>
<li><code>ip4.src_mcast</code> expands to
@@ -1161,8 +1162,10 @@
<li><code>ip.first_frag</code> expands to <code>ip.is_frag && !ip.later_frag</code></li>
<li><code>arp</code> expands to <code>eth.type == 0x806</code></li>
<li><code>rarp</code> expands to <code>eth.type == 0x8035</code></li>
+ <li><code>ip6.mcast</code> expands to <code>eth.mcastv6 && ip6.dst[120..127] == 0xff</code></li>
<li><code>nd</code> expands to <code>icmp6.type == {135, 136} && icmp6.code == 0 && ip.ttl == 255</code></li>
<li><code>nd_ns</code> expands to <code>icmp6.type == 135 && icmp6.code == 0 && ip.ttl == 255</code></li>
+ <li><code>nd_ns_mcast</code> expands to <code>ip6.mcast && icmp6.type == 135 && icmp6.code == 0 && ip.ttl == 255</code></li>
<li><code>nd_na</code> expands to <code>icmp6.type == 136 && icmp6.code == 0 && ip.ttl == 255</code></li>
<li><code>nd_rs</code> expands to <code>icmp6.type == 133 &&
icmp6.code == 0 && ip.ttl == 255</code></li>
@@ -1366,6 +1369,8 @@
</dd>
<dt><code>ct_next;</code></dt>
+ <dt><code>ct_next(dnat);</code></dt>
+ <dt><code>ct_next(snat);</code></dt>
<dd>
<p>
Apply connection tracking to the flow, initializing
@@ -5175,35 +5180,4 @@ tcp.flags = RST;
The set of variable values for a given chassis.
</column>
</table>
-
- <table name="ECMP_Nexthop">
- <p>
- Each record in this table represents an active ECMP route committed by
- <code>ovn-northd</code> to <code>ovs</code> connection-tracking table.
- <code>ECMP_Nexthop</code> table is used by <code>ovn-controller</code>
- to track active ct entries and to flush stale ones.
- </p>
- <column name="nexthop">
- <p>
- Nexthop IP address for this ECMP route. Nexthop IP address should
- be the IP address of a connected router port or the IP address of
- an external device used as nexthop for the given destination.
- </p>
- </column>
-
- <column name="id">
- <p>
- Nexthop unique identifier. Nexthop ID is used to track active
- ecmp-symmetric-reply connections and flush stale ones.
- </p>
- </column>
-
- <column name="options">
- Reserved for future use.
- </column>
-
- <column name="external_ids">
- See <em>External IDs</em> at the beginning of this document.
- </column>
- </table>
</database>
diff --git a/tests/multinode.at b/tests/multinode.at
index a04ce7597..a0eb8fc67 100644
--- a/tests/multinode.at
+++ b/tests/multinode.at
@@ -1062,6 +1062,10 @@ check_fake_multinode_setup
# Delete the multinode NB and OVS resources before starting the test.
cleanup_multinode_resources
+m_as ovn-chassis-1 ip link del sw0p1-p
+m_as ovn-chassis-2 ip link del sw0p2-p
+m_as ovn-chassis-2 ip link del sw1p1-p
+
check multinode_nbctl ls-add sw0
check multinode_nbctl lsp-add sw0 sw0-port1
check multinode_nbctl lsp-set-addresses sw0-port1 "50:54:00:00:00:03 10.0.0.3 1000::3"
diff --git a/tests/ovn-controller.at b/tests/ovn-controller.at
index dcab5f2e9..a2e451880 100644
--- a/tests/ovn-controller.at
+++ b/tests/ovn-controller.at
@@ -3140,62 +3140,116 @@ ovn_start
check_ct_zone_min() {
min_val=$1
- OVS_WAIT_UNTIL([test $(ovn-appctl -t ovn-controller ct-zone-list | awk '{print $2}' | sort | head -n1) -ge ${min_val}])
+ OVS_WAIT_UNTIL([test $(ovn-appctl -t ovn-controller ct-zone-list | awk '{printf "%02d\n", $2}' | sort | head -n1) -ge ${min_val}])
}
check_ct_zone_max() {
max_val=$1
- AT_CHECK([test $(ovn-appctl -t ovn-controller ct-zone-list | awk '{print $2}' | sort | tail -n1) -le ${max_val}])
+ OVS_WAIT_UNTIL([test $(ovn-appctl -t ovn-controller ct-zone-list | awk '{printf "%02d\n", $2}' | sort | tail -n1) -le ${max_val}])
}
net_add n1
sim_add hv1
as hv1
check ovs-vsctl add-br br-phys
+check ovs-appctl vlog/disable-rate-limit
+check ovs-appctl vlog/set vconn:DBG
+
ovn_attach n1 br-phys 192.168.0.1
check ovn-appctl -t ovn-controller vlog/set dbg:ct_zone
check ovs-vsctl add-port br-int lsp0 \
-- set Interface lsp0 external-ids:iface-id=lsp0
+check ovs-vsctl add-port br-int lsp1 \
+ -- set Interface lsp1 external-ids:iface-id=lsp1
check ovn-nbctl lr-add lr
+check ovn-nbctl ls-add ls0
+check ovn-nbctl ls-add ls1
-check ovn-nbctl ls-add ls
-check ovn-nbctl lsp-add ls ls-lr
-check ovn-nbctl lsp-set-type ls-lr router
-check ovn-nbctl lsp-set-addresses ls-lr router
-check ovn-nbctl lrp-add lr lr-ls 00:00:00:00:00:01 10.0.0.1
+check ovn-nbctl set logical_router lr options:chassis=hv1
+
+check ovn-nbctl lrp-add lr lr-ls0 00:00:00:00:ff:01 10.0.0.1/24
+check ovn-nbctl lsp-add ls0 ls0-lr
+check ovn-nbctl lsp-set-type ls0-lr router
+check ovn-nbctl lsp-set-addresses ls0-lr 00:00:00:00:ff:01
+check ovn-nbctl lsp-set-options ls0-lr router-port=lr-ls0
-check ovn-nbctl lsp-add ls lsp0
+check ovn-nbctl lsp-add ls0 lsp0
check ovn-nbctl lsp-set-addresses lsp0 "00:00:00:00:00:02 10.0.0.2"
-check ovn-nbctl lrp-add lr lrp-gw 01:00:00:00:00:01 172.16.0.1
-check ovn-nbctl lrp-set-gateway-chassis lrp-gw hv1
+check ovn-nbctl lrp-add lr lr-ls1 00:00:00:00:ff:02 172.16.0.1/24
+check ovn-nbctl lsp-add ls1 ls1-lr
+check ovn-nbctl lsp-set-type ls1-lr router
+check ovn-nbctl lsp-set-addresses ls1-lr 00:00:00:00:ff:02
+check ovn-nbctl lsp-set-options ls1-lr router-port=lr-ls1
-# check regular boundaries
+check ovn-nbctl lsp-add ls1 lsp1
+check ovn-nbctl lsp-set-addresses lsp1 "00:00:00:00:00:02 172.16.0.2"
+wait_for_ports_up
+check ovn-nbctl --wait=hv sync
+
+AS_BOX([Check regular boundaries])
check_ct_zone_min 1
-check_ct_zone_max 10
+check_ct_zone_max 12
+
+AS_BOX([Increase boundaries])
+ovs-vsctl set Open_vSwitch . external_ids:ct-zone-range=\"10-30\"
-# increase boundaries
-ovs-vsctl set Open_vSwitch . external_ids:ct-zone-range=\"10-20\"
check_ct_zone_min 10
-check_ct_zone_max 20
+check_ct_zone_max 22
-# reset min boundary
-ovs-vsctl set Open_vSwitch . external_ids:ct-zone-range=\"5-20\"
+AS_BOX([Reset min boundary])
+check ovs-vsctl set Open_vSwitch . external_ids:ct-zone-range=\"5-30\"
-# add a new port to the switch
-check ovs-vsctl add-port br-int lsp1 \
- -- set Interface lsp1 external-ids:iface-id=lsp1
-check ovn-nbctl lsp-add ls lsp1
-check ovn-nbctl lsp-set-addresses lsp1 "00:00:00:00:00:03 10.0.0.3"
+# Add a new port to the ls0 switch
+check ovs-vsctl add-port br-int lsp2 \
+ -- set Interface lsp2 external-ids:iface-id=lsp2
+check ovn-nbctl lsp-add ls0 lsp2
+check ovn-nbctl lsp-set-addresses lsp2 "00:00:00:00:00:03 10.0.0.3"
check_ct_zone_min 5
-check_ct_zone_max 20
+check_ct_zone_max 22
check ovn-nbctl set logical_router lr options:snat-ct-zone=2
+wait_for_ports_up
+check ovn-nbctl --wait=hv sync
+
check_ct_zone_min 2
-check_ct_zone_max 20
+check_ct_zone_max 22
+
+AS_BOX([Check LR snat requested zone 2])
+AT_CHECK([test $(ovn-appctl -t ovn-controller ct-zone-list | awk '/lr_snat/{print $2}') -eq 2])
+
+n_flush=$(grep -c -i flush hv1/ovs-vswitchd.log)
+check ovn-appctl -t ovn-controller exit --restart
+# Make sure ovn-controller stopped before restarting it
+OVS_WAIT_UNTIL([test "$(ovn-appctl -t ovn-controller debug/status)" != "running"])
+start_daemon ovn-controller --verbose="ct_zone:dbg"
+wait_for_ports_up
+check ovn-nbctl --wait=hv sync
+
+# Check we do not have unexpected ct-flush restarting ovn-controller
+AT_CHECK([test $(grep -c -i flush hv1/ovs-vswitchd.log) -eq ${n_flush}])
+
+AS_BOX([Check LR snat allowed requested zone 0])
+check ovn-nbctl set logical_router lr options:snat-ct-zone=0
+check ovn-nbctl --wait=hv sync
+
+check_ct_zone_min 0
+check_ct_zone_max 22
+AT_CHECK([test $(ovn-appctl -t ovn-controller ct-zone-list | awk '/lr_snat/{print $2}') -eq 0])
+
+n_flush=$(grep -c -i flush hv1/ovs-vswitchd.log)
+check ovn-appctl -t ovn-controller exit --restart
+# Make sure ovn-controller stopped before restarting it
+OVS_WAIT_UNTIL([test "$(ovn-appctl -t ovn-controller debug/status)" != "running"])
+start_daemon ovn-controller --verbose="ct_zone:dbg"
+wait_for_ports_up
+check ovn-nbctl --wait=hv sync
+
+# Check we do not have unexpected ct-flush restarting ovn-controller
+AT_CHECK([test $(grep -c -i flush hv1/ovs-vswitchd.log) -eq ${n_flush}])
OVN_CLEANUP([hv1])
AT_CLEANUP
diff --git a/tests/ovn-ic.at b/tests/ovn-ic.at
index 8497cb194..9fa41200e 100644
--- a/tests/ovn-ic.at
+++ b/tests/ovn-ic.at
@@ -173,7 +173,7 @@ done
create_ic_infra() {
az_id=$1
ts_id=$2
- az=az$i
+ az=az$1
lsp=lsp${az_id}-${ts_id}
lrp=lrp${az_id}-${ts_id}
@@ -184,7 +184,7 @@ create_ic_infra() {
check ovn-ic-nbctl --wait=sb ts-add $ts
check ovn-nbctl lr-add $lr
- check ovn-nbctl lrp-add $lr $lrp 00:00:00:00:00:0$az_id 10.0.$az_id.1/24
+ check ovn-nbctl --wait=sb lrp-add $lr $lrp 00:00:00:00:00:0$az_id 10.0.$az_id.1/24
check ovn-nbctl lrp-set-gateway-chassis $lrp gw-$az
check ovn-nbctl lsp-add $ts $lsp -- \
@@ -192,7 +192,7 @@ create_ic_infra() {
lsp-set-type $lsp router -- \
lsp-set-options $lsp router-port=$lrp
- check ovn-nbctl lr-route-add $lr 192.168.0.0/16 10.0.$az_id.10
+ check ovn-nbctl --wait=sb lr-route-add $lr 192.168.0.0/16 10.0.$az_id.10
}
create_ic_infra 1 1
@@ -209,7 +209,7 @@ check_row_count ic-sb:Route 3 ip_prefix=192.168.0.0/16
check ovn-ic-nbctl --wait=sb ts-del ts1-1
ovn-ic-sbctl list route
ovn-ic-nbctl list transit_switch
-checl_row_count ic-sb:route 2 ip_prefix=192.168.0.0/16
+check_row_count ic-sb:route 2 ip_prefix=192.168.0.0/16
ovn-ic-sbctl list route
for i in 1 2; do
@@ -255,7 +255,7 @@ for i in 1 2; do
check ovn-nbctl lrp-add lr1 lrp$i 00:00:00:00:0$i:01 10.0.$i.1/24
check ovn-nbctl lrp-set-gateway-chassis lrp$i gw-az$i
- check ovn-nbctl lsp-add ts1 lsp$i -- \
+ check ovn-nbctl --wait=sb lsp-add ts1 lsp$i -- \
lsp-set-addresses lsp$i router -- \
lsp-set-type lsp$i router -- \
lsp-set-options lsp$i router-port=lrp$i
@@ -263,13 +263,11 @@ done
ovn_as az1
-ovn-nbctl \
- --id=@id create logical-router-static-route ip_prefix=1.1.1.1/32 nexthop=10.0.1.10 -- \
- add logical-router lr1 static_routes @id
ovn-nbctl \
--id=@id create logical-router-static-route ip_prefix=1.1.1.1/32 nexthop=10.0.1.10 -- \
add logical-router lr1 static_routes @id
+check ovn-ic-nbctl --wait=sb sync
check ovn-ic-nbctl --wait=sb sync
check_row_count ic-sb:route 1 ip_prefix=1.1.1.1/32
@@ -455,6 +453,7 @@ Route Table <main>:
# Delete route in AZ1, AZ2's learned route should be deleted.
ovn_as az1 ovn-nbctl lr-route-del lr1 10.11.1.0/24
ovn-ic-nbctl --wait=sb sync
+ovn-ic-nbctl --wait=sb sync
AT_CHECK([ovn_as az2 ovn-nbctl lr-route-list lr2 | grep -c learned], [1], [dnl
0
])
@@ -462,6 +461,7 @@ AT_CHECK([ovn_as az2 ovn-nbctl lr-route-list lr2 | grep -c learned], [1], [dnl
# Add the route back
ovn_as az1 ovn-nbctl lr-route-add lr1 10.11.1.0/24 169.254.0.1
ovn-ic-nbctl --wait=sb sync
+ovn-ic-nbctl --wait=sb sync
AT_CHECK([ovn_as az2 ovn-nbctl lr-route-list lr2 | grep -c learned], [0], [dnl
1
])
@@ -485,6 +485,7 @@ ovn_as az1 ovn-nbctl set nb_global . options:ic-route-adv=false
# AZ2 shouldn't have the route learned, because AZ1 should have stopped
# advertising.
check ovn-ic-nbctl --wait=sb sync
+check ovn-ic-nbctl --wait=sb sync
AT_CHECK([ovn_as az2 ovn-nbctl lr-route-list lr2], [0], [dnl
IPv4 Routes
Route Table <main>:
@@ -499,6 +500,7 @@ ovn_as az1 ovn-nbctl lr-route-add lr1 0.0.0.0/0 169.254.0.3
ovn_as az1 ovn-nbctl set nb_global . options:ic-route-adv=true
ovn_as az1 ovn-nbctl set nb_global . options:ic-route-learn=true
check ovn-ic-nbctl --wait=sb sync
+check ovn-ic-nbctl --wait=sb sync
# Default route should NOT get advertised or learned, by default.
AT_CHECK([ovn_as az2 ovn-nbctl lr-route-list lr2], [0], [dnl
@@ -576,7 +578,7 @@ for i in 1 2; do
done
# Create directly-connected routes
-ovn_as az2 ovn-nbctl lrp-add lr12 lrp-lr12-ls2 aa:aa:aa:aa:bb:01 "192.168.0.1/24"
+ovn_as az2 ovn-nbctl --wait=sb lrp-add lr12 lrp-lr12-ls2 aa:aa:aa:aa:bb:01 "192.168.0.1/24"
ovn_as az2 ovn-nbctl lr-route-add lr12 10.10.10.0/24 192.168.0.10
ovn_as az1 ovn-nbctl --wait=sb sync
@@ -585,6 +587,7 @@ ovn_as az1 ovn-nbctl show
echo az2
ovn_as az2 ovn-nbctl show
check ovn-ic-nbctl --wait=sb sync
+check ovn-ic-nbctl --wait=sb sync
# Test routes from lr12 were learned to lr11
AT_CHECK([ovn_as az1 ovn-nbctl lr-route-list lr11 |
@@ -626,7 +629,7 @@ for i in 1 2; do
ovn-nbctl lrp-add lr$i lrp-lr$i-p$i 00:00:00:00:00:0$i 192.168.$i.1/24
# Create static routes
- ovn-nbctl lr-route-add lr$i 10.11.$i.0/24 169.254.0.1
+ ovn-nbctl --wait=sb lr-route-add lr$i 10.11.$i.0/24 169.254.0.1
# Create a src-ip route, which shouldn't be synced
ovn-nbctl --policy=src-ip lr-route-add lr$i 10.22.$i.0/24 169.254.0.2
@@ -665,7 +668,6 @@ ovn-ic-nbctl ts-add ts1
for i in 1 2; do
ovn_start az$i
ovn_as az$i
-
# Enable route learning at AZ level
ovn-nbctl set nb_global . options:ic-route-learn=true
# Enable route advertising at AZ level
@@ -680,9 +682,10 @@ for i in 1 2; do
-- lsp-set-type lsp-ts1-lr$i router \
-- lsp-set-options lsp-ts1-lr$i router-port=lrp-lr$i-ts1
- ovn-nbctl lrp-add lr$i lrp-lr$i-p$i 00:00:00:00:00:0$i 2002:db8:1::$i/64
+ ovn-nbctl --wait=sb lrp-add lr$i lrp-lr$i-p$i 00:00:00:00:00:0$i 2002:db8:1::$i/64
done
+check ovn-ic-nbctl --wait=sb sync
check ovn-ic-nbctl --wait=sb sync
AT_CHECK([ovn_as az1 ovn-nbctl lr-route-list lr1 | awk '/learned/{print $1, $2}'], [0], [dnl
2002:db8:1::/64 2001:db8:1::2
@@ -733,6 +736,7 @@ for i in 1 2; do
ovn-nbctl --policy=src-ip --route-table=rtb1 lr-route-add lr$i 10.22.$i.0/24 169.254.0.2
done
+check ovn-ic-nbctl --wait=sb sync
check ovn-ic-nbctl --wait=sb sync
AT_CHECK([ovn_as az1 ovn-nbctl lr-route-list lr1], [0], [dnl
IPv4 Routes
@@ -750,6 +754,7 @@ for i in 1 2; do
ovn_as az$i ovn-nbctl --route-table=rtb1 lr-route-add lr$i 10.11.$i.0/24 169.254.0.1
done
+check ovn-ic-nbctl --wait=sb sync
check ovn-ic-nbctl --wait=sb sync
# ensure route from rtb1 is not learned to any route table as route table is
# not set to TS port
@@ -975,6 +980,7 @@ ovn_as az2 ovn-nbctl --route-table=rtb3 lr-route-add lr12 10.10.10.0/24 192.168.
# Create directly-connected route in VPC2
ovn_as az2 ovn-nbctl --wait=sb lrp-add lr22 lrp-lr22 aa:aa:aa:aa:bb:01 "192.168.0.1/24"
check ovn-ic-nbctl --wait=sb sync
+check ovn-ic-nbctl --wait=sb sync
# Test direct routes from lr12 were learned to lr11
OVS_WAIT_FOR_OUTPUT([ovn_as az1 ovn-nbctl lr-route-list lr11 | grep 192.168 |
grep learned | awk '{print $1, $2, $5}' | sort ], [0], [dnl
@@ -1102,6 +1108,10 @@ ovn_as az2 ovn-nbctl --route-table=rtb3 lr-route-add lr12 2001:db8:aaaa::/64 200
ovn_as az2 ovn-nbctl --wait=sb lrp-add lr22 lrp-lr22 aa:aa:aa:aa:bb:01 "2001:db8:200::1/64"
# Test direct routes from lr12 were learned to lr11
+#
+# We need to wait twice: first for az2's ovn-ic to handle the port addition and
+# update the IC SB, and then for az1's ovn-ic to handle the IC SB update.
+check ovn-ic-nbctl --wait=sb sync
check ovn-ic-nbctl --wait=sb sync
AT_CHECK([ovn_as az1 ovn-nbctl lr-route-list lr11 | grep 2001:db8:200 |
grep learned | awk '{print $1, $2, $5}' | sort], [0], [dnl
@@ -1177,6 +1187,7 @@ for i in 1 2; do
check ovn-nbctl --wait=sb lr-route-add $lr 0.0.0.0/0 192.168.$i.11
done
+check ovn-ic-nbctl --wait=sb sync
check ovn-ic-nbctl --wait=sb sync
AT_CHECK([ovn_as az1 ovn-nbctl lr-route-list lr11 | grep dst-ip | sort] , [0], [dnl
0.0.0.0/0 192.168.1.11 dst-ip
@@ -1249,14 +1260,14 @@ for i in 1 2; do
-- lsp-set-options $lsp router-port=$lrp
done
-
# Create directly-connected routes
ovn_as az1 ovn-nbctl lrp-add lr11 lrp-lr11 aa:aa:aa:aa:bb:01 "192.168.0.1/24"
ovn_as az2 ovn-nbctl lrp-add lr21 lrp-lr21 aa:aa:aa:aa:bc:01 "192.168.1.1/24"
-ovn_as az2 ovn-nbctl lrp-add lr22 lrp-lr22 aa:aa:aa:aa:bc:02 "192.168.2.1/24"
+ovn_as az2 ovn-nbctl --wait=sb lrp-add lr22 lrp-lr22 aa:aa:aa:aa:bc:02 "192.168.2.1/24"
# Test direct routes from lr21 and lr22 were learned to lr11
check ovn-ic-nbctl --wait=sb sync
+check ovn-ic-nbctl --wait=sb sync
AT_CHECK([ovn_as az1 ovn-nbctl lr-route-list lr11 | grep 192.168 |
grep learned | awk '{print $1, $2}' | sort ], [0], [dnl
192.168.1.0/24 169.254.10.21
@@ -1335,7 +1346,6 @@ check ovn-ic-nbctl ts-add ts1
for i in 1 2; do
ovn_start az$i
ovn_as az$i
-
# Enable route learning at AZ level
check ovn-nbctl set nb_global . options:ic-route-learn=true
# Enable route advertising at AZ level
@@ -1369,10 +1379,11 @@ for i in 1 2; do
33:33:33:33:33:3$i 2005:1734:5678::$i/50
# additional not filtered prefix -> different subnet bits
- check ovn-nbctl lrp-add lr$i lrp-lr$i-p-ext4$i \
+ check ovn-nbctl --wait=sb lrp-add lr$i lrp-lr$i-p-ext4$i \
44:44:44:44:44:4$i 2005:1834:5678::$i/50
done
+check ovn-ic-nbctl --wait=sb sync
check ovn-ic-nbctl --wait=sb sync
AT_CHECK([ovn_as az1 ovn-nbctl lr-route-list lr1 |
awk '/learned/{print $1, $2}' ], [0], [dnl
@@ -1387,6 +1398,7 @@ for i in 1 2; do
check ovn-nbctl remove nb_global . options ic-route-denylist
done
+check ovn-ic-nbctl --wait=sb sync
check ovn-ic-nbctl --wait=sb sync
AT_CHECK([ovn_as az1 ovn-nbctl lr-route-list lr1 |
awk '/learned/{print $1, $2}' | sort ], [0], [dnl
@@ -1750,6 +1762,7 @@ check ovn-nbctl lsp-add ts ts-lr3 \
wait_for_ports_up
check ovn-ic-nbctl --wait=sb sync
+check ovn-ic-nbctl --wait=sb sync
ovn_as az1
check ovn-nbctl lsp-set-options ts-lr2 requested-chassis=hv2
@@ -1970,6 +1983,7 @@ check ovn-nbctl lsp-add ts ts-lr3 \
wait_for_ports_up
check ovn-ic-nbctl --wait=sb sync
+check ovn-ic-nbctl --wait=sb sync
ovn_as az1
check ovn-nbctl lsp-set-options ts-lr2 requested-chassis=hv2
check ovn-nbctl lsp-set-options ts-lr3 requested-chassis=hv2
@@ -2223,6 +2237,7 @@ check ovn-nbctl lsp-add ts ts-lr3 \
wait_for_ports_up
check ovn-ic-nbctl --wait=sb sync
+check ovn-ic-nbctl --wait=sb sync
ovn_as az1
check ovn-nbctl lsp-set-options ts-lr2 requested-chassis=hv2
check ovn-nbctl lsp-set-options ts-lr3 requested-chassis=hv2
diff --git a/tests/ovn-northd.at b/tests/ovn-northd.at
index e4c882265..d6a8c4640 100644
--- a/tests/ovn-northd.at
+++ b/tests/ovn-northd.at
@@ -1133,7 +1133,8 @@ ovn_start
# DR is connected to S1 and CR is connected to S2
check ovn-sbctl chassis-add gw1 geneve 127.0.0.1 \
- -- set chassis gw1 other_config:ct-commit-to-zone="true"
+ -- set chassis gw1 other_config:ct-commit-to-zone="true" \
+ -- set chassis gw1 other_config:ct-next-zone="true"
check ovn-nbctl lr-add DR
check ovn-nbctl lrp-add DR DR-S1 02:ac:10:01:00:01 172.16.1.1/24
@@ -5721,7 +5722,8 @@ AT_CHECK([grep "lr_out_snat" lr0flows | ovn_strip_lflows], [0], [dnl
])
check ovn-sbctl chassis-add gw1 geneve 127.0.0.1 \
- -- set chassis gw1 other_config:ct-commit-to-zone="true"
+ -- set chassis gw1 other_config:ct-commit-to-zone="true" \
+ -- set chassis gw1 other_config:ct-next-zone="true"
# Create a distributed gw port on lr0
check ovn-nbctl ls-add public
@@ -5822,8 +5824,8 @@ AT_CHECK([grep "lr_out_undnat" lr0flows | ovn_strip_lflows], [0], [dnl
AT_CHECK([grep "lr_out_post_undnat" lr0flows | ovn_strip_lflows], [0], [dnl
table=??(lr_out_post_undnat ), priority=0 , match=(1), action=(next;)
- table=??(lr_out_post_undnat ), priority=70 , match=(ip && ip4.src == 10.0.0.0/24 && outport == "lr0-public" && is_chassis_resident("cr-lr0-public")), action=(ct_snat;)
- table=??(lr_out_post_undnat ), priority=70 , match=(ip && ip4.src == 10.0.0.10 && outport == "lr0-public" && is_chassis_resident("cr-lr0-public")), action=(ct_snat;)
+ table=??(lr_out_post_undnat ), priority=70 , match=(ip && ip4.src == 10.0.0.0/24 && outport == "lr0-public" && is_chassis_resident("cr-lr0-public") && (!ct.trk || !ct.rpl)), action=(ct_next(snat);)
+ table=??(lr_out_post_undnat ), priority=70 , match=(ip && ip4.src == 10.0.0.10 && outport == "lr0-public" && is_chassis_resident("cr-lr0-public") && (!ct.trk || !ct.rpl)), action=(ct_next(snat);)
])
AT_CHECK([grep "lr_out_snat" lr0flows | ovn_strip_lflows], [0], [dnl
@@ -5980,8 +5982,8 @@ AT_CHECK([grep "lr_out_undnat" lr0flows | ovn_strip_lflows], [0], [dnl
AT_CHECK([grep "lr_out_post_undnat" lr0flows | ovn_strip_lflows], [0], [dnl
table=??(lr_out_post_undnat ), priority=0 , match=(1), action=(next;)
- table=??(lr_out_post_undnat ), priority=70 , match=(ip && ip4.src == 10.0.0.0/24 && outport == "lr0-public" && is_chassis_resident("cr-lr0-public")), action=(ct_snat;)
- table=??(lr_out_post_undnat ), priority=70 , match=(ip && ip4.src == 10.0.0.10 && outport == "lr0-public" && is_chassis_resident("cr-lr0-public")), action=(ct_snat;)
+ table=??(lr_out_post_undnat ), priority=70 , match=(ip && ip4.src == 10.0.0.0/24 && outport == "lr0-public" && is_chassis_resident("cr-lr0-public") && (!ct.trk || !ct.rpl)), action=(ct_next(snat);)
+ table=??(lr_out_post_undnat ), priority=70 , match=(ip && ip4.src == 10.0.0.10 && outport == "lr0-public" && is_chassis_resident("cr-lr0-public") && (!ct.trk || !ct.rpl)), action=(ct_next(snat);)
])
AT_CHECK([grep "lr_out_snat" lr0flows | ovn_strip_lflows], [0], [dnl
@@ -6799,26 +6801,27 @@ check ovn-nbctl lsp-set-type public-lr0 router
check ovn-nbctl lsp-set-addresses public-lr0 router
check ovn-nbctl lsp-set-options public-lr0 router-port=lr0-public
+# ECMP flows will be added even if there is only one next-hop.
check ovn-nbctl --wait=sb --ecmp-symmetric-reply lr-route-add lr0 1.0.0.1 192.168.0.10
-check_row_count ECMP_Nexthop 1
ovn-sbctl dump-flows lr0 > lr0flows
AT_CHECK([grep -w "lr_in_ip_routing" lr0flows | ovn_strip_lflows], [0], [dnl
table=??(lr_in_ip_routing ), priority=0 , match=(1), action=(drop;)
+ table=??(lr_in_ip_routing ), priority=10300, match=(ct_mark.ecmp_reply_port == 1 && reg7 == 0 && ip4.dst == 1.0.0.1/32), action=(ip.ttl--; flags.loopback = 1; eth.src = 00:00:20:20:12:13; reg1 = 192.168.0.1; outport = "lr0-public"; next;)
table=??(lr_in_ip_routing ), priority=10550, match=(nd_rs || nd_ra), action=(drop;)
table=??(lr_in_ip_routing ), priority=194 , match=(inport == "lr0-public" && ip6.dst == fe80::/64), action=(ip.ttl--; reg8[[0..15]] = 0; xxreg0 = ip6.dst; xxreg1 = fe80::200:20ff:fe20:1213; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; flags.loopback = 1; next;)
table=??(lr_in_ip_routing ), priority=74 , match=(ip4.dst == 192.168.0.0/24), action=(ip.ttl--; reg8[[0..15]] = 0; reg0 = ip4.dst; reg1 = 192.168.0.1; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; flags.loopback = 1; next;)
- table=??(lr_in_ip_routing ), priority=97 , match=(reg7 == 0 && ip4.dst == 1.0.0.1/32), action=(ip.ttl--; reg8[[0..15]] = 0; reg0 = 192.168.0.10; reg1 = 192.168.0.1; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; flags.loopback = 1; next;)
+ table=??(lr_in_ip_routing ), priority=97 , match=(reg7 == 0 && ip4.dst == 1.0.0.1/32), action=(ip.ttl--; flags.loopback = 1; reg8[[0..15]] = 1; reg8[[16..31]] = 1; next;)
])
AT_CHECK([grep -e "lr_in_ip_routing_ecmp" lr0flows | ovn_strip_lflows], [0], [dnl
table=??(lr_in_ip_routing_ecmp), priority=0 , match=(1), action=(drop;)
+ table=??(lr_in_ip_routing_ecmp), priority=100 , match=(reg8[[0..15]] == 1 && reg8[[16..31]] == 1), action=(reg0 = 192.168.0.10; reg1 = 192.168.0.1; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; next;)
table=??(lr_in_ip_routing_ecmp), priority=150 , match=(reg8[[0..15]] == 0), action=(next;)
])
check ovn-nbctl --wait=sb --ecmp-symmetric-reply lr-route-add lr0 1.0.0.1 192.168.0.20
-check_row_count ECMP_Nexthop 2
ovn-sbctl dump-flows lr0 > lr0flows
AT_CHECK([grep -w "lr_in_ip_routing" lr0flows | ovn_strip_lflows], [0], [dnl
@@ -6848,7 +6851,6 @@ AT_CHECK([grep -e "lr_in_arp_resolve.*ecmp" lr0flows | ovn_strip_lflows], [0], [
# add ecmp route with wrong nexthop
check ovn-nbctl --wait=sb --ecmp-symmetric-reply lr-route-add lr0 1.0.0.1 192.168.1.20
-check_row_count ECMP_Nexthop 2
ovn-sbctl dump-flows lr0 > lr0flows
AT_CHECK([grep -w "lr_in_ip_routing" lr0flows | ovn_strip_lflows], [0], [dnl
@@ -6868,7 +6870,6 @@ AT_CHECK([grep -e "lr_in_ip_routing_ecmp" lr0flows | sed 's/192\.168\.0\..0/192.
check ovn-nbctl lr-route-del lr0
wait_row_count nb:Logical_Router_Static_Route 0
-check_row_count ECMP_Nexthop 0
check ovn-nbctl --wait=sb lr-route-add lr0 1.0.0.0/24 192.168.0.10
ovn-sbctl dump-flows lr0 > lr0flows
@@ -7876,13 +7877,16 @@ ovn_start
# distributed gateway LRPs.
check ovn-sbctl chassis-add gw1 geneve 127.0.0.1 \
- -- set chassis gw1 other_config:ct-commit-to-zone="true"
+ -- set chassis gw1 other_config:ct-commit-to-zone="true" \
+ -- set chassis gw1 other_config:ct-next-zone="true"
check ovn-sbctl chassis-add gw2 geneve 128.0.0.1 \
- -- set chassis gw2 other_config:ct-commit-to-zone="true"
+ -- set chassis gw2 other_config:ct-commit-to-zone="true" \
+ -- set chassis gw2 other_config:ct-next-zone="true"
check ovn-sbctl chassis-add gw3 geneve 129.0.0.1 \
- -- set chassis gw3 other_config:ct-commit-to-zone="true"
+ -- set chassis gw3 other_config:ct-commit-to-zone="true" \
+ -- set chassis gw3 other_config:ct-next-zone="true"
check ovn-nbctl lr-add DR
check ovn-nbctl lrp-add DR DR-S1 02:ac:10:01:00:01 172.16.1.1/24
@@ -8084,6 +8088,19 @@ wait_row_count Static_MAC_Binding 1 logical_port=lr0-p0 ip=192.168.10.100 mac="0
AT_CLEANUP
])
+OVN_FOR_EACH_NORTHD_NO_HV([
+AT_SETUP([LR NB Static_MAC_Binding garbage collection])
+ovn_start
+
+check ovn-nbctl lr-add lr -- lrp-add lr lr-lrp 00:00:00:00:00:01 1.1.1.1/24
+check ovn-nbctl static-mac-binding-add lr-lrp 1.1.1.2 00:00:00:00:00:02
+wait_row_count sb:Static_MAC_Binding 1 logical_port=lr-lrp ip=1.1.1.2 mac="00\:00\:00\:00\:00\:02"
+check ovn-nbctl lr-del lr
+wait_row_count sb:Static_MAC_Binding 0
+
+AT_CLEANUP
+])
+
OVN_FOR_EACH_NORTHD_NO_HV([
AT_SETUP([LR neighbor lookup and learning flows])
ovn_start
@@ -9490,9 +9507,9 @@ AT_CAPTURE_FILE([S1flows])
AT_CHECK([grep -e "ls_in_arp_rsp" S1flows | ovn_strip_lflows], [0], [dnl
table=??(ls_in_arp_rsp ), priority=0 , match=(1), action=(next;)
table=??(ls_in_arp_rsp ), priority=100 , match=(arp.tpa == 192.168.0.10 && arp.op == 1 && eth.dst == ff:ff:ff:ff:ff:ff && inport == "S1-vm1"), action=(next;)
- table=??(ls_in_arp_rsp ), priority=100 , match=(nd_ns && ip6.dst == {fd00::10, ff02::1:ff00:10} && nd.target == fd00::10 && inport == "S1-vm1"), action=(next;)
+ table=??(ls_in_arp_rsp ), priority=100 , match=(nd_ns_mcast && ip6.dst == ff02::1:ff00:10 && nd.target == fd00::10 && inport == "S1-vm1"), action=(next;)
table=??(ls_in_arp_rsp ), priority=50 , match=(arp.tpa == 192.168.0.10 && arp.op == 1 && eth.dst == ff:ff:ff:ff:ff:ff), action=(eth.dst = eth.src; eth.src = 50:54:00:00:00:10; arp.op = 2; /* ARP reply */ arp.tha = arp.sha; arp.sha = 50:54:00:00:00:10; arp.tpa = arp.spa; arp.spa = 192.168.0.10; outport = inport; flags.loopback = 1; output;)
- table=??(ls_in_arp_rsp ), priority=50 , match=(nd_ns && ip6.dst == {fd00::10, ff02::1:ff00:10} && nd.target == fd00::10), action=(nd_na { eth.src = 50:54:00:00:00:10; ip6.src = fd00::10; nd.target = fd00::10; nd.tll = 50:54:00:00:00:10; outport = inport; flags.loopback = 1; output; };)
+ table=??(ls_in_arp_rsp ), priority=50 , match=(nd_ns_mcast && ip6.dst == ff02::1:ff00:10 && nd.target == fd00::10), action=(nd_na { eth.src = 50:54:00:00:00:10; ip6.src = fd00::10; nd.target = fd00::10; nd.tll = 50:54:00:00:00:10; outport = inport; flags.loopback = 1; output; };)
])
#Set the disable_arp_nd_rsp option and verify the flow
@@ -12615,6 +12632,7 @@ AT_SETUP([Sampling_App incremental processing])
ovn_start
+ovn-nbctl --wait=sb sync
check as northd ovn-appctl -t ovn-northd inc-engine/clear-stats
ovn-nbctl create Sampling_App type="acl-new" id="42"
@@ -13761,3 +13779,96 @@ AT_CHECK([grep -e "172.168.0.110" -e "172.168.0.120" -e "10.0.0.3" -e "20.0.0.3"
AT_CLEANUP
])
+
+OVN_FOR_EACH_NORTHD_NO_HV([
+AT_SETUP([Routing protocol control plane redirect])
+ovn_start
+
+check ovn-sbctl chassis-add hv1 geneve 127.0.0.1
+
+check ovn-nbctl lr-add lr -- \
+ lrp-add lr lr-ls 02:ac:10:01:00:01 172.16.1.1/24
+check ovn-nbctl --wait=sb set logical_router lr options:chassis=hv1
+
+check ovn-nbctl ls-add ls -- \
+ lsp-add ls ls-lr -- \
+ lsp-set-type ls-lr router -- \
+ lsp-set-addresses ls-lr router -- \
+ lsp-set-options ls-lr router-port=lr-ls
+
+check ovn-nbctl lsp-add ls lsp-bgp -- \
+ lsp-set-addresses lsp-bgp unknown
+
+# Function that ensures that no redirect rules are installed.
+check_no_redirect() {
+ AT_CHECK([ovn-sbctl dump-flows ls | grep ls_in_l2_lkup | grep -E "tcp.dst == 179|tcp.src == 179" | wc -l], [0], [0
+])
+
+ AT_CHECK([ovn-sbctl dump-flows ls | grep ls_in_check_port_sec | grep -E "priority=80" | wc -l], [0], [0
+])
+ check_no_bfd_redirect
+}
+
+# Function that ensures that no BFD redirect rules are installed.
+check_no_bfd_redirect() {
+ AT_CHECK([ovn-sbctl dump-flows ls | grep ls_in_l2_lkup | grep -E "udp.dst == 3784|udp.src == 3784" | wc -l], [0], [0
+])
+}
+
+# By default, no rules related to routing protocol redirect are present
+check_no_redirect
+
+# Set "lsp-bgp" port as target of BGP control plane redirected traffic
+check ovn-nbctl --wait=sb set logical_router_port lr-ls options:routing-protocol-redirect=lsp-bgp
+check ovn-nbctl --wait=sb set logical_router_port lr-ls options:routing-protocols=BGP
+
+# Check that BGP control plane traffic is redirected to "lsp-bgp"
+AT_CHECK([ovn-sbctl dump-flows ls | grep ls_in_l2_lkup | grep -E "tcp.dst == 179|tcp.src == 179" | ovn_strip_lflows], [0], [dnl
+ table=??(ls_in_l2_lkup ), priority=100 , match=(ip4.dst == 172.16.1.1 && tcp.dst == 179), action=(outport = "lsp-bgp"; output;)
+ table=??(ls_in_l2_lkup ), priority=100 , match=(ip4.dst == 172.16.1.1 && tcp.src == 179), action=(outport = "lsp-bgp"; output;)
+ table=??(ls_in_l2_lkup ), priority=100 , match=(ip6.dst == fe80::ac:10ff:fe01:1 && tcp.dst == 179), action=(outport = "lsp-bgp"; output;)
+ table=??(ls_in_l2_lkup ), priority=100 , match=(ip6.dst == fe80::ac:10ff:fe01:1 && tcp.src == 179), action=(outport = "lsp-bgp"; output;)
+])
+
+# Check that ARP/ND traffic is cloned to "lsp-bgp"
+AT_CHECK([ovn-sbctl dump-flows ls | grep ls_in_l2_lkup | grep "arp.op == 2 && arp.tpa == 172.16.1.1" | ovn_strip_lflows], [0], [dnl
+ table=??(ls_in_l2_lkup ), priority=100 , match=(arp.op == 2 && arp.tpa == 172.16.1.1), action=(clone { outport = "lsp-bgp"; output; }; outport = "ls-lr"; output;)
+])
+AT_CHECK([ovn-sbctl dump-flows ls | grep ls_in_l2_lkup | grep "&& nd_na" | ovn_strip_lflows], [0], [dnl
+ table=??(ls_in_l2_lkup ), priority=100 , match=(ip6.dst == fe80::ac:10ff:fe01:1 && nd_na), action=(clone { outport = "lsp-bgp"; output; }; outport = "ls-lr"; output;)
+])
+
+# Check that at this point no BFD redirecting is present
+check_no_bfd_redirect
+
+# Add BFD traffic redirect
+check ovn-nbctl --wait=sb set logical_router_port lr-ls options:routing-protocols=BGP,BFD
+
+# Check that BFD traffic is redirected to "lsp-bgp"
+AT_CHECK([ovn-sbctl dump-flows ls | grep ls_in_l2_lkup | grep -E "udp.dst == 3784|udp.src == 3784" | ovn_strip_lflows], [0], [dnl
+ table=??(ls_in_l2_lkup ), priority=100 , match=(ip4.dst == 172.16.1.1 && udp.dst == 3784), action=(outport = "lsp-bgp"; output;)
+ table=??(ls_in_l2_lkup ), priority=100 , match=(ip4.dst == 172.16.1.1 && udp.src == 3784), action=(outport = "lsp-bgp"; output;)
+ table=??(ls_in_l2_lkup ), priority=100 , match=(ip6.dst == fe80::ac:10ff:fe01:1 && udp.dst == 3784), action=(outport = "lsp-bgp"; output;)
+ table=??(ls_in_l2_lkup ), priority=100 , match=(ip6.dst == fe80::ac:10ff:fe01:1 && udp.src == 3784), action=(outport = "lsp-bgp"; output;)
+])
+
+
+# Check that ARP replies and ND advertisements are blocked from exiting "lsp-bgp"
+AT_CHECK([ovn-sbctl dump-flows ls | grep ls_in_check_port_sec | grep "priority=80" | ovn_strip_lflows], [0], [dnl
+ table=??(ls_in_check_port_sec), priority=80 , match=(inport == "lsp-bgp" && arp.op == 2), action=(reg0[[15]] = 1; next;)
+ table=??(ls_in_check_port_sec), priority=80 , match=(inport == "lsp-bgp" && nd_na), action=(reg0[[15]] = 1; next;)
+ table=??(ls_in_check_port_sec), priority=80 , match=(inport == "lsp-bgp" && nd_ra), action=(reg0[[15]] = 1; next;)
+])
+
+# Remove the 'routing-protocol-redirect' option from the LRP and check that the rules are removed
+check ovn-nbctl --wait=sb remove logical_router_port lr-ls options routing-protocol-redirect
+check ovn-nbctl --wait=sb remove logical_router_port lr-ls options routing-protocols
+check_no_redirect
+
+# Set a non-existent LSP as the 'routing-protocol-redirect' target and check that no rules are added
+check ovn-nbctl --wait=sb set logical_router_port lr-ls options:routing-protocol-redirect=lsp-foo
+check ovn-nbctl --wait=sb set logical_router_port lr-ls options:routing-protocols=BGP,BFD
+check_no_redirect
+
+AT_CLEANUP
+])
diff --git a/tests/ovn.at b/tests/ovn.at
index a1d689e84..ec6e6c100 100644
--- a/tests/ovn.at
+++ b/tests/ovn.at
@@ -1263,11 +1263,27 @@ ct_lb_mark(backends=192.168.1.2:80,192.168.1.3:80; hash_fields="eth_src,eth_dst,
# ct_next
ct_next;
+ formats as ct_next(dnat);
+ encodes as ct(table=oflow_in_table,zone=NXM_NX_REG13[[0..15]])
+ has prereqs ip
+ct_next(dnat);
+ encodes as ct(table=oflow_in_table,zone=NXM_NX_REG13[[0..15]])
+ has prereqs ip
+ct_next(snat);
encodes as ct(table=oflow_in_table,zone=NXM_NX_REG13[[0..15]])
has prereqs ip
ct_clear; ct_next;
+ formats as ct_clear; ct_next(dnat);
encodes as ct_clear,ct(table=oflow_in_table,zone=NXM_NX_REG13[[0..15]])
has prereqs ip
+ct_next(snat, dnat);
+ Syntax error at `,' expecting `)'.
+ct_next(dnat, ignore);
+ Syntax error at `,' expecting `)'.
+ct_next(ignore);
+ "ct_next" action accepts only "dnat" or "snat" parameter.
+ct_next();
+ "ct_next" action accepts only "dnat" or "snat" parameter.
# ct_commit
ct_commit;
@@ -8617,7 +8633,7 @@ done
# Complete Neighbor Solicitation packet and Neighbor Advertisement packet
# vif1 -> NS -> vif2. vif1 <- NA <- ovn-controller.
# vif2 will not receive NS packet, since ovn-controller will reply for it.
-ns_packet=3333ffa1f9aefa163e94059886dd6000000000203afffd81ce49a9480000f8163efffe940598fd81ce49a9480000f8163efffea1f9ae8700e01160000000fd81ce49a9480000f8163efffea1f9ae0101fa163e940598
+ns_packet=3333ffa1f9aefa163e94059886dd6000000000203afffd81ce49a9480000f8163efffe940598ff0200000000000000000001ffa1f9ae8700e01160000000fd81ce49a9480000f8163efffea1f9ae0101fa163e940598
na_packet=fa163e940598fa163ea1f9ae86dd6000000000203afffd81ce49a9480000f8163efffea1f9aefd81ce49a9480000f8163efffe9405988800e9ed60000000fd81ce49a9480000f8163efffea1f9ae0201fa163ea1f9ae
as hv1 ovs-appctl netdev-dummy/receive vif1 $ns_packet
@@ -14417,7 +14433,7 @@ test_ns_na() {
local inport=$1 src_mac=$2 dst_mac=$3 src_ip=$4 dst_ip=$5
packet=$(fmt_pkt "
- Ether(dst='ff:ff:ff:ff:ff:ff', src='${src_mac}') /
+ Ether(dst='33:33:ff:ff:ff:ff', src='${src_mac}') /
IPv6(src='${src_ip}', dst='ff02::1:ff00:2') /
ICMPv6ND_NS(tgt='${dst_ip}')
")
@@ -15740,14 +15756,17 @@ m4_define([MULTICHASSIS_PATH_MTU_DISCOVERY_TEST],
second_mac=00:00:00:00:00:02
multi1_mac=00:00:00:00:00:f0
multi2_mac=00:00:00:00:00:f1
+ external_mac=00:00:00:00:ee:ff
first_ip=10.0.0.1
second_ip=10.0.0.2
multi1_ip=10.0.0.10
multi2_ip=10.0.0.20
+ external_ip=10.0.0.30
first_ip6=abcd::1
second_ip6=abcd::2
multi1_ip6=abcd::f0
multi2_ip6=abcd::f1
+ external_ip6=abcd::eeff
check ovn-nbctl ls-add ls0
check ovn-nbctl lsp-add ls0 first
@@ -15866,6 +15885,24 @@ m4_define([MULTICHASSIS_PATH_MTU_DISCOVERY_TEST],
reset_env
+ AS_BOX([Multi with "unknown" to external doesn't produce wrong FDB])
+ len=3000
+ check ovn-nbctl --wait=hv lsp-set-addresses multi1 unknown
+
+ packet=$(send_ip_packet multi1 1 $multi1_mac $external_mac $multi1_ip $external_ip $(payload $len) 1 ${expected_ip_mtu})
+ echo $packet >> hv1/multi1.expected
+
+ packet=$(send_ip6_packet multi1 1 $multi1_mac $external_mac $multi1_ip6 $external_ip6 $(payload $len) 1 ${expected_ip_mtu})
+ echo $packet >> hv1/multi1.expected
+
+ check_pkts
+ reset_env
+
+ check_row_count fdb 0 mac="$external_mac"
+ ovn-sbctl --all destroy fdb
+
+ check ovn-nbctl --wait=hv lsp-set-addresses multi1 "${multi1_mac} ${multi1_ip} ${multi1_ip6}"
+
AS_BOX([Packets of proper size are delivered from multichassis to regular ports])
len=1000
@@ -15965,9 +16002,6 @@ m4_define([MULTICHASSIS_PATH_MTU_DISCOVERY_TEST],
packet=$(send_ip6_packet multi1 1 $multi1_mac $multi2_mac $multi1_ip6 $multi2_ip6 $(payload $len) 1)
echo $packet >> hv1/multi1.expected
- check_pkts
- reset_env
-
AS_BOX([MTU updates are honored in ICMP Path MTU calculation])
set_mtu() {
@@ -28717,7 +28751,7 @@ AT_CHECK([
for hv in 1 2; do
grep table=$ecmp_stateful hv${hv}flows | \
grep "priority=100" | \
- grep -c "ct(commit,zone=NXM_NX_REG11\\[[0..15\\]],.*exec(move:NXM_OF_ETH_SRC\\[[\\]]->NXM_NX_CT_LABEL\\[[32..79\\]],load:0x[[0-9]]->NXM_NX_CT_MARK\\[[16..31\\]],load:0x[[0-9]]->NXM_NX_CT_LABEL\\[[96..127\\]]))"
+ grep -c "ct(commit,zone=NXM_NX_REG11\\[[0..15\\]],.*exec(move:NXM_OF_ETH_SRC\\[[\\]]->NXM_NX_CT_LABEL\\[[32..79\\]],load:0x[[0-9]]->NXM_NX_CT_MARK\\[[16..31\\]]))"
grep table=$arp_resolve hv${hv}flows | \
grep "priority=200" | \
@@ -28846,7 +28880,7 @@ AT_CHECK([
for hv in 1 2; do
grep table=$ecmp_stateful hv${hv}flows | \
grep "priority=100" | \
- grep -c "ct(commit,zone=NXM_NX_REG11\\[[0..15\\]],.*exec(move:NXM_OF_ETH_SRC\\[[\\]]->NXM_NX_CT_LABEL\\[[32..79\\]],load:0x[[0-9]]->NXM_NX_CT_MARK\\[[16..31\\]],load:0x[[0-9]]->NXM_NX_CT_LABEL\\[[96..127\\]]))"
+ grep -c "ct(commit,zone=NXM_NX_REG11\\[[0..15\\]],.*exec(move:NXM_OF_ETH_SRC\\[[\\]]->NXM_NX_CT_LABEL\\[[32..79\\]],load:0x[[0-9]]->NXM_NX_CT_MARK\\[[16..31\\]]))"
grep table=$arp_resolve hv${hv}flows | \
grep "priority=200" | \
@@ -34709,11 +34743,13 @@ port_key_1=$(printf "0x%x" $(as hv1 fetch_column port_binding tunnel_key logical
dp_key_2=$(printf "0x%x" $(as hv1 fetch_column datapath tunnel_key external_ids:name=gw-2))
port_key_2=$(printf "0x%x" $(as hv1 fetch_column port_binding tunnel_key logical_port=gw-2-public))
-AT_CHECK_UNQUOTED([as hv1 ovs-ofctl dump-flows br-int table=OFTABLE_MAC_CACHE_USE --no-stats | strip_cookie | sort], [0], [dnl
- table=OFTABLE_MAC_CACHE_USE, priority=100,ip,reg14=${port_key_1},metadata=${dp_key_1},dl_src=00:00:00:00:10:10,nw_src=192.168.10.10 actions=drop
+table=" table=OFTABLE_MAC_CACHE_USE, priority=100,ip,reg14=${port_key_1},metadata=${dp_key_1},dl_src=00:00:00:00:10:10,nw_src=192.168.10.10 actions=drop
table=OFTABLE_MAC_CACHE_USE, priority=100,ip,reg14=${port_key_1},metadata=${dp_key_1},dl_src=00:00:00:00:10:20,nw_src=192.168.10.20 actions=drop
table=OFTABLE_MAC_CACHE_USE, priority=100,ip,reg14=${port_key_2},metadata=${dp_key_2},dl_src=00:00:00:00:10:10,nw_src=192.168.10.10 actions=drop
- table=OFTABLE_MAC_CACHE_USE, priority=100,ip,reg14=${port_key_2},metadata=${dp_key_2},dl_src=00:00:00:00:10:20,nw_src=192.168.10.20 actions=drop
+ table=OFTABLE_MAC_CACHE_USE, priority=100,ip,reg14=${port_key_2},metadata=${dp_key_2},dl_src=00:00:00:00:10:20,nw_src=192.168.10.20 actions=drop"
+sorted_table=$(printf '%s\n' "$table" | sort)
+AT_CHECK_UNQUOTED([as hv1 ovs-ofctl dump-flows br-int table=OFTABLE_MAC_CACHE_USE --no-stats | strip_cookie | sort], [0], [dnl
+$sorted_table
])
timestamp=$(fetch_column mac_binding timestamp ip="192.168.10.20")
@@ -38238,49 +38274,60 @@ AT_CLEANUP
OVN_FOR_EACH_NORTHD([
AT_SETUP([ovn-controller - cleanup VIF/CIF related flows/fields when updating requested-chassis])
ovn_start
-
net_add n1
-sim_add hv1
-ovs-vsctl add-br br-phys
-ovn_attach n1 br-phys 192.168.0.1
-check ovs-vsctl -- add-port br-int vif1 -- \
- set Interface vif1 external-ids:iface-id=lsp1 \
- ofport-request=8
-check ovn-nbctl ls-add lsw0
+for i in 1 2; do
+ sim_add hv$i
+ as hv$i
+ ovs-vsctl add-br br-phys
+ ovn_attach n1 br-phys 192.168.0.$i
+ check ovs-vsctl -- add-port br-int vif1 -- \
+ set Interface vif1 ofport-request=8
+done
+check ovn-nbctl ls-add lsw0
+as hv1
+check ovs-vsctl set Interface vif1 external-ids:iface-id=lsp1
check ovn-nbctl lsp-add lsw0 lsp1
check ovn-nbctl lsp-add lsw0 sw0-port1.1 lsp1 7
# wait for the VIF to be claimed to this chassis
wait_row_count Chassis 1 name=hv1
+wait_row_count Chassis 1 name=hv2
hv1_uuid=$(fetch_column Chassis _uuid name=hv1)
+hv2_uuid=$(fetch_column Chassis _uuid name=hv2)
+
wait_for_ports_up lsp1
wait_for_ports_up sw0-port1.1
wait_column "$hv1_uuid" Port_Binding chassis logical_port=lsp1
wait_column "$hv1_uuid" Port_Binding chassis logical_port=sw0-port1.1
# check that flows is installed
-OVS_WAIT_FOR_OUTPUT([as hv1 ovs-ofctl dump-flows br-int table=0 |grep priority=100 | grep -c in_port=8], [0],[dnl
+OVS_WAIT_FOR_OUTPUT([as hv1 ovs-ofctl dump-flows br-int table=OFTABLE_PHY_TO_LOG |grep priority=150|grep dl_vlan=7| grep -c in_port=8], [0],[dnl
1
])
-OVS_WAIT_FOR_OUTPUT([as hv1 ovs-ofctl dump-flows br-int table=0 |grep priority=150|grep dl_vlan=7| grep -c in_port=8], [0],[dnl
-1
+
+OVS_WAIT_FOR_OUTPUT([as hv2 ovs-ofctl dump-flows br-int table=OFTABLE_PHY_TO_LOG |grep priority=150|grep dl_vlan=7| grep -c in_port=8], [1],[dnl
+0
])
-# set lport requested-chassis to differant chassis
+# Add hv2 as the main chassis in the lport's requested-chassis list and check
+# that no flows are installed in table 0 on hv1.
check ovn-nbctl set Logical_Switch_Port lsp1 \
- options:requested-chassis=foo
+ options:requested-chassis=hv2,hv1
-OVS_WAIT_UNTIL([test `ovn-sbctl get Port_Binding lsp1 up` = 'false'])
-OVS_WAIT_UNTIL([test `ovn-sbctl get Port_Binding sw0-port1.1 up` = 'false'])
-wait_column "" Port_Binding chassis logical_port=lsp1
-wait_column "" Port_Binding chassis logical_port=sw0-port1.1
+as hv2
+check ovs-vsctl set Interface vif1 external-ids:iface-id=lsp1
+ovn-nbctl --wait=hv sync
-OVS_WAIT_FOR_OUTPUT([as hv1 ovs-ofctl dump-flows br-int table=0 |grep priority=100 |grep -c in_port=8], [1],[dnl
-0
+wait_for_ports_up lsp1
+wait_for_ports_up sw0-port1.1
+wait_column "$hv2_uuid" Port_Binding chassis logical_port=lsp1
+
+OVS_WAIT_FOR_OUTPUT([as hv2 ovs-ofctl dump-flows br-int table=OFTABLE_PHY_TO_LOG |grep priority=150|grep dl_vlan=7| grep -c in_port=8], [0],[dnl
+1
])
-OVS_WAIT_FOR_OUTPUT([as hv1 ovs-ofctl dump-flows br-int table=0 |grep priority=150|grep dl_vlan=7| grep -c in_port=8], [1],[dnl
+OVS_WAIT_FOR_OUTPUT([as hv1 ovs-ofctl dump-flows br-int table=OFTABLE_PHY_TO_LOG |grep priority=150|grep dl_vlan=7| grep -c in_port=8], [1],[dnl
0
])
@@ -38756,3 +38803,238 @@ OVN_CLEANUP([hv1],[hv2])
AT_CLEANUP
])
+
+dnl This test checks that the megaflows translated by ovs-vswitchd
+dnl don't match on IPv6 source and destination addresses for
+dnl simple switching.
+OVN_FOR_EACH_NORTHD([
+AT_SETUP([IPv6 switching - megaflow check for IPv6 src/dst matches])
+AT_SKIP_IF([test $HAVE_SCAPY = no])
+ovn_start
+
+check ovn-nbctl ls-add sw0
+
+check ovn-nbctl lsp-add sw0 vm0
+check ovn-nbctl lsp-set-addresses vm0 "f0:00:0f:01:02:03 10.0.0.3 1000::3"
+
+check ovn-nbctl lsp-add sw0 vm1
+check ovn-nbctl lsp-set-addresses vm1 "f0:00:0f:01:02:04 10.0.0.4 1000::4"
+
+check ovn-nbctl lr-add lr0
+check ovn-nbctl lrp-add lr0 lr0-sw0 fa:16:3e:00:00:01 10.0.0.1 1000::1/64
+check ovn-nbctl lsp-add sw0 sw0-lr0
+check ovn-nbctl lsp-set-type sw0-lr0 router
+check ovn-nbctl lsp-set-addresses sw0-lr0 router
+check ovn-nbctl --wait=hv lsp-set-options sw0-lr0 router-port=lr0-sw0
+
+net_add n1
+sim_add hv
+as hv
+check ovs-vsctl add-br br-phys
+ovn_attach n1 br-phys 192.168.0.1
+check ovs-vsctl add-port br-int vif1 -- \
+ set Interface vif1 external-ids:iface-id=vm0 \
+ options:tx_pcap=hv/vif1-tx.pcap \
+ options:rxq_pcap=hv/vif1-rx.pcap \
+ ofport-request=1
+check ovs-vsctl add-port br-int vif2 -- \
+ set Interface vif2 external-ids:iface-id=vm1 \
+ options:tx_pcap=hv/vif2-tx.pcap \
+ options:rxq_pcap=hv/vif2-rx.pcap \
+ ofport-request=2
+
+check ovn-nbctl --wait=sb sync
+wait_for_ports_up
+
+AS_BOX([No port security, to vm1])
+packet=$(fmt_pkt "Ether(dst='f0:00:0f:01:02:04', src='f0:00:0f:01:02:03')/ \
+ IPv6(dst='1000::4', src='1000::3')/ \
+ UDP(sport=53, dport=4369)")
+
+as hv
+ovs-appctl ofproto/trace br-int in_port=1 $packet > vm0_ip6_ofproto_trace.txt
+ovs-appctl netdev-dummy/receive vif1 $packet
+
+AT_CAPTURE_FILE([vm0_ip6_ofproto_trace.txt])
+
+AT_CHECK([grep Megaflow vm0_ip6_ofproto_trace.txt | grep -e ipv6_src -e ipv6_dst -c], [1], [dnl
+0
+])
+
+dnl Make sure that the packet was received by vm1. This ensures that the
+dnl packet was delivered as expected and the megaflow didn't have any matches
+dnl on IPv6 src or dst.
+
+echo $packet >> expected-vif2
+OVN_CHECK_PACKETS([hv/vif2-tx.pcap], [expected-vif2])
+
+AS_BOX([No port security, to vm0])
+packet=$(fmt_pkt "Ether(dst='f0:00:0f:01:02:03', src='f0:00:0f:01:02:04')/ \
+ IPv6(dst='1000::3', src='1000::4')/ \
+ UDP(sport=53, dport=4369)")
+
+as hv
+ovs-appctl ofproto/trace br-int in_port=2 $packet > vm1_ip6_ofproto_trace.txt
+ovs-appctl netdev-dummy/receive vif2 $packet
+
+AT_CAPTURE_FILE([vm1_ip6_ofproto_trace.txt])
+
+AT_CHECK([grep Megaflow vm1_ip6_ofproto_trace.txt | grep -e ipv6_src -e ipv6_dst -c], [1], [dnl
+0
+])
+
+dnl Make sure that the packet was received by vm0. This ensures that the
+dnl packet was delivered as expected and the megaflow didn't have any matches
+dnl on IPv6 src or dst.
+echo $packet >> expected-vif1
+OVN_CHECK_PACKETS([hv/vif1-tx.pcap], [expected-vif1])
+
+AS_BOX([With port security, to vm1])
+dnl Add port security to vm0. The megaflow should now match on ipv6 src/dst.
+check ovn-nbctl lsp-set-port-security vm0 "f0:00:0f:01:02:03 10.0.0.3 1000::3"
+check ovn-nbctl --wait=hv sync
+
+packet=$(fmt_pkt "Ether(dst='f0:00:0f:01:02:04', src='f0:00:0f:01:02:03')/ \
+ IPv6(dst='1000::4', src='1000::3')/ \
+ UDP(sport=53, dport=4369)")
+
+as hv
+ovs-appctl ofproto/trace br-int in_port=1 $packet > vm0_ip6_ofproto_trace.txt
+ovs-appctl netdev-dummy/receive vif1 $packet
+
+AT_CAPTURE_FILE([vm0_ip6_ofproto_trace.txt])
+
+AT_CHECK([grep Megaflow vm0_ip6_ofproto_trace.txt | grep -e ipv6_src -e ipv6_dst -c], [0], [dnl
+1
+])
+
+dnl Make sure that the packet was received by vm1.
+echo $packet >> expected-vif2
+OVN_CHECK_PACKETS([hv/vif2-tx.pcap], [expected-vif2])
+
+AS_BOX([Clear port security, to vm1])
+dnl Clear port security.
+check ovn-nbctl lsp-set-port-security vm0 ""
+check ovn-nbctl --wait=hv sync
+
+as hv
+ovs-appctl ofproto/trace br-int in_port=1 $packet > vm0_ip6_ofproto_trace.txt
+ovs-appctl netdev-dummy/receive vif1 $packet
+
+AT_CAPTURE_FILE([vm0_ip6_ofproto_trace.txt])
+
+AT_CHECK([grep Megaflow vm0_ip6_ofproto_trace.txt | grep -e ipv6_src -e ipv6_dst -c], [1], [dnl
+0
+])
+
+dnl Make sure that the packet was received by vm1.
+echo $packet >> expected-vif2
+OVN_CHECK_PACKETS([hv/vif2-tx.pcap], [expected-vif2])
+
+AS_BOX([With proxy arp/nd, to vm1])
+dnl Configure proxy arp/nd on the router port. The megaflow should now match
+dnl on ipv6 src/dst.
+check ovn-nbctl --wait=hv lsp-set-options sw0-lr0 router-port=lr0-sw0 arp_proxy="2000::1/64"
+
+as hv
+ovs-appctl ofproto/trace br-int in_port=1 $packet > vm0_ip6_ofproto_trace.txt
+ovs-appctl netdev-dummy/receive vif1 $packet
+
+AT_CAPTURE_FILE([vm0_ip6_ofproto_trace.txt])
+
+AT_CHECK([grep Megaflow vm0_ip6_ofproto_trace.txt | grep -e ipv6_src -e ipv6_dst -c], [0], [dnl
+1
+])
+
+dnl Make sure that the packet was received by vm1.
+echo $packet >> expected-vif2
+OVN_CHECK_PACKETS([hv/vif2-tx.pcap], [expected-vif2])
+
+AS_BOX([With proxy arp/nd, to vm0])
+packet=$(fmt_pkt "Ether(dst='f0:00:0f:01:02:03', src='f0:00:0f:01:02:04')/ \
+ IPv6(dst='1000::3', src='1000::4')/ \
+ UDP(sport=53, dport=4369)")
+
+as hv
+ovs-appctl ofproto/trace br-int in_port=2 $packet > vm1_ip6_ofproto_trace.txt
+ovs-appctl netdev-dummy/receive vif2 $packet
+
+AT_CAPTURE_FILE([vm1_ip6_ofproto_trace.txt])
+
+AT_CHECK([grep Megaflow vm1_ip6_ofproto_trace.txt | grep -e ipv6_src -e ipv6_dst -c], [0], [dnl
+1
+])
+
+dnl Make sure that the packet was received by vm0.
+echo $packet >> expected-vif1
+OVN_CHECK_PACKETS([hv/vif1-tx.pcap], [expected-vif1])
+
+AT_CLEANUP
+])
+
+
+OVN_FOR_EACH_NORTHD([
+AT_SETUP([Multichassis port I-P processing])
+ovn_start
+
+net_add n1
+
+sim_add hv1
+as hv1
+check ovs-vsctl add-br br-phys
+ovn_attach n1 br-phys 192.168.0.11
+check ovs-vsctl set open . external-ids:ovn-bridge-mappings=phys:br-phys
+
+sim_add hv2
+as hv2
+check ovs-vsctl add-br br-phys
+ovn_attach n1 br-phys 192.168.0.12
+check ovs-vsctl set open . external-ids:ovn-bridge-mappings=phys:br-phys
+
+check ovn-nbctl ls-add ls
+check ovn-nbctl lsp-add ls multi
+check ovn-nbctl lsp-set-options multi requested-chassis=hv1
+
+check ovn-nbctl lsp-add ls ln
+check ovn-nbctl lsp-set-type ln localnet
+check ovn-nbctl lsp-set-addresses ln unknown
+check ovn-nbctl lsp-set-options ln network_name=phys
+
+check ovn-nbctl lsp-add ls lsp1 \
+ -- lsp-set-options lsp1 requested-chassis=hv1
+as hv1 check ovs-vsctl -- add-port br-int lsp1 \
+ -- set Interface lsp1 external-ids:iface-id=lsp1
+
+for hv in hv1 hv2; do
+ as $hv check ovs-vsctl -- add-port br-int multi \
+ -- set Interface multi external-ids:iface-id=multi
+done
+
+wait_for_ports_up
+ovn-nbctl --wait=hv sync
+
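+# While "multi" is requested on hv1 only, hv2 must not install any
+# check_pkt_larger flows for it.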
+OVS_WAIT_UNTIL([test $(as hv2 ovs-ofctl dump-flows br-int table=OFTABLE_OUTPUT_LARGE_PKT_DETECT | grep -c check_pkt_larger) -eq 0])
+
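+# Making the port multichassis (hv1 main, hv2 additional) is expected to add
+# check_pkt_larger flows on hv2; dropping back to a single chassis removes them.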
+check ovn-nbctl --wait=hv lsp-set-options multi requested-chassis=hv1,hv2
+OVS_WAIT_UNTIL([test $(as hv2 ovs-ofctl dump-flows br-int table=OFTABLE_OUTPUT_LARGE_PKT_DETECT | grep -c check_pkt_larger) -eq 4])
+
+check ovn-nbctl --wait=hv lsp-set-options multi requested-chassis=hv1
+OVS_WAIT_UNTIL([test $(as hv2 ovs-ofctl dump-flows br-int table=OFTABLE_OUTPUT_LARGE_PKT_DETECT | grep -c check_pkt_larger) -eq 0])
+
+check ovn-nbctl --wait=hv lsp-set-options multi requested-chassis=hv1,hv2
+OVS_WAIT_UNTIL([test $(as hv2 ovs-ofctl dump-flows br-int table=OFTABLE_OUTPUT_LARGE_PKT_DETECT | grep -c check_pkt_larger) -eq 4])
+
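+# Removing and re-adding the local VIF on hv2 should remove and re-install
+# those flows, and deleting the logical port should remove them for good.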
+as hv2 check ovs-vsctl del-port multi
+OVS_WAIT_UNTIL([test $(as hv2 ovs-ofctl dump-flows br-int table=OFTABLE_OUTPUT_LARGE_PKT_DETECT | grep -c check_pkt_larger) -eq 0])
+
+as hv2 check ovs-vsctl -- add-port br-int multi \
+ -- set Interface multi external-ids:iface-id=multi
+OVS_WAIT_UNTIL([test $(as hv2 ovs-ofctl dump-flows br-int table=OFTABLE_OUTPUT_LARGE_PKT_DETECT | grep -c check_pkt_larger) -eq 4])
+
+check ovn-nbctl --wait=hv lsp-del multi
+OVS_WAIT_UNTIL([test $(as hv2 ovs-ofctl dump-flows br-int table=OFTABLE_OUTPUT_LARGE_PKT_DETECT | grep -c check_pkt_larger) -eq 0])
+
+OVN_CLEANUP([hv1],[hv2])
+
+AT_CLEANUP
+])
diff --git a/tests/system-ovn.at b/tests/system-ovn.at
index c54b0f3a5..861b1cb99 100644
--- a/tests/system-ovn.at
+++ b/tests/system-ovn.at
@@ -3518,7 +3518,7 @@ AT_CHECK([ovn-nbctl lr-nat-add R1 dnat_and_snat 172.16.1.3 192.168.1.2 foo1 00:0
AT_CHECK([ovn-nbctl lr-nat-add R1 dnat_and_snat 172.16.1.4 192.168.1.3 foo2 00:00:02:02:03:05])
# Add a SNAT rule
-AT_CHECK([ovn-nbctl lr-nat-add R1 snat 172.16.1.1 192.168.0.0/16])
+AT_CHECK([ovn-nbctl lr-nat-add R1 snat 172.16.1.1 0.0.0.0/0])
# Add default route to ext-net
AT_CHECK([ovn-nbctl lr-route-add R1 10.0.0.0/24 172.16.1.2])
@@ -3724,8 +3724,7 @@ AT_CHECK([ovn-nbctl lr-nat-add R1 dnat_and_snat fd20::3 fd11::2 foo1 00:00:02:02
AT_CHECK([ovn-nbctl lr-nat-add R1 dnat_and_snat fd20::4 fd11::3 foo2 00:00:02:02:03:05])
# Add a SNAT rule
-AT_CHECK([ovn-nbctl lr-nat-add R1 snat fd20::1 fd11::/64])
-AT_CHECK([ovn-nbctl lr-nat-add R1 snat fd20::1 fd12::/64])
+AT_CHECK([ovn-nbctl lr-nat-add R1 snat fd20::1 ::/0])
ovn-nbctl --wait=hv sync
OVS_WAIT_UNTIL([ovs-ofctl dump-flows br-int | grep 'nat(src=fd20::1)'])
@@ -3920,7 +3919,7 @@ AT_CHECK([ovn-nbctl lr-nat-add R1 dnat_and_snat 172.16.1.3 192.168.1.2 foo1 00:0
AT_CHECK([ovn-nbctl lr-nat-add R1 dnat_and_snat 172.16.1.4 192.168.2.2 bar1 00:00:02:02:03:05])
# Add a SNAT rule
-AT_CHECK([ovn-nbctl lr-nat-add R1 snat 172.16.1.1 192.168.0.0/16])
+AT_CHECK([ovn-nbctl lr-nat-add R1 snat 172.16.1.1 0.0.0.0/0])
ovn-nbctl --wait=hv sync
OVS_WAIT_UNTIL([ovs-ofctl dump-flows br-int | grep 'nat(src=172.16.1.1)'])
@@ -4104,8 +4103,7 @@ AT_CHECK([ovn-nbctl lr-nat-add R1 dnat_and_snat fd20::3 fd11::2 foo1 00:00:02:02
AT_CHECK([ovn-nbctl lr-nat-add R1 dnat_and_snat fd20::4 fd12::2 bar1 00:00:02:02:03:05])
# Add a SNAT rule
-AT_CHECK([ovn-nbctl lr-nat-add R1 snat fd20::1 fd11::/64])
-AT_CHECK([ovn-nbctl lr-nat-add R1 snat fd20::1 fd12::/64])
+AT_CHECK([ovn-nbctl lr-nat-add R1 snat fd20::1 ::/0])
ovn-nbctl --wait=hv sync
OVS_WAIT_UNTIL([ovs-ofctl dump-flows br-int | grep 'nat(src=fd20::1)'])
@@ -6172,21 +6170,18 @@ NS_CHECK_EXEC([bob1], [ping -q -c 3 -i 0.3 -w 2 10.0.0.2 | FORMAT_PING], \
# and just ensure that the known ethernet address is present.
AT_CHECK([ovs-appctl dpctl/dump-conntrack | FORMAT_CT(172.16.0.1) | \
sed -e 's/zone=[[0-9]]*/zone=<cleared>/' |
-sed -e 's/mark=[[0-9]]*/mark=<cleared>/' |
-sed -e 's/labels=0x[[0-9]]/labels=0x?/'], [0], [dnl
-icmp,orig=(src=172.16.0.1,dst=10.0.0.2,id=<cleared>,type=8,code=0),reply=(src=10.0.0.2,dst=172.16.0.1,id=<cleared>,type=0,code=0),zone=<cleared>,mark=<cleared>,labels=0x?000000000401020400000000
-tcp,orig=(src=172.16.0.1,dst=10.0.0.2,sport=<cleared>,dport=<cleared>),reply=(src=10.0.0.2,dst=172.16.0.1,sport=<cleared>,dport=<cleared>),zone=<cleared>,mark=<cleared>,labels=0x?000000000401020400000000,protoinfo=(state=<cleared>)
+sed -e 's/mark=[[0-9]]*/mark=<cleared>/'], [0], [dnl
+icmp,orig=(src=172.16.0.1,dst=10.0.0.2,id=<cleared>,type=8,code=0),reply=(src=10.0.0.2,dst=172.16.0.1,id=<cleared>,type=0,code=0),zone=<cleared>,mark=<cleared>,labels=0x401020400000000
+tcp,orig=(src=172.16.0.1,dst=10.0.0.2,sport=<cleared>,dport=<cleared>),reply=(src=10.0.0.2,dst=172.16.0.1,sport=<cleared>,dport=<cleared>),zone=<cleared>,mark=<cleared>,labels=0x401020400000000,protoinfo=(state=<cleared>)
])
# Ensure datapaths show conntrack states as expected
# Like with conntrack entries, we shouldn't try to predict
# port binding tunnel keys. So omit them from expected labels.
-AT_CHECK([ovs-appctl dpctl/dump-flows | sed -e 's/label=0x[[0-9]]/label=0x?/' | \
-grep 'ct_state(+new-est+trk).*ct(.*label=0x?000000000401020400000000/.*)' -c], [0], [dnl
+AT_CHECK([ovs-appctl dpctl/dump-flows | grep 'ct_state(+new-est+trk).*ct(.*label=0x401020400000000/.*)' -c], [0], [dnl
2
])
-AT_CHECK([[ovs-appctl dpctl/dump-flows | sed -e 's/ct_label(0x[0-9]/ct_label(0x?/' | \
-grep 'ct_state(-new+est+trk).*ct_label(0x?000000000401020400000000)' -c]], [0], [dnl
+AT_CHECK([ovs-appctl dpctl/dump-flows | grep 'ct_state(-new+est+trk).*ct_label(0x401020400000000)' -c], [0], [dnl
2
])
@@ -6205,21 +6200,18 @@ NS_CHECK_EXEC([bob1], [ping -q -c 3 -i 0.3 -w 2 10.0.0.2 | FORMAT_PING], \
[0], [dnl
3 packets transmitted, 3 received, 0% packet loss, time 0ms
])
-AT_CHECK([ovs-appctl dpctl/dump-flows | sed -e 's/label=0x[[0-9]]/label=0x?/' | \
-grep 'ct_state(+new-est+trk).*ct(.*label=0x?000000001001020400000000/.*)' -c], [0], [dnl
+AT_CHECK([ovs-appctl dpctl/dump-flows | grep 'ct_state(+new-est+trk).*ct(.*label=0x1001020400000000/.*)' -c], [0], [dnl
2
])
-AT_CHECK([[ovs-appctl dpctl/dump-flows | sed -e 's/ct_label(0x[0-9]/ct_label(0x?/' | \
-grep 'ct_state(-new+est+trk).*ct_label(0x?000000001001020400000000)' -c]], [0], [dnl
+AT_CHECK([ovs-appctl dpctl/dump-flows | grep 'ct_state(-new+est+trk).*ct_label(0x1001020400000000)' -c], [0], [dnl
2
])
-AT_CHECK([ovs-appctl dpctl/dump-conntrack | grep 1001020400000000 | FORMAT_CT(172.16.0.1) | \
+AT_CHECK([ovs-appctl dpctl/dump-conntrack | grep 0x1001020400000000 | FORMAT_CT(172.16.0.1) | \
sed -e 's/zone=[[0-9]]*/zone=<cleared>/' |
-sed -e 's/mark=[[0-9]]*/mark=<cleared>/' |
-sed -e 's/labels=0x[[0-9]]/labels=0x?/' | sort], [0], [dnl
-icmp,orig=(src=172.16.0.1,dst=10.0.0.2,id=<cleared>,type=8,code=0),reply=(src=10.0.0.2,dst=172.16.0.1,id=<cleared>,type=0,code=0),zone=<cleared>,mark=<cleared>,labels=0x?000000001001020400000000
-tcp,orig=(src=172.16.0.1,dst=10.0.0.2,sport=<cleared>,dport=<cleared>),reply=(src=10.0.0.2,dst=172.16.0.1,sport=<cleared>,dport=<cleared>),zone=<cleared>,mark=<cleared>,labels=0x?000000001001020400000000,protoinfo=(state=<cleared>)
+sed -e 's/mark=[[0-9]]*/mark=<cleared>/' | sort], [0], [dnl
+icmp,orig=(src=172.16.0.1,dst=10.0.0.2,id=<cleared>,type=8,code=0),reply=(src=10.0.0.2,dst=172.16.0.1,id=<cleared>,type=0,code=0),zone=<cleared>,mark=<cleared>,labels=0x1001020400000000
+tcp,orig=(src=172.16.0.1,dst=10.0.0.2,sport=<cleared>,dport=<cleared>),reply=(src=10.0.0.2,dst=172.16.0.1,sport=<cleared>,dport=<cleared>),zone=<cleared>,mark=<cleared>,labels=0x1001020400000000,protoinfo=(state=<cleared>)
])
# Check entries in table 76 and 77 expires w/o traffic
OVS_WAIT_UNTIL([
@@ -6241,16 +6233,27 @@ NS_CHECK_EXEC([alice1], [ping -q -c 3 -i 0.3 -w 2 172.16.0.1 | FORMAT_PING], \
3 packets transmitted, 3 received, 0% packet loss, time 0ms
])
-AT_CHECK([ovs-appctl dpctl/dump-conntrack | grep 401020500000000 | FORMAT_CT(172.16.0.1) | \
+AT_CHECK([ovs-appctl dpctl/dump-conntrack | grep 0x401020500000000 | FORMAT_CT(172.16.0.1) | \
sed -e 's/zone=[[0-9]]*/zone=<cleared>/' |
-sed -e 's/mark=[[0-9]]*/mark=<cleared>/' |
-sed -e 's/labels=0x[[0-9]]/labels=0x?/' | sort], [0], [dnl
-tcp,orig=(src=172.16.0.1,dst=10.0.0.2,sport=<cleared>,dport=<cleared>),reply=(src=10.0.0.2,dst=172.16.0.1,sport=<cleared>,dport=<cleared>),zone=<cleared>,mark=<cleared>,labels=0x?000000000401020500000000,protoinfo=(state=<cleared>)
+sed -e 's/mark=[[0-9]]*/mark=<cleared>/' | sort], [0], [dnl
+tcp,orig=(src=172.16.0.1,dst=10.0.0.2,sport=<cleared>,dport=<cleared>),reply=(src=10.0.0.2,dst=172.16.0.1,sport=<cleared>,dport=<cleared>),zone=<cleared>,mark=<cleared>,labels=0x401020500000000,protoinfo=(state=<cleared>)
])
-# Flush connection tracking entries
-ovn-nbctl --wait=hv lr-route-del R1
-AT_CHECK([ovs-appctl dpctl/dump-conntrack | FORMAT_CT(172.16.0.1)])
+# Now remove one ECMP route and check that traffic is still being conntracked.
+check ovn-nbctl --policy="src-ip" lr-route-del R1 10.0.0.0/24 20.0.0.3
+check ovn-nbctl --wait=hv sync
+AT_CHECK([ovs-appctl dpctl/flush-conntrack])
+NETNS_DAEMONIZE([bob1], [nc -l -k 8081], [bob2.pid])
+NS_CHECK_EXEC([alice1], [nc -z 172.16.0.1 8081], [0])
+NS_CHECK_EXEC([alice1], [ping -q -c 3 -i 0.3 -w 2 172.16.0.1 | FORMAT_PING], \
+[0], [dnl
+3 packets transmitted, 3 received, 0% packet loss, time 0ms
+])
+AT_CHECK([ovs-appctl dpctl/dump-conntrack | grep 0x401020500000000 | FORMAT_CT(172.16.0.1) | \
+sed -e 's/zone=[[0-9]]*/zone=<cleared>/' |
+sed -e 's/mark=[[0-9]]*/mark=<cleared>/' | sort], [0], [dnl
+tcp,orig=(src=172.16.0.1,dst=10.0.0.2,sport=<cleared>,dport=<cleared>),reply=(src=10.0.0.2,dst=172.16.0.1,sport=<cleared>,dport=<cleared>),zone=<cleared>,mark=<cleared>,labels=0x401020500000000,protoinfo=(state=<cleared>)
+])
OVS_APP_EXIT_AND_WAIT([ovn-controller])
@@ -6399,12 +6402,11 @@ NS_CHECK_EXEC([bob1], [ping -q -c 3 -i 0.3 -w 2 fd01::2 | FORMAT_PING], \
# Ensure datapaths show conntrack states as expected
# Like with conntrack entries, we shouldn't try to predict
# port binding tunnel keys. So omit them from expected labels.
-AT_CHECK([ovs-appctl dpctl/dump-flows | sed -e 's/label=0x[[0-9]]/label=0x?/' | \
-grep 'ct_state(+new-est+trk).*ct(.*label=0x?000000000401020400000000/.*)' -c], [0], [dnl
+AT_CHECK([ovs-appctl dpctl/dump-flows | grep 'ct_state(+new-est+trk).*ct(.*label=0x401020400000000/.*)' -c], [0], [dnl
2
])
-AT_CHECK([[ovs-appctl dpctl/dump-flows | sed -e 's/ct_label(0x[0-9]/ct_label(0x?/' | \
-grep 'ct_state(-new+est+trk).*ct_label(0x?000000000401020400000000)' -c]], [0], [dnl
+
+AT_CHECK([ovs-appctl dpctl/dump-flows | grep 'ct_state(-new+est+trk).*ct_label(0x401020400000000)' -c], [0], [dnl
2
])
@@ -6413,10 +6415,9 @@ grep 'ct_state(-new+est+trk).*ct_label(0x?000000000401020400000000)' -c]], [0],
# and just ensure that the known ethernet address is present.
AT_CHECK([ovs-appctl dpctl/dump-conntrack | FORMAT_CT(fd01::2) | \
sed -e 's/zone=[[0-9]]*/zone=<cleared>/' |
-sed -e 's/mark=[[0-9]]*/mark=<cleared>/' |
-sed -e 's/labels=0x[[0-9]]/labels=0x?/' | sort], [0], [dnl
-icmpv6,orig=(src=fd07::1,dst=fd01::2,id=<cleared>,type=128,code=0),reply=(src=fd01::2,dst=fd07::1,id=<cleared>,type=129,code=0),zone=<cleared>,mark=<cleared>,labels=0x?000000000401020400000000
-tcp,orig=(src=fd07::1,dst=fd01::2,sport=<cleared>,dport=<cleared>),reply=(src=fd01::2,dst=fd07::1,sport=<cleared>,dport=<cleared>),zone=<cleared>,mark=<cleared>,labels=0x?000000000401020400000000,protoinfo=(state=<cleared>)
+sed -e 's/mark=[[0-9]]*/mark=<cleared>/' | sort], [0], [dnl
+icmpv6,orig=(src=fd07::1,dst=fd01::2,id=<cleared>,type=128,code=0),reply=(src=fd01::2,dst=fd07::1,id=<cleared>,type=129,code=0),zone=<cleared>,mark=<cleared>,labels=0x401020400000000
+tcp,orig=(src=fd07::1,dst=fd01::2,sport=<cleared>,dport=<cleared>),reply=(src=fd01::2,dst=fd07::1,sport=<cleared>,dport=<cleared>),zone=<cleared>,mark=<cleared>,labels=0x401020400000000,protoinfo=(state=<cleared>)
])
# Flush conntrack entries for easier output parsing of next test.
@@ -6433,21 +6434,18 @@ NS_CHECK_EXEC([bob1], [ping -q -c 3 -i 0.3 -w 2 fd01::2 | FORMAT_PING], \
3 packets transmitted, 3 received, 0% packet loss, time 0ms
])
-AT_CHECK([ovs-appctl dpctl/dump-flows | sed -e 's/label=0x[[0-9]]/label=0x?/' | \
-grep 'ct_state(+new-est+trk).*ct(.*label=0x?000000001001020400000000/.*)' -c], [0], [dnl
+AT_CHECK([ovs-appctl dpctl/dump-flows | grep 'ct_state(+new-est+trk).*ct(.*label=0x1001020400000000/.*)' -c], [0], [dnl
2
])
-AT_CHECK([[ovs-appctl dpctl/dump-flows | sed -e 's/ct_label(0x[0-9]/ct_label(0x?/' | \
-grep 'ct_state(-new+est+trk).*ct_label(0x?000000001001020400000000)' -c]], [0], [dnl
+AT_CHECK([ovs-appctl dpctl/dump-flows | grep 'ct_state(-new+est+trk).*ct_label(0x1001020400000000)' -c], [0], [dnl
2
])
-AT_CHECK([ovs-appctl dpctl/dump-conntrack | grep 1001020400000000 | FORMAT_CT(fd01::2) | \
+AT_CHECK([ovs-appctl dpctl/dump-conntrack | grep 0x1001020400000000 | FORMAT_CT(fd01::2) | \
sed -e 's/zone=[[0-9]]*/zone=<cleared>/' |
-sed -e 's/mark=[[0-9]]*/mark=<cleared>/' |
-sed -e 's/labels=0x[[0-9]]/labels=0x?/'], [0], [dnl
-icmpv6,orig=(src=fd07::1,dst=fd01::2,id=<cleared>,type=128,code=0),reply=(src=fd01::2,dst=fd07::1,id=<cleared>,type=129,code=0),zone=<cleared>,mark=<cleared>,labels=0x?000000001001020400000000
-tcp,orig=(src=fd07::1,dst=fd01::2,sport=<cleared>,dport=<cleared>),reply=(src=fd01::2,dst=fd07::1,sport=<cleared>,dport=<cleared>),zone=<cleared>,mark=<cleared>,labels=0x?000000001001020400000000,protoinfo=(state=<cleared>)
+sed -e 's/mark=[[0-9]]*/mark=<cleared>/'], [0], [dnl
+icmpv6,orig=(src=fd07::1,dst=fd01::2,id=<cleared>,type=128,code=0),reply=(src=fd01::2,dst=fd07::1,id=<cleared>,type=129,code=0),zone=<cleared>,mark=<cleared>,labels=0x1001020400000000
+tcp,orig=(src=fd07::1,dst=fd01::2,sport=<cleared>,dport=<cleared>),reply=(src=fd01::2,dst=fd07::1,sport=<cleared>,dport=<cleared>),zone=<cleared>,mark=<cleared>,labels=0x1001020400000000,protoinfo=(state=<cleared>)
])
# Check entries in table 76 and 77 expires w/o traffic
@@ -6467,16 +6465,27 @@ NS_CHECK_EXEC([alice1], [ping -q -c 3 -i 0.3 -w 2 fd07::1 | FORMAT_PING], \
3 packets transmitted, 3 received, 0% packet loss, time 0ms
])
-AT_CHECK([ovs-appctl dpctl/dump-conntrack | grep 1001020400000000 | FORMAT_CT(fd07::1) | \
+AT_CHECK([ovs-appctl dpctl/dump-conntrack | grep 0x1001020400000000 | FORMAT_CT(fd07::1) | \
sed -e 's/zone=[[0-9]]*/zone=<cleared>/' |
-sed -e 's/mark=[[0-9]]*/mark=<cleared>/' |
-sed -e 's/labels=0x[[0-9]]/labels=0x?/' | sort], [0], [dnl
-tcp,orig=(src=fd07::1,dst=fd01::2,sport=<cleared>,dport=<cleared>),reply=(src=fd01::2,dst=fd07::1,sport=<cleared>,dport=<cleared>),zone=<cleared>,mark=<cleared>,labels=0x?000000001001020400000000,protoinfo=(state=<cleared>)
+sed -e 's/mark=[[0-9]]*/mark=<cleared>/' | sort], [0], [dnl
+tcp,orig=(src=fd07::1,dst=fd01::2,sport=<cleared>,dport=<cleared>),reply=(src=fd01::2,dst=fd07::1,sport=<cleared>,dport=<cleared>),zone=<cleared>,mark=<cleared>,labels=0x1001020400000000,protoinfo=(state=<cleared>)
])
-# Flush connection tracking entries
-check ovn-nbctl --wait=hv lr-route-del R1
-AT_CHECK([ovs-appctl dpctl/dump-conntrack | FORMAT_CT(fd01::2)])
+# Now remove one ECMP route and check that traffic is still being conntracked.
+check ovn-nbctl --policy="src-ip" lr-route-del R1 fd01::/126 fd02::3
+check ovn-nbctl --wait=hv sync
+AT_CHECK([ovs-appctl dpctl/flush-conntrack])
+NETNS_DAEMONIZE([bob1], [nc -6 -l -k 8081], [bob2.pid])
+NS_CHECK_EXEC([alice1], [nc -6 -z fd07::1 8081], [0])
+NS_CHECK_EXEC([alice1], [ping -q -c 3 -i 0.3 -w 2 fd07::1 | FORMAT_PING], \
+[0], [dnl
+3 packets transmitted, 3 received, 0% packet loss, time 0ms
+])
+AT_CHECK([ovs-appctl dpctl/dump-conntrack | grep 0x1001020400000000 | FORMAT_CT(fd07::1) | \
+sed -e 's/zone=[[0-9]]*/zone=<cleared>/' |
+sed -e 's/mark=[[0-9]]*/mark=<cleared>/' | sort], [0], [dnl
+tcp,orig=(src=fd07::1,dst=fd01::2,sport=<cleared>,dport=<cleared>),reply=(src=fd01::2,dst=fd07::1,sport=<cleared>,dport=<cleared>),zone=<cleared>,mark=<cleared>,labels=0x1001020400000000,protoinfo=(state=<cleared>)
+])
OVS_APP_EXIT_AND_WAIT([ovn-controller])
@@ -6941,7 +6950,8 @@ check ovn-nbctl lsp-add public public1 \
-- lsp-set-type public1 localnet \
-- lsp-set-options public1 network_name=phynet
-NS_CHECK_EXEC([server], [bfdd-beacon --listen=172.16.1.50], [0])
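+# Run bfdd-beacon in the foreground via NETNS_DAEMONIZE and wait until it
+# reports that it is listening before configuring it.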
+NETNS_DAEMONIZE([server], [bfdd-beacon --nofork --tee --listen=172.16.1.50 >beacon.stdout 2>&1], [beacon.pid])
+OVS_WAIT_UNTIL([grep -q "Listening for BFD connections" beacon.stdout])
NS_CHECK_EXEC([server], [bfdd-control allow 172.16.1.1], [0], [dnl
Allowing connections from 172.16.1.1
])
@@ -7001,7 +7011,8 @@ check ovn-nbctl set logical_router R1 options:chassis=hv1
check ovn-nbctl set logical_router_static_route $route_uuid bfd=$uuid
# restart bfdd
-NS_CHECK_EXEC([server], [bfdd-beacon --listen=172.16.1.50], [0])
+NETNS_DAEMONIZE([server], [bfdd-beacon --nofork --tee --listen=172.16.1.50 >beacon.stdout 2>&1], [beacon.pid])
+OVS_WAIT_UNTIL([grep -q "Listening for BFD connections" beacon.stdout])
NS_CHECK_EXEC([server], [bfdd-control allow 172.16.1.1], [0], [dnl
Allowing connections from 172.16.1.1
])
@@ -7043,7 +7054,8 @@ check ovn-nbctl lr-route-add R1 2000::/64 1000::b
route_uuid_v6=$(fetch_column nb:logical_router_static_route _uuid ip_prefix=\"2000::/64\")
ovn-nbctl set logical_router_static_route $route_uuid_v6 bfd=$uuid_v6
check ovn-nbctl --wait=hv sync
-NS_CHECK_EXEC([server], [bfdd-beacon --listen=1000::b], [0])
+NETNS_DAEMONIZE([server], [bfdd-beacon --nofork --tee --listen=1000::b >beacon.stdout 2>&1], [beacon.pid])
+OVS_WAIT_UNTIL([grep -q "Listening for BFD connections" beacon.stdout])
NS_CHECK_EXEC([server], [bfdd-control allow 1000::a], [0], [dnl
Allowing connections from 1000::a
])
@@ -9374,7 +9386,7 @@ name: 'vport4' value: '999'
NETNS_DAEMONIZE([vm1], [nc -k -l 42.42.42.2 4242], [nc-vm1.pid])
NETNS_START_TCPDUMP([vm1],
- [-n -i vm1 -nnleX -c6 udp and dst 42.42.42.2 and dst port 4343],
+ [-n -i vm1 -nnqleX -c6 udp and dst 42.42.42.2 and dst port 4343],
[vm1])
# Make sure connecting to the VIP works (hairpin, via ls and via lr).
@@ -9525,7 +9537,7 @@ name: 'vport4' value: '999'
NETNS_DAEMONIZE([vm1], [nc -k -l 4242::2 4242], [nc-vm1.pid])
NETNS_START_TCPDUMP([vm1],
- [-n -i vm1 -nnleX -c6 udp and dst 4242::2 and dst port 4343],
+ [-n -i vm1 -nnqleX -c6 udp and dst 4242::2 and dst port 4343],
[vm1])
# Make sure connecting to the VIP works (hairpin, via ls and via lr).
@@ -11363,7 +11375,25 @@ check_ovn_installed
check_ports_up
check_ports_bound
-OVS_APP_EXIT_AND_WAIT([ovn-controller])
+AS_BOX(["Leave some ovn-installed while closing ovn-controller"])
+# Block IDL from ovn-controller to OVSDB
+stop_ovsdb_controller_updates $TCP_PORT
+remove_iface_id vif2
+ensure_controller_run
+
+# OVSDB should now be seen as read-only by ovn-controller
+remove_iface_id vif1
+check ovn-nbctl --wait=hv sync
+
+# Stop ovsdb before ovn-controller to ensure it's not updated
+as
+OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
+/connection dropped.*/d"])
+
+# Don't use OVS_APP_EXIT...; use --restart instead to avoid cleaning up the databases.
+TMPPID=$(cat $OVS_RUNDIR/ovn-controller.pid)
+check ovs-appctl -t ovn-controller exit --restart
+OVS_WAIT_WHILE([kill -0 $TMPPID 2>/dev/null])
as ovn-sb
OVS_APP_EXIT_AND_WAIT([ovsdb-server])
@@ -11374,9 +11404,6 @@ OVS_APP_EXIT_AND_WAIT([ovsdb-server])
as northd
OVS_APP_EXIT_AND_WAIT([ovn-northd])
-as
-OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
-/connection dropped.*/d"])
AT_CLEANUP
])
@@ -13186,16 +13213,17 @@ ovs-vsctl --id=@br get Bridge br-int \
-- --id=@ipfix create IPFIX targets=\"127.0.0.1:4242\" template_interval=1 \
-- --id=@cs create Flow_Sample_Collector_Set id=100 bridge=@br ipfix=@ipfix
+ovn-nbctl --wait=hv sync
dnl And wait for it to be up and running.
OVS_WAIT_UNTIL([ovs-ofctl dump-ipfix-flow br-int | grep -q '1 ids'])
dnl Start UDP echo server on vm2.
-NETNS_DAEMONIZE([vm2], [nc -e /bin/cat -k -u -v -l 1000], [nc-vm2-1000.pid])
-NETNS_DAEMONIZE([vm2], [nc -e /bin/cat -k -u -v -l 1010], [nc-vm2-1010.pid])
-NETNS_DAEMONIZE([vm2], [nc -e /bin/cat -k -u -v -l 2000], [nc-vm2-2000.pid])
-NETNS_DAEMONIZE([vm2], [nc -e /bin/cat -k -u -v -l 2010], [nc-vm2-2010.pid])
-NETNS_DAEMONIZE([vm2], [nc -e /bin/cat -k -u -v -l 3000], [nc-vm2-3000.pid])
-NETNS_DAEMONIZE([vm2], [nc -e /bin/cat -k -u -v -l 3010], [nc-vm2-3010.pid])
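+dnl The -m 1 option caps each listener at a single simultaneous connection.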
+NETNS_DAEMONIZE([vm2], [nc -e /bin/cat -k -u -v -l -m 1 1000], [nc-vm2-1000.pid])
+NETNS_DAEMONIZE([vm2], [nc -e /bin/cat -k -u -v -l -m 1 1010], [nc-vm2-1010.pid])
+NETNS_DAEMONIZE([vm2], [nc -e /bin/cat -k -u -v -l -m 1 2000], [nc-vm2-2000.pid])
+NETNS_DAEMONIZE([vm2], [nc -e /bin/cat -k -u -v -l -m 1 2010], [nc-vm2-2010.pid])
+NETNS_DAEMONIZE([vm2], [nc -e /bin/cat -k -u -v -l -m 1 3000], [nc-vm2-3000.pid])
+NETNS_DAEMONIZE([vm2], [nc -e /bin/cat -k -u -v -l -m 1 3010], [nc-vm2-3010.pid])
dnl Send traffic (2 packets) to the UDP LB1 (hits the from-lport ACL).
NS_CHECK_EXEC([vm1], [(echo a; sleep 1; echo a) | nc --send-only -u 43.43.43.43 1000])
@@ -13354,11 +13382,12 @@ ovs-vsctl --id=@br get Bridge br-int \
-- --id=@ipfix create IPFIX targets=\"127.0.0.1:4242\" template_interval=1 \
-- --id=@cs create Flow_Sample_Collector_Set id=100 bridge=@br ipfix=@ipfix
+ovn-nbctl --wait=hv sync
dnl And wait for it to be up and running.
OVS_WAIT_UNTIL([ovs-ofctl dump-ipfix-flow br-int | grep -q '1 ids'])
dnl Start UDP echo server on vm2.
-NETNS_DAEMONIZE([vm2], [nc -e /bin/cat -k -u -v -l 1000], [nc-vm2-1000.pid])
+NETNS_DAEMONIZE([vm2], [nc -e /bin/cat -k -u -v -l -m 1 1000], [nc-vm2-1000.pid])
dnl Send traffic to the UDP server (hits both ACL tiers).
NS_CHECK_EXEC([vm1], [echo a | nc --send-only -u 42.42.42.3 1000])
@@ -13483,6 +13512,7 @@ ovs-vsctl --id=@br get Bridge br-int \
-- --id=@ipfix create IPFIX targets=\"127.0.0.1:4242\" template_interval=1 \
-- --id=@cs create Flow_Sample_Collector_Set id=100 bridge=@br ipfix=@ipfix
+ovn-nbctl --wait=hv sync
dnl And wait for it to be up and running.
OVS_WAIT_UNTIL([ovs-ofctl dump-ipfix-flow br-int | grep -q '1 ids'])
@@ -13750,3 +13780,152 @@ OVS_TRAFFIC_VSWITCHD_STOP(["/.*error receiving.*/d
/.*terminating with signal 15.*/d"])
AT_CLEANUP
])
+
+OVN_FOR_EACH_NORTHD([
+AT_SETUP([Routing protocol redirect])
+AT_SKIP_IF([test $HAVE_NC = no])
+
+ovn_start
+OVS_TRAFFIC_VSWITCHD_START()
+
+ADD_BR([br-int])
+ADD_BR([br-ext])
+
+check ovs-ofctl add-flow br-ext action=normal
+# Set the external-ids needed by ovn-controller
+check ovs-vsctl \
+ -- set Open_vSwitch . external-ids:system-id=hv1 \
+ -- set Open_vSwitch . external-ids:ovn-remote=unix:$ovs_base/ovn-sb/ovn-sb.sock \
+ -- set Open_vSwitch . external-ids:ovn-encap-type=geneve \
+ -- set Open_vSwitch . external-ids:ovn-encap-ip=169.0.0.1 \
+ -- set bridge br-int fail-mode=secure other-config:disable-in-band=true
+
+# Start ovn-controller
+start_daemon ovn-controller
+
+check ovn-nbctl lr-add R1 \
+ -- set Logical_Router R1 options:chassis=hv1
+
+check ovn-nbctl ls-add public
+check ovn-nbctl ls-add bar
+
+check ovn-nbctl lrp-add R1 rp-public 00:00:02:01:02:03 172.16.1.1/24
+check ovn-nbctl lrp-add R1 rp-bar 00:00:ff:00:00:01 192.168.10.1/24
+
+check ovn-nbctl lsp-add public public-rp -- set Logical_Switch_Port public-rp \
+ type=router options:router-port=rp-public \
+ -- lsp-set-addresses public-rp router
+
+check ovn-nbctl lsp-add bar bar-rp -- set Logical_Switch_Port bar-rp \
+ type=router options:router-port=rp-bar \
+ -- lsp-set-addresses bar-rp router
+
+check ovn-nbctl lsp-add public bgp-daemon \
+ -- lsp-set-addresses bgp-daemon unknown
+
+# Set up container "bar1" representing a host on an internal network
+ADD_NAMESPACES(bar1)
+ADD_VETH(bar1, bar1, br-int, "192.168.10.2/24", "00:00:ff:ff:ff:01", \
+ "192.168.10.1")
+check ovn-nbctl lsp-add bar bar1 \
+ -- lsp-set-addresses bar1 "00:00:ff:ff:ff:01 192.168.10.2"
+
+# Set up SNAT for the internal host
+check ovn-nbctl lr-nat-add R1 snat 172.16.1.1 192.168.10.2
+
+# Configure external connectivity
+check ovs-vsctl set Open_vSwitch . external-ids:ovn-bridge-mappings=phynet:br-ext
+check ovn-nbctl lsp-add public public1 \
+ -- lsp-set-addresses public1 unknown \
+ -- lsp-set-type public1 localnet \
+ -- lsp-set-options public1 network_name=phynet
+
+check ovn-nbctl --wait=hv sync
+
+# Set the options that redirect BGP and BFD traffic to the LSP "bgp-daemon"
+check ovn-nbctl --wait=sb set logical_router_port rp-public options:routing-protocol-redirect=bgp-daemon
+check ovn-nbctl --wait=sb set logical_router_port rp-public options:routing-protocols=BGP,BFD
+
+# Create "bgp-daemon" interface in a namespace with IP and MAC matching LRP "rp-public"
+ADD_NAMESPACES(bgp-daemon)
+ADD_VETH(bgp-daemon, bgp-daemon, br-int, "172.16.1.1/24", "00:00:02:01:02:03")
+
+ADD_NAMESPACES(ext-foo)
+ADD_VETH(ext-foo, ext-foo, br-ext, "172.16.1.100/24", "00:10:10:01:02:13", \
+ "172.16.1.1")
+
+# Flip the interfaces down/up to get proper IPv6 LLAs
+NS_EXEC([bgp-daemon], [ip link set down bgp-daemon])
+NS_EXEC([bgp-daemon], [ip link set up bgp-daemon])
+NS_EXEC([ext-foo], [ip link set down ext-foo])
+NS_EXEC([ext-foo], [ip link set up ext-foo])
+
+# Wait until the IPv6 LLA loses the "tentative" flag, otherwise it can't be bound to.
+OVS_WAIT_UNTIL([NS_EXEC([bgp-daemon], [ip a show dev bgp-daemon | grep "fe80::" | grep -v tentative])])
+OVS_WAIT_UNTIL([NS_EXEC([ext-foo], [ip a show dev ext-foo | grep "fe80::" | grep -v tentative])])
+
+# Verify that BGP control plane traffic is delivered to the "bgp-daemon"
+# interface on both the IPv4 address and the IPv6 LLA
+NETNS_DAEMONIZE([bgp-daemon], [nc -l -k 172.16.1.1 179], [bgp_v4.pid])
+NS_CHECK_EXEC([ext-foo], [echo "BGP IPv4 server traffic" | nc --send-only 172.16.1.1 179])
+
+NETNS_DAEMONIZE([bgp-daemon], [nc -l -6 -k fe80::200:2ff:fe01:203%bgp-daemon 179], [bgp_v6.pid])
+NS_CHECK_EXEC([ext-foo], [echo "BGP IPv6 server traffic" | nc --send-only -6 fe80::200:2ff:fe01:203%ext-foo 179])
+
+# Perform the same set of checks as above for the BFD daemon.
+# We need to manually check that the message arrived on the receiving end, as
+# Ncat reports false positives over UDP because no ICMP port unreachable
+# messages are sent from the LRP's IP.
+NETNS_DAEMONIZE([bgp-daemon], [nc -l -u 172.16.1.1 3784 > bgp-daemon_bfd_v4.out], [bfd_v4.pid])
+NS_CHECK_EXEC([ext-foo], [echo "from ext-foo: BFD IPv4 server traffic" | nc -u 172.16.1.1 3784])
+AT_CHECK([cat bgp-daemon_bfd_v4.out], [0], [dnl
+from ext-foo: BFD IPv4 server traffic
+])
+
+NETNS_DAEMONIZE([bgp-daemon], [nc -l -6 -u fe80::200:2ff:fe01:203%bgp-daemon 3784 > bgp-daemon_bfd_v6.out], [bfd_v6.pid])
+NS_CHECK_EXEC([ext-foo], [echo "from ext-foo: BFD IPv6 server traffic" | nc -u -6 fe80::200:2ff:fe01:203%ext-foo 3784])
+AT_CHECK([cat bgp-daemon_bfd_v6.out], [0], [dnl
+from ext-foo: BFD IPv6 server traffic
+])
+
+# Verify the connection in the other direction, i.e. when the BGP daemon
+# running on the "bgp-daemon" port makes a client connection to its peer.
+NETNS_DAEMONIZE([ext-foo], [nc -l -k 172.16.1.100 179], [reply_bgp_v4.pid])
+NS_CHECK_EXEC([bgp-daemon], [echo "BGP IPv4 client traffic" | nc --send-only 172.16.1.100 179])
+
+NETNS_DAEMONIZE([ext-foo], [nc -l -6 -k fe80::210:10ff:fe01:213%ext-foo 179], [reply_bgp_v6.pid])
+NS_CHECK_EXEC([bgp-daemon], [echo "BGP IPv6 client traffic" | nc --send-only -6 fe80::210:10ff:fe01:213%bgp-daemon 179])
+
+# Perform the same checks in the other direction for the BFD daemon
+NETNS_DAEMONIZE([ext-foo], [nc -l -u 172.16.1.100 3784 > ext-foo_bfd_v4.out], [reply_bfd_v4.pid])
+NS_CHECK_EXEC([bgp-daemon], [echo "from bgp-daemon: BFD IPv4 client traffic" | nc -u 172.16.1.100 3784])
+AT_CHECK([cat ext-foo_bfd_v4.out], [0], [dnl
+from bgp-daemon: BFD IPv4 client traffic
+])
+
+NETNS_DAEMONIZE([ext-foo], [nc -l -6 -u fe80::210:10ff:fe01:213%ext-foo 3784 > ext-foo_bfd_v6.out], [reply_bfd_v6.pid])
+NS_CHECK_EXEC([bgp-daemon], [echo "from bgp-daemon: BFD IPv6 client traffic" | nc -u -6 fe80::210:10ff:fe01:213%bgp-daemon 3784])
+AT_CHECK([cat ext-foo_bfd_v6.out], [0], [dnl
+from bgp-daemon: BFD IPv6 client traffic
+])
+
+# Verify that hosts on the internal network can reach external networks
+NETNS_DAEMONIZE([ext-foo], [nc -l -k 172.16.1.100 2222], [nc_external.pid])
+NS_CHECK_EXEC([bar1], [echo "TCP test" | nc -w 1 --send-only 172.16.1.100 2222])
+
+OVS_APP_EXIT_AND_WAIT([ovn-controller])
+
+as ovn-sb
+OVS_APP_EXIT_AND_WAIT([ovsdb-server])
+
+as ovn-nb
+OVS_APP_EXIT_AND_WAIT([ovsdb-server])
+
+as northd
+OVS_APP_EXIT_AND_WAIT([ovn-northd])
+
+as
+OVS_TRAFFIC_VSWITCHD_STOP(["/.*error receiving.*/d
+/.*terminating with signal 15.*/d"])
+AT_CLEANUP
+])
diff --git a/utilities/containers/py-requirements.txt b/utilities/containers/py-requirements.txt
index 8a3e977aa..1b55042c8 100644
--- a/utilities/containers/py-requirements.txt
+++ b/utilities/containers/py-requirements.txt
@@ -1,6 +1,6 @@
flake8>=6.1.0
meson>=1.4,<1.5
-scapy
+scapy==2.5.0
sphinx<8.0 # https://github.com/sphinx-doc/sphinx/issues/12711
setuptools
pyelftools