From alexandr.lobakin at intel.com Mon Nov 29 14:10:47 2021 From: alexandr.lobakin at intel.com (Alexander Lobakin) Date: Mon, 29 Nov 2021 15:10:47 +0100 Subject: [Intel-wired-lan] [PATCH net-next 0/2] igc: driver change to support XDP metadata In-Reply-To: <163700856423.565980.10162564921347693758.stgit@firesoul> References: <163700856423.565980.10162564921347693758.stgit@firesoul> Message-ID: <20211129141047.8939-1-alexandr.lobakin@intel.com> From: Jesper Dangaard Brouer Date: Mon, 15 Nov 2021 21:36:20 +0100 > Changes to fix and enable XDP metadata to a specific Intel driver igc. > Tested with hardware i225 that uses driver igc, while testing AF_XDP > access to metadata area. Would you mind if I take this your series into my bigger one that takes care of it throughout all the Intel drivers? > --- > > Jesper Dangaard Brouer (2): > igc: AF_XDP zero-copy metadata adjust breaks SKBs on XDP_PASS > igc: enable XDP metadata in driver > > > drivers/net/ethernet/intel/igc/igc_main.c | 33 +++++++++++++++++++---------- > 1 file changed, 22 insertions(+), 11 deletions(-) > > -- Thanks, Al From jbrouer at redhat.com Mon Nov 29 14:29:07 2021 From: jbrouer at redhat.com (Jesper Dangaard Brouer) Date: Mon, 29 Nov 2021 15:29:07 +0100 Subject: [Intel-wired-lan] [PATCH net-next 0/2] igc: driver change to support XDP metadata In-Reply-To: <20211129141047.8939-1-alexandr.lobakin@intel.com> References: <163700856423.565980.10162564921347693758.stgit@firesoul> <20211129141047.8939-1-alexandr.lobakin@intel.com> Message-ID: On 29/11/2021 15.10, Alexander Lobakin wrote: > From: Jesper Dangaard Brouer > Date: Mon, 15 Nov 2021 21:36:20 +0100 > >> Changes to fix and enable XDP metadata to a specific Intel driver igc. >> Tested with hardware i225 that uses driver igc, while testing AF_XDP >> access to metadata area. > > Would you mind if I take this your series into my bigger one that > takes care of it throughout all the Intel drivers? I have a customer that depend on this fix. They will have to do the backport anyway (to v5.13), but it would bring confidence on their side if the commits appear in an official git-tree before doing the backport (and optimally with a SHA they can refer to). Tony Nguyen have these landed in your git-tree? --JEsper From jbrouer at redhat.com Mon Nov 29 14:39:04 2021 From: jbrouer at redhat.com (Jesper Dangaard Brouer) Date: Mon, 29 Nov 2021 15:39:04 +0100 Subject: [Intel-wired-lan] [PATCH net-next 2/2] igc: enable XDP metadata in driver In-Reply-To: <20211126161649.151100-1-alexandr.lobakin@intel.com> References: <163700856423.565980.10162564921347693758.stgit@firesoul> <163700859087.565980.3578855072170209153.stgit@firesoul> <20211126161649.151100-1-alexandr.lobakin@intel.com> Message-ID: <6de05aea-9cf4-c938-eff2-9e3b138512a4@redhat.com> On 26/11/2021 17.16, Alexander Lobakin wrote: > From: Jesper Dangaard Brouer > Date: Mon, 15 Nov 2021 21:36:30 +0100 > >> Enabling the XDP bpf_prog access to data_meta area is a very small >> change. Hint passing 'true' to xdp_prepare_buff(). >> >> The SKB layers can also access data_meta area, which required more >> driver changes to support. Reviewers, notice the igc driver have two >> different functions that can create SKBs, depending on driver config. 
>> >> Hint for testers, ethtool priv-flags legacy-rx enables >> the function igc_construct_skb() >> >> ethtool --set-priv-flags DEV legacy-rx on >> >> Signed-off-by: Jesper Dangaard Brouer >> --- >> drivers/net/ethernet/intel/igc/igc_main.c | 29 +++++++++++++++++++---------- >> 1 file changed, 19 insertions(+), 10 deletions(-) >> >> diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c >> index 76b0a7311369..b516f1b301b4 100644 >> --- a/drivers/net/ethernet/intel/igc/igc_main.c >> +++ b/drivers/net/ethernet/intel/igc/igc_main.c >> @@ -1718,24 +1718,26 @@ static void igc_add_rx_frag(struct igc_ring *rx_ring, >> >> static struct sk_buff *igc_build_skb(struct igc_ring *rx_ring, >> struct igc_rx_buffer *rx_buffer, >> - union igc_adv_rx_desc *rx_desc, >> - unsigned int size) >> + struct xdp_buff *xdp) >> { >> - void *va = page_address(rx_buffer->page) + rx_buffer->page_offset; >> + unsigned int size = xdp->data_end - xdp->data; >> unsigned int truesize = igc_get_rx_frame_truesize(rx_ring, size); >> + unsigned int metasize = xdp->data - xdp->data_meta; >> struct sk_buff *skb; >> >> /* prefetch first cache line of first page */ >> - net_prefetch(va); >> + net_prefetch(xdp->data); > > I'd prefer prefetching xdp->data_meta here. GRO layer accesses it. > Maximum meta size for now is 32, so at least 96 bytes of the frame > will stil be prefetched. Prefetch works for "full" cachelines. Intel CPUs often prefect two cache-lines, when doing this, thus I guess we still get xdp->data. I don't mind prefetching xdp->data_meta, but (1) I tried to keep the change minimal as current behavior was data area I kept that. (2) xdp->data starts on a cacheline and we know NIC hardware have touched that, it is not a full-cache-miss due to DDIO/DCA it is known to be in L3 cache (gain is around 2-3 ns in my machine for data prefetch). Given this is only a 2.5 Gbit/s driver/HW I doubt this make any difference. Tony is it worth resending a V2 of this patch? >> >> /* build an skb around the page buffer */ >> - skb = build_skb(va - IGC_SKB_PAD, truesize); >> + skb = build_skb(xdp->data_hard_start, truesize); >> if (unlikely(!skb)) >> return NULL; >> >> /* update pointers within the skb to store the data */ >> - skb_reserve(skb, IGC_SKB_PAD); >> + skb_reserve(skb, xdp->data - xdp->data_hard_start); >> __skb_put(skb, size); >> + if (metasize) >> + skb_metadata_set(skb, metasize); >> >> igc_rx_buffer_flip(rx_buffer, truesize); >> return skb; >> @@ -1746,6 +1748,7 @@ static struct sk_buff *igc_construct_skb(struct igc_ring *rx_ring, >> struct xdp_buff *xdp, >> ktime_t timestamp) >> { >> + unsigned int metasize = xdp->data - xdp->data_meta; >> unsigned int size = xdp->data_end - xdp->data; >> unsigned int truesize = igc_get_rx_frame_truesize(rx_ring, size); >> void *va = xdp->data; >> @@ -1756,7 +1759,7 @@ static struct sk_buff *igc_construct_skb(struct igc_ring *rx_ring, >> net_prefetch(va); > > ...here as well. 
> From alexandr.lobakin at intel.com Mon Nov 29 14:41:47 2021 From: alexandr.lobakin at intel.com (Alexander Lobakin) Date: Mon, 29 Nov 2021 15:41:47 +0100 Subject: [Intel-wired-lan] [PATCH net-next 0/2] igc: driver change to support XDP metadata In-Reply-To: References: <163700856423.565980.10162564921347693758.stgit@firesoul> <20211129141047.8939-1-alexandr.lobakin@intel.com> Message-ID: <20211129144147.10242-1-alexandr.lobakin@intel.com> From: Jesper Dangaard Brouer Date: Mon, 29 Nov 2021 15:29:07 +0100 > On 29/11/2021 15.10, Alexander Lobakin wrote: > > From: Jesper Dangaard Brouer > > Date: Mon, 15 Nov 2021 21:36:20 +0100 > > > >> Changes to fix and enable XDP metadata to a specific Intel driver igc. > >> Tested with hardware i225 that uses driver igc, while testing AF_XDP > >> access to metadata area. > > > > Would you mind if I take this your series into my bigger one that > > takes care of it throughout all the Intel drivers? > > I have a customer that depend on this fix. They will have to do the > backport anyway (to v5.13), but it would bring confidence on their side > if the commits appear in an official git-tree before doing the backport > (and optimally with a SHA they can refer to). Yeah, sure, it's totally fine to get them accepted separately, I'll just refer to them in my series. > Tony Nguyen have these landed in your git-tree? Doesn't seem like. The reason might be that you responded to my patch 2/2 comments only now. > --JEsper Al From alexandr.lobakin at intel.com Mon Nov 29 14:53:03 2021 From: alexandr.lobakin at intel.com (Alexander Lobakin) Date: Mon, 29 Nov 2021 15:53:03 +0100 Subject: [Intel-wired-lan] [PATCH net-next 2/2] igc: enable XDP metadata in driver In-Reply-To: <6de05aea-9cf4-c938-eff2-9e3b138512a4@redhat.com> References: <163700856423.565980.10162564921347693758.stgit@firesoul> <163700859087.565980.3578855072170209153.stgit@firesoul> <20211126161649.151100-1-alexandr.lobakin@intel.com> <6de05aea-9cf4-c938-eff2-9e3b138512a4@redhat.com> Message-ID: <20211129145303.10507-1-alexandr.lobakin@intel.com> From: Jesper Dangaard Brouer Date: Mon, 29 Nov 2021 15:39:04 +0100 > On 26/11/2021 17.16, Alexander Lobakin wrote: > > From: Jesper Dangaard Brouer > > Date: Mon, 15 Nov 2021 21:36:30 +0100 > > > >> Enabling the XDP bpf_prog access to data_meta area is a very small > >> change. Hint passing 'true' to xdp_prepare_buff(). > >> > >> The SKB layers can also access data_meta area, which required more > >> driver changes to support. Reviewers, notice the igc driver have two > >> different functions that can create SKBs, depending on driver config. 
> >> > >> Hint for testers, ethtool priv-flags legacy-rx enables > >> the function igc_construct_skb() > >> > >> ethtool --set-priv-flags DEV legacy-rx on > >> > >> Signed-off-by: Jesper Dangaard Brouer > >> --- > >> drivers/net/ethernet/intel/igc/igc_main.c | 29 +++++++++++++++++++---------- > >> 1 file changed, 19 insertions(+), 10 deletions(-) > >> > >> diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c > >> index 76b0a7311369..b516f1b301b4 100644 > >> --- a/drivers/net/ethernet/intel/igc/igc_main.c > >> +++ b/drivers/net/ethernet/intel/igc/igc_main.c > >> @@ -1718,24 +1718,26 @@ static void igc_add_rx_frag(struct igc_ring *rx_ring, > >> > >> static struct sk_buff *igc_build_skb(struct igc_ring *rx_ring, > >> struct igc_rx_buffer *rx_buffer, > >> - union igc_adv_rx_desc *rx_desc, > >> - unsigned int size) > >> + struct xdp_buff *xdp) > >> { > >> - void *va = page_address(rx_buffer->page) + rx_buffer->page_offset; > >> + unsigned int size = xdp->data_end - xdp->data; > >> unsigned int truesize = igc_get_rx_frame_truesize(rx_ring, size); > >> + unsigned int metasize = xdp->data - xdp->data_meta; > >> struct sk_buff *skb; > >> > >> /* prefetch first cache line of first page */ > >> - net_prefetch(va); > >> + net_prefetch(xdp->data); > > > > I'd prefer prefetching xdp->data_meta here. GRO layer accesses it. > > Maximum meta size for now is 32, so at least 96 bytes of the frame > > will stil be prefetched. > > Prefetch works for "full" cachelines. Intel CPUs often prefect two > cache-lines, when doing this, thus I guess we still get xdp->data. Sure. I mean, net_prefetch() prefetches 128 bytes in a row. xdp->data is usually aligned to XDP_PACKET_HEADROOM (or two bytes to the right). If our CL is 64 and the meta is present, then... ah right, 64 to the left and 64 starting from data to the right. > I don't mind prefetching xdp->data_meta, but (1) I tried to keep the > change minimal as current behavior was data area I kept that. (2) > xdp->data starts on a cacheline and we know NIC hardware have touched > that, it is not a full-cache-miss due to DDIO/DCA it is known to be in > L3 cache (gain is around 2-3 ns in my machine for data prefetch). > Given this is only a 2.5 Gbit/s driver/HW I doubt this make any difference. Code constistency at least. On 10+ Gbps we prefetch meta, and I plan to continue doing this in my series. > Tony is it worth resending a V2 of this patch? Tony, you can take it as it is if you want, I'll correct it later in mine. Up to you. 
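To spell out the cacheline math above (rough sketch only, assuming xdp->data
starts on a cacheline boundary, 64-byte cachelines and the 128-byte
net_prefetch()):

/* hard_start ......... data_meta    data .................. data_end
 *                      |<-meta<=32->|<--------- frame --------->|
 *
 * net_prefetch(xdp->data)      warms [data,      data + 128)
 * net_prefetch(xdp->data_meta) warms [data - 64,  data +  64) when
 * metadata is present (data_meta then sits in the cacheline just
 * before data), i.e. 64 bytes to the left plus the first 64 bytes
 * of the frame.
 */
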
Reviewed-by: Alexander Lobakin > >> > >> /* build an skb around the page buffer */ > >> - skb = build_skb(va - IGC_SKB_PAD, truesize); > >> + skb = build_skb(xdp->data_hard_start, truesize); > >> if (unlikely(!skb)) > >> return NULL; > >> > >> /* update pointers within the skb to store the data */ > >> - skb_reserve(skb, IGC_SKB_PAD); > >> + skb_reserve(skb, xdp->data - xdp->data_hard_start); > >> __skb_put(skb, size); > >> + if (metasize) > >> + skb_metadata_set(skb, metasize); > >> > >> igc_rx_buffer_flip(rx_buffer, truesize); > >> return skb; > >> @@ -1746,6 +1748,7 @@ static struct sk_buff *igc_construct_skb(struct igc_ring *rx_ring, > >> struct xdp_buff *xdp, > >> ktime_t timestamp) > >> { > >> + unsigned int metasize = xdp->data - xdp->data_meta; > >> unsigned int size = xdp->data_end - xdp->data; > >> unsigned int truesize = igc_get_rx_frame_truesize(rx_ring, size); > >> void *va = xdp->data; > >> @@ -1756,7 +1759,7 @@ static struct sk_buff *igc_construct_skb(struct igc_ring *rx_ring, > >> net_prefetch(va); > > > > ...here as well. > > Thanks, Al From zhangrui182 at huawei.com Mon Nov 29 13:52:01 2021 From: zhangrui182 at huawei.com (Rui Zhang) Date: Mon, 29 Nov 2021 19:52:01 +0600 Subject: [Intel-wired-lan] [PATCH] i40e: fix use-after-free in i40e_sync_filters_subtask() Message-ID: <20211129135201.4097648-1-zhangrui182@huawei.com> From: Di Zhu Using ifconfig command to delete the ipv6 address will cause the i40e network card driver to delete its internal mac_filter and i40e_service_task kernel thread will concurrently access the mac_filter. These two processes are not protected by lock so causing the following use-after-free problems. print_address_description+0x70/0x360 ? vprintk_func+0x5e/0xf0 kasan_report+0x1b2/0x330 i40e_sync_vsi_filters+0x4f0/0x1850 [i40e] i40e_sync_filters_subtask+0xe3/0x130 [i40e] i40e_service_task+0x195/0x24c0 [i40e] process_one_work+0x3f5/0x7d0 worker_thread+0x61/0x6c0 ? process_one_work+0x7d0/0x7d0 kthread+0x1c3/0x1f0 ? 
kthread_park+0xc0/0xc0 ret_from_fork+0x35/0x40 Allocated by task 2279810: kasan_kmalloc+0xa0/0xd0 kmem_cache_alloc_trace+0xf3/0x1e0 i40e_add_filter+0x127/0x2b0 [i40e] i40e_add_mac_filter+0x156/0x190 [i40e] i40e_addr_sync+0x2d/0x40 [i40e] __hw_addr_sync_dev+0x154/0x210 i40e_set_rx_mode+0x6d/0xf0 [i40e] __dev_set_rx_mode+0xfb/0x1f0 __dev_mc_add+0x6c/0x90 igmp6_group_added+0x214/0x230 __ipv6_dev_mc_inc+0x338/0x4f0 addrconf_join_solict.part.7+0xa2/0xd0 addrconf_dad_work+0x500/0x980 process_one_work+0x3f5/0x7d0 worker_thread+0x61/0x6c0 kthread+0x1c3/0x1f0 ret_from_fork+0x35/0x40 Freed by task 2547073: __kasan_slab_free+0x130/0x180 kfree+0x90/0x1b0 __i40e_del_filter+0xa3/0xf0 [i40e] i40e_del_mac_filter+0xf3/0x130 [i40e] i40e_addr_unsync+0x85/0xa0 [i40e] __hw_addr_sync_dev+0x9d/0x210 i40e_set_rx_mode+0x6d/0xf0 [i40e] __dev_set_rx_mode+0xfb/0x1f0 __dev_mc_del+0x69/0x80 igmp6_group_dropped+0x279/0x510 __ipv6_dev_mc_dec+0x174/0x220 addrconf_leave_solict.part.8+0xa2/0xd0 __ipv6_ifa_notify+0x4cd/0x570 ipv6_ifa_notify+0x58/0x80 ipv6_del_addr+0x259/0x4a0 inet6_addr_del+0x188/0x260 addrconf_del_ifaddr+0xcc/0x130 inet6_ioctl+0x152/0x190 sock_do_ioctl+0xd8/0x2b0 sock_ioctl+0x2e5/0x4c0 do_vfs_ioctl+0x14e/0xa80 ksys_ioctl+0x7c/0xa0 __x64_sys_ioctl+0x42/0x50 do_syscall_64+0x98/0x2c0 entry_SYSCALL_64_after_hwframe+0x65/0xca Signed-off-by: Di Zhu Signed-off-by: Rui Zhang --- drivers/net/ethernet/intel/i40e/i40e_main.c | 24 +++++++++++++++++++++ 1 file changed, 24 insertions(+) diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c index 8221c3364..11c1e9421 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_main.c +++ b/drivers/net/ethernet/intel/i40e/i40e_main.c @@ -96,6 +96,24 @@ MODULE_VERSION(DRV_VERSION); static struct workqueue_struct *i40e_wq; +static void netdev_hw_addr_refcnt(struct i40e_mac_filter *f, + struct net_device *netdev, int delta) +{ + struct netdev_hw_addr *ha; + + if (f == NULL || netdev == NULL) + return; + + netdev_for_each_mc_addr(ha, netdev) { + if (ether_addr_equal(ha->addr, f->macaddr)) { + ha->refcount += delta; + if (ha->refcount <= 0) + ha->refcount = 1; + break; + } + } +} + /** * i40e_allocate_dma_mem_d - OS specific memory alloc for shared code * @hw: pointer to the HW structure @@ -1994,6 +2012,7 @@ static void i40e_undo_add_filter_entries(struct i40e_vsi *vsi, hlist_for_each_entry_safe(new, h, from, hlist) { /* We can simply free the wrapper structure */ hlist_del(&new->hlist); + netdev_hw_addr_refcnt(new->f, vsi->netdev, -1); kfree(new); } } @@ -2330,6 +2349,10 @@ int i40e_sync_vsi_filters(struct i40e_vsi *vsi) &tmp_add_list, &tmp_del_list, vlan_filters); + + hlist_for_each_entry(new, &tmp_add_list, hlist) + netdev_hw_addr_refcnt(new->f, vsi->netdev, 1); + if (retval) goto err_no_memory_locked; @@ -2462,6 +2485,7 @@ int i40e_sync_vsi_filters(struct i40e_vsi *vsi) if (new->f->state == I40E_FILTER_NEW) new->f->state = new->state; hlist_del(&new->hlist); + netdev_hw_addr_refcnt(new->f, vsi->netdev, -1); kfree(new); } spin_unlock_bh(&vsi->mac_filter_hash_lock); -- 2.23.0 From david.m.ertman at intel.com Mon Nov 29 16:51:49 2021 From: david.m.ertman at intel.com (Ertman, David M) Date: Mon, 29 Nov 2021 16:51:49 +0000 Subject: [Intel-wired-lan] [PATCH net-next] ice: add support for DSCP QoS for IDC In-Reply-To: <10cc4a54-0b25-3c3c-bcc0-41b8bd096cb5@molgen.mpg.de> References: <20211123182536.315714-1-david.m.ertman@intel.com> <10cc4a54-0b25-3c3c-bcc0-41b8bd096cb5@molgen.mpg.de> Message-ID: > -----Original Message----- > From: Paul 
Menzel > Sent: Wednesday, November 24, 2021 1:52 AM > To: Ertman, David M > Cc: intel-wired-lan at lists.osuosl.org > Subject: Re: [Intel-wired-lan] [PATCH net-next] ice: add support for DSCP > QoS for IDC > > Dear Dave, > > > Am 23.11.21 um 19:25 schrieb Dave Ertman: > > The ice driver provides QoS information to auxiliary drivers > > through the exported function ice_get_qos_params. This function > > doesn't currently support L3 DSCP QoS. > > > > Add the necessary defines, structure elements and code to support > > DSCP QoS through the IDC functions. > > How did you test this? > > In what datasheet (name, revision, section) is that documented? > > > Signed-off-by: Dave Ertman > > --- > > drivers/net/ethernet/intel/ice/ice_idc.c | 5 +++++ > > include/linux/net/intel/iidc.h | 5 +++++ > > 2 files changed, 10 insertions(+) > > > > diff --git a/drivers/net/ethernet/intel/ice/ice_idc.c > b/drivers/net/ethernet/intel/ice/ice_idc.c > > index fc3580167e7b..263a2e7577a2 100644 > > --- a/drivers/net/ethernet/intel/ice/ice_idc.c > > +++ b/drivers/net/ethernet/intel/ice/ice_idc.c > > @@ -227,6 +227,11 @@ void ice_get_qos_params(struct ice_pf *pf, struct > iidc_qos_params *qos) > > > > for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) > > qos->tc_info[i].rel_bw = dcbx_cfg->etscfg.tcbwtable[i]; > > + > > + qos->pfc_mode = dcbx_cfg->pfc_mode; > > + if (qos->pfc_mode == IIDC_DSCP_PFC_MODE) > > Just a nit as the compiler will probably do it itself, but you could use > `dcbx_cfg->pfc_mode` in the check, and move the assignment into the body. That would be an incorrect implementation. the assignment needs to happen regardless of the results of the test. > > > + for (i = 0; i < IIDC_MAX_DSCP_MAPPING; i++) > > + qos->dscp_map[i] = dcbx_cfg->dscp_map[i]; > > } > > EXPORT_SYMBOL_GPL(ice_get_qos_params); > > > > diff --git a/include/linux/net/intel/iidc.h b/include/linux/net/intel/iidc.h > > index 1289593411d3..0a90f301679d 100644 > > --- a/include/linux/net/intel/iidc.h > > +++ b/include/linux/net/intel/iidc.h > > @@ -32,6 +32,9 @@ enum iidc_rdma_protocol { > > }; > > > > #define IIDC_MAX_USER_PRIORITY 8 > > +#define IIDC_MAX_DSCP_MAPPING 64 > > +#define IIDC_VLAN_PFC_MODE 0x0 > > +#define IIDC_DSCP_PFC_MODE 0x1 > > > > /* Struct to hold per RDMA Qset info */ > > struct iidc_rdma_qset_params { > > @@ -60,6 +63,8 @@ struct iidc_qos_params { > > u8 vport_relative_bw; > > u8 vport_priority_type; > > u8 num_tc; > > + u8 pfc_mode; > > + u8 dscp_map[IIDC_MAX_DSCP_MAPPING]; > > }; > > > > struct iidc_event { > > > Kind regards, > > Paul From kuba at kernel.org Mon Nov 29 17:59:47 2021 From: kuba at kernel.org (Jakub Kicinski) Date: Mon, 29 Nov 2021 09:59:47 -0800 Subject: [Intel-wired-lan] [RFC PATCH 0/4] r8169: support dash In-Reply-To: <20211129101315.16372-381-nic_swsd@realtek.com> References: <20211129101315.16372-381-nic_swsd@realtek.com> Message-ID: <20211129095947.547a765f@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com> On Mon, 29 Nov 2021 18:13:11 +0800 Hayes Wang wrote: > These patches are used to support dash for RTL8111EP and > RTL8111FP(RTL81117). If I understand correctly DASH is a DMTF standard for remote control. Since it's a standard I think we should have a common way of configuring it across drivers. Is enable/disable the only configuration that we will need? We don't use sysfs too much these days, can we move the knob to devlink, please? (If we only need an on/off switch generic devlink param should be fine). 
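
For reference, a rough sketch of what a driver-specific boolean runtime
param could look like on the driver side (the "dash" param name and the
rtl_* helpers below are made up for illustration, not existing r8169 code):

#include <net/devlink.h>

enum {
	RTL_DEVLINK_PARAM_ID_DASH = DEVLINK_PARAM_GENERIC_ID_MAX + 1,
};

static int rtl_dash_get(struct devlink *dl, u32 id,
			struct devlink_param_gset_ctx *ctx)
{
	struct rtl8169_private *tp = devlink_priv(dl);

	ctx->val.vbool = rtl_dash_is_enabled(tp);	/* hypothetical helper */
	return 0;
}

static int rtl_dash_set(struct devlink *dl, u32 id,
			struct devlink_param_gset_ctx *ctx)
{
	struct rtl8169_private *tp = devlink_priv(dl);

	return rtl_dash_enable(tp, ctx->val.vbool);	/* hypothetical helper */
}

static const struct devlink_param rtl_devlink_params[] = {
	DEVLINK_PARAM_DRIVER(RTL_DEVLINK_PARAM_ID_DASH, "dash",
			     DEVLINK_PARAM_TYPE_BOOL,
			     BIT(DEVLINK_PARAM_CMODE_RUNTIME),
			     rtl_dash_get, rtl_dash_set, NULL),
};

/* registered at probe time with:
 *	devlink_params_register(dl, rtl_devlink_params,
 *				ARRAY_SIZE(rtl_devlink_params));
 */

and then toggled from user space with:

  devlink dev param set pci/0000:01:00.0 name dash value true cmode runtime
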
From alexandr.lobakin at intel.com Mon Nov 29 18:13:20 2021 From: alexandr.lobakin at intel.com (Alexander Lobakin) Date: Mon, 29 Nov 2021 19:13:20 +0100 Subject: [Intel-wired-lan] [PATCH net-next 2/2] igc: enable XDP metadata in driver In-Reply-To: <20211129145303.10507-1-alexandr.lobakin@intel.com> References: <163700856423.565980.10162564921347693758.stgit@firesoul> <163700859087.565980.3578855072170209153.stgit@firesoul> <20211126161649.151100-1-alexandr.lobakin@intel.com> <6de05aea-9cf4-c938-eff2-9e3b138512a4@redhat.com> <20211129145303.10507-1-alexandr.lobakin@intel.com> Message-ID: <20211129181320.579477-1-alexandr.lobakin@intel.com> From: Alexander Lobakin Date: Mon, 29 Nov 2021 15:53:03 +0100 > From: Jesper Dangaard Brouer > Date: Mon, 29 Nov 2021 15:39:04 +0100 > > > On 26/11/2021 17.16, Alexander Lobakin wrote: > > > From: Jesper Dangaard Brouer > > > Date: Mon, 15 Nov 2021 21:36:30 +0100 > > > > > >> Enabling the XDP bpf_prog access to data_meta area is a very small > > >> change. Hint passing 'true' to xdp_prepare_buff(). [ snip ] > > Prefetch works for "full" cachelines. Intel CPUs often prefect two > > cache-lines, when doing this, thus I guess we still get xdp->data. > > Sure. I mean, net_prefetch() prefetches 128 bytes in a row. > xdp->data is usually aligned to XDP_PACKET_HEADROOM (or two bytes > to the right). If our CL is 64 and the meta is present, then... ah > right, 64 to the left and 64 starting from data to the right. > > > I don't mind prefetching xdp->data_meta, but (1) I tried to keep the > > change minimal as current behavior was data area I kept that. (2) > > xdp->data starts on a cacheline and we know NIC hardware have touched > > that, it is not a full-cache-miss due to DDIO/DCA it is known to be in > > L3 cache (gain is around 2-3 ns in my machine for data prefetch). > > Given this is only a 2.5 Gbit/s driver/HW I doubt this make any difference. > > Code constistency at least. On 10+ Gbps we prefetch meta, and I plan > to continue doing this in my series. > > > Tony is it worth resending a V2 of this patch? > > Tony, you can take it as it is if you want, I'll correct it later in > mine. Up to you. 
My "fixup" looks like (in case of v2 needed or so): diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c index b516f1b301b4..142c57b7a451 100644 --- a/drivers/net/ethernet/intel/igc/igc_main.c +++ b/drivers/net/ethernet/intel/igc/igc_main.c @@ -1726,7 +1726,7 @@ static struct sk_buff *igc_build_skb(struct igc_ring *rx_ring, struct sk_buff *skb; /* prefetch first cache line of first page */ - net_prefetch(xdp->data); + net_prefetch(xdp->data_meta); /* build an skb around the page buffer */ skb = build_skb(xdp->data_hard_start, truesize); @@ -1756,10 +1756,11 @@ static struct sk_buff *igc_construct_skb(struct igc_ring *rx_ring, struct sk_buff *skb; /* prefetch first cache line of first page */ - net_prefetch(va); + net_prefetch(xdp->data_meta); /* allocate a skb to store the frags */ - skb = napi_alloc_skb(&rx_ring->q_vector->napi, IGC_RX_HDR_LEN + metasize); + skb = napi_alloc_skb(&rx_ring->q_vector->napi, + IGC_RX_HDR_LEN + metasize); if (unlikely(!skb)) return NULL; @@ -2363,7 +2364,8 @@ static int igc_clean_rx_irq(struct igc_q_vector *q_vector, const int budget) if (!skb) { xdp_init_buff(&xdp, truesize, &rx_ring->xdp_rxq); xdp_prepare_buff(&xdp, pktbuf - igc_rx_offset(rx_ring), - igc_rx_offset(rx_ring) + pkt_offset, size, true); + igc_rx_offset(rx_ring) + pkt_offset, + size, true); skb = igc_xdp_run_prog(adapter, &xdp); } > Reviewed-by: Alexander Lobakin > > > >> > > >> /* build an skb around the page buffer */ > > >> - skb = build_skb(va - IGC_SKB_PAD, truesize); > > >> + skb = build_skb(xdp->data_hard_start, truesize); > > >> if (unlikely(!skb)) > > >> return NULL; > > >> > > >> /* update pointers within the skb to store the data */ > > >> - skb_reserve(skb, IGC_SKB_PAD); > > >> + skb_reserve(skb, xdp->data - xdp->data_hard_start); > > >> __skb_put(skb, size); > > >> + if (metasize) > > >> + skb_metadata_set(skb, metasize); > > >> > > >> igc_rx_buffer_flip(rx_buffer, truesize); > > >> return skb; > > >> @@ -1746,6 +1748,7 @@ static struct sk_buff *igc_construct_skb(struct igc_ring *rx_ring, > > >> struct xdp_buff *xdp, > > >> ktime_t timestamp) > > >> { > > >> + unsigned int metasize = xdp->data - xdp->data_meta; > > >> unsigned int size = xdp->data_end - xdp->data; > > >> unsigned int truesize = igc_get_rx_frame_truesize(rx_ring, size); > > >> void *va = xdp->data; > > >> @@ -1756,7 +1759,7 @@ static struct sk_buff *igc_construct_skb(struct igc_ring *rx_ring, > > >> net_prefetch(va); > > > > > > ...here as well. 
> > > > > Thanks, > Al Al From anthony.l.nguyen at intel.com Mon Nov 29 19:03:15 2021 From: anthony.l.nguyen at intel.com (Nguyen, Anthony L) Date: Mon, 29 Nov 2021 19:03:15 +0000 Subject: [Intel-wired-lan] [PATCH net-next 2/2] igc: enable XDP metadata in driver In-Reply-To: <20211129181320.579477-1-alexandr.lobakin@intel.com> References: <163700856423.565980.10162564921347693758.stgit@firesoul> <163700859087.565980.3578855072170209153.stgit@firesoul> <20211126161649.151100-1-alexandr.lobakin@intel.com> <6de05aea-9cf4-c938-eff2-9e3b138512a4@redhat.com> <20211129145303.10507-1-alexandr.lobakin@intel.com> <20211129181320.579477-1-alexandr.lobakin@intel.com> Message-ID: <9948428f33d013105108872d51f7e6ebec21203c.camel@intel.com> On Mon, 2021-11-29 at 19:13 +0100, Alexander Lobakin wrote: > From: Alexander Lobakin > Date: Mon, 29 Nov 2021 15:53:03 +0100 > > > From: Jesper Dangaard Brouer > > Date: Mon, 29 Nov 2021 15:39:04 +0100 > > > > > On 26/11/2021 17.16, Alexander Lobakin wrote: > > > > From: Jesper Dangaard Brouer > > > > Date: Mon, 15 Nov 2021 21:36:30 +0100 > > > > > > > > > Enabling the XDP bpf_prog access to data_meta area is a very > > > > > small > > > > > change. Hint passing 'true' to xdp_prepare_buff(). > > [ snip ] > > > > Prefetch works for "full" cachelines. Intel CPUs often prefect > > > two > > > cache-lines, when doing this, thus I guess we still get xdp- > > > >data. > > > > Sure. I mean, net_prefetch() prefetches 128 bytes in a row. > > xdp->data is usually aligned to XDP_PACKET_HEADROOM (or two bytes > > to the right). If our CL is 64 and the meta is present, then... ah > > right, 64 to the left and 64 starting from data to the right. > > > > > I don't mind prefetching xdp->data_meta, but (1) I tried to keep > > > the > > > xdp->data starts on a cacheline and we know NIC hardware have > > > touched > > > that, it is not a full-cache-miss due to DDIO/DCA it is known to > > > be in > > > L3 cache (gain is around 2-3 ns in my machine for data prefetch). > > > Given this is only a 2.5 Gbit/s driver/HW I doubt this make any > > > difference. > > > > Code constistency at least. On 10+ Gbps we prefetch meta, and I > > plan > > to continue doing this in my series. > > > > > Tony is it worth resending a V2 of this patch? > > > > Tony, you can take it as it is if you want, I'll correct it later > > in > > mine. Up to you. > > My "fixup" looks like (in case of v2 needed or so): Thanks Al. If Jesper is ok with this, I'll incorporate it in before sending the pull request to netdev. Otherwise, you can do it as follow on in the other series you previously referenced. Thanks, Tony > diff --git a/drivers/net/ethernet/intel/igc/igc_main.c > b/drivers/net/ethernet/intel/igc/igc_main.c > index b516f1b301b4..142c57b7a451 100644 > --- a/drivers/net/ethernet/intel/igc/igc_main.c > +++ b/drivers/net/ethernet/intel/igc/igc_main.c > @@ -1726,7 +1726,7 @@ static struct sk_buff *igc_build_skb(struct > igc_ring *rx_ring, > ????????struct sk_buff *skb; > ? > ????????/* prefetch first cache line of first page */ > -???????net_prefetch(xdp->data); > +???????net_prefetch(xdp->data_meta); > ? > ????????/* build an skb around the page buffer */ > ????????skb = build_skb(xdp->data_hard_start, truesize); > @@ -1756,10 +1756,11 @@ static struct sk_buff > *igc_construct_skb(struct igc_ring *rx_ring, > ????????struct sk_buff *skb; > ? > ????????/* prefetch first cache line of first page */ > -???????net_prefetch(va); > +???????net_prefetch(xdp->data_meta); > ? 
> ????????/* allocate a skb to store the frags */ > -???????skb = napi_alloc_skb(&rx_ring->q_vector->napi, IGC_RX_HDR_LEN > + metasize); > +???????skb = napi_alloc_skb(&rx_ring->q_vector->napi, > +??????????????????????????? IGC_RX_HDR_LEN + metasize); > ????????if (unlikely(!skb)) > ????????????????return NULL; > ? > @@ -2363,7 +2364,8 @@ static int igc_clean_rx_irq(struct igc_q_vector > *q_vector, const int budget) > ????????????????if (!skb) { > ????????????????????????xdp_init_buff(&xdp, truesize, &rx_ring- > >xdp_rxq); > ????????????????????????xdp_prepare_buff(&xdp, pktbuf - > igc_rx_offset(rx_ring), > -??????????????????????????????????????? igc_rx_offset(rx_ring) + > pkt_offset, size, true); > +??????????????????????????????????????? igc_rx_offset(rx_ring) + > pkt_offset, > +??????????????????????????????????????? size, true); > ? > ????????????????????????skb = igc_xdp_run_prog(adapter, &xdp); > ????????????????} From anthony.l.nguyen at intel.com Mon Nov 29 19:22:54 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Mon, 29 Nov 2021 11:22:54 -0800 Subject: [Intel-wired-lan] [PATCH net-next 0/6] iavf: Add support for VIRTCHNL_VF_OFFLOAD_VLAN_V2 Message-ID: <20211129192300.14188-1-anthony.l.nguyen@intel.com> From: Brett Creeley This patch series adds support in the iavf driver for communicating and using VIRTCHNL_VF_OFFLOAD_VLAN_V2. The current VIRTCHNL_VF_OFFLOAD_VLAN is very limited and covers all 802.1Q VLAN offloads and filtering with no granularity. The new VIRTCHNL_VF_OFFLOAD_VLAN_V2 adds more granularity, flexibility, and support for 802.1ad offloads and filtering. This includes the VF negotiating which VLAN offloads/filtering it's allowed, where VLAN tags should be inserted and/or stripped into and from descriptors, and the supported VLAN protocols. Brett Creeley (6): virtchnl: Add support for new VLAN capabilities iavf: Add support for VIRTCHNL_VF_OFFLOAD_VLAN_V2 negotiation iavf: Add support VIRTCHNL_VF_OFFLOAD_VLAN_V2 during netdev config iavf: Add support for VIRTCHNL_VF_OFFLOAD_VLAN_V2 hotpath iavf: Add support for VIRTCHNL_VF_OFFLOAD_VLAN_V2 offload enable/disable iavf: Restrict maximum VLAN filters for VIRTCHNL_VF_OFFLOAD_VLAN_V2 drivers/net/ethernet/intel/iavf/iavf.h | 45 +- drivers/net/ethernet/intel/iavf/iavf_main.c | 767 +++++++++++++++--- drivers/net/ethernet/intel/iavf/iavf_txrx.c | 72 +- drivers/net/ethernet/intel/iavf/iavf_txrx.h | 30 +- .../net/ethernet/intel/iavf/iavf_virtchnl.c | 534 ++++++++++-- include/linux/avf/virtchnl.h | 378 +++++++++ 6 files changed, 1609 insertions(+), 217 deletions(-) -- 2.20.1 From anthony.l.nguyen at intel.com Mon Nov 29 19:23:00 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Mon, 29 Nov 2021 11:23:00 -0800 Subject: [Intel-wired-lan] [PATCH net-next 6/6] iavf: Restrict maximum VLAN filters for VIRTCHNL_VF_OFFLOAD_VLAN_V2 In-Reply-To: <20211129192300.14188-1-anthony.l.nguyen@intel.com> References: <20211129192300.14188-1-anthony.l.nguyen@intel.com> Message-ID: <20211129192300.14188-7-anthony.l.nguyen@intel.com> From: Brett Creeley For VIRTCHNL_VF_OFFLOAD_VLAN, PF's would limit the number of VLAN filters a VF was allowed to add. However, by the time the opcode failed, the VLAN netdev had already been added. VIRTCHNL_VF_OFFLOAD_VLAN_V2 added the ability for a PF to tell the VF how many VLAN filters it's allowed to add. Make changes to support that functionality. 
Signed-off-by: Brett Creeley --- drivers/net/ethernet/intel/iavf/iavf_main.c | 50 +++++++++++++++++++++ 1 file changed, 50 insertions(+) diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c index 1e798b00cd82..5b5cd66e4ef2 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_main.c +++ b/drivers/net/ethernet/intel/iavf/iavf_main.c @@ -731,6 +731,50 @@ static void iavf_restore_filters(struct iavf_adapter *adapter) iavf_add_vlan(adapter, IAVF_VLAN(vid, ETH_P_8021AD)); } +/** + * iavf_get_num_vlans_added - get number of VLANs added + * @adapter: board private structure + */ +static u16 iavf_get_num_vlans_added(struct iavf_adapter *adapter) +{ + return bitmap_weight(adapter->vsi.active_cvlans, VLAN_N_VID) + + bitmap_weight(adapter->vsi.active_svlans, VLAN_N_VID); +} + +/** + * iavf_get_max_vlans_allowed - get maximum VLANs allowed for this VF + * @adapter: board private structure + * + * This depends on the negotiated VLAN capability. For VIRTCHNL_VF_OFFLOAD_VLAN, + * do not impose a limit as that maintains current behavior and for + * VIRTCHNL_VF_OFFLOAD_VLAN_V2, use the maximum allowed sent from the PF. + **/ +static u16 iavf_get_max_vlans_allowed(struct iavf_adapter *adapter) +{ + /* don't impose any limit for VIRTCHNL_VF_OFFLOAD_VLAN since there has + * never been a limit on the VF driver side + */ + if (VLAN_ALLOWED(adapter)) + return VLAN_N_VID; + else if (VLAN_V2_ALLOWED(adapter)) + return adapter->vlan_v2_caps.filtering.max_filters; + + return 0; +} + +/** + * iavf_max_vlans_added - check if maximum VLANs allowed already exist + * @adapter: board private structure + **/ +static bool iavf_max_vlans_added(struct iavf_adapter *adapter) +{ + if (iavf_get_num_vlans_added(adapter) < + iavf_get_max_vlans_allowed(adapter)) + return false; + + return true; +} + /** * iavf_vlan_rx_add_vid - Add a VLAN filter to a device * @netdev: network device struct @@ -745,6 +789,12 @@ static int iavf_vlan_rx_add_vid(struct net_device *netdev, if (!VLAN_FILTERING_ALLOWED(adapter)) return -EIO; + if (iavf_max_vlans_added(adapter)) { + netdev_err(netdev, "Max allowed VLAN filters %u. Remove existing VLANs or disable filtering via Ethtool if supported.\n", + iavf_get_max_vlans_allowed(adapter)); + return -EIO; + } + if (!iavf_add_vlan(adapter, IAVF_VLAN(vid, be16_to_cpu(proto)))) return -ENOMEM; -- 2.20.1 From anthony.l.nguyen at intel.com Mon Nov 29 19:22:56 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Mon, 29 Nov 2021 11:22:56 -0800 Subject: [Intel-wired-lan] [PATCH net-next 2/6] iavf: Add support for VIRTCHNL_VF_OFFLOAD_VLAN_V2 negotiation In-Reply-To: <20211129192300.14188-1-anthony.l.nguyen@intel.com> References: <20211129192300.14188-1-anthony.l.nguyen@intel.com> Message-ID: <20211129192300.14188-3-anthony.l.nguyen@intel.com> From: Brett Creeley In order to support the new VIRTCHNL_VF_OFFLOAD_VLAN_V2 capability the VF driver needs to rework it's initialization state machine and reset flow. This has to be done because successful negotiation of VIRTCHNL_VF_OFFLOAD_VLAN_V2 requires the VF driver to perform a second capability request via VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS before configuring the adapter and its netdev. Add the VIRTCHNL_VF_OFFLOAD_VLAN_V2 bit when sending the VIRTHCNL_OP_GET_VF_RESOURECES message. The underlying PF will either support VIRTCHNL_VF_OFFLOAD_VLAN or VIRTCHNL_VF_OFFLOAD_VLAN_V2 or neither. Both of these offloads should never be supported together. 
Based on this, add 2 new states to the initialization state machine: __IAVF_INIT_GET_OFFLOAD_VLAN_V2_CAPS __IAVF_INIT_CONFIG_ADAPTER The __IAVF_INIT_GET_OFFLOAD_VLAN_V2_CAPS state is used to request/store the new VLAN capabilities if and only if VIRTCHNL_VLAN_OFFLOAD_VLAN_V2 was successfully negotiated in the __IAVF_INIT_GET_RESOURCES state. The __IAVF_INIT_CONFIG_ADAPTER state is used to configure the adapter/netdev after the resource requests have finished. The VF will move into this state regardless of whether it successfully negotiated VIRTCHNL_VF_OFFLOAD_VLAN or VIRTCHNL_VF_OFFLOAD_VLAN_V2. Also, add a the new flag IAVF_FLAG_AQ_GET_OFFLOAD_VLAN_V2_CAPS and set it during VF reset. If VIRTCHNL_VF_OFFLOAD_VLAN_V2 was successfully negotiated then the VF will request its VLAN capabilities via VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS during the reset. This is needed because the PF may change/modify the VF's configuration during VF reset (i.e. modifying the VF's port VLAN configuration). This also, required the VF to call netdev_update_features() since its VLAN features may change during VF reset. Make sure to call this under rtnl_lock(). Signed-off-by: Brett Creeley --- drivers/net/ethernet/intel/iavf/iavf.h | 9 + drivers/net/ethernet/intel/iavf/iavf_main.c | 205 +++++++++++++----- .../net/ethernet/intel/iavf/iavf_virtchnl.c | 78 ++++++- 3 files changed, 240 insertions(+), 52 deletions(-) diff --git a/drivers/net/ethernet/intel/iavf/iavf.h b/drivers/net/ethernet/intel/iavf/iavf.h index b5728bdbcf33..edb139834437 100644 --- a/drivers/net/ethernet/intel/iavf/iavf.h +++ b/drivers/net/ethernet/intel/iavf/iavf.h @@ -181,6 +181,8 @@ enum iavf_state_t { __IAVF_REMOVE, /* driver is being unloaded */ __IAVF_INIT_VERSION_CHECK, /* aq msg sent, awaiting reply */ __IAVF_INIT_GET_RESOURCES, /* aq msg sent, awaiting reply */ + __IAVF_INIT_GET_OFFLOAD_VLAN_V2_CAPS, + __IAVF_INIT_CONFIG_ADAPTER, __IAVF_INIT_SW, /* got resources, setting up structs */ __IAVF_INIT_FAILED, /* init failed, restarting procedure */ __IAVF_RESETTING, /* in reset */ @@ -310,6 +312,7 @@ struct iavf_adapter { #define IAVF_FLAG_AQ_ADD_ADV_RSS_CFG BIT(27) #define IAVF_FLAG_AQ_DEL_ADV_RSS_CFG BIT(28) #define IAVF_FLAG_AQ_REQUEST_STATS BIT(29) +#define IAVF_FLAG_AQ_GET_OFFLOAD_VLAN_V2_CAPS BIT(30) /* OS defined structs */ struct net_device *netdev; @@ -349,6 +352,8 @@ struct iavf_adapter { VIRTCHNL_VF_OFFLOAD_RSS_PF))) #define VLAN_ALLOWED(_a) ((_a)->vf_res->vf_cap_flags & \ VIRTCHNL_VF_OFFLOAD_VLAN) +#define VLAN_V2_ALLOWED(_a) ((_a)->vf_res->vf_cap_flags & \ + VIRTCHNL_VF_OFFLOAD_VLAN_V2) #define ADV_LINK_SUPPORT(_a) ((_a)->vf_res->vf_cap_flags & \ VIRTCHNL_VF_CAP_ADV_LINK_SPEED) #define FDIR_FLTR_SUPPORT(_a) ((_a)->vf_res->vf_cap_flags & \ @@ -360,6 +365,7 @@ struct iavf_adapter { struct virtchnl_version_info pf_version; #define PF_IS_V11(_a) (((_a)->pf_version.major == 1) && \ ((_a)->pf_version.minor == 1)) + struct virtchnl_vlan_caps vlan_v2_caps; u16 msg_enable; struct iavf_eth_stats current_stats; struct iavf_vsi vsi; @@ -448,6 +454,7 @@ static inline void iavf_change_state(struct iavf_adapter *adapter, int iavf_up(struct iavf_adapter *adapter); void iavf_down(struct iavf_adapter *adapter); int iavf_process_config(struct iavf_adapter *adapter); +int iavf_parse_vf_resource_msg(struct iavf_adapter *adapter); void iavf_schedule_reset(struct iavf_adapter *adapter); void iavf_schedule_request_stats(struct iavf_adapter *adapter); void iavf_reset(struct iavf_adapter *adapter); @@ -466,6 +473,8 @@ int iavf_send_api_ver(struct iavf_adapter 
*adapter); int iavf_verify_api_ver(struct iavf_adapter *adapter); int iavf_send_vf_config_msg(struct iavf_adapter *adapter); int iavf_get_vf_config(struct iavf_adapter *adapter); +int iavf_get_vf_vlan_v2_caps(struct iavf_adapter *adapter); +int iavf_send_vf_offload_vlan_v2_msg(struct iavf_adapter *adapter); void iavf_irq_enable(struct iavf_adapter *adapter, bool flush); void iavf_configure_queues(struct iavf_adapter *adapter); void iavf_deconfigure_queues(struct iavf_adapter *adapter); diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c index f5ac2390d8ce..a5291363468f 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_main.c +++ b/drivers/net/ethernet/intel/iavf/iavf_main.c @@ -1584,6 +1584,8 @@ static int iavf_process_aq_command(struct iavf_adapter *adapter) { if (adapter->aq_required & IAVF_FLAG_AQ_GET_CONFIG) return iavf_send_vf_config_msg(adapter); + if (adapter->aq_required & IAVF_FLAG_AQ_GET_OFFLOAD_VLAN_V2_CAPS) + return iavf_send_vf_offload_vlan_v2_msg(adapter); if (adapter->aq_required & IAVF_FLAG_AQ_DISABLE_QUEUES) { iavf_disable_queues(adapter); return 0; @@ -1826,6 +1828,59 @@ static void iavf_init_version_check(struct iavf_adapter *adapter) iavf_change_state(adapter, __IAVF_INIT_FAILED); } +/** + * iavf_parse_vf_resource_msg - parse response from VIRTCHNL_OP_GET_VF_RESOURCES + * @adapter: board private structure + */ +int iavf_parse_vf_resource_msg(struct iavf_adapter *adapter) +{ + int i, num_req_queues = adapter->num_req_queues; + struct iavf_vsi *vsi = &adapter->vsi; + + for (i = 0; i < adapter->vf_res->num_vsis; i++) { + if (adapter->vf_res->vsi_res[i].vsi_type == VIRTCHNL_VSI_SRIOV) + adapter->vsi_res = &adapter->vf_res->vsi_res[i]; + } + if (!adapter->vsi_res) { + dev_err(&adapter->pdev->dev, "No LAN VSI found\n"); + return -ENODEV; + } + + if (num_req_queues && + num_req_queues > adapter->vsi_res->num_queue_pairs) { + /* Problem. The PF gave us fewer queues than what we had + * negotiated in our request. Need a reset to see if we can't + * get back to a working state. 
+ */ + dev_err(&adapter->pdev->dev, + "Requested %d queues, but PF only gave us %d.\n", + num_req_queues, + adapter->vsi_res->num_queue_pairs); + adapter->flags |= IAVF_FLAG_REINIT_MSIX_NEEDED; + adapter->num_req_queues = adapter->vsi_res->num_queue_pairs; + iavf_schedule_reset(adapter); + + return -EAGAIN; + } + adapter->num_req_queues = 0; + adapter->vsi.id = adapter->vsi_res->vsi_id; + + adapter->vsi.back = adapter; + adapter->vsi.base_vector = 1; + adapter->vsi.work_limit = IAVF_DEFAULT_IRQ_WORK; + vsi->netdev = adapter->netdev; + vsi->qs_handle = adapter->vsi_res->qset_handle; + if (adapter->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF) { + adapter->rss_key_size = adapter->vf_res->rss_key_size; + adapter->rss_lut_size = adapter->vf_res->rss_lut_size; + } else { + adapter->rss_key_size = IAVF_HKEY_ARRAY_SIZE; + adapter->rss_lut_size = IAVF_HLUT_ARRAY_SIZE; + } + + return 0; +} + /** * iavf_init_get_resources - third step of driver startup * @adapter: board private structure @@ -1837,7 +1892,6 @@ static void iavf_init_version_check(struct iavf_adapter *adapter) **/ static void iavf_init_get_resources(struct iavf_adapter *adapter) { - struct net_device *netdev = adapter->netdev; struct pci_dev *pdev = adapter->pdev; struct iavf_hw *hw = &adapter->hw; int err; @@ -1855,7 +1909,7 @@ static void iavf_init_get_resources(struct iavf_adapter *adapter) err = iavf_get_vf_config(adapter); if (err == IAVF_ERR_ADMIN_QUEUE_NO_WORK) { err = iavf_send_vf_config_msg(adapter); - goto err; + goto err_alloc; } else if (err == IAVF_ERR_PARAM) { /* We only get ERR_PARAM if the device is in a very bad * state or if we've been disabled for previous bad @@ -1870,9 +1924,83 @@ static void iavf_init_get_resources(struct iavf_adapter *adapter) goto err_alloc; } - err = iavf_process_config(adapter); + err = iavf_parse_vf_resource_msg(adapter); if (err) goto err_alloc; + + err = iavf_send_vf_offload_vlan_v2_msg(adapter); + if (err == -EOPNOTSUPP) { + /* underlying PF doesn't support VIRTCHNL_VF_OFFLOAD_VLAN_V2, so + * go directly to finishing initialization + */ + iavf_change_state(adapter, __IAVF_INIT_CONFIG_ADAPTER); + return; + } else if (err) { + dev_err(&pdev->dev, "Unable to send offload vlan v2 request (%d)\n", + err); + goto err_alloc; + } + + /* underlying PF supports VIRTCHNL_VF_OFFLOAD_VLAN_V2, so update the + * state accordingly + */ + iavf_change_state(adapter, __IAVF_INIT_GET_OFFLOAD_VLAN_V2_CAPS); + return; + +err_alloc: + kfree(adapter->vf_res); + adapter->vf_res = NULL; +err: + iavf_change_state(adapter, __IAVF_INIT_FAILED); +} + +/** + * iavf_init_get_offload_vlan_v2_caps - part of driver startup + * @adapter: board private structure + * + * Function processes __IAVF_INIT_GET_OFFLOAD_VLAN_V2_CAPS driver state if the + * VF negotiates VIRTCHNL_VF_OFFLOAD_VLAN_V2. If VIRTCHNL_VF_OFFLOAD_VLAN_V2 is + * not negotiated, then this state will never be entered. 
+ **/ +static void iavf_init_get_offload_vlan_v2_caps(struct iavf_adapter *adapter) +{ + int ret; + + WARN_ON(adapter->state != __IAVF_INIT_GET_OFFLOAD_VLAN_V2_CAPS); + + memset(&adapter->vlan_v2_caps, 0, sizeof(adapter->vlan_v2_caps)); + + ret = iavf_get_vf_vlan_v2_caps(adapter); + if (ret) { + if (ret == IAVF_ERR_ADMIN_QUEUE_NO_WORK) + iavf_send_vf_offload_vlan_v2_msg(adapter); + goto err; + } + + iavf_change_state(adapter, __IAVF_INIT_CONFIG_ADAPTER); + return; +err: + iavf_change_state(adapter, __IAVF_INIT_FAILED); +} + +/** + * iavf_init_config_adapter - last part of driver startup + * @adapter: board private structure + * + * After all the supported capabilities are negotiated, then the + * __IAVF_INIT_CONFIG_ADAPTER state will finish driver initialization. + */ +static void iavf_init_config_adapter(struct iavf_adapter *adapter) +{ + struct net_device *netdev = adapter->netdev; + struct pci_dev *pdev = adapter->pdev; + int err; + + WARN_ON(adapter->state != __IAVF_INIT_CONFIG_ADAPTER); + + if (iavf_process_config(adapter)) + goto err; + adapter->current_op = VIRTCHNL_OP_UNKNOWN; adapter->flags |= IAVF_FLAG_RX_CSUM_ENABLED; @@ -1962,9 +2090,6 @@ static void iavf_init_get_resources(struct iavf_adapter *adapter) iavf_free_misc_irq(adapter); err_sw_init: iavf_reset_interrupt_capability(adapter); -err_alloc: - kfree(adapter->vf_res); - adapter->vf_res = NULL; err: iavf_change_state(adapter, __IAVF_INIT_FAILED); } @@ -2013,6 +2138,18 @@ static void iavf_watchdog_task(struct work_struct *work) queue_delayed_work(iavf_wq, &adapter->watchdog_task, msecs_to_jiffies(1)); return; + case __IAVF_INIT_GET_OFFLOAD_VLAN_V2_CAPS: + iavf_init_get_offload_vlan_v2_caps(adapter); + mutex_unlock(&adapter->crit_lock); + queue_delayed_work(iavf_wq, &adapter->watchdog_task, + msecs_to_jiffies(1)); + return; + case __IAVF_INIT_CONFIG_ADAPTER: + iavf_init_config_adapter(adapter); + mutex_unlock(&adapter->crit_lock); + queue_delayed_work(iavf_wq, &adapter->watchdog_task, + msecs_to_jiffies(1)); + return; case __IAVF_INIT_FAILED: if (++adapter->aq_wait_count > IAVF_AQ_MAX_ERR) { dev_err(&adapter->pdev->dev, @@ -2066,10 +2203,13 @@ static void iavf_watchdog_task(struct work_struct *work) iavf_send_api_ver(adapter); } } else { + int ret = iavf_process_aq_command(adapter); + /* An error will be returned if no commands were * processed; use this opportunity to update stats + * if the error isn't -ENOTSUPP */ - if (iavf_process_aq_command(adapter) && + if (ret && ret != -EOPNOTSUPP && adapter->state == __IAVF_RUNNING) iavf_request_stats(adapter); } @@ -2309,6 +2449,13 @@ static void iavf_reset_task(struct work_struct *work) } adapter->aq_required |= IAVF_FLAG_AQ_GET_CONFIG; + /* always set since VIRTCHNL_OP_GET_VF_RESOURCES has not been + * sent/received yet, so VLAN_V2_ALLOWED() cannot is not reliable here, + * however the VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS won't be sent until + * VIRTCHNL_OP_GET_VF_RESOURCES and VIRTCHNL_VF_OFFLOAD_VLAN_V2 have + * been successfully sent and negotiated + */ + adapter->aq_required |= IAVF_FLAG_AQ_GET_OFFLOAD_VLAN_V2_CAPS; adapter->aq_required |= IAVF_FLAG_AQ_MAP_VECTORS; spin_lock_bh(&adapter->mac_vlan_list_lock); @@ -3608,39 +3755,10 @@ static int iavf_check_reset_complete(struct iavf_hw *hw) int iavf_process_config(struct iavf_adapter *adapter) { struct virtchnl_vf_resource *vfres = adapter->vf_res; - int i, num_req_queues = adapter->num_req_queues; struct net_device *netdev = adapter->netdev; - struct iavf_vsi *vsi = &adapter->vsi; netdev_features_t hw_enc_features; 
netdev_features_t hw_features; - /* got VF config message back from PF, now we can parse it */ - for (i = 0; i < vfres->num_vsis; i++) { - if (vfres->vsi_res[i].vsi_type == VIRTCHNL_VSI_SRIOV) - adapter->vsi_res = &vfres->vsi_res[i]; - } - if (!adapter->vsi_res) { - dev_err(&adapter->pdev->dev, "No LAN VSI found\n"); - return -ENODEV; - } - - if (num_req_queues && - num_req_queues > adapter->vsi_res->num_queue_pairs) { - /* Problem. The PF gave us fewer queues than what we had - * negotiated in our request. Need a reset to see if we can't - * get back to a working state. - */ - dev_err(&adapter->pdev->dev, - "Requested %d queues, but PF only gave us %d.\n", - num_req_queues, - adapter->vsi_res->num_queue_pairs); - adapter->flags |= IAVF_FLAG_REINIT_ITR_NEEDED; - adapter->num_req_queues = adapter->vsi_res->num_queue_pairs; - iavf_schedule_reset(adapter); - return -ENODEV; - } - adapter->num_req_queues = 0; - hw_enc_features = NETIF_F_SG | NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM | @@ -3721,21 +3839,6 @@ int iavf_process_config(struct iavf_adapter *adapter) netdev->features &= ~NETIF_F_GSO; } - adapter->vsi.id = adapter->vsi_res->vsi_id; - - adapter->vsi.back = adapter; - adapter->vsi.base_vector = 1; - adapter->vsi.work_limit = IAVF_DEFAULT_IRQ_WORK; - vsi->netdev = adapter->netdev; - vsi->qs_handle = adapter->vsi_res->qset_handle; - if (vfres->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF) { - adapter->rss_key_size = vfres->rss_key_size; - adapter->rss_lut_size = vfres->rss_lut_size; - } else { - adapter->rss_key_size = IAVF_HKEY_ARRAY_SIZE; - adapter->rss_lut_size = IAVF_HLUT_ARRAY_SIZE; - } - return 0; } diff --git a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c index 52bfe2a853f0..2ad426f13462 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c +++ b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c @@ -137,6 +137,7 @@ int iavf_send_vf_config_msg(struct iavf_adapter *adapter) VIRTCHNL_VF_OFFLOAD_WB_ON_ITR | VIRTCHNL_VF_OFFLOAD_RSS_PCTYPE_V2 | VIRTCHNL_VF_OFFLOAD_ENCAP | + VIRTCHNL_VF_OFFLOAD_VLAN_V2 | VIRTCHNL_VF_OFFLOAD_ENCAP_CSUM | VIRTCHNL_VF_OFFLOAD_REQ_QUEUES | VIRTCHNL_VF_OFFLOAD_ADQ | @@ -155,6 +156,19 @@ int iavf_send_vf_config_msg(struct iavf_adapter *adapter) NULL, 0); } +int iavf_send_vf_offload_vlan_v2_msg(struct iavf_adapter *adapter) +{ + adapter->aq_required &= ~IAVF_FLAG_AQ_GET_OFFLOAD_VLAN_V2_CAPS; + + if (!VLAN_V2_ALLOWED(adapter)) + return -EOPNOTSUPP; + + adapter->current_op = VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS; + + return iavf_send_pf_msg(adapter, VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS, + NULL, 0); +} + /** * iavf_validate_num_queues * @adapter: adapter structure @@ -235,6 +249,45 @@ int iavf_get_vf_config(struct iavf_adapter *adapter) return err; } +int iavf_get_vf_vlan_v2_caps(struct iavf_adapter *adapter) +{ + struct iavf_hw *hw = &adapter->hw; + struct iavf_arq_event_info event; + enum virtchnl_ops op; + enum iavf_status err; + u16 len; + + len = sizeof(struct virtchnl_vlan_caps); + event.buf_len = len; + event.msg_buf = kzalloc(event.buf_len, GFP_KERNEL); + if (!event.msg_buf) { + err = -ENOMEM; + goto out; + } + + while (1) { + /* When the AQ is empty, iavf_clean_arq_element will return + * nonzero and this loop will terminate. 
+ */ + err = iavf_clean_arq_element(hw, &event, NULL); + if (err) + goto out_alloc; + op = (enum virtchnl_ops)le32_to_cpu(event.desc.cookie_high); + if (op == VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS) + break; + } + + err = (enum iavf_status)le32_to_cpu(event.desc.cookie_low); + if (err) + goto out_alloc; + + memcpy(&adapter->vlan_v2_caps, event.msg_buf, min(event.msg_len, len)); +out_alloc: + kfree(event.msg_buf); +out: + return err; +} + /** * iavf_configure_queues * @adapter: adapter structure @@ -1757,6 +1810,26 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter, } spin_unlock_bh(&adapter->mac_vlan_list_lock); + + iavf_parse_vf_resource_msg(adapter); + + /* negotiated VIRTCHNL_VF_OFFLOAD_VLAN_V2, so wait for the + * response to VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS to finish + * configuration + */ + if (VLAN_V2_ALLOWED(adapter)) + break; + /* fallthrough and finish config if VIRTCHNL_VF_OFFLOAD_VLAN_V2 + * wasn't successfully negotiated with the PF + */ + } + fallthrough; + case VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS: { + if (v_opcode == VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS) + memcpy(&adapter->vlan_v2_caps, msg, + min_t(u16, msglen, + sizeof(adapter->vlan_v2_caps))); + iavf_process_config(adapter); /* unlock crit_lock before acquiring rtnl_lock as other @@ -1764,8 +1837,11 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter, * crit_lock */ mutex_unlock(&adapter->crit_lock); + /* VLAN capabilities can change during VFR, so make sure to + * update the netdev features with the new capabilities + */ rtnl_lock(); - netdev_update_features(adapter->netdev); + netdev_update_features(netdev); rtnl_unlock(); if (iavf_lock_timeout(&adapter->crit_lock, 10000)) dev_warn(&adapter->pdev->dev, "failed to acquire crit_lock in %s\n", -- 2.20.1 From anthony.l.nguyen at intel.com Mon Nov 29 19:22:59 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Mon, 29 Nov 2021 11:22:59 -0800 Subject: [Intel-wired-lan] [PATCH net-next 5/6] iavf: Add support for VIRTCHNL_VF_OFFLOAD_VLAN_V2 offload enable/disable In-Reply-To: <20211129192300.14188-1-anthony.l.nguyen@intel.com> References: <20211129192300.14188-1-anthony.l.nguyen@intel.com> Message-ID: <20211129192300.14188-6-anthony.l.nguyen@intel.com> From: Brett Creeley The new VIRTCHNL_VF_OFFLOAD_VLAN_V2 capability added support that allows the VF to support 802.1Q and 802.1ad VLAN insertion and stripping if successfully negotiated via VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS. Multiple changes were needed to support this new functionality. 1. Added new aq_required flags to support any kind of VLAN stripping and insertion offload requests via virtchnl. 2. Added the new method iavf_set_vlan_offload_features() that's used during VF initialization, VF reset, and iavf_set_features() to set the aq_required bits based on the current VLAN offload configuration of the VF's netdev. 3. Added virtchnl handling for VIRTCHNL_OP_ENABLE_STRIPPING_V2, VIRTCHNL_OP_DISABLE_STRIPPING_V2, VIRTCHNL_OP_ENABLE_INSERTION_V2, and VIRTCHNL_OP_ENABLE_INSERTION_V2. 
Signed-off-by: Brett Creeley --- drivers/net/ethernet/intel/iavf/iavf.h | 18 +- drivers/net/ethernet/intel/iavf/iavf_main.c | 151 +++++++++++-- .../net/ethernet/intel/iavf/iavf_virtchnl.c | 203 ++++++++++++++++++ 3 files changed, 352 insertions(+), 20 deletions(-) diff --git a/drivers/net/ethernet/intel/iavf/iavf.h b/drivers/net/ethernet/intel/iavf/iavf.h index 2660d46da1b5..8188cec34937 100644 --- a/drivers/net/ethernet/intel/iavf/iavf.h +++ b/drivers/net/ethernet/intel/iavf/iavf.h @@ -287,7 +287,7 @@ struct iavf_adapter { /* duplicates for common code */ #define IAVF_FLAG_DCB_ENABLED 0 /* flags for admin queue service task */ - u32 aq_required; + u64 aq_required; #define IAVF_FLAG_AQ_ENABLE_QUEUES BIT(0) #define IAVF_FLAG_AQ_DISABLE_QUEUES BIT(1) #define IAVF_FLAG_AQ_ADD_MAC_FILTER BIT(2) @@ -320,6 +320,14 @@ struct iavf_adapter { #define IAVF_FLAG_AQ_DEL_ADV_RSS_CFG BIT(28) #define IAVF_FLAG_AQ_REQUEST_STATS BIT(29) #define IAVF_FLAG_AQ_GET_OFFLOAD_VLAN_V2_CAPS BIT(30) +#define IAVF_FLAG_AQ_ENABLE_CTAG_VLAN_STRIPPING BIT(31) +#define IAVF_FLAG_AQ_DISABLE_CTAG_VLAN_STRIPPING BIT(32) +#define IAVF_FLAG_AQ_ENABLE_STAG_VLAN_STRIPPING BIT(33) +#define IAVF_FLAG_AQ_DISABLE_STAG_VLAN_STRIPPING BIT(34) +#define IAVF_FLAG_AQ_ENABLE_CTAG_VLAN_INSERTION BIT(35) +#define IAVF_FLAG_AQ_DISABLE_CTAG_VLAN_INSERTION BIT(36) +#define IAVF_FLAG_AQ_ENABLE_STAG_VLAN_INSERTION BIT(37) +#define IAVF_FLAG_AQ_DISABLE_STAG_VLAN_INSERTION BIT(38) /* OS defined structs */ struct net_device *netdev; @@ -524,6 +532,14 @@ void iavf_enable_channels(struct iavf_adapter *adapter); void iavf_disable_channels(struct iavf_adapter *adapter); void iavf_add_cloud_filter(struct iavf_adapter *adapter); void iavf_del_cloud_filter(struct iavf_adapter *adapter); +void iavf_enable_vlan_stripping_v2(struct iavf_adapter *adapter, u16 tpid); +void iavf_disable_vlan_stripping_v2(struct iavf_adapter *adapter, u16 tpid); +void iavf_enable_vlan_insertion_v2(struct iavf_adapter *adapter, u16 tpid); +void iavf_disable_vlan_insertion_v2(struct iavf_adapter *adapter, u16 tpid); +void +iavf_set_vlan_offload_features(struct iavf_adapter *adapter, + netdev_features_t prev_features, + netdev_features_t features); void iavf_add_fdir_filter(struct iavf_adapter *adapter); void iavf_del_fdir_filter(struct iavf_adapter *adapter); void iavf_add_adv_rss_cfg(struct iavf_adapter *adapter); diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c index 49c19fd08d64..1e798b00cd82 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_main.c +++ b/drivers/net/ethernet/intel/iavf/iavf_main.c @@ -1815,6 +1815,39 @@ static int iavf_process_aq_command(struct iavf_adapter *adapter) iavf_del_adv_rss_cfg(adapter); return 0; } + if (adapter->aq_required & IAVF_FLAG_AQ_DISABLE_CTAG_VLAN_STRIPPING) { + iavf_disable_vlan_stripping_v2(adapter, ETH_P_8021Q); + return 0; + } + if (adapter->aq_required & IAVF_FLAG_AQ_DISABLE_STAG_VLAN_STRIPPING) { + iavf_disable_vlan_stripping_v2(adapter, ETH_P_8021AD); + return 0; + } + if (adapter->aq_required & IAVF_FLAG_AQ_ENABLE_CTAG_VLAN_STRIPPING) { + iavf_enable_vlan_stripping_v2(adapter, ETH_P_8021Q); + return 0; + } + if (adapter->aq_required & IAVF_FLAG_AQ_ENABLE_STAG_VLAN_STRIPPING) { + iavf_enable_vlan_stripping_v2(adapter, ETH_P_8021AD); + return 0; + } + if (adapter->aq_required & IAVF_FLAG_AQ_DISABLE_CTAG_VLAN_INSERTION) { + iavf_disable_vlan_insertion_v2(adapter, ETH_P_8021Q); + return 0; + } + if (adapter->aq_required & IAVF_FLAG_AQ_DISABLE_STAG_VLAN_INSERTION) { + 
iavf_disable_vlan_insertion_v2(adapter, ETH_P_8021AD); + return 0; + } + if (adapter->aq_required & IAVF_FLAG_AQ_ENABLE_CTAG_VLAN_INSERTION) { + iavf_enable_vlan_insertion_v2(adapter, ETH_P_8021Q); + return 0; + } + if (adapter->aq_required & IAVF_FLAG_AQ_ENABLE_STAG_VLAN_INSERTION) { + iavf_enable_vlan_insertion_v2(adapter, ETH_P_8021AD); + return 0; + } + if (adapter->aq_required & IAVF_FLAG_AQ_REQUEST_STATS) { iavf_request_stats(adapter); return 0; @@ -1823,6 +1856,91 @@ static int iavf_process_aq_command(struct iavf_adapter *adapter) return -EAGAIN; } +/** + * iavf_set_vlan_offload_features - set VLAN offload configuration + * @adapter: board private structure + * @prev_features: previous features used for comparison + * @features: updated features used for configuration + * + * Set the aq_required bit(s) based on the requested features passed in to + * configure VLAN stripping and/or VLAN insertion if supported. Also, schedule + * the watchdog if any changes are requested to expedite the request via + * virtchnl. + **/ +void +iavf_set_vlan_offload_features(struct iavf_adapter *adapter, + netdev_features_t prev_features, + netdev_features_t features) +{ + bool enable_stripping = true, enable_insertion = true; + u16 vlan_ethertype = 0; + u64 aq_required = 0; + + /* keep cases separate because one ethertype for offloads can be + * disabled at the same time as another is disabled, so check for an + * enabled ethertype first, then check for disabled. Default to + * ETH_P_8021Q so an ethertype is specified if disabling insertion and + * stripping. + */ + if (features & (NETIF_F_HW_VLAN_STAG_RX | NETIF_F_HW_VLAN_STAG_TX)) + vlan_ethertype = ETH_P_8021AD; + else if (features & (NETIF_F_HW_VLAN_CTAG_RX | NETIF_F_HW_VLAN_CTAG_TX)) + vlan_ethertype = ETH_P_8021Q; + else if (prev_features & (NETIF_F_HW_VLAN_STAG_RX | NETIF_F_HW_VLAN_STAG_TX)) + vlan_ethertype = ETH_P_8021AD; + else if (prev_features & (NETIF_F_HW_VLAN_CTAG_RX | NETIF_F_HW_VLAN_CTAG_TX)) + vlan_ethertype = ETH_P_8021Q; + else + vlan_ethertype = ETH_P_8021Q; + + if (!(features & (NETIF_F_HW_VLAN_STAG_RX | NETIF_F_HW_VLAN_CTAG_RX))) + enable_stripping = false; + if (!(features & (NETIF_F_HW_VLAN_STAG_TX | NETIF_F_HW_VLAN_CTAG_TX))) + enable_insertion = false; + + if (VLAN_ALLOWED(adapter)) { + /* VIRTCHNL_VF_OFFLOAD_VLAN only has support for toggling VLAN + * stripping via virtchnl. 
VLAN insertion can be toggled on the + * netdev, but it doesn't require a virtchnl message + */ + if (enable_stripping) + aq_required |= IAVF_FLAG_AQ_ENABLE_VLAN_STRIPPING; + else + aq_required |= IAVF_FLAG_AQ_DISABLE_VLAN_STRIPPING; + + } else if (VLAN_V2_ALLOWED(adapter)) { + switch (vlan_ethertype) { + case ETH_P_8021Q: + if (enable_stripping) + aq_required |= IAVF_FLAG_AQ_ENABLE_CTAG_VLAN_STRIPPING; + else + aq_required |= IAVF_FLAG_AQ_DISABLE_CTAG_VLAN_STRIPPING; + + if (enable_insertion) + aq_required |= IAVF_FLAG_AQ_ENABLE_CTAG_VLAN_INSERTION; + else + aq_required |= IAVF_FLAG_AQ_DISABLE_CTAG_VLAN_INSERTION; + break; + case ETH_P_8021AD: + if (enable_stripping) + aq_required |= IAVF_FLAG_AQ_ENABLE_STAG_VLAN_STRIPPING; + else + aq_required |= IAVF_FLAG_AQ_DISABLE_STAG_VLAN_STRIPPING; + + if (enable_insertion) + aq_required |= IAVF_FLAG_AQ_ENABLE_STAG_VLAN_INSERTION; + else + aq_required |= IAVF_FLAG_AQ_DISABLE_STAG_VLAN_INSERTION; + break; + } + } + + if (aq_required) { + adapter->aq_required |= aq_required; + mod_delayed_work(iavf_wq, &adapter->watchdog_task, 0); + } +} + /** * iavf_startup - first step of driver startup * @adapter: board private structure @@ -2179,6 +2297,10 @@ static void iavf_init_config_adapter(struct iavf_adapter *adapter) else iavf_init_rss(adapter); + if (VLAN_V2_ALLOWED(adapter)) + /* request initial VLAN offload settings */ + iavf_set_vlan_offload_features(adapter, 0, netdev->features); + return; err_mem: iavf_free_rss(adapter); @@ -3689,6 +3811,11 @@ static int iavf_change_mtu(struct net_device *netdev, int new_mtu) return 0; } +#define NETIF_VLAN_OFFLOAD_FEATURES (NETIF_F_HW_VLAN_CTAG_RX | \ + NETIF_F_HW_VLAN_CTAG_TX | \ + NETIF_F_HW_VLAN_STAG_RX | \ + NETIF_F_HW_VLAN_STAG_TX) + /** * iavf_set_features - set the netdev feature flags * @netdev: ptr to the netdev being adjusted @@ -3700,25 +3827,11 @@ static int iavf_set_features(struct net_device *netdev, { struct iavf_adapter *adapter = netdev_priv(netdev); - /* Don't allow enabling VLAN features when adapter is not capable - * of VLAN offload/filtering - */ - if (!VLAN_ALLOWED(adapter)) { - netdev->hw_features &= ~(NETIF_F_HW_VLAN_CTAG_RX | - NETIF_F_HW_VLAN_CTAG_TX | - NETIF_F_HW_VLAN_CTAG_FILTER); - if (features & (NETIF_F_HW_VLAN_CTAG_RX | - NETIF_F_HW_VLAN_CTAG_TX | - NETIF_F_HW_VLAN_CTAG_FILTER)) - return -EINVAL; - } else if ((netdev->features ^ features) & NETIF_F_HW_VLAN_CTAG_RX) { - if (features & NETIF_F_HW_VLAN_CTAG_RX) - adapter->aq_required |= - IAVF_FLAG_AQ_ENABLE_VLAN_STRIPPING; - else - adapter->aq_required |= - IAVF_FLAG_AQ_DISABLE_VLAN_STRIPPING; - } + /* trigger update on any VLAN feature change */ + if ((netdev->features & NETIF_VLAN_OFFLOAD_FEATURES) ^ + (features & NETIF_VLAN_OFFLOAD_FEATURES)) + iavf_set_vlan_offload_features(adapter, netdev->features, + features); return 0; } diff --git a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c index 613fcc491fd7..1fe6ab40409a 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c +++ b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c @@ -1122,6 +1122,204 @@ void iavf_disable_vlan_stripping(struct iavf_adapter *adapter) iavf_send_pf_msg(adapter, VIRTCHNL_OP_DISABLE_VLAN_STRIPPING, NULL, 0); } +/** + * iavf_tpid_to_vc_ethertype - transform from VLAN TPID to virtchnl ethertype + * @tpid: VLAN TPID (i.e. 0x8100, 0x88a8, etc.) 
+ */ +static u32 iavf_tpid_to_vc_ethertype(u16 tpid) +{ + switch (tpid) { + case ETH_P_8021Q: + return VIRTCHNL_VLAN_ETHERTYPE_8100; + case ETH_P_8021AD: + return VIRTCHNL_VLAN_ETHERTYPE_88A8; + } + + return 0; +} + +/** + * iavf_set_vc_offload_ethertype - set virtchnl ethertype for offload message + * @adapter: adapter structure + * @msg: message structure used for updating offloads over virtchnl to update + * @tpid: VLAN TPID (i.e. 0x8100, 0x88a8, etc.) + * @offload_op: opcode used to determine which support structure to check + */ +static int +iavf_set_vc_offload_ethertype(struct iavf_adapter *adapter, + struct virtchnl_vlan_setting *msg, u16 tpid, + enum virtchnl_ops offload_op) +{ + struct virtchnl_vlan_supported_caps *offload_support; + u16 vc_ethertype = iavf_tpid_to_vc_ethertype(tpid); + + /* reference the correct offload support structure */ + switch (offload_op) { + case VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2: + case VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2: + offload_support = + &adapter->vlan_v2_caps.offloads.stripping_support; + break; + case VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2: + case VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2: + offload_support = + &adapter->vlan_v2_caps.offloads.insertion_support; + break; + default: + dev_err(&adapter->pdev->dev, "Invalid opcode %d for setting virtchnl ethertype to enable/disable VLAN offloads\n", + offload_op); + return -EINVAL; + } + + /* make sure ethertype is supported */ + if (offload_support->outer & vc_ethertype && + offload_support->outer & VIRTCHNL_VLAN_TOGGLE) { + msg->outer_ethertype_setting = vc_ethertype; + } else if (offload_support->inner & vc_ethertype && + offload_support->inner & VIRTCHNL_VLAN_TOGGLE) { + msg->inner_ethertype_setting = vc_ethertype; + } else { + dev_dbg(&adapter->pdev->dev, "opcode %d unsupported for VLAN TPID 0x%04x\n", + offload_op, tpid); + return -EINVAL; + } + + return 0; +} + +/** + * iavf_clear_offload_v2_aq_required - clear AQ required bit for offload request + * @adapter: adapter structure + * @tpid: VLAN TPID + * @offload_op: opcode used to determine which AQ required bit to clear + */ +static void +iavf_clear_offload_v2_aq_required(struct iavf_adapter *adapter, u16 tpid, + enum virtchnl_ops offload_op) +{ + switch (offload_op) { + case VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2: + if (tpid == ETH_P_8021Q) + adapter->aq_required &= + ~IAVF_FLAG_AQ_ENABLE_CTAG_VLAN_STRIPPING; + else if (tpid == ETH_P_8021AD) + adapter->aq_required &= + ~IAVF_FLAG_AQ_ENABLE_STAG_VLAN_STRIPPING; + break; + case VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2: + if (tpid == ETH_P_8021Q) + adapter->aq_required &= + ~IAVF_FLAG_AQ_DISABLE_CTAG_VLAN_STRIPPING; + else if (tpid == ETH_P_8021AD) + adapter->aq_required &= + ~IAVF_FLAG_AQ_DISABLE_STAG_VLAN_STRIPPING; + break; + case VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2: + if (tpid == ETH_P_8021Q) + adapter->aq_required &= + ~IAVF_FLAG_AQ_ENABLE_CTAG_VLAN_INSERTION; + else if (tpid == ETH_P_8021AD) + adapter->aq_required &= + ~IAVF_FLAG_AQ_ENABLE_STAG_VLAN_INSERTION; + break; + case VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2: + if (tpid == ETH_P_8021Q) + adapter->aq_required &= + ~IAVF_FLAG_AQ_DISABLE_CTAG_VLAN_INSERTION; + else if (tpid == ETH_P_8021AD) + adapter->aq_required &= + ~IAVF_FLAG_AQ_DISABLE_STAG_VLAN_INSERTION; + break; + default: + dev_err(&adapter->pdev->dev, "Unsupported opcode %d specified for clearing aq_required bits for VIRTCHNL_VF_OFFLOAD_VLAN_V2 offload request\n", + offload_op); + } +} + +/** + * iavf_send_vlan_offload_v2 - send offload enable/disable over virtchnl + * @adapter: 
adapter structure + * @tpid: VLAN TPID used for the command (i.e. 0x8100 or 0x88a8) + * @offload_op: offload_op used to make the request over virtchnl + */ +static void +iavf_send_vlan_offload_v2(struct iavf_adapter *adapter, u16 tpid, + enum virtchnl_ops offload_op) +{ + struct virtchnl_vlan_setting *msg; + int len = sizeof(*msg); + + if (adapter->current_op != VIRTCHNL_OP_UNKNOWN) { + /* bail because we already have a command pending */ + dev_err(&adapter->pdev->dev, "Cannot send %d, command %d pending\n", + offload_op, adapter->current_op); + return; + } + + adapter->current_op = offload_op; + + msg = kzalloc(len, GFP_KERNEL); + if (!msg) + return; + + msg->vport_id = adapter->vsi_res->vsi_id; + + /* always clear to prevent unsupported and endless requests */ + iavf_clear_offload_v2_aq_required(adapter, tpid, offload_op); + + /* only send valid offload requests */ + if (!iavf_set_vc_offload_ethertype(adapter, msg, tpid, offload_op)) + iavf_send_pf_msg(adapter, offload_op, (u8 *)msg, len); + else + adapter->current_op = VIRTCHNL_OP_UNKNOWN; + + kfree(msg); +} + +/** + * iavf_enable_vlan_stripping_v2 - enable VLAN stripping + * @adapter: adapter structure + * @tpid: VLAN TPID used to enable VLAN stripping + */ +void iavf_enable_vlan_stripping_v2(struct iavf_adapter *adapter, u16 tpid) +{ + iavf_send_vlan_offload_v2(adapter, tpid, + VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2); +} + +/** + * iavf_disable_vlan_stripping_v2 - disable VLAN stripping + * @adapter: adapter structure + * @tpid: VLAN TPID used to disable VLAN stripping + */ +void iavf_disable_vlan_stripping_v2(struct iavf_adapter *adapter, u16 tpid) +{ + iavf_send_vlan_offload_v2(adapter, tpid, + VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2); +} + +/** + * iavf_enable_vlan_insertion_v2 - enable VLAN insertion + * @adapter: adapter structure + * @tpid: VLAN TPID used to enable VLAN insertion + */ +void iavf_enable_vlan_insertion_v2(struct iavf_adapter *adapter, u16 tpid) +{ + iavf_send_vlan_offload_v2(adapter, tpid, + VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2); +} + +/** + * iavf_disable_vlan_insertion_v2 - disable VLAN insertion + * @adapter: adapter structure + * @tpid: VLAN TPID used to disable VLAN insertion + */ +void iavf_disable_vlan_insertion_v2(struct iavf_adapter *adapter, u16 tpid) +{ + iavf_send_vlan_offload_v2(adapter, tpid, + VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2); +} + #define IAVF_MAX_SPEED_STRLEN 13 /** @@ -1962,6 +2160,11 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter, dev_warn(&adapter->pdev->dev, "failed to acquire crit_lock in %s\n", __FUNCTION__); + /* Request VLAN offload settings */ + if (VLAN_V2_ALLOWED(adapter)) + iavf_set_vlan_offload_features(adapter, 0, + netdev->features); + iavf_set_queue_vlan_tag_loc(adapter); } -- 2.20.1 From anthony.l.nguyen at intel.com Mon Nov 29 19:22:58 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Mon, 29 Nov 2021 11:22:58 -0800 Subject: [Intel-wired-lan] [PATCH net-next 4/6] iavf: Add support for VIRTCHNL_VF_OFFLOAD_VLAN_V2 hotpath In-Reply-To: <20211129192300.14188-1-anthony.l.nguyen@intel.com> References: <20211129192300.14188-1-anthony.l.nguyen@intel.com> Message-ID: <20211129192300.14188-5-anthony.l.nguyen@intel.com> From: Brett Creeley The new VIRTCHNL_VF_OFFLOAD_VLAN_V2 capability added support that allows the PF to set the location of the Tx and Rx VLAN tag for insertion and stripping offloads. In order to support this functionality a few changes are needed. 1. 
Add a new method to cache the VLAN tag location based on negotiated capabilities for the Tx and Rx ring flags. This needs to be called in the initialization and reset paths. 2. Refactor the transmit hotpath to account for the new Tx ring flags. When IAVF_TXR_FLAGS_VLAN_LOC_L2TAG2 is set, then the driver needs to insert the VLAN tag in the L2TAG2 field of the transmit descriptor. When the IAVF_TXRX_FLAGS_VLAN_LOC_L2TAG1 is set, then the driver needs to use the l2tag1 field of the data descriptor (same behavior as before). 3. Refactor the iavf_tx_prepare_vlan_flags() function to simplify transmit hardware VLAN offload functionality by only depending on the skb_vlan_tag_present() function. This can be done because the OS won't request transmit offload for a VLAN unless the driver told the OS it's supported and enabled. 4. Refactor the receive hotpath to account for the new Rx ring flags and VLAN ethertypes. This requires checking the Rx ring flags and descriptor status bits to determine the location of the VLAN tag. Also, since only a single ethertype can be supported at a time, check the enabled netdev features before specifying a VLAN ethertype in __vlan_hwaccel_put_tag(). Signed-off-by: Brett Creeley --- drivers/net/ethernet/intel/iavf/iavf.h | 1 + drivers/net/ethernet/intel/iavf/iavf_main.c | 82 +++++++++++++++++++ drivers/net/ethernet/intel/iavf/iavf_txrx.c | 72 ++++++++-------- drivers/net/ethernet/intel/iavf/iavf_txrx.h | 30 ++++--- .../net/ethernet/intel/iavf/iavf_virtchnl.c | 2 + 5 files changed, 136 insertions(+), 51 deletions(-) diff --git a/drivers/net/ethernet/intel/iavf/iavf.h b/drivers/net/ethernet/intel/iavf/iavf.h index 5fb6ebf9a760..2660d46da1b5 100644 --- a/drivers/net/ethernet/intel/iavf/iavf.h +++ b/drivers/net/ethernet/intel/iavf/iavf.h @@ -488,6 +488,7 @@ int iavf_send_vf_config_msg(struct iavf_adapter *adapter); int iavf_get_vf_config(struct iavf_adapter *adapter); int iavf_get_vf_vlan_v2_caps(struct iavf_adapter *adapter); int iavf_send_vf_offload_vlan_v2_msg(struct iavf_adapter *adapter); +void iavf_set_queue_vlan_tag_loc(struct iavf_adapter *adapter); void iavf_irq_enable(struct iavf_adapter *adapter, bool flush); void iavf_configure_queues(struct iavf_adapter *adapter); void iavf_deconfigure_queues(struct iavf_adapter *adapter); diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c index fe7b68b0bbb7..49c19fd08d64 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_main.c +++ b/drivers/net/ethernet/intel/iavf/iavf_main.c @@ -1165,6 +1165,86 @@ static void iavf_free_queues(struct iavf_adapter *adapter) adapter->rx_rings = NULL; } +/** + * iavf_set_queue_vlan_tag_loc - set location for VLAN tag offload + * @adapter: board private structure + * + * Based on negotiated capabilities, the VLAN tag needs to be inserted and/or + * stripped in certain descriptor fields. Instead of checking the offload + * capability bits in the hot path, cache the location the ring specific + * flags. 
+ */ +void iavf_set_queue_vlan_tag_loc(struct iavf_adapter *adapter) +{ + int i; + + for (i = 0; i < adapter->num_active_queues; i++) { + struct iavf_ring *tx_ring = &adapter->tx_rings[i]; + struct iavf_ring *rx_ring = &adapter->rx_rings[i]; + + /* prevent multiple L2TAG bits being set after VFR */ + tx_ring->flags &= + ~(IAVF_TXRX_FLAGS_VLAN_TAG_LOC_L2TAG1 | + IAVF_TXR_FLAGS_VLAN_TAG_LOC_L2TAG2); + rx_ring->flags &= + ~(IAVF_TXRX_FLAGS_VLAN_TAG_LOC_L2TAG1 | + IAVF_RXR_FLAGS_VLAN_TAG_LOC_L2TAG2_2); + + if (VLAN_ALLOWED(adapter)) { + tx_ring->flags |= IAVF_TXRX_FLAGS_VLAN_TAG_LOC_L2TAG1; + rx_ring->flags |= IAVF_TXRX_FLAGS_VLAN_TAG_LOC_L2TAG1; + } else if (VLAN_V2_ALLOWED(adapter)) { + struct virtchnl_vlan_supported_caps *stripping_support; + struct virtchnl_vlan_supported_caps *insertion_support; + + stripping_support = + &adapter->vlan_v2_caps.offloads.stripping_support; + insertion_support = + &adapter->vlan_v2_caps.offloads.insertion_support; + + if (stripping_support->outer) { + if (stripping_support->outer & + VIRTCHNL_VLAN_TAG_LOCATION_L2TAG1) + rx_ring->flags |= + IAVF_TXRX_FLAGS_VLAN_TAG_LOC_L2TAG1; + else if (stripping_support->outer & + VIRTCHNL_VLAN_TAG_LOCATION_L2TAG2_2) + rx_ring->flags |= + IAVF_RXR_FLAGS_VLAN_TAG_LOC_L2TAG2_2; + } else if (stripping_support->inner) { + if (stripping_support->inner & + VIRTCHNL_VLAN_TAG_LOCATION_L2TAG1) + rx_ring->flags |= + IAVF_TXRX_FLAGS_VLAN_TAG_LOC_L2TAG1; + else if (stripping_support->inner & + VIRTCHNL_VLAN_TAG_LOCATION_L2TAG2_2) + rx_ring->flags |= + IAVF_RXR_FLAGS_VLAN_TAG_LOC_L2TAG2_2; + } + + if (insertion_support->outer) { + if (insertion_support->outer & + VIRTCHNL_VLAN_TAG_LOCATION_L2TAG1) + tx_ring->flags |= + IAVF_TXRX_FLAGS_VLAN_TAG_LOC_L2TAG1; + else if (insertion_support->outer & + VIRTCHNL_VLAN_TAG_LOCATION_L2TAG2) + tx_ring->flags |= + IAVF_TXR_FLAGS_VLAN_TAG_LOC_L2TAG2; + } else if (insertion_support->inner) { + if (insertion_support->inner & + VIRTCHNL_VLAN_TAG_LOCATION_L2TAG1) + tx_ring->flags |= + IAVF_TXRX_FLAGS_VLAN_TAG_LOC_L2TAG1; + else if (insertion_support->inner & + VIRTCHNL_VLAN_TAG_LOCATION_L2TAG2) + tx_ring->flags |= + IAVF_TXR_FLAGS_VLAN_TAG_LOC_L2TAG2; + } + } + } +} + /** * iavf_alloc_queues - Allocate memory for all rings * @adapter: board private structure to initialize @@ -1226,6 +1306,8 @@ static int iavf_alloc_queues(struct iavf_adapter *adapter) adapter->num_active_queues = num_active_queues; + iavf_set_queue_vlan_tag_loc(adapter); + return 0; err_out: diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c index 8f2376d17466..0032425ff6c6 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c +++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c @@ -865,6 +865,9 @@ static void iavf_receive_skb(struct iavf_ring *rx_ring, if ((rx_ring->netdev->features & NETIF_F_HW_VLAN_CTAG_RX) && (vlan_tag & VLAN_VID_MASK)) __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), vlan_tag); + else if ((rx_ring->netdev->features & NETIF_F_HW_VLAN_STAG_RX) && + vlan_tag & VLAN_VID_MASK) + __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021AD), vlan_tag); napi_gro_receive(&q_vector->napi, skb); } @@ -1468,7 +1471,7 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget) struct iavf_rx_buffer *rx_buffer; union iavf_rx_desc *rx_desc; unsigned int size; - u16 vlan_tag; + u16 vlan_tag = 0; u8 rx_ptype; u64 qword; @@ -1551,9 +1554,13 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget) /* populate checksum, VLAN, and protocol */ iavf_process_skb_fields(rx_ring, 
rx_desc, skb, rx_ptype); - - vlan_tag = (qword & BIT(IAVF_RX_DESC_STATUS_L2TAG1P_SHIFT)) ? - le16_to_cpu(rx_desc->wb.qword0.lo_dword.l2tag1) : 0; + if (qword & BIT(IAVF_RX_DESC_STATUS_L2TAG1P_SHIFT) && + rx_ring->flags & IAVF_TXRX_FLAGS_VLAN_TAG_LOC_L2TAG1) + vlan_tag = le16_to_cpu(rx_desc->wb.qword0.lo_dword.l2tag1); + if (rx_desc->wb.qword2.ext_status & + cpu_to_le16(BIT(IAVF_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT)) && + rx_ring->flags & IAVF_RXR_FLAGS_VLAN_TAG_LOC_L2TAG2_2) + vlan_tag = le16_to_cpu(rx_desc->wb.qword2.l2tag2_2); iavf_trace(clean_rx_irq_rx, rx_ring, rx_desc, skb); iavf_receive_skb(rx_ring, skb, vlan_tag); @@ -1781,46 +1788,30 @@ int iavf_napi_poll(struct napi_struct *napi, int budget) * Returns error code indicate the frame should be dropped upon error and the * otherwise returns 0 to indicate the flags has been set properly. **/ -static inline int iavf_tx_prepare_vlan_flags(struct sk_buff *skb, - struct iavf_ring *tx_ring, - u32 *flags) +static inline void iavf_tx_prepare_vlan_flags(struct sk_buff *skb, + struct iavf_ring *tx_ring, + u32 *flags) { - __be16 protocol = skb->protocol; u32 tx_flags = 0; - if (protocol == htons(ETH_P_8021Q) && - !(tx_ring->netdev->features & NETIF_F_HW_VLAN_CTAG_TX)) { - /* When HW VLAN acceleration is turned off by the user the - * stack sets the protocol to 8021q so that the driver - * can take any steps required to support the SW only - * VLAN handling. In our case the driver doesn't need - * to take any further steps so just set the protocol - * to the encapsulated ethertype. - */ - skb->protocol = vlan_get_protocol(skb); - goto out; - } - /* if we have a HW VLAN tag being added, default to the HW one */ - if (skb_vlan_tag_present(skb)) { - tx_flags |= skb_vlan_tag_get(skb) << IAVF_TX_FLAGS_VLAN_SHIFT; - tx_flags |= IAVF_TX_FLAGS_HW_VLAN; - /* else if it is a SW VLAN, check the next protocol and store the tag */ - } else if (protocol == htons(ETH_P_8021Q)) { - struct vlan_hdr *vhdr, _vhdr; - - vhdr = skb_header_pointer(skb, ETH_HLEN, sizeof(_vhdr), &_vhdr); - if (!vhdr) - return -EINVAL; + /* stack will only request hardware VLAN insertion offload for protocols + * that the driver supports and has enabled + */ + if (!skb_vlan_tag_present(skb)) + return; - protocol = vhdr->h_vlan_encapsulated_proto; - tx_flags |= ntohs(vhdr->h_vlan_TCI) << IAVF_TX_FLAGS_VLAN_SHIFT; - tx_flags |= IAVF_TX_FLAGS_SW_VLAN; + tx_flags |= skb_vlan_tag_get(skb) << IAVF_TX_FLAGS_VLAN_SHIFT; + if (tx_ring->flags & IAVF_TXR_FLAGS_VLAN_TAG_LOC_L2TAG2) { + tx_flags |= IAVF_TX_FLAGS_HW_OUTER_SINGLE_VLAN; + } else if (tx_ring->flags & IAVF_TXRX_FLAGS_VLAN_TAG_LOC_L2TAG1) { + tx_flags |= IAVF_TX_FLAGS_HW_VLAN; + } else { + dev_dbg(tx_ring->dev, "Unsupported Tx VLAN tag location requested\n"); + return; } -out: *flags = tx_flags; - return 0; } /** @@ -2440,8 +2431,13 @@ static netdev_tx_t iavf_xmit_frame_ring(struct sk_buff *skb, first->gso_segs = 1; /* prepare the xmit flags */ - if (iavf_tx_prepare_vlan_flags(skb, tx_ring, &tx_flags)) - goto out_drop; + iavf_tx_prepare_vlan_flags(skb, tx_ring, &tx_flags); + if (tx_flags & IAVF_TX_FLAGS_HW_OUTER_SINGLE_VLAN) { + cd_type_cmd_tso_mss |= IAVF_TX_CTX_DESC_IL2TAG2 << + IAVF_TXD_CTX_QW1_CMD_SHIFT; + cd_l2tag2 = (tx_flags & IAVF_TX_FLAGS_VLAN_MASK) >> + IAVF_TX_FLAGS_VLAN_SHIFT; + } /* obtain protocol of skb */ protocol = vlan_get_protocol(skb); diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.h b/drivers/net/ethernet/intel/iavf/iavf_txrx.h index e5b9ba42dd00..2624bf6d009e 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_txrx.h 
+++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.h @@ -243,19 +243,20 @@ static inline unsigned int iavf_txd_use_count(unsigned int size) #define DESC_NEEDED (MAX_SKB_FRAGS + 6) #define IAVF_MIN_DESC_PENDING 4 -#define IAVF_TX_FLAGS_HW_VLAN BIT(1) -#define IAVF_TX_FLAGS_SW_VLAN BIT(2) -#define IAVF_TX_FLAGS_TSO BIT(3) -#define IAVF_TX_FLAGS_IPV4 BIT(4) -#define IAVF_TX_FLAGS_IPV6 BIT(5) -#define IAVF_TX_FLAGS_FCCRC BIT(6) -#define IAVF_TX_FLAGS_FSO BIT(7) -#define IAVF_TX_FLAGS_FD_SB BIT(9) -#define IAVF_TX_FLAGS_VXLAN_TUNNEL BIT(10) -#define IAVF_TX_FLAGS_VLAN_MASK 0xffff0000 -#define IAVF_TX_FLAGS_VLAN_PRIO_MASK 0xe0000000 -#define IAVF_TX_FLAGS_VLAN_PRIO_SHIFT 29 -#define IAVF_TX_FLAGS_VLAN_SHIFT 16 +#define IAVF_TX_FLAGS_HW_VLAN BIT(1) +#define IAVF_TX_FLAGS_SW_VLAN BIT(2) +#define IAVF_TX_FLAGS_TSO BIT(3) +#define IAVF_TX_FLAGS_IPV4 BIT(4) +#define IAVF_TX_FLAGS_IPV6 BIT(5) +#define IAVF_TX_FLAGS_FCCRC BIT(6) +#define IAVF_TX_FLAGS_FSO BIT(7) +#define IAVF_TX_FLAGS_FD_SB BIT(9) +#define IAVF_TX_FLAGS_VXLAN_TUNNEL BIT(10) +#define IAVF_TX_FLAGS_HW_OUTER_SINGLE_VLAN BIT(11) +#define IAVF_TX_FLAGS_VLAN_MASK 0xffff0000 +#define IAVF_TX_FLAGS_VLAN_PRIO_MASK 0xe0000000 +#define IAVF_TX_FLAGS_VLAN_PRIO_SHIFT 29 +#define IAVF_TX_FLAGS_VLAN_SHIFT 16 struct iavf_tx_buffer { struct iavf_tx_desc *next_to_watch; @@ -362,6 +363,9 @@ struct iavf_ring { u16 flags; #define IAVF_TXR_FLAGS_WB_ON_ITR BIT(0) #define IAVF_RXR_FLAGS_BUILD_SKB_ENABLED BIT(1) +#define IAVF_TXRX_FLAGS_VLAN_TAG_LOC_L2TAG1 BIT(3) +#define IAVF_TXR_FLAGS_VLAN_TAG_LOC_L2TAG2 BIT(4) +#define IAVF_RXR_FLAGS_VLAN_TAG_LOC_L2TAG2_2 BIT(5) /* stats structs */ struct iavf_queue_stats stats; diff --git a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c index 2dc1c435223c..613fcc491fd7 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c +++ b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c @@ -1962,6 +1962,8 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter, dev_warn(&adapter->pdev->dev, "failed to acquire crit_lock in %s\n", __FUNCTION__); + iavf_set_queue_vlan_tag_loc(adapter); + } break; case VIRTCHNL_OP_ENABLE_QUEUES: -- 2.20.1 From anthony.l.nguyen at intel.com Mon Nov 29 19:22:55 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Mon, 29 Nov 2021 11:22:55 -0800 Subject: [Intel-wired-lan] [PATCH net-next 1/6] virtchnl: Add support for new VLAN capabilities In-Reply-To: <20211129192300.14188-1-anthony.l.nguyen@intel.com> References: <20211129192300.14188-1-anthony.l.nguyen@intel.com> Message-ID: <20211129192300.14188-2-anthony.l.nguyen@intel.com> From: Brett Creeley Currently VIRTCHNL only allows for VLAN filtering and offloads to happen on a single 802.1Q VLAN. Add support to filter and offload on inner, outer, and/or inner + outer VLANs. This is done by introducing the new capability VIRTCHNL_VF_OFFLOAD_VLAN_V2. The flow to negotiate this new capability is shown below. 1. VF - sets the VIRTCHNL_VF_OFFLOAD_VLAN_V2 bit in the virtchnl_vf_resource.vf_caps_flags during the VIRTCHNL_OP_GET_VF_RESOURCES request message. The VF should also set the VIRTCHNL_VF_OFFLOAD_VLAN bit in case the PF driver doesn't support the new capability. 2. PF - sets the VLAN capability bit it supports in the VIRTCHNL_OP_GET_VF_RESOURCES response message. This will either be VIRTCHNL_VF_OFFLOAD_VLAN_V2, VIRTCHNL_VF_OFFLOAD_VLAN, or none. 3. 
VF - If the VIRTCHNL_VF_OFFLOAD_VLAN_V2 capability was ACK'd by the PF, then the VF needs to request the VLAN capabilities of the PF/Device by issuing a VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS request. If the VIRTCHNL_VF_OFFLOAD_VLAN capability was ACK'd then the VF knows only single 802.1Q VLAN filtering/offloads are supported. If no VLAN capability is ACK'd then the PF/Device doesn't support hardware VLAN filtering/offloads for this VF. 4. PF - Populates the virtchnl_vlan_caps structure based on what it allows/supports for that VF and sends that response via VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS. After VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS is successfully negotiated the VF driver needs to interpret the capabilities supported by the underlying PF/Device. The VF will be allowed to filter/offload the inner 802.1Q, outer (various ethertype), inner 802.1Q + outer (various ethertypes), or none based on which fields are set. The VF will also need to interpret where the VLAN tag should be inserted and/or stripped based on the negotiated capabilities. Signed-off-by: Brett Creeley --- include/linux/avf/virtchnl.h | 378 +++++++++++++++++++++++++++++++++++ 1 file changed, 378 insertions(+) diff --git a/include/linux/avf/virtchnl.h b/include/linux/avf/virtchnl.h index b30a1bc74fc7..800e1005c72a 100644 --- a/include/linux/avf/virtchnl.h +++ b/include/linux/avf/virtchnl.h @@ -141,6 +141,13 @@ enum virtchnl_ops { VIRTCHNL_OP_DEL_RSS_CFG = 46, VIRTCHNL_OP_ADD_FDIR_FILTER = 47, VIRTCHNL_OP_DEL_FDIR_FILTER = 48, + VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS = 51, + VIRTCHNL_OP_ADD_VLAN_V2 = 52, + VIRTCHNL_OP_DEL_VLAN_V2 = 53, + VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 = 54, + VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2 = 55, + VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2 = 56, + VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2 = 57, VIRTCHNL_OP_MAX, }; @@ -246,6 +253,7 @@ VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_vsi_resource); #define VIRTCHNL_VF_OFFLOAD_REQ_QUEUES BIT(6) /* used to negotiate communicating link speeds in Mbps */ #define VIRTCHNL_VF_CAP_ADV_LINK_SPEED BIT(7) +#define VIRTCHNL_VF_OFFLOAD_VLAN_V2 BIT(15) #define VIRTCHNL_VF_OFFLOAD_VLAN BIT(16) #define VIRTCHNL_VF_OFFLOAD_RX_POLLING BIT(17) #define VIRTCHNL_VF_OFFLOAD_RSS_PCTYPE_V2 BIT(18) @@ -475,6 +483,351 @@ struct virtchnl_vlan_filter_list { VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_vlan_filter_list); +/* This enum is used for all of the VIRTCHNL_VF_OFFLOAD_VLAN_V2_CAPS related + * structures and opcodes. + * + * VIRTCHNL_VLAN_UNSUPPORTED - This field is not supported and if a VF driver + * populates it the PF should return VIRTCHNL_STATUS_ERR_NOT_SUPPORTED. + * + * VIRTCHNL_VLAN_ETHERTYPE_8100 - This field supports 0x8100 ethertype. + * VIRTCHNL_VLAN_ETHERTYPE_88A8 - This field supports 0x88A8 ethertype. + * VIRTCHNL_VLAN_ETHERTYPE_9100 - This field supports 0x9100 ethertype. + * + * VIRTCHNL_VLAN_ETHERTYPE_AND - Used when multiple ethertypes can be supported + * by the PF concurrently. For example, if the PF can support + * VIRTCHNL_VLAN_ETHERTYPE_8100 AND VIRTCHNL_VLAN_ETHERTYPE_88A8 filters it + * would OR the following bits: + * + * VIRTHCNL_VLAN_ETHERTYPE_8100 | + * VIRTCHNL_VLAN_ETHERTYPE_88A8 | + * VIRTCHNL_VLAN_ETHERTYPE_AND; + * + * The VF would interpret this as VLAN filtering can be supported on both 0x8100 + * and 0x88A8 VLAN ethertypes. + * + * VIRTCHNL_ETHERTYPE_XOR - Used when only a single ethertype can be supported + * by the PF concurrently. 
For example if the PF can support + * VIRTCHNL_VLAN_ETHERTYPE_8100 XOR VIRTCHNL_VLAN_ETHERTYPE_88A8 stripping + * offload it would OR the following bits: + * + * VIRTCHNL_VLAN_ETHERTYPE_8100 | + * VIRTCHNL_VLAN_ETHERTYPE_88A8 | + * VIRTCHNL_VLAN_ETHERTYPE_XOR; + * + * The VF would interpret this as VLAN stripping can be supported on either + * 0x8100 or 0x88a8 VLAN ethertypes. So when requesting VLAN stripping via + * VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 the specified ethertype will override + * the previously set value. + * + * VIRTCHNL_VLAN_TAG_LOCATION_L2TAG1 - Used to tell the VF to insert and/or + * strip the VLAN tag using the L2TAG1 field of the Tx/Rx descriptors. + * + * VIRTCHNL_VLAN_TAG_LOCATION_L2TAG2 - Used to tell the VF to insert hardware + * offloaded VLAN tags using the L2TAG2 field of the Tx descriptor. + * + * VIRTCHNL_VLAN_TAG_LOCATION_L2TAG2 - Used to tell the VF to strip hardware + * offloaded VLAN tags using the L2TAG2_2 field of the Rx descriptor. + * + * VIRTCHNL_VLAN_PRIO - This field supports VLAN priority bits. This is used for + * VLAN filtering if the underlying PF supports it. + * + * VIRTCHNL_VLAN_TOGGLE_ALLOWED - This field is used to say whether a + * certain VLAN capability can be toggled. For example if the underlying PF/CP + * allows the VF to toggle VLAN filtering, stripping, and/or insertion it should + * set this bit along with the supported ethertypes. + */ +enum virtchnl_vlan_support { + VIRTCHNL_VLAN_UNSUPPORTED = 0, + VIRTCHNL_VLAN_ETHERTYPE_8100 = BIT(0), + VIRTCHNL_VLAN_ETHERTYPE_88A8 = BIT(1), + VIRTCHNL_VLAN_ETHERTYPE_9100 = BIT(2), + VIRTCHNL_VLAN_TAG_LOCATION_L2TAG1 = BIT(8), + VIRTCHNL_VLAN_TAG_LOCATION_L2TAG2 = BIT(9), + VIRTCHNL_VLAN_TAG_LOCATION_L2TAG2_2 = BIT(10), + VIRTCHNL_VLAN_PRIO = BIT(24), + VIRTCHNL_VLAN_FILTER_MASK = BIT(28), + VIRTCHNL_VLAN_ETHERTYPE_AND = BIT(29), + VIRTCHNL_VLAN_ETHERTYPE_XOR = BIT(30), + VIRTCHNL_VLAN_TOGGLE = BIT(31), +}; + +/* This structure is used as part of the VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS + * for filtering, insertion, and stripping capabilities. + * + * If only outer capabilities are supported (for filtering, insertion, and/or + * stripping) then this refers to the outer most or single VLAN from the VF's + * perspective. + * + * If only inner capabilities are supported (for filtering, insertion, and/or + * stripping) then this refers to the outer most or single VLAN from the VF's + * perspective. Functionally this is the same as if only outer capabilities are + * supported. The VF driver is just forced to use the inner fields when + * adding/deleting filters and enabling/disabling offloads (if supported). + * + * If both outer and inner capabilities are supported (for filtering, insertion, + * and/or stripping) then outer refers to the outer most or single VLAN and + * inner refers to the second VLAN, if it exists, in the packet. + * + * There is no support for tunneled VLAN offloads, so outer or inner are never + * referring to a tunneled packet from the VF's perspective. + */ +struct virtchnl_vlan_supported_caps { + u32 outer; + u32 inner; +}; + +/* The PF populates these fields based on the supported VLAN filtering. If a + * field is VIRTCHNL_VLAN_UNSUPPORTED then it's not supported and the PF will + * reject any VIRTCHNL_OP_ADD_VLAN_V2 or VIRTCHNL_OP_DEL_VLAN_V2 messages using + * the unsupported fields. + * + * Also, a VF is only allowed to toggle its VLAN filtering setting if the + * VIRTCHNL_VLAN_TOGGLE bit is set. 
+ * + * The ethertype(s) specified in the ethertype_init field are the ethertypes + * enabled for VLAN filtering. VLAN filtering in this case refers to the outer + * most VLAN from the VF's perspective. If both inner and outer filtering are + * allowed then ethertype_init only refers to the outer most VLAN as only + * VLAN ethertype supported for inner VLAN filtering is + * VIRTCHNL_VLAN_ETHERTYPE_8100. By default, inner VLAN filtering is disabled + * when both inner and outer filtering are allowed. + * + * The max_filters field tells the VF how many VLAN filters it's allowed to have + * at any one time. If it exceeds this amount and tries to add another filter, + * then the request will be rejected by the PF. To prevent failures, the VF + * should keep track of how many VLAN filters it has added and not attempt to + * add more than max_filters. + */ +struct virtchnl_vlan_filtering_caps { + struct virtchnl_vlan_supported_caps filtering_support; + u32 ethertype_init; + u16 max_filters; + u8 pad[2]; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_vlan_filtering_caps); + +/* This enum is used for the virtchnl_vlan_offload_caps structure to specify + * if the PF supports a different ethertype for stripping and insertion. + * + * VIRTCHNL_ETHERTYPE_STRIPPING_MATCHES_INSERTION - The ethertype(s) specified + * for stripping affect the ethertype(s) specified for insertion and visa versa + * as well. If the VF tries to configure VLAN stripping via + * VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 with VIRTCHNL_VLAN_ETHERTYPE_8100 then + * that will be the ethertype for both stripping and insertion. + * + * VIRTCHNL_ETHERTYPE_MATCH_NOT_REQUIRED - The ethertype(s) specified for + * stripping do not affect the ethertype(s) specified for insertion and visa + * versa. + */ +enum virtchnl_vlan_ethertype_match { + VIRTCHNL_ETHERTYPE_STRIPPING_MATCHES_INSERTION = 0, + VIRTCHNL_ETHERTYPE_MATCH_NOT_REQUIRED = 1, +}; + +/* The PF populates these fields based on the supported VLAN offloads. If a + * field is VIRTCHNL_VLAN_UNSUPPORTED then it's not supported and the PF will + * reject any VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 or + * VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2 messages using the unsupported fields. + * + * Also, a VF is only allowed to toggle its VLAN offload setting if the + * VIRTCHNL_VLAN_TOGGLE_ALLOWED bit is set. + * + * The VF driver needs to be aware of how the tags are stripped by hardware and + * inserted by the VF driver based on the level of offload support. The PF will + * populate these fields based on where the VLAN tags are expected to be + * offloaded via the VIRTHCNL_VLAN_TAG_LOCATION_* bits. The VF will need to + * interpret these fields. See the definition of the + * VIRTCHNL_VLAN_TAG_LOCATION_* bits above the virtchnl_vlan_support + * enumeration. + */ +struct virtchnl_vlan_offload_caps { + struct virtchnl_vlan_supported_caps stripping_support; + struct virtchnl_vlan_supported_caps insertion_support; + u32 ethertype_init; + u8 ethertype_match; + u8 pad[3]; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(24, virtchnl_vlan_offload_caps); + +/* VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS + * VF sends this message to determine its VLAN capabilities. + * + * PF will mark which capabilities it supports based on hardware support and + * current configuration. For example, if a port VLAN is configured the PF will + * not allow outer VLAN filtering, stripping, or insertion to be configured so + * it will block these features from the VF. 
+ * + * The VF will need to cross reference its capabilities with the PFs + * capabilities in the response message from the PF to determine the VLAN + * support. + */ +struct virtchnl_vlan_caps { + struct virtchnl_vlan_filtering_caps filtering; + struct virtchnl_vlan_offload_caps offloads; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(40, virtchnl_vlan_caps); + +struct virtchnl_vlan { + u16 tci; /* tci[15:13] = PCP and tci[11:0] = VID */ + u16 tci_mask; /* only valid if VIRTCHNL_VLAN_FILTER_MASK set in + * filtering caps + */ + u16 tpid; /* 0x8100, 0x88a8, etc. and only type(s) set in + * filtering caps. Note that tpid here does not refer to + * VIRTCHNL_VLAN_ETHERTYPE_*, but it refers to the + * actual 2-byte VLAN TPID + */ + u8 pad[2]; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_vlan); + +struct virtchnl_vlan_filter { + struct virtchnl_vlan inner; + struct virtchnl_vlan outer; + u8 pad[16]; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(32, virtchnl_vlan_filter); + +/* VIRTCHNL_OP_ADD_VLAN_V2 + * VIRTCHNL_OP_DEL_VLAN_V2 + * + * VF sends these messages to add/del one or more VLAN tag filters for Rx + * traffic. + * + * The PF attempts to add the filters and returns status. + * + * The VF should only ever attempt to add/del virtchnl_vlan_filter(s) using the + * supported fields negotiated via VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS. + */ +struct virtchnl_vlan_filter_list_v2 { + u16 vport_id; + u16 num_elements; + u8 pad[4]; + struct virtchnl_vlan_filter filters[1]; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(40, virtchnl_vlan_filter_list_v2); + +/* VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 + * VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2 + * VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2 + * VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2 + * + * VF sends this message to enable or disable VLAN stripping or insertion. It + * also needs to specify an ethertype. The VF knows which VLAN ethertypes are + * allowed and whether or not it's allowed to enable/disable the specific + * offload via the VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS message. The VF needs to + * parse the virtchnl_vlan_caps.offloads fields to determine which offload + * messages are allowed. + * + * For example, if the PF populates the virtchnl_vlan_caps.offloads in the + * following manner the VF will be allowed to enable and/or disable 0x8100 inner + * VLAN insertion and/or stripping via the opcodes listed above. Inner in this + * case means the outer most or single VLAN from the VF's perspective. This is + * because no outer offloads are supported. See the comments above the + * virtchnl_vlan_supported_caps structure for more details. + * + * virtchnl_vlan_caps.offloads.stripping_support.inner = + * VIRTCHNL_VLAN_TOGGLE | + * VIRTCHNL_VLAN_ETHERTYPE_8100; + * + * virtchnl_vlan_caps.offloads.insertion_support.inner = + * VIRTCHNL_VLAN_TOGGLE | + * VIRTCHNL_VLAN_ETHERTYPE_8100; + * + * In order to enable inner (again note that in this case inner is the outer + * most or single VLAN from the VF's perspective) VLAN stripping for 0x8100 + * VLANs, the VF would populate the virtchnl_vlan_setting structure in the + * following manner and send the VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 message. + * + * virtchnl_vlan_setting.inner_ethertype_setting = + * VIRTCHNL_VLAN_ETHERTYPE_8100; + * + * virtchnl_vlan_setting.vport_id = vport_id or vsi_id assigned to the VF on + * initialization. 
+ * + * The reason that VLAN TPID(s) are not being used for the + * outer_ethertype_setting and inner_ethertype_setting fields is because it's + * possible a device could support VLAN insertion and/or stripping offload on + * multiple ethertypes concurrently, so this method allows a VF to request + * multiple ethertypes in one message using the virtchnl_vlan_support + * enumeration. + * + * For example, if the PF populates the virtchnl_vlan_caps.offloads in the + * following manner the VF will be allowed to enable 0x8100 and 0x88a8 outer + * VLAN insertion and stripping simultaneously. The + * virtchnl_vlan_caps.offloads.ethertype_match field will also have to be + * populated based on what the PF can support. + * + * virtchnl_vlan_caps.offloads.stripping_support.outer = + * VIRTCHNL_VLAN_TOGGLE | + * VIRTCHNL_VLAN_ETHERTYPE_8100 | + * VIRTCHNL_VLAN_ETHERTYPE_88A8 | + * VIRTCHNL_VLAN_ETHERTYPE_AND; + * + * virtchnl_vlan_caps.offloads.insertion_support.outer = + * VIRTCHNL_VLAN_TOGGLE | + * VIRTCHNL_VLAN_ETHERTYPE_8100 | + * VIRTCHNL_VLAN_ETHERTYPE_88A8 | + * VIRTCHNL_VLAN_ETHERTYPE_AND; + * + * In order to enable outer VLAN stripping for 0x8100 and 0x88a8 VLANs, the VF + * would populate the virthcnl_vlan_offload_structure in the following manner + * and send the VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 message. + * + * virtchnl_vlan_setting.outer_ethertype_setting = + * VIRTHCNL_VLAN_ETHERTYPE_8100 | + * VIRTHCNL_VLAN_ETHERTYPE_88A8; + * + * virtchnl_vlan_setting.vport_id = vport_id or vsi_id assigned to the VF on + * initialization. + * + * There is also the case where a PF and the underlying hardware can support + * VLAN offloads on multiple ethertypes, but not concurrently. For example, if + * the PF populates the virtchnl_vlan_caps.offloads in the following manner the + * VF will be allowed to enable and/or disable 0x8100 XOR 0x88a8 outer VLAN + * offloads. The ethertypes must match for stripping and insertion. + * + * virtchnl_vlan_caps.offloads.stripping_support.outer = + * VIRTCHNL_VLAN_TOGGLE | + * VIRTCHNL_VLAN_ETHERTYPE_8100 | + * VIRTCHNL_VLAN_ETHERTYPE_88A8 | + * VIRTCHNL_VLAN_ETHERTYPE_XOR; + * + * virtchnl_vlan_caps.offloads.insertion_support.outer = + * VIRTCHNL_VLAN_TOGGLE | + * VIRTCHNL_VLAN_ETHERTYPE_8100 | + * VIRTCHNL_VLAN_ETHERTYPE_88A8 | + * VIRTCHNL_VLAN_ETHERTYPE_XOR; + * + * virtchnl_vlan_caps.offloads.ethertype_match = + * VIRTCHNL_ETHERTYPE_STRIPPING_MATCHES_INSERTION; + * + * In order to enable outer VLAN stripping for 0x88a8 VLANs, the VF would + * populate the virtchnl_vlan_setting structure in the following manner and send + * the VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2. Also, this will change the + * ethertype for VLAN insertion if it's enabled. So, for completeness, a + * VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2 with the same ethertype should be sent. + * + * virtchnl_vlan_setting.outer_ethertype_setting = VIRTHCNL_VLAN_ETHERTYPE_88A8; + * + * virtchnl_vlan_setting.vport_id = vport_id or vsi_id assigned to the VF on + * initialization. + */ +struct virtchnl_vlan_setting { + u32 outer_ethertype_setting; + u32 inner_ethertype_setting; + u16 vport_id; + u8 pad[6]; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_vlan_setting); + /* VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE * VF sends VSI id and flags. * PF returns status code in retval. 
@@ -1156,6 +1509,31 @@ virtchnl_vc_validate_vf_msg(struct virtchnl_version_info *ver, u32 v_opcode, case VIRTCHNL_OP_DEL_FDIR_FILTER: valid_len = sizeof(struct virtchnl_fdir_del); break; + case VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS: + break; + case VIRTCHNL_OP_ADD_VLAN_V2: + case VIRTCHNL_OP_DEL_VLAN_V2: + valid_len = sizeof(struct virtchnl_vlan_filter_list_v2); + if (msglen >= valid_len) { + struct virtchnl_vlan_filter_list_v2 *vfl = + (struct virtchnl_vlan_filter_list_v2 *)msg; + + valid_len += (vfl->num_elements - 1) * + sizeof(struct virtchnl_vlan_filter); + + if (vfl->num_elements == 0) { + err_msg_format = true; + break; + } + + } + break; + case VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2: + case VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2: + case VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2: + case VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2: + valid_len = sizeof(struct virtchnl_vlan_setting); + break; /* These are always errors coming from the VF. */ case VIRTCHNL_OP_EVENT: case VIRTCHNL_OP_UNKNOWN: -- 2.20.1 From anthony.l.nguyen at intel.com Mon Nov 29 19:22:57 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Mon, 29 Nov 2021 11:22:57 -0800 Subject: [Intel-wired-lan] [PATCH net-next 3/6] iavf: Add support VIRTCHNL_VF_OFFLOAD_VLAN_V2 during netdev config In-Reply-To: <20211129192300.14188-1-anthony.l.nguyen@intel.com> References: <20211129192300.14188-1-anthony.l.nguyen@intel.com> Message-ID: <20211129192300.14188-4-anthony.l.nguyen@intel.com> From: Brett Creeley Based on VIRTCHNL_VF_OFFLOAD_VLAN_V2, the VF can now support more VLAN capabilities (i.e. 802.1AD offloads and filtering). In order to communicate these capabilities to the netdev layer, the VF needs to parse its VLAN capabilities based on whether it was able to negotiation VIRTCHNL_VF_OFFLOAD_VLAN or VIRTCHNL_VF_OFFLOAD_VLAN_V2 or neither of these. In order to support this, add the following functionality: iavf_get_netdev_vlan_hw_features() - This is used to determine the VLAN features that the underlying hardware supports and that can be toggled off/on based on the negotiated capabiltiies. For example, if VIRTCHNL_VF_OFFLOAD_VLAN_V2 was negotiated, then any capability marked with VIRTCHNL_VLAN_TOGGLE can be toggled on/off by the VF. If VIRTCHNL_VF_OFFLOAD_VLAN was negotiated, then only VLAN insertion and/or stripping can be toggled on/off. iavf_get_netdev_vlan_features() - This is used to determine the VLAN features that the underlying hardware supports and that should be enabled by default. For example, if VIRTHCNL_VF_OFFLOAD_VLAN_V2 was negotiated, then any supported capability that has its ethertype_init filed set should be enabled by default. If VIRTCHNL_VF_OFFLOAD_VLAN was negotiated, then filtering, stripping, and insertion should be enabled by default. Also, refactor iavf_fix_features() to take into account the new capabilities. To do this, query all the supported features (enabled by default and toggleable) and make sure the requested change is supported. If VIRTCHNL_VF_OFFLOAD_VLAN_V2 is successfully negotiated, there is no need to check VIRTCHNL_VLAN_TOGGLE here because the driver already told the netdev layer which features can be toggled via netdev->hw_features during iavf_process_config(), so only those features will be requested to change. 
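As a rough, stand-alone sketch of the fix_features flow described above (illustrative bit names only, and the CTAG/STAG mutual-exclusion check for VIRTCHNL_ETHERTYPE_STRIPPING_MATCHES_INSERTION is left out), the masking reduces to:

	#include <stdint.h>
	#include <stdio.h>

	#define VLAN_CTAG_RX     (1u << 0)
	#define VLAN_CTAG_TX     (1u << 1)
	#define VLAN_STAG_RX     (1u << 2)
	#define VLAN_STAG_TX     (1u << 3)
	#define VLAN_CTAG_FILTER (1u << 4)
	#define VLAN_STAG_FILTER (1u << 5)

	static uint32_t fix_vlan_features(uint32_t requested, uint32_t allowed)
	{
		uint32_t vlan_bits = VLAN_CTAG_RX | VLAN_CTAG_TX | VLAN_STAG_RX |
				     VLAN_STAG_TX | VLAN_CTAG_FILTER | VLAN_STAG_FILTER;

		/* clear every VLAN bit the stack asked for that is not allowed */
		return requested & ~(vlan_bits & ~allowed);
	}

	int main(void)
	{
		/* e.g. only 802.1Q offloads negotiated: S-tag requests are dropped */
		uint32_t allowed = VLAN_CTAG_RX | VLAN_CTAG_TX | VLAN_CTAG_FILTER;
		uint32_t req = VLAN_CTAG_RX | VLAN_STAG_RX | VLAN_STAG_FILTER;

		printf("requested 0x%x -> fixed 0x%x\n",
		       req, fix_vlan_features(req, allowed));
		return 0;
	}

In the driver the allowed set is the union of iavf_get_netdev_vlan_hw_features() and iavf_get_netdev_vlan_features(), so a feature the stack requests is kept only if it is either toggleable or enabled by default.
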
Signed-off-by: Brett Creeley --- drivers/net/ethernet/intel/iavf/iavf.h | 17 +- drivers/net/ethernet/intel/iavf/iavf_main.c | 279 ++++++++++++++++-- .../net/ethernet/intel/iavf/iavf_virtchnl.c | 251 +++++++++++----- 3 files changed, 453 insertions(+), 94 deletions(-) diff --git a/drivers/net/ethernet/intel/iavf/iavf.h b/drivers/net/ethernet/intel/iavf/iavf.h index edb139834437..5fb6ebf9a760 100644 --- a/drivers/net/ethernet/intel/iavf/iavf.h +++ b/drivers/net/ethernet/intel/iavf/iavf.h @@ -55,7 +55,8 @@ enum iavf_vsi_state_t { struct iavf_vsi { struct iavf_adapter *back; struct net_device *netdev; - unsigned long active_vlans[BITS_TO_LONGS(VLAN_N_VID)]; + unsigned long active_cvlans[BITS_TO_LONGS(VLAN_N_VID)]; + unsigned long active_svlans[BITS_TO_LONGS(VLAN_N_VID)]; u16 seid; u16 id; DECLARE_BITMAP(state, __IAVF_VSI_STATE_SIZE__); @@ -146,9 +147,15 @@ struct iavf_mac_filter { }; }; +#define IAVF_VLAN(vid, tpid) ((struct iavf_vlan){ vid, tpid }) +struct iavf_vlan { + u16 vid; + u16 tpid; +}; + struct iavf_vlan_filter { struct list_head list; - u16 vlan; + struct iavf_vlan vlan; bool remove; /* filter needs to be removed */ bool add; /* filter needs to be added */ }; @@ -354,6 +361,12 @@ struct iavf_adapter { VIRTCHNL_VF_OFFLOAD_VLAN) #define VLAN_V2_ALLOWED(_a) ((_a)->vf_res->vf_cap_flags & \ VIRTCHNL_VF_OFFLOAD_VLAN_V2) +#define VLAN_V2_FILTERING_ALLOWED(_a) \ + (VLAN_V2_ALLOWED((_a)) && \ + ((_a)->vlan_v2_caps.filtering.filtering_support.outer || \ + (_a)->vlan_v2_caps.filtering.filtering_support.inner)) +#define VLAN_FILTERING_ALLOWED(_a) \ + (VLAN_ALLOWED((_a)) || VLAN_V2_FILTERING_ALLOWED((_a))) #define ADV_LINK_SUPPORT(_a) ((_a)->vf_res->vf_cap_flags & \ VIRTCHNL_VF_CAP_ADV_LINK_SPEED) #define FDIR_FLTR_SUPPORT(_a) ((_a)->vf_res->vf_cap_flags & \ diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c index a5291363468f..fe7b68b0bbb7 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_main.c +++ b/drivers/net/ethernet/intel/iavf/iavf_main.c @@ -646,14 +646,17 @@ static void iavf_configure_rx(struct iavf_adapter *adapter) * mac_vlan_list_lock. **/ static struct -iavf_vlan_filter *iavf_find_vlan(struct iavf_adapter *adapter, u16 vlan) +iavf_vlan_filter *iavf_find_vlan(struct iavf_adapter *adapter, + struct iavf_vlan vlan) { struct iavf_vlan_filter *f; list_for_each_entry(f, &adapter->vlan_filter_list, list) { - if (vlan == f->vlan) + if (f->vlan.vid == vlan.vid && + f->vlan.tpid == vlan.tpid) return f; } + return NULL; } @@ -665,7 +668,8 @@ iavf_vlan_filter *iavf_find_vlan(struct iavf_adapter *adapter, u16 vlan) * Returns ptr to the filter object or NULL when no memory available. 
**/ static struct -iavf_vlan_filter *iavf_add_vlan(struct iavf_adapter *adapter, u16 vlan) +iavf_vlan_filter *iavf_add_vlan(struct iavf_adapter *adapter, + struct iavf_vlan vlan) { struct iavf_vlan_filter *f = NULL; @@ -694,7 +698,7 @@ iavf_vlan_filter *iavf_add_vlan(struct iavf_adapter *adapter, u16 vlan) * @adapter: board private structure * @vlan: VLAN tag **/ -static void iavf_del_vlan(struct iavf_adapter *adapter, u16 vlan) +static void iavf_del_vlan(struct iavf_adapter *adapter, struct iavf_vlan vlan) { struct iavf_vlan_filter *f; @@ -720,8 +724,11 @@ static void iavf_restore_filters(struct iavf_adapter *adapter) u16 vid; /* re-add all VLAN filters */ - for_each_set_bit(vid, adapter->vsi.active_vlans, VLAN_N_VID) - iavf_add_vlan(adapter, vid); + for_each_set_bit(vid, adapter->vsi.active_cvlans, VLAN_N_VID) + iavf_add_vlan(adapter, IAVF_VLAN(vid, ETH_P_8021Q)); + + for_each_set_bit(vid, adapter->vsi.active_svlans, VLAN_N_VID) + iavf_add_vlan(adapter, IAVF_VLAN(vid, ETH_P_8021AD)); } /** @@ -735,13 +742,17 @@ static int iavf_vlan_rx_add_vid(struct net_device *netdev, { struct iavf_adapter *adapter = netdev_priv(netdev); - if (!VLAN_ALLOWED(adapter)) + if (!VLAN_FILTERING_ALLOWED(adapter)) return -EIO; - if (iavf_add_vlan(adapter, vid) == NULL) + if (!iavf_add_vlan(adapter, IAVF_VLAN(vid, be16_to_cpu(proto)))) return -ENOMEM; - set_bit(vid, adapter->vsi.active_vlans); + if (proto == cpu_to_be16(ETH_P_8021Q)) + set_bit(vid, adapter->vsi.active_cvlans); + else + set_bit(vid, adapter->vsi.active_svlans); + return 0; } @@ -756,8 +767,11 @@ static int iavf_vlan_rx_kill_vid(struct net_device *netdev, { struct iavf_adapter *adapter = netdev_priv(netdev); - iavf_del_vlan(adapter, vid); - clear_bit(vid, adapter->vsi.active_vlans); + iavf_del_vlan(adapter, IAVF_VLAN(vid, be16_to_cpu(proto))); + if (proto == cpu_to_be16(ETH_P_8021Q)) + clear_bit(vid, adapter->vsi.active_cvlans); + else + clear_bit(vid, adapter->vsi.active_svlans); return 0; } @@ -3685,6 +3699,228 @@ static netdev_features_t iavf_features_check(struct sk_buff *skb, return features & ~(NETIF_F_CSUM_MASK | NETIF_F_GSO_MASK); } +/** + * iavf_get_netdev_vlan_hw_features - get NETDEV VLAN features that can toggle on/off + * @adapter: board private structure + * + * Depending on whether VIRTHCNL_VF_OFFLOAD_VLAN or VIRTCHNL_VF_OFFLOAD_VLAN_V2 + * were negotiated determine the VLAN features that can be toggled on and off. 
+ **/ +static netdev_features_t +iavf_get_netdev_vlan_hw_features(struct iavf_adapter *adapter) +{ + netdev_features_t hw_features = 0; + + if (!adapter->vf_res || !adapter->vf_res->vf_cap_flags) + return hw_features; + + /* Enable VLAN features if supported */ + if (VLAN_ALLOWED(adapter)) { + hw_features |= (NETIF_F_HW_VLAN_CTAG_TX | + NETIF_F_HW_VLAN_CTAG_RX); + } else if (VLAN_V2_ALLOWED(adapter)) { + struct virtchnl_vlan_caps *vlan_v2_caps = + &adapter->vlan_v2_caps; + struct virtchnl_vlan_supported_caps *stripping_support = + &vlan_v2_caps->offloads.stripping_support; + struct virtchnl_vlan_supported_caps *insertion_support = + &vlan_v2_caps->offloads.insertion_support; + + if (stripping_support->outer != VIRTCHNL_VLAN_UNSUPPORTED && + stripping_support->outer & VIRTCHNL_VLAN_TOGGLE) { + if (stripping_support->outer & + VIRTCHNL_VLAN_ETHERTYPE_8100) + hw_features |= NETIF_F_HW_VLAN_CTAG_RX; + if (stripping_support->outer & + VIRTCHNL_VLAN_ETHERTYPE_88A8) + hw_features |= NETIF_F_HW_VLAN_STAG_RX; + } else if (stripping_support->inner != + VIRTCHNL_VLAN_UNSUPPORTED && + stripping_support->inner & VIRTCHNL_VLAN_TOGGLE) { + if (stripping_support->inner & + VIRTCHNL_VLAN_ETHERTYPE_8100) + hw_features |= NETIF_F_HW_VLAN_CTAG_RX; + } + + if (insertion_support->outer != VIRTCHNL_VLAN_UNSUPPORTED && + insertion_support->outer & VIRTCHNL_VLAN_TOGGLE) { + if (insertion_support->outer & + VIRTCHNL_VLAN_ETHERTYPE_8100) + hw_features |= NETIF_F_HW_VLAN_CTAG_TX; + if (insertion_support->outer & + VIRTCHNL_VLAN_ETHERTYPE_88A8) + hw_features |= NETIF_F_HW_VLAN_STAG_TX; + } else if (insertion_support->inner && + insertion_support->inner & VIRTCHNL_VLAN_TOGGLE) { + if (insertion_support->inner & + VIRTCHNL_VLAN_ETHERTYPE_8100) + hw_features |= NETIF_F_HW_VLAN_CTAG_TX; + } + } + + return hw_features; +} + +/** + * iavf_get_netdev_vlan_features - get the enabled NETDEV VLAN fetures + * @adapter: board private structure + * + * Depending on whether VIRTHCNL_VF_OFFLOAD_VLAN or VIRTCHNL_VF_OFFLOAD_VLAN_V2 + * were negotiated determine the VLAN features that are enabled by default. 
+ **/ +static netdev_features_t +iavf_get_netdev_vlan_features(struct iavf_adapter *adapter) +{ + netdev_features_t features = 0; + + if (!adapter->vf_res || !adapter->vf_res->vf_cap_flags) + return features; + + if (VLAN_ALLOWED(adapter)) { + features |= NETIF_F_HW_VLAN_CTAG_FILTER | + NETIF_F_HW_VLAN_CTAG_RX | NETIF_F_HW_VLAN_CTAG_TX; + } else if (VLAN_V2_ALLOWED(adapter)) { + struct virtchnl_vlan_caps *vlan_v2_caps = + &adapter->vlan_v2_caps; + struct virtchnl_vlan_supported_caps *filtering_support = + &vlan_v2_caps->filtering.filtering_support; + struct virtchnl_vlan_supported_caps *stripping_support = + &vlan_v2_caps->offloads.stripping_support; + struct virtchnl_vlan_supported_caps *insertion_support = + &vlan_v2_caps->offloads.insertion_support; + u32 ethertype_init; + + /* give priority to outer stripping and don't support both outer + * and inner stripping + */ + ethertype_init = vlan_v2_caps->offloads.ethertype_init; + if (stripping_support->outer != VIRTCHNL_VLAN_UNSUPPORTED) { + if (stripping_support->outer & + VIRTCHNL_VLAN_ETHERTYPE_8100 && + ethertype_init & VIRTCHNL_VLAN_ETHERTYPE_8100) + features |= NETIF_F_HW_VLAN_CTAG_RX; + else if (stripping_support->outer & + VIRTCHNL_VLAN_ETHERTYPE_88A8 && + ethertype_init & VIRTCHNL_VLAN_ETHERTYPE_88A8) + features |= NETIF_F_HW_VLAN_STAG_RX; + } else if (stripping_support->inner != + VIRTCHNL_VLAN_UNSUPPORTED) { + if (stripping_support->inner & + VIRTCHNL_VLAN_ETHERTYPE_8100 && + ethertype_init & VIRTCHNL_VLAN_ETHERTYPE_8100) + features |= NETIF_F_HW_VLAN_CTAG_RX; + } + + /* give priority to outer insertion and don't support both outer + * and inner insertion + */ + if (insertion_support->outer != VIRTCHNL_VLAN_UNSUPPORTED) { + if (insertion_support->outer & + VIRTCHNL_VLAN_ETHERTYPE_8100 && + ethertype_init & VIRTCHNL_VLAN_ETHERTYPE_8100) + features |= NETIF_F_HW_VLAN_CTAG_TX; + else if (insertion_support->outer & + VIRTCHNL_VLAN_ETHERTYPE_88A8 && + ethertype_init & VIRTCHNL_VLAN_ETHERTYPE_88A8) + features |= NETIF_F_HW_VLAN_STAG_TX; + } else if (insertion_support->inner != + VIRTCHNL_VLAN_UNSUPPORTED) { + if (insertion_support->inner & + VIRTCHNL_VLAN_ETHERTYPE_8100 && + ethertype_init & VIRTCHNL_VLAN_ETHERTYPE_8100) + features |= NETIF_F_HW_VLAN_CTAG_TX; + } + + /* give priority to outer filtering and don't bother if both + * outer and inner filtering are enabled + */ + ethertype_init = vlan_v2_caps->filtering.ethertype_init; + if (filtering_support->outer != VIRTCHNL_VLAN_UNSUPPORTED) { + if (filtering_support->outer & + VIRTCHNL_VLAN_ETHERTYPE_8100 && + ethertype_init & VIRTCHNL_VLAN_ETHERTYPE_8100) + features |= NETIF_F_HW_VLAN_CTAG_FILTER; + if (filtering_support->outer & + VIRTCHNL_VLAN_ETHERTYPE_88A8 && + ethertype_init & VIRTCHNL_VLAN_ETHERTYPE_88A8) + features |= NETIF_F_HW_VLAN_STAG_FILTER; + } else if (filtering_support->inner != + VIRTCHNL_VLAN_UNSUPPORTED) { + if (filtering_support->inner & + VIRTCHNL_VLAN_ETHERTYPE_8100 && + ethertype_init & VIRTCHNL_VLAN_ETHERTYPE_8100) + features |= NETIF_F_HW_VLAN_CTAG_FILTER; + if (filtering_support->inner & + VIRTCHNL_VLAN_ETHERTYPE_88A8 && + ethertype_init & VIRTCHNL_VLAN_ETHERTYPE_88A8) + features |= NETIF_F_HW_VLAN_STAG_FILTER; + } + } + + return features; +} + +#define IAVF_NETDEV_VLAN_FEATURE_ALLOWED(requested, allowed, feature_bit) \ + (!(((requested) & (feature_bit)) && \ + !((allowed) & (feature_bit)))) + +/** + * iavf_fix_netdev_vlan_features - fix NETDEV VLAN features based on support + * @adapter: board private structure + * @requested_features: stack requested NETDEV 
features + **/ +static netdev_features_t +iavf_fix_netdev_vlan_features(struct iavf_adapter *adapter, + netdev_features_t requested_features) +{ + netdev_features_t allowed_features; + + allowed_features = iavf_get_netdev_vlan_hw_features(adapter) | + iavf_get_netdev_vlan_features(adapter); + + if (!IAVF_NETDEV_VLAN_FEATURE_ALLOWED(requested_features, + allowed_features, + NETIF_F_HW_VLAN_CTAG_TX)) + requested_features &= ~NETIF_F_HW_VLAN_CTAG_TX; + + if (!IAVF_NETDEV_VLAN_FEATURE_ALLOWED(requested_features, + allowed_features, + NETIF_F_HW_VLAN_CTAG_RX)) + requested_features &= ~NETIF_F_HW_VLAN_CTAG_RX; + + if (!IAVF_NETDEV_VLAN_FEATURE_ALLOWED(requested_features, + allowed_features, + NETIF_F_HW_VLAN_STAG_TX)) + requested_features &= ~NETIF_F_HW_VLAN_STAG_TX; + if (!IAVF_NETDEV_VLAN_FEATURE_ALLOWED(requested_features, + allowed_features, + NETIF_F_HW_VLAN_STAG_RX)) + requested_features &= ~NETIF_F_HW_VLAN_STAG_RX; + + if (!IAVF_NETDEV_VLAN_FEATURE_ALLOWED(requested_features, + allowed_features, + NETIF_F_HW_VLAN_CTAG_FILTER)) + requested_features &= ~NETIF_F_HW_VLAN_CTAG_FILTER; + + if (!IAVF_NETDEV_VLAN_FEATURE_ALLOWED(requested_features, + allowed_features, + NETIF_F_HW_VLAN_STAG_FILTER)) + requested_features &= ~NETIF_F_HW_VLAN_STAG_FILTER; + + if ((requested_features & + (NETIF_F_HW_VLAN_CTAG_RX | NETIF_F_HW_VLAN_CTAG_TX)) && + (requested_features & + (NETIF_F_HW_VLAN_STAG_RX | NETIF_F_HW_VLAN_STAG_TX)) && + adapter->vlan_v2_caps.offloads.ethertype_match == + VIRTCHNL_ETHERTYPE_STRIPPING_MATCHES_INSERTION) { + netdev_warn(adapter->netdev, "cannot support CTAG and STAG VLAN stripping and/or insertion simultaneously since CTAG and STAG offloads are mutually exclusive, clearing STAG offload settings\n"); + requested_features &= ~(NETIF_F_HW_VLAN_STAG_RX | + NETIF_F_HW_VLAN_STAG_TX); + } + + return requested_features; +} + /** * iavf_fix_features - fix up the netdev feature bits * @netdev: our net device @@ -3697,13 +3933,7 @@ static netdev_features_t iavf_fix_features(struct net_device *netdev, { struct iavf_adapter *adapter = netdev_priv(netdev); - if (adapter->vf_res && - !(adapter->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN)) - features &= ~(NETIF_F_HW_VLAN_CTAG_TX | - NETIF_F_HW_VLAN_CTAG_RX | - NETIF_F_HW_VLAN_CTAG_FILTER); - - return features; + return iavf_fix_netdev_vlan_features(adapter, features); } static const struct net_device_ops iavf_netdev_ops = { @@ -3755,6 +3985,7 @@ static int iavf_check_reset_complete(struct iavf_hw *hw) int iavf_process_config(struct iavf_adapter *adapter) { struct virtchnl_vf_resource *vfres = adapter->vf_res; + netdev_features_t hw_vlan_features, vlan_features; struct net_device *netdev = adapter->netdev; netdev_features_t hw_enc_features; netdev_features_t hw_features; @@ -3802,19 +4033,19 @@ int iavf_process_config(struct iavf_adapter *adapter) */ hw_features = hw_enc_features; - /* Enable VLAN features if supported */ - if (vfres->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN) - hw_features |= (NETIF_F_HW_VLAN_CTAG_TX | - NETIF_F_HW_VLAN_CTAG_RX); + /* get HW VLAN features that can be toggled */ + hw_vlan_features = iavf_get_netdev_vlan_hw_features(adapter); + /* Enable cloud filter if ADQ is supported */ if (vfres->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_ADQ) hw_features |= NETIF_F_HW_TC; if (vfres->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_USO) hw_features |= NETIF_F_GSO_UDP_L4; - netdev->hw_features |= hw_features; + netdev->hw_features |= hw_features | hw_vlan_features; + vlan_features = iavf_get_netdev_vlan_features(adapter); - netdev->features |= 
hw_features; + netdev->features |= hw_features | vlan_features; if (vfres->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN) netdev->features |= NETIF_F_HW_VLAN_CTAG_FILTER; diff --git a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c index 2ad426f13462..2dc1c435223c 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c +++ b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c @@ -642,7 +642,6 @@ static void iavf_mac_add_reject(struct iavf_adapter *adapter) **/ void iavf_add_vlans(struct iavf_adapter *adapter) { - struct virtchnl_vlan_filter_list *vvfl; int len, i = 0, count = 0; struct iavf_vlan_filter *f; bool more = false; @@ -660,48 +659,105 @@ void iavf_add_vlans(struct iavf_adapter *adapter) if (f->add) count++; } - if (!count || !VLAN_ALLOWED(adapter)) { + if (!count || !VLAN_FILTERING_ALLOWED(adapter)) { adapter->aq_required &= ~IAVF_FLAG_AQ_ADD_VLAN_FILTER; spin_unlock_bh(&adapter->mac_vlan_list_lock); return; } - adapter->current_op = VIRTCHNL_OP_ADD_VLAN; - len = sizeof(struct virtchnl_vlan_filter_list) + - (count * sizeof(u16)); - if (len > IAVF_MAX_AQ_BUF_SIZE) { - dev_warn(&adapter->pdev->dev, "Too many add VLAN changes in one request\n"); - count = (IAVF_MAX_AQ_BUF_SIZE - - sizeof(struct virtchnl_vlan_filter_list)) / - sizeof(u16); - len = sizeof(struct virtchnl_vlan_filter_list) + - (count * sizeof(u16)); - more = true; - } - vvfl = kzalloc(len, GFP_ATOMIC); - if (!vvfl) { + if (VLAN_ALLOWED(adapter)) { + struct virtchnl_vlan_filter_list *vvfl; + + adapter->current_op = VIRTCHNL_OP_ADD_VLAN; + + len = sizeof(*vvfl) + (count * sizeof(u16)); + if (len > IAVF_MAX_AQ_BUF_SIZE) { + dev_warn(&adapter->pdev->dev, "Too many add VLAN changes in one request\n"); + count = (IAVF_MAX_AQ_BUF_SIZE - sizeof(*vvfl)) / + sizeof(u16); + len = sizeof(*vvfl) + (count * sizeof(u16)); + more = true; + } + vvfl = kzalloc(len, GFP_ATOMIC); + if (!vvfl) { + spin_unlock_bh(&adapter->mac_vlan_list_lock); + return; + } + + vvfl->vsi_id = adapter->vsi_res->vsi_id; + vvfl->num_elements = count; + list_for_each_entry(f, &adapter->vlan_filter_list, list) { + if (f->add) { + vvfl->vlan_id[i] = f->vlan.vid; + i++; + f->add = false; + if (i == count) + break; + } + } + if (!more) + adapter->aq_required &= ~IAVF_FLAG_AQ_ADD_VLAN_FILTER; + spin_unlock_bh(&adapter->mac_vlan_list_lock); - return; - } - vvfl->vsi_id = adapter->vsi_res->vsi_id; - vvfl->num_elements = count; - list_for_each_entry(f, &adapter->vlan_filter_list, list) { - if (f->add) { - vvfl->vlan_id[i] = f->vlan; - i++; - f->add = false; - if (i == count) - break; + iavf_send_pf_msg(adapter, VIRTCHNL_OP_ADD_VLAN, (u8 *)vvfl, len); + kfree(vvfl); + } else { + struct virtchnl_vlan_filter_list_v2 *vvfl_v2; + + adapter->current_op = VIRTCHNL_OP_ADD_VLAN_V2; + + len = sizeof(*vvfl_v2) + ((count - 1) * + sizeof(struct virtchnl_vlan_filter)); + if (len > IAVF_MAX_AQ_BUF_SIZE) { + dev_warn(&adapter->pdev->dev, "Too many add VLAN changes in one request\n"); + count = (IAVF_MAX_AQ_BUF_SIZE - sizeof(*vvfl_v2)) / + sizeof(struct virtchnl_vlan_filter); + len = sizeof(*vvfl_v2) + + ((count - 1) * + sizeof(struct virtchnl_vlan_filter)); + more = true; } - } - if (!more) - adapter->aq_required &= ~IAVF_FLAG_AQ_ADD_VLAN_FILTER; - spin_unlock_bh(&adapter->mac_vlan_list_lock); + vvfl_v2 = kzalloc(len, GFP_ATOMIC); + if (!vvfl_v2) { + spin_unlock_bh(&adapter->mac_vlan_list_lock); + return; + } - iavf_send_pf_msg(adapter, VIRTCHNL_OP_ADD_VLAN, (u8 *)vvfl, len); - kfree(vvfl); + vvfl_v2->vport_id = adapter->vsi_res->vsi_id; + 
vvfl_v2->num_elements = count; + list_for_each_entry(f, &adapter->vlan_filter_list, list) { + if (f->add) { + struct virtchnl_vlan_supported_caps *filtering_support = + &adapter->vlan_v2_caps.filtering.filtering_support; + struct virtchnl_vlan *vlan; + + /* give priority over outer if it's enabled */ + if (filtering_support->outer) + vlan = &vvfl_v2->filters[i].outer; + else + vlan = &vvfl_v2->filters[i].inner; + + vlan->tci = f->vlan.vid; + vlan->tpid = f->vlan.tpid; + + i++; + f->add = false; + if (i == count) + break; + } + } + + if (!more) + adapter->aq_required &= ~IAVF_FLAG_AQ_ADD_VLAN_FILTER; + + spin_unlock_bh(&adapter->mac_vlan_list_lock); + + iavf_send_pf_msg(adapter, VIRTCHNL_OP_ADD_VLAN_V2, + (u8 *)vvfl_v2, len); + kfree(vvfl_v2); + } } /** @@ -712,7 +768,6 @@ void iavf_add_vlans(struct iavf_adapter *adapter) **/ void iavf_del_vlans(struct iavf_adapter *adapter) { - struct virtchnl_vlan_filter_list *vvfl; struct iavf_vlan_filter *f, *ftmp; int len, i = 0, count = 0; bool more = false; @@ -733,56 +788,116 @@ void iavf_del_vlans(struct iavf_adapter *adapter) * filters marked for removal to enable bailing out before * sending a virtchnl message */ - if (f->remove && !VLAN_ALLOWED(adapter)) { + if (f->remove && !VLAN_FILTERING_ALLOWED(adapter)) { list_del(&f->list); kfree(f); } else if (f->remove) { count++; } } - if (!count) { + if (!count || !VLAN_FILTERING_ALLOWED(adapter)) { adapter->aq_required &= ~IAVF_FLAG_AQ_DEL_VLAN_FILTER; spin_unlock_bh(&adapter->mac_vlan_list_lock); return; } - adapter->current_op = VIRTCHNL_OP_DEL_VLAN; - len = sizeof(struct virtchnl_vlan_filter_list) + - (count * sizeof(u16)); - if (len > IAVF_MAX_AQ_BUF_SIZE) { - dev_warn(&adapter->pdev->dev, "Too many delete VLAN changes in one request\n"); - count = (IAVF_MAX_AQ_BUF_SIZE - - sizeof(struct virtchnl_vlan_filter_list)) / - sizeof(u16); - len = sizeof(struct virtchnl_vlan_filter_list) + - (count * sizeof(u16)); - more = true; - } - vvfl = kzalloc(len, GFP_ATOMIC); - if (!vvfl) { + if (VLAN_ALLOWED(adapter)) { + struct virtchnl_vlan_filter_list *vvfl; + + adapter->current_op = VIRTCHNL_OP_DEL_VLAN; + + len = sizeof(*vvfl) + (count * sizeof(u16)); + if (len > IAVF_MAX_AQ_BUF_SIZE) { + dev_warn(&adapter->pdev->dev, "Too many delete VLAN changes in one request\n"); + count = (IAVF_MAX_AQ_BUF_SIZE - sizeof(*vvfl)) / + sizeof(u16); + len = sizeof(*vvfl) + (count * sizeof(u16)); + more = true; + } + vvfl = kzalloc(len, GFP_ATOMIC); + if (!vvfl) { + spin_unlock_bh(&adapter->mac_vlan_list_lock); + return; + } + + vvfl->vsi_id = adapter->vsi_res->vsi_id; + vvfl->num_elements = count; + list_for_each_entry_safe(f, ftmp, &adapter->vlan_filter_list, list) { + if (f->remove) { + vvfl->vlan_id[i] = f->vlan.vid; + i++; + list_del(&f->list); + kfree(f); + if (i == count) + break; + } + } + + if (!more) + adapter->aq_required &= ~IAVF_FLAG_AQ_DEL_VLAN_FILTER; + spin_unlock_bh(&adapter->mac_vlan_list_lock); - return; - } - vvfl->vsi_id = adapter->vsi_res->vsi_id; - vvfl->num_elements = count; - list_for_each_entry_safe(f, ftmp, &adapter->vlan_filter_list, list) { - if (f->remove) { - vvfl->vlan_id[i] = f->vlan; - i++; - list_del(&f->list); - kfree(f); - if (i == count) - break; + iavf_send_pf_msg(adapter, VIRTCHNL_OP_DEL_VLAN, (u8 *)vvfl, len); + kfree(vvfl); + } else { + struct virtchnl_vlan_filter_list_v2 *vvfl_v2; + + adapter->current_op = VIRTCHNL_OP_DEL_VLAN_V2; + + len = sizeof(*vvfl_v2) + + ((count - 1) * sizeof(struct virtchnl_vlan_filter)); + if (len > IAVF_MAX_AQ_BUF_SIZE) { + dev_warn(&adapter->pdev->dev, "Too 
many add VLAN changes in one request\n"); + count = (IAVF_MAX_AQ_BUF_SIZE - + sizeof(*vvfl_v2)) / + sizeof(struct virtchnl_vlan_filter); + len = sizeof(*vvfl_v2) + + ((count - 1) * + sizeof(struct virtchnl_vlan_filter)); + more = true; } - } - if (!more) - adapter->aq_required &= ~IAVF_FLAG_AQ_DEL_VLAN_FILTER; - spin_unlock_bh(&adapter->mac_vlan_list_lock); + vvfl_v2 = kzalloc(len, GFP_ATOMIC); + if (!vvfl_v2) { + spin_unlock_bh(&adapter->mac_vlan_list_lock); + return; + } + + vvfl_v2->vport_id = adapter->vsi_res->vsi_id; + vvfl_v2->num_elements = count; + list_for_each_entry_safe(f, ftmp, &adapter->vlan_filter_list, list) { + if (f->remove) { + struct virtchnl_vlan_supported_caps *filtering_support = + &adapter->vlan_v2_caps.filtering.filtering_support; + struct virtchnl_vlan *vlan; + + /* give priority over outer if it's enabled */ + if (filtering_support->outer) + vlan = &vvfl_v2->filters[i].outer; + else + vlan = &vvfl_v2->filters[i].inner; + + vlan->tci = f->vlan.vid; + vlan->tpid = f->vlan.tpid; + + list_del(&f->list); + kfree(f); + i++; + if (i == count) + break; + } + } - iavf_send_pf_msg(adapter, VIRTCHNL_OP_DEL_VLAN, (u8 *)vvfl, len); - kfree(vvfl); + if (!more) + adapter->aq_required &= ~IAVF_FLAG_AQ_DEL_VLAN_FILTER; + + spin_unlock_bh(&adapter->mac_vlan_list_lock); + + iavf_send_pf_msg(adapter, VIRTCHNL_OP_DEL_VLAN_V2, + (u8 *)vvfl_v2, len); + kfree(vvfl_v2); + } } /** -- 2.20.1 From hkallweit1 at gmail.com Mon Nov 29 21:14:06 2021 From: hkallweit1 at gmail.com (Heiner Kallweit) Date: Mon, 29 Nov 2021 22:14:06 +0100 Subject: [Intel-wired-lan] [PATCH net] igb: fix deadlock caused by taking RTNL in RPM resume path Message-ID: <6bb28d2f-4884-7696-0582-c26c35534bae@gmail.com> Recent net core changes caused an issue with few Intel drivers (reportedly igb), where taking RTNL in RPM resume path results in a deadlock. See [0] for a bug report. I don't think the core changes are wrong, but taking RTNL in RPM resume path isn't needed. The Intel drivers are the only ones doing this. See [1] for a discussion on the issue. Following patch changes the RPM resume path to not take RTNL. 
[0] https://bugzilla.kernel.org/show_bug.cgi?id=215129 [1] https://lore.kernel.org/netdev/20211125074949.5f897431 at kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com/t/ Fixes: bd869245a3dc ("net: core: try to runtime-resume detached device in __dev_open") Fixes: f32a21376573 ("ethtool: runtime-resume netdev parent before ethtool ioctl ops") Tested-by: Martin Stolpe Signed-off-by: Heiner Kallweit --- drivers/net/ethernet/intel/igb/igb_main.c | 19 +++++++++++++------ 1 file changed, 13 insertions(+), 6 deletions(-) diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c index dd208930f..8073cce73 100644 --- a/drivers/net/ethernet/intel/igb/igb_main.c +++ b/drivers/net/ethernet/intel/igb/igb_main.c @@ -9254,7 +9254,7 @@ static int __maybe_unused igb_suspend(struct device *dev) return __igb_shutdown(to_pci_dev(dev), NULL, 0); } -static int __maybe_unused igb_resume(struct device *dev) +static int __maybe_unused __igb_resume(struct device *dev, bool rpm) { struct pci_dev *pdev = to_pci_dev(dev); struct net_device *netdev = pci_get_drvdata(pdev); @@ -9297,17 +9297,24 @@ static int __maybe_unused igb_resume(struct device *dev) wr32(E1000_WUS, ~0); - rtnl_lock(); + if (!rpm) + rtnl_lock(); if (!err && netif_running(netdev)) err = __igb_open(netdev, true); if (!err) netif_device_attach(netdev); - rtnl_unlock(); + if (!rpm) + rtnl_unlock(); return err; } +static int __maybe_unused igb_resume(struct device *dev) +{ + return __igb_resume(dev, false); +} + static int __maybe_unused igb_runtime_idle(struct device *dev) { struct net_device *netdev = dev_get_drvdata(dev); @@ -9326,7 +9333,7 @@ static int __maybe_unused igb_runtime_suspend(struct device *dev) static int __maybe_unused igb_runtime_resume(struct device *dev) { - return igb_resume(dev); + return __igb_resume(dev, true); } static void igb_shutdown(struct pci_dev *pdev) @@ -9442,7 +9449,7 @@ static pci_ers_result_t igb_io_error_detected(struct pci_dev *pdev, * @pdev: Pointer to PCI device * * Restart the card from scratch, as if from a cold-boot. Implementation - * resembles the first-half of the igb_resume routine. + * resembles the first-half of the __igb_resume routine. **/ static pci_ers_result_t igb_io_slot_reset(struct pci_dev *pdev) { @@ -9482,7 +9489,7 @@ static pci_ers_result_t igb_io_slot_reset(struct pci_dev *pdev) * * This callback is called when the error recovery driver tells us that * its OK to resume normal operation. Implementation resembles the - * second-half of the igb_resume routine. + * second-half of the __igb_resume routine. */ static void igb_io_resume(struct pci_dev *pdev) { -- 2.34.1 From jesse.brandeburg at intel.com Mon Nov 29 21:31:28 2021 From: jesse.brandeburg at intel.com (Jesse Brandeburg) Date: Mon, 29 Nov 2021 13:31:28 -0800 Subject: [Intel-wired-lan] [PATCH 2/2] net/ice: Remove unused enum In-Reply-To: <6f95e1de-6c35-76e5-eb83-68b60dc55c05@molgen.mpg.de> References: <20211124124136.1776-1-shiraz.saleem@intel.com> <20211124124136.1776-2-shiraz.saleem@intel.com> <6f95e1de-6c35-76e5-eb83-68b60dc55c05@molgen.mpg.de> Message-ID: <42f45779-a6e1-9033-376d-4dae36261873@intel.com> On 11/24/2021 4:47 AM, Paul Menzel wrote: > Dear Shiraz, > > > Am 24.11.21 um 13:41 schrieb Shiraz Saleem: >> From: "Shiraz Saleem" >> >> Remove ice_devlink_param_id enum as its not used. > > Please add the name `ice_devlink_param_id` to the commit message summary. 
Hi Paul, I don't think that is necessary, is this just personal preference or are you following some style guideline that I don't know about or maybe just don't remember? I'd argue that the subject line has a different bug, it should be: [PATCH net-next] ice: Remove unused enum But I see no reason to add the long string of the actual enum removed to the subject. Jesse From stephen at networkplumber.org Mon Nov 29 23:09:20 2021 From: stephen at networkplumber.org (Stephen Hemminger) Date: Mon, 29 Nov 2021 15:09:20 -0800 Subject: [Intel-wired-lan] [PATCH net] igb: fix deadlock caused by taking RTNL in RPM resume path In-Reply-To: <6bb28d2f-4884-7696-0582-c26c35534bae@gmail.com> References: <6bb28d2f-4884-7696-0582-c26c35534bae@gmail.com> Message-ID: <20211129150920.4a400828@hermes.local> On Mon, 29 Nov 2021 22:14:06 +0100 Heiner Kallweit wrote: > diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c > index dd208930f..8073cce73 100644 > --- a/drivers/net/ethernet/intel/igb/igb_main.c > +++ b/drivers/net/ethernet/intel/igb/igb_main.c > @@ -9254,7 +9254,7 @@ static int __maybe_unused igb_suspend(struct device *dev) > return __igb_shutdown(to_pci_dev(dev), NULL, 0); > } > > -static int __maybe_unused igb_resume(struct device *dev) > +static int __maybe_unused __igb_resume(struct device *dev, bool rpm) > { > struct pci_dev *pdev = to_pci_dev(dev); > struct net_device *netdev = pci_get_drvdata(pdev); > @@ -9297,17 +9297,24 @@ static int __maybe_unused igb_resume(struct device *dev) > > wr32(E1000_WUS, ~0); > > - rtnl_lock(); > + if (!rpm) > + rtnl_lock(); > if (!err && netif_running(netdev)) > err = __igb_open(netdev, true); > > if (!err) > netif_device_attach(netdev); > - rtnl_unlock(); > + if (!rpm) > + rtnl_unlock(); > > return err; > } > > +static int __maybe_unused igb_resume(struct device *dev) > +{ > + return __igb_resume(dev, false); > +} > + > static int __maybe_unused igb_runtime_idle(struct device *dev) > { > struct net_device *netdev = dev_get_drvdata(dev); > @@ -9326,7 +9333,7 @@ static int __maybe_unused igb_runtime_suspend(struct device *dev) > > static int __maybe_unused igb_runtime_resume(struct device *dev) > { > - return igb_resume(dev); > + return __igb_resume(dev, true); > } Rather than conditional locking which is one of the seven deadly sins of SMP, why not just have __igb_resume() be the locked version where lock is held by caller? From anthony.l.nguyen at intel.com Tue Nov 30 00:15:58 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Mon, 29 Nov 2021 16:15:58 -0800 Subject: [Intel-wired-lan] [PATCH net-next v2 0/6] Add support for VIRTCHNL_VF_OFFLOAD_VLAN_V2 Message-ID: <20211130001604.22112-1-anthony.l.nguyen@intel.com> From: Brett Creeley This patch series adds support in the iavf driver for communicating and using VIRTCHNL_VF_OFFLOAD_VLAN_V2. The current VIRTCHNL_VF_OFFLOAD_VLAN is very limited and covers all 802.1Q VLAN offloads and filtering with no granularity. The new VIRTCHNL_VF_OFFLOAD_VLAN_V2 adds more granularity, flexibility, and support for 802.1ad offloads and filtering. This includes the VF negotiating which VLAN offloads/filtering it's allowed, where VLAN tags should be inserted and/or stripped into and from descriptors, and the supported VLAN protocols. 
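
For orientation, a minimal sketch (not part of the posted series) of the kind of capability check this negotiation model enables on the VF side. The struct layout and VIRTCHNL_VLAN_* flags below come from the include/linux/avf/virtchnl.h additions in patch 1/6 and their use in patch 3/6; the helper name itself is made up purely for illustration:

static bool example_outer_stag_strip_toggleable(const struct virtchnl_vlan_caps *caps)
{
	const struct virtchnl_vlan_supported_caps *strip =
		&caps->offloads.stripping_support;

	/* outer stripping must be supported at all, be user-toggleable,
	 * and cover the 0x88a8 (802.1ad) ethertype
	 */
	return strip->outer != VIRTCHNL_VLAN_UNSUPPORTED &&
	       (strip->outer & VIRTCHNL_VLAN_TOGGLE) &&
	       (strip->outer & VIRTCHNL_VLAN_ETHERTYPE_88A8);
}
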
--- v2: - Remove excess newline - Remove 'inline' use - Fix incorrect flag usage - Change BIT() to BIT_ULL() for 32 bit build issue Brett Creeley (6): virtchnl: Add support for new VLAN capabilities iavf: Add support for VIRTCHNL_VF_OFFLOAD_VLAN_V2 negotiation iavf: Add support VIRTCHNL_VF_OFFLOAD_VLAN_V2 during netdev config iavf: Add support for VIRTCHNL_VF_OFFLOAD_VLAN_V2 hotpath iavf: Add support for VIRTCHNL_VF_OFFLOAD_VLAN_V2 offload enable/disable iavf: Restrict maximum VLAN filters for VIRTCHNL_VF_OFFLOAD_VLAN_V2 drivers/net/ethernet/intel/iavf/iavf.h | 105 ++- drivers/net/ethernet/intel/iavf/iavf_main.c | 767 +++++++++++++++--- drivers/net/ethernet/intel/iavf/iavf_txrx.c | 71 +- drivers/net/ethernet/intel/iavf/iavf_txrx.h | 30 +- .../net/ethernet/intel/iavf/iavf_virtchnl.c | 534 ++++++++++-- include/linux/avf/virtchnl.h | 377 +++++++++ 6 files changed, 1637 insertions(+), 247 deletions(-) -- 2.20.1 From anthony.l.nguyen at intel.com Tue Nov 30 00:16:04 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Mon, 29 Nov 2021 16:16:04 -0800 Subject: [Intel-wired-lan] [PATCH net-next v2 6/6] iavf: Restrict maximum VLAN filters for VIRTCHNL_VF_OFFLOAD_VLAN_V2 In-Reply-To: <20211130001604.22112-1-anthony.l.nguyen@intel.com> References: <20211130001604.22112-1-anthony.l.nguyen@intel.com> Message-ID: <20211130001604.22112-7-anthony.l.nguyen@intel.com> From: Brett Creeley For VIRTCHNL_VF_OFFLOAD_VLAN, PF's would limit the number of VLAN filters a VF was allowed to add. However, by the time the opcode failed, the VLAN netdev had already been added. VIRTCHNL_VF_OFFLOAD_VLAN_V2 added the ability for a PF to tell the VF how many VLAN filters it's allowed to add. Make changes to support that functionality. Signed-off-by: Brett Creeley --- drivers/net/ethernet/intel/iavf/iavf_main.c | 50 +++++++++++++++++++++ 1 file changed, 50 insertions(+) diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c index 8bdadf6a3c0c..cb48a4ecd221 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_main.c +++ b/drivers/net/ethernet/intel/iavf/iavf_main.c @@ -731,6 +731,50 @@ static void iavf_restore_filters(struct iavf_adapter *adapter) iavf_add_vlan(adapter, IAVF_VLAN(vid, ETH_P_8021AD)); } +/** + * iavf_get_num_vlans_added - get number of VLANs added + * @adapter: board private structure + */ +static u16 iavf_get_num_vlans_added(struct iavf_adapter *adapter) +{ + return bitmap_weight(adapter->vsi.active_cvlans, VLAN_N_VID) + + bitmap_weight(adapter->vsi.active_svlans, VLAN_N_VID); +} + +/** + * iavf_get_max_vlans_allowed - get maximum VLANs allowed for this VF + * @adapter: board private structure + * + * This depends on the negotiated VLAN capability. For VIRTCHNL_VF_OFFLOAD_VLAN, + * do not impose a limit as that maintains current behavior and for + * VIRTCHNL_VF_OFFLOAD_VLAN_V2, use the maximum allowed sent from the PF. 
+ **/ +static u16 iavf_get_max_vlans_allowed(struct iavf_adapter *adapter) +{ + /* don't impose any limit for VIRTCHNL_VF_OFFLOAD_VLAN since there has + * never been a limit on the VF driver side + */ + if (VLAN_ALLOWED(adapter)) + return VLAN_N_VID; + else if (VLAN_V2_ALLOWED(adapter)) + return adapter->vlan_v2_caps.filtering.max_filters; + + return 0; +} + +/** + * iavf_max_vlans_added - check if maximum VLANs allowed already exist + * @adapter: board private structure + **/ +static bool iavf_max_vlans_added(struct iavf_adapter *adapter) +{ + if (iavf_get_num_vlans_added(adapter) < + iavf_get_max_vlans_allowed(adapter)) + return false; + + return true; +} + /** * iavf_vlan_rx_add_vid - Add a VLAN filter to a device * @netdev: network device struct @@ -745,6 +789,12 @@ static int iavf_vlan_rx_add_vid(struct net_device *netdev, if (!VLAN_FILTERING_ALLOWED(adapter)) return -EIO; + if (iavf_max_vlans_added(adapter)) { + netdev_err(netdev, "Max allowed VLAN filters %u. Remove existing VLANs or disable filtering via Ethtool if supported.\n", + iavf_get_max_vlans_allowed(adapter)); + return -EIO; + } + if (!iavf_add_vlan(adapter, IAVF_VLAN(vid, be16_to_cpu(proto)))) return -ENOMEM; -- 2.20.1 From anthony.l.nguyen at intel.com Tue Nov 30 00:16:02 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Mon, 29 Nov 2021 16:16:02 -0800 Subject: [Intel-wired-lan] [PATCH net-next v2 4/6] iavf: Add support for VIRTCHNL_VF_OFFLOAD_VLAN_V2 hotpath In-Reply-To: <20211130001604.22112-1-anthony.l.nguyen@intel.com> References: <20211130001604.22112-1-anthony.l.nguyen@intel.com> Message-ID: <20211130001604.22112-5-anthony.l.nguyen@intel.com> From: Brett Creeley The new VIRTCHNL_VF_OFFLOAD_VLAN_V2 capability added support that allows the PF to set the location of the Tx and Rx VLAN tag for insertion and stripping offloads. In order to support this functionality a few changes are needed. 1. Add a new method to cache the VLAN tag location based on negotiated capabilities for the Tx and Rx ring flags. This needs to be called in the initialization and reset paths. 2. Refactor the transmit hotpath to account for the new Tx ring flags. When IAVF_TXR_FLAGS_VLAN_LOC_L2TAG2 is set, then the driver needs to insert the VLAN tag in the L2TAG2 field of the transmit descriptor. When the IAVF_TXRX_FLAGS_VLAN_LOC_L2TAG1 is set, then the driver needs to use the l2tag1 field of the data descriptor (same behavior as before). 3. Refactor the iavf_tx_prepare_vlan_flags() function to simplify transmit hardware VLAN offload functionality by only depending on the skb_vlan_tag_present() function. This can be done because the OS won't request transmit offload for a VLAN unless the driver told the OS it's supported and enabled. 4. Refactor the receive hotpath to account for the new Rx ring flags and VLAN ethertypes. This requires checking the Rx ring flags and descriptor status bits to determine the location of the VLAN tag. Also, since only a single ethertype can be supported at a time, check the enabled netdev features before specifying a VLAN ethertype in __vlan_hwaccel_put_tag(). 
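
As a quick orientation before the diff, the Rx half of item 4 boils down to the following shape, condensed (not verbatim) from the iavf_clean_rx_irq() and iavf_receive_skb() hunks below: the ring flag cached at init time decides where the stripped tag is read from, and the enabled netdev feature decides which TPID is reported to the stack.

/* variables (qword, rx_desc, rx_ring, skb) as in iavf_clean_rx_irq() */
u16 vlan_tag = 0;

/* tag location was cached in the ring flags from the negotiated caps */
if (qword & BIT(IAVF_RX_DESC_STATUS_L2TAG1P_SHIFT) &&
    rx_ring->flags & IAVF_TXRX_FLAGS_VLAN_TAG_LOC_L2TAG1)
	vlan_tag = le16_to_cpu(rx_desc->wb.qword0.lo_dword.l2tag1);
if (rx_desc->wb.qword2.ext_status &
    cpu_to_le16(BIT(IAVF_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT)) &&
    rx_ring->flags & IAVF_RXR_FLAGS_VLAN_TAG_LOC_L2TAG2_2)
	vlan_tag = le16_to_cpu(rx_desc->wb.qword2.l2tag2_2);

/* report the tag with the TPID matching the enabled feature,
 * CTAG checked first as in the hunk below
 */
if ((rx_ring->netdev->features & NETIF_F_HW_VLAN_CTAG_RX) &&
    (vlan_tag & VLAN_VID_MASK))
	__vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), vlan_tag);
else if ((rx_ring->netdev->features & NETIF_F_HW_VLAN_STAG_RX) &&
	 (vlan_tag & VLAN_VID_MASK))
	__vlan_hwaccel_put_tag(skb, htons(ETH_P_8021AD), vlan_tag);
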
Signed-off-by: Brett Creeley --- drivers/net/ethernet/intel/iavf/iavf.h | 1 + drivers/net/ethernet/intel/iavf/iavf_main.c | 82 +++++++++++++++++++ drivers/net/ethernet/intel/iavf/iavf_txrx.c | 71 ++++++++-------- drivers/net/ethernet/intel/iavf/iavf_txrx.h | 30 ++++--- .../net/ethernet/intel/iavf/iavf_virtchnl.c | 2 + 5 files changed, 135 insertions(+), 51 deletions(-) diff --git a/drivers/net/ethernet/intel/iavf/iavf.h b/drivers/net/ethernet/intel/iavf/iavf.h index 5fb6ebf9a760..2660d46da1b5 100644 --- a/drivers/net/ethernet/intel/iavf/iavf.h +++ b/drivers/net/ethernet/intel/iavf/iavf.h @@ -488,6 +488,7 @@ int iavf_send_vf_config_msg(struct iavf_adapter *adapter); int iavf_get_vf_config(struct iavf_adapter *adapter); int iavf_get_vf_vlan_v2_caps(struct iavf_adapter *adapter); int iavf_send_vf_offload_vlan_v2_msg(struct iavf_adapter *adapter); +void iavf_set_queue_vlan_tag_loc(struct iavf_adapter *adapter); void iavf_irq_enable(struct iavf_adapter *adapter, bool flush); void iavf_configure_queues(struct iavf_adapter *adapter); void iavf_deconfigure_queues(struct iavf_adapter *adapter); diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c index 89d08b2f8a13..945369bbe04a 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_main.c +++ b/drivers/net/ethernet/intel/iavf/iavf_main.c @@ -1165,6 +1165,86 @@ static void iavf_free_queues(struct iavf_adapter *adapter) adapter->rx_rings = NULL; } +/** + * iavf_set_queue_vlan_tag_loc - set location for VLAN tag offload + * @adapter: board private structure + * + * Based on negotiated capabilities, the VLAN tag needs to be inserted and/or + * stripped in certain descriptor fields. Instead of checking the offload + * capability bits in the hot path, cache the location the ring specific + * flags. 
+ */ +void iavf_set_queue_vlan_tag_loc(struct iavf_adapter *adapter) +{ + int i; + + for (i = 0; i < adapter->num_active_queues; i++) { + struct iavf_ring *tx_ring = &adapter->tx_rings[i]; + struct iavf_ring *rx_ring = &adapter->rx_rings[i]; + + /* prevent multiple L2TAG bits being set after VFR */ + tx_ring->flags &= + ~(IAVF_TXRX_FLAGS_VLAN_TAG_LOC_L2TAG1 | + IAVF_TXR_FLAGS_VLAN_TAG_LOC_L2TAG2); + rx_ring->flags &= + ~(IAVF_TXRX_FLAGS_VLAN_TAG_LOC_L2TAG1 | + IAVF_RXR_FLAGS_VLAN_TAG_LOC_L2TAG2_2); + + if (VLAN_ALLOWED(adapter)) { + tx_ring->flags |= IAVF_TXRX_FLAGS_VLAN_TAG_LOC_L2TAG1; + rx_ring->flags |= IAVF_TXRX_FLAGS_VLAN_TAG_LOC_L2TAG1; + } else if (VLAN_V2_ALLOWED(adapter)) { + struct virtchnl_vlan_supported_caps *stripping_support; + struct virtchnl_vlan_supported_caps *insertion_support; + + stripping_support = + &adapter->vlan_v2_caps.offloads.stripping_support; + insertion_support = + &adapter->vlan_v2_caps.offloads.insertion_support; + + if (stripping_support->outer) { + if (stripping_support->outer & + VIRTCHNL_VLAN_TAG_LOCATION_L2TAG1) + rx_ring->flags |= + IAVF_TXRX_FLAGS_VLAN_TAG_LOC_L2TAG1; + else if (stripping_support->outer & + VIRTCHNL_VLAN_TAG_LOCATION_L2TAG2_2) + rx_ring->flags |= + IAVF_RXR_FLAGS_VLAN_TAG_LOC_L2TAG2_2; + } else if (stripping_support->inner) { + if (stripping_support->inner & + VIRTCHNL_VLAN_TAG_LOCATION_L2TAG1) + rx_ring->flags |= + IAVF_TXRX_FLAGS_VLAN_TAG_LOC_L2TAG1; + else if (stripping_support->inner & + VIRTCHNL_VLAN_TAG_LOCATION_L2TAG2_2) + rx_ring->flags |= + IAVF_RXR_FLAGS_VLAN_TAG_LOC_L2TAG2_2; + } + + if (insertion_support->outer) { + if (insertion_support->outer & + VIRTCHNL_VLAN_TAG_LOCATION_L2TAG1) + tx_ring->flags |= + IAVF_TXRX_FLAGS_VLAN_TAG_LOC_L2TAG1; + else if (insertion_support->outer & + VIRTCHNL_VLAN_TAG_LOCATION_L2TAG2) + tx_ring->flags |= + IAVF_TXR_FLAGS_VLAN_TAG_LOC_L2TAG2; + } else if (insertion_support->inner) { + if (insertion_support->inner & + VIRTCHNL_VLAN_TAG_LOCATION_L2TAG1) + tx_ring->flags |= + IAVF_TXRX_FLAGS_VLAN_TAG_LOC_L2TAG1; + else if (insertion_support->inner & + VIRTCHNL_VLAN_TAG_LOCATION_L2TAG2) + tx_ring->flags |= + IAVF_TXR_FLAGS_VLAN_TAG_LOC_L2TAG2; + } + } + } +} + /** * iavf_alloc_queues - Allocate memory for all rings * @adapter: board private structure to initialize @@ -1226,6 +1306,8 @@ static int iavf_alloc_queues(struct iavf_adapter *adapter) adapter->num_active_queues = num_active_queues; + iavf_set_queue_vlan_tag_loc(adapter); + return 0; err_out: diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c index 8f2376d17466..8cbe7ad1347c 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c +++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c @@ -865,6 +865,9 @@ static void iavf_receive_skb(struct iavf_ring *rx_ring, if ((rx_ring->netdev->features & NETIF_F_HW_VLAN_CTAG_RX) && (vlan_tag & VLAN_VID_MASK)) __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), vlan_tag); + else if ((rx_ring->netdev->features & NETIF_F_HW_VLAN_STAG_RX) && + vlan_tag & VLAN_VID_MASK) + __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021AD), vlan_tag); napi_gro_receive(&q_vector->napi, skb); } @@ -1468,7 +1471,7 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget) struct iavf_rx_buffer *rx_buffer; union iavf_rx_desc *rx_desc; unsigned int size; - u16 vlan_tag; + u16 vlan_tag = 0; u8 rx_ptype; u64 qword; @@ -1551,9 +1554,13 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget) /* populate checksum, VLAN, and protocol */ iavf_process_skb_fields(rx_ring, 
rx_desc, skb, rx_ptype); - - vlan_tag = (qword & BIT(IAVF_RX_DESC_STATUS_L2TAG1P_SHIFT)) ? - le16_to_cpu(rx_desc->wb.qword0.lo_dword.l2tag1) : 0; + if (qword & BIT(IAVF_RX_DESC_STATUS_L2TAG1P_SHIFT) && + rx_ring->flags & IAVF_TXRX_FLAGS_VLAN_TAG_LOC_L2TAG1) + vlan_tag = le16_to_cpu(rx_desc->wb.qword0.lo_dword.l2tag1); + if (rx_desc->wb.qword2.ext_status & + cpu_to_le16(BIT(IAVF_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT)) && + rx_ring->flags & IAVF_RXR_FLAGS_VLAN_TAG_LOC_L2TAG2_2) + vlan_tag = le16_to_cpu(rx_desc->wb.qword2.l2tag2_2); iavf_trace(clean_rx_irq_rx, rx_ring, rx_desc, skb); iavf_receive_skb(rx_ring, skb, vlan_tag); @@ -1781,46 +1788,29 @@ int iavf_napi_poll(struct napi_struct *napi, int budget) * Returns error code indicate the frame should be dropped upon error and the * otherwise returns 0 to indicate the flags has been set properly. **/ -static inline int iavf_tx_prepare_vlan_flags(struct sk_buff *skb, - struct iavf_ring *tx_ring, - u32 *flags) +static void iavf_tx_prepare_vlan_flags(struct sk_buff *skb, + struct iavf_ring *tx_ring, u32 *flags) { - __be16 protocol = skb->protocol; u32 tx_flags = 0; - if (protocol == htons(ETH_P_8021Q) && - !(tx_ring->netdev->features & NETIF_F_HW_VLAN_CTAG_TX)) { - /* When HW VLAN acceleration is turned off by the user the - * stack sets the protocol to 8021q so that the driver - * can take any steps required to support the SW only - * VLAN handling. In our case the driver doesn't need - * to take any further steps so just set the protocol - * to the encapsulated ethertype. - */ - skb->protocol = vlan_get_protocol(skb); - goto out; - } - /* if we have a HW VLAN tag being added, default to the HW one */ - if (skb_vlan_tag_present(skb)) { - tx_flags |= skb_vlan_tag_get(skb) << IAVF_TX_FLAGS_VLAN_SHIFT; - tx_flags |= IAVF_TX_FLAGS_HW_VLAN; - /* else if it is a SW VLAN, check the next protocol and store the tag */ - } else if (protocol == htons(ETH_P_8021Q)) { - struct vlan_hdr *vhdr, _vhdr; - - vhdr = skb_header_pointer(skb, ETH_HLEN, sizeof(_vhdr), &_vhdr); - if (!vhdr) - return -EINVAL; + /* stack will only request hardware VLAN insertion offload for protocols + * that the driver supports and has enabled + */ + if (!skb_vlan_tag_present(skb)) + return; - protocol = vhdr->h_vlan_encapsulated_proto; - tx_flags |= ntohs(vhdr->h_vlan_TCI) << IAVF_TX_FLAGS_VLAN_SHIFT; - tx_flags |= IAVF_TX_FLAGS_SW_VLAN; + tx_flags |= skb_vlan_tag_get(skb) << IAVF_TX_FLAGS_VLAN_SHIFT; + if (tx_ring->flags & IAVF_TXR_FLAGS_VLAN_TAG_LOC_L2TAG2) { + tx_flags |= IAVF_TX_FLAGS_HW_OUTER_SINGLE_VLAN; + } else if (tx_ring->flags & IAVF_TXRX_FLAGS_VLAN_TAG_LOC_L2TAG1) { + tx_flags |= IAVF_TX_FLAGS_HW_VLAN; + } else { + dev_dbg(tx_ring->dev, "Unsupported Tx VLAN tag location requested\n"); + return; } -out: *flags = tx_flags; - return 0; } /** @@ -2440,8 +2430,13 @@ static netdev_tx_t iavf_xmit_frame_ring(struct sk_buff *skb, first->gso_segs = 1; /* prepare the xmit flags */ - if (iavf_tx_prepare_vlan_flags(skb, tx_ring, &tx_flags)) - goto out_drop; + iavf_tx_prepare_vlan_flags(skb, tx_ring, &tx_flags); + if (tx_flags & IAVF_TX_FLAGS_HW_OUTER_SINGLE_VLAN) { + cd_type_cmd_tso_mss |= IAVF_TX_CTX_DESC_IL2TAG2 << + IAVF_TXD_CTX_QW1_CMD_SHIFT; + cd_l2tag2 = (tx_flags & IAVF_TX_FLAGS_VLAN_MASK) >> + IAVF_TX_FLAGS_VLAN_SHIFT; + } /* obtain protocol of skb */ protocol = vlan_get_protocol(skb); diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.h b/drivers/net/ethernet/intel/iavf/iavf_txrx.h index e5b9ba42dd00..2624bf6d009e 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_txrx.h +++ 
b/drivers/net/ethernet/intel/iavf/iavf_txrx.h @@ -243,19 +243,20 @@ static inline unsigned int iavf_txd_use_count(unsigned int size) #define DESC_NEEDED (MAX_SKB_FRAGS + 6) #define IAVF_MIN_DESC_PENDING 4 -#define IAVF_TX_FLAGS_HW_VLAN BIT(1) -#define IAVF_TX_FLAGS_SW_VLAN BIT(2) -#define IAVF_TX_FLAGS_TSO BIT(3) -#define IAVF_TX_FLAGS_IPV4 BIT(4) -#define IAVF_TX_FLAGS_IPV6 BIT(5) -#define IAVF_TX_FLAGS_FCCRC BIT(6) -#define IAVF_TX_FLAGS_FSO BIT(7) -#define IAVF_TX_FLAGS_FD_SB BIT(9) -#define IAVF_TX_FLAGS_VXLAN_TUNNEL BIT(10) -#define IAVF_TX_FLAGS_VLAN_MASK 0xffff0000 -#define IAVF_TX_FLAGS_VLAN_PRIO_MASK 0xe0000000 -#define IAVF_TX_FLAGS_VLAN_PRIO_SHIFT 29 -#define IAVF_TX_FLAGS_VLAN_SHIFT 16 +#define IAVF_TX_FLAGS_HW_VLAN BIT(1) +#define IAVF_TX_FLAGS_SW_VLAN BIT(2) +#define IAVF_TX_FLAGS_TSO BIT(3) +#define IAVF_TX_FLAGS_IPV4 BIT(4) +#define IAVF_TX_FLAGS_IPV6 BIT(5) +#define IAVF_TX_FLAGS_FCCRC BIT(6) +#define IAVF_TX_FLAGS_FSO BIT(7) +#define IAVF_TX_FLAGS_FD_SB BIT(9) +#define IAVF_TX_FLAGS_VXLAN_TUNNEL BIT(10) +#define IAVF_TX_FLAGS_HW_OUTER_SINGLE_VLAN BIT(11) +#define IAVF_TX_FLAGS_VLAN_MASK 0xffff0000 +#define IAVF_TX_FLAGS_VLAN_PRIO_MASK 0xe0000000 +#define IAVF_TX_FLAGS_VLAN_PRIO_SHIFT 29 +#define IAVF_TX_FLAGS_VLAN_SHIFT 16 struct iavf_tx_buffer { struct iavf_tx_desc *next_to_watch; @@ -362,6 +363,9 @@ struct iavf_ring { u16 flags; #define IAVF_TXR_FLAGS_WB_ON_ITR BIT(0) #define IAVF_RXR_FLAGS_BUILD_SKB_ENABLED BIT(1) +#define IAVF_TXRX_FLAGS_VLAN_TAG_LOC_L2TAG1 BIT(3) +#define IAVF_TXR_FLAGS_VLAN_TAG_LOC_L2TAG2 BIT(4) +#define IAVF_RXR_FLAGS_VLAN_TAG_LOC_L2TAG2_2 BIT(5) /* stats structs */ struct iavf_queue_stats stats; diff --git a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c index 2dc1c435223c..613fcc491fd7 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c +++ b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c @@ -1962,6 +1962,8 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter, dev_warn(&adapter->pdev->dev, "failed to acquire crit_lock in %s\n", __FUNCTION__); + iavf_set_queue_vlan_tag_loc(adapter); + } break; case VIRTCHNL_OP_ENABLE_QUEUES: -- 2.20.1 From anthony.l.nguyen at intel.com Tue Nov 30 00:16:01 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Mon, 29 Nov 2021 16:16:01 -0800 Subject: [Intel-wired-lan] [PATCH net-next v2 3/6] iavf: Add support VIRTCHNL_VF_OFFLOAD_VLAN_V2 during netdev config In-Reply-To: <20211130001604.22112-1-anthony.l.nguyen@intel.com> References: <20211130001604.22112-1-anthony.l.nguyen@intel.com> Message-ID: <20211130001604.22112-4-anthony.l.nguyen@intel.com> From: Brett Creeley Based on VIRTCHNL_VF_OFFLOAD_VLAN_V2, the VF can now support more VLAN capabilities (i.e. 802.1AD offloads and filtering). In order to communicate these capabilities to the netdev layer, the VF needs to parse its VLAN capabilities based on whether it was able to negotiation VIRTCHNL_VF_OFFLOAD_VLAN or VIRTCHNL_VF_OFFLOAD_VLAN_V2 or neither of these. In order to support this, add the following functionality: iavf_get_netdev_vlan_hw_features() - This is used to determine the VLAN features that the underlying hardware supports and that can be toggled off/on based on the negotiated capabiltiies. For example, if VIRTCHNL_VF_OFFLOAD_VLAN_V2 was negotiated, then any capability marked with VIRTCHNL_VLAN_TOGGLE can be toggled on/off by the VF. If VIRTCHNL_VF_OFFLOAD_VLAN was negotiated, then only VLAN insertion and/or stripping can be toggled on/off. 
iavf_get_netdev_vlan_features() - This is used to determine the VLAN features that the underlying hardware supports and that should be enabled by default. For example, if VIRTHCNL_VF_OFFLOAD_VLAN_V2 was negotiated, then any supported capability that has its ethertype_init filed set should be enabled by default. If VIRTCHNL_VF_OFFLOAD_VLAN was negotiated, then filtering, stripping, and insertion should be enabled by default. Also, refactor iavf_fix_features() to take into account the new capabilities. To do this, query all the supported features (enabled by default and toggleable) and make sure the requested change is supported. If VIRTCHNL_VF_OFFLOAD_VLAN_V2 is successfully negotiated, there is no need to check VIRTCHNL_VLAN_TOGGLE here because the driver already told the netdev layer which features can be toggled via netdev->hw_features during iavf_process_config(), so only those features will be requested to change. Signed-off-by: Brett Creeley --- drivers/net/ethernet/intel/iavf/iavf.h | 17 +- drivers/net/ethernet/intel/iavf/iavf_main.c | 279 ++++++++++++++++-- .../net/ethernet/intel/iavf/iavf_virtchnl.c | 251 +++++++++++----- 3 files changed, 453 insertions(+), 94 deletions(-) diff --git a/drivers/net/ethernet/intel/iavf/iavf.h b/drivers/net/ethernet/intel/iavf/iavf.h index edb139834437..5fb6ebf9a760 100644 --- a/drivers/net/ethernet/intel/iavf/iavf.h +++ b/drivers/net/ethernet/intel/iavf/iavf.h @@ -55,7 +55,8 @@ enum iavf_vsi_state_t { struct iavf_vsi { struct iavf_adapter *back; struct net_device *netdev; - unsigned long active_vlans[BITS_TO_LONGS(VLAN_N_VID)]; + unsigned long active_cvlans[BITS_TO_LONGS(VLAN_N_VID)]; + unsigned long active_svlans[BITS_TO_LONGS(VLAN_N_VID)]; u16 seid; u16 id; DECLARE_BITMAP(state, __IAVF_VSI_STATE_SIZE__); @@ -146,9 +147,15 @@ struct iavf_mac_filter { }; }; +#define IAVF_VLAN(vid, tpid) ((struct iavf_vlan){ vid, tpid }) +struct iavf_vlan { + u16 vid; + u16 tpid; +}; + struct iavf_vlan_filter { struct list_head list; - u16 vlan; + struct iavf_vlan vlan; bool remove; /* filter needs to be removed */ bool add; /* filter needs to be added */ }; @@ -354,6 +361,12 @@ struct iavf_adapter { VIRTCHNL_VF_OFFLOAD_VLAN) #define VLAN_V2_ALLOWED(_a) ((_a)->vf_res->vf_cap_flags & \ VIRTCHNL_VF_OFFLOAD_VLAN_V2) +#define VLAN_V2_FILTERING_ALLOWED(_a) \ + (VLAN_V2_ALLOWED((_a)) && \ + ((_a)->vlan_v2_caps.filtering.filtering_support.outer || \ + (_a)->vlan_v2_caps.filtering.filtering_support.inner)) +#define VLAN_FILTERING_ALLOWED(_a) \ + (VLAN_ALLOWED((_a)) || VLAN_V2_FILTERING_ALLOWED((_a))) #define ADV_LINK_SUPPORT(_a) ((_a)->vf_res->vf_cap_flags & \ VIRTCHNL_VF_CAP_ADV_LINK_SPEED) #define FDIR_FLTR_SUPPORT(_a) ((_a)->vf_res->vf_cap_flags & \ diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c index 714709a28ad8..89d08b2f8a13 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_main.c +++ b/drivers/net/ethernet/intel/iavf/iavf_main.c @@ -646,14 +646,17 @@ static void iavf_configure_rx(struct iavf_adapter *adapter) * mac_vlan_list_lock. 
**/ static struct -iavf_vlan_filter *iavf_find_vlan(struct iavf_adapter *adapter, u16 vlan) +iavf_vlan_filter *iavf_find_vlan(struct iavf_adapter *adapter, + struct iavf_vlan vlan) { struct iavf_vlan_filter *f; list_for_each_entry(f, &adapter->vlan_filter_list, list) { - if (vlan == f->vlan) + if (f->vlan.vid == vlan.vid && + f->vlan.tpid == vlan.tpid) return f; } + return NULL; } @@ -665,7 +668,8 @@ iavf_vlan_filter *iavf_find_vlan(struct iavf_adapter *adapter, u16 vlan) * Returns ptr to the filter object or NULL when no memory available. **/ static struct -iavf_vlan_filter *iavf_add_vlan(struct iavf_adapter *adapter, u16 vlan) +iavf_vlan_filter *iavf_add_vlan(struct iavf_adapter *adapter, + struct iavf_vlan vlan) { struct iavf_vlan_filter *f = NULL; @@ -694,7 +698,7 @@ iavf_vlan_filter *iavf_add_vlan(struct iavf_adapter *adapter, u16 vlan) * @adapter: board private structure * @vlan: VLAN tag **/ -static void iavf_del_vlan(struct iavf_adapter *adapter, u16 vlan) +static void iavf_del_vlan(struct iavf_adapter *adapter, struct iavf_vlan vlan) { struct iavf_vlan_filter *f; @@ -720,8 +724,11 @@ static void iavf_restore_filters(struct iavf_adapter *adapter) u16 vid; /* re-add all VLAN filters */ - for_each_set_bit(vid, adapter->vsi.active_vlans, VLAN_N_VID) - iavf_add_vlan(adapter, vid); + for_each_set_bit(vid, adapter->vsi.active_cvlans, VLAN_N_VID) + iavf_add_vlan(adapter, IAVF_VLAN(vid, ETH_P_8021Q)); + + for_each_set_bit(vid, adapter->vsi.active_svlans, VLAN_N_VID) + iavf_add_vlan(adapter, IAVF_VLAN(vid, ETH_P_8021AD)); } /** @@ -735,13 +742,17 @@ static int iavf_vlan_rx_add_vid(struct net_device *netdev, { struct iavf_adapter *adapter = netdev_priv(netdev); - if (!VLAN_ALLOWED(adapter)) + if (!VLAN_FILTERING_ALLOWED(adapter)) return -EIO; - if (iavf_add_vlan(adapter, vid) == NULL) + if (!iavf_add_vlan(adapter, IAVF_VLAN(vid, be16_to_cpu(proto)))) return -ENOMEM; - set_bit(vid, adapter->vsi.active_vlans); + if (proto == cpu_to_be16(ETH_P_8021Q)) + set_bit(vid, adapter->vsi.active_cvlans); + else + set_bit(vid, adapter->vsi.active_svlans); + return 0; } @@ -756,8 +767,11 @@ static int iavf_vlan_rx_kill_vid(struct net_device *netdev, { struct iavf_adapter *adapter = netdev_priv(netdev); - iavf_del_vlan(adapter, vid); - clear_bit(vid, adapter->vsi.active_vlans); + iavf_del_vlan(adapter, IAVF_VLAN(vid, be16_to_cpu(proto))); + if (proto == cpu_to_be16(ETH_P_8021Q)) + clear_bit(vid, adapter->vsi.active_cvlans); + else + clear_bit(vid, adapter->vsi.active_svlans); return 0; } @@ -3685,6 +3699,228 @@ static netdev_features_t iavf_features_check(struct sk_buff *skb, return features & ~(NETIF_F_CSUM_MASK | NETIF_F_GSO_MASK); } +/** + * iavf_get_netdev_vlan_hw_features - get NETDEV VLAN features that can toggle on/off + * @adapter: board private structure + * + * Depending on whether VIRTHCNL_VF_OFFLOAD_VLAN or VIRTCHNL_VF_OFFLOAD_VLAN_V2 + * were negotiated determine the VLAN features that can be toggled on and off. 
+ **/ +static netdev_features_t +iavf_get_netdev_vlan_hw_features(struct iavf_adapter *adapter) +{ + netdev_features_t hw_features = 0; + + if (!adapter->vf_res || !adapter->vf_res->vf_cap_flags) + return hw_features; + + /* Enable VLAN features if supported */ + if (VLAN_ALLOWED(adapter)) { + hw_features |= (NETIF_F_HW_VLAN_CTAG_TX | + NETIF_F_HW_VLAN_CTAG_RX); + } else if (VLAN_V2_ALLOWED(adapter)) { + struct virtchnl_vlan_caps *vlan_v2_caps = + &adapter->vlan_v2_caps; + struct virtchnl_vlan_supported_caps *stripping_support = + &vlan_v2_caps->offloads.stripping_support; + struct virtchnl_vlan_supported_caps *insertion_support = + &vlan_v2_caps->offloads.insertion_support; + + if (stripping_support->outer != VIRTCHNL_VLAN_UNSUPPORTED && + stripping_support->outer & VIRTCHNL_VLAN_TOGGLE) { + if (stripping_support->outer & + VIRTCHNL_VLAN_ETHERTYPE_8100) + hw_features |= NETIF_F_HW_VLAN_CTAG_RX; + if (stripping_support->outer & + VIRTCHNL_VLAN_ETHERTYPE_88A8) + hw_features |= NETIF_F_HW_VLAN_STAG_RX; + } else if (stripping_support->inner != + VIRTCHNL_VLAN_UNSUPPORTED && + stripping_support->inner & VIRTCHNL_VLAN_TOGGLE) { + if (stripping_support->inner & + VIRTCHNL_VLAN_ETHERTYPE_8100) + hw_features |= NETIF_F_HW_VLAN_CTAG_RX; + } + + if (insertion_support->outer != VIRTCHNL_VLAN_UNSUPPORTED && + insertion_support->outer & VIRTCHNL_VLAN_TOGGLE) { + if (insertion_support->outer & + VIRTCHNL_VLAN_ETHERTYPE_8100) + hw_features |= NETIF_F_HW_VLAN_CTAG_TX; + if (insertion_support->outer & + VIRTCHNL_VLAN_ETHERTYPE_88A8) + hw_features |= NETIF_F_HW_VLAN_STAG_TX; + } else if (insertion_support->inner && + insertion_support->inner & VIRTCHNL_VLAN_TOGGLE) { + if (insertion_support->inner & + VIRTCHNL_VLAN_ETHERTYPE_8100) + hw_features |= NETIF_F_HW_VLAN_CTAG_TX; + } + } + + return hw_features; +} + +/** + * iavf_get_netdev_vlan_features - get the enabled NETDEV VLAN fetures + * @adapter: board private structure + * + * Depending on whether VIRTHCNL_VF_OFFLOAD_VLAN or VIRTCHNL_VF_OFFLOAD_VLAN_V2 + * were negotiated determine the VLAN features that are enabled by default. 
+ **/ +static netdev_features_t +iavf_get_netdev_vlan_features(struct iavf_adapter *adapter) +{ + netdev_features_t features = 0; + + if (!adapter->vf_res || !adapter->vf_res->vf_cap_flags) + return features; + + if (VLAN_ALLOWED(adapter)) { + features |= NETIF_F_HW_VLAN_CTAG_FILTER | + NETIF_F_HW_VLAN_CTAG_RX | NETIF_F_HW_VLAN_CTAG_TX; + } else if (VLAN_V2_ALLOWED(adapter)) { + struct virtchnl_vlan_caps *vlan_v2_caps = + &adapter->vlan_v2_caps; + struct virtchnl_vlan_supported_caps *filtering_support = + &vlan_v2_caps->filtering.filtering_support; + struct virtchnl_vlan_supported_caps *stripping_support = + &vlan_v2_caps->offloads.stripping_support; + struct virtchnl_vlan_supported_caps *insertion_support = + &vlan_v2_caps->offloads.insertion_support; + u32 ethertype_init; + + /* give priority to outer stripping and don't support both outer + * and inner stripping + */ + ethertype_init = vlan_v2_caps->offloads.ethertype_init; + if (stripping_support->outer != VIRTCHNL_VLAN_UNSUPPORTED) { + if (stripping_support->outer & + VIRTCHNL_VLAN_ETHERTYPE_8100 && + ethertype_init & VIRTCHNL_VLAN_ETHERTYPE_8100) + features |= NETIF_F_HW_VLAN_CTAG_RX; + else if (stripping_support->outer & + VIRTCHNL_VLAN_ETHERTYPE_88A8 && + ethertype_init & VIRTCHNL_VLAN_ETHERTYPE_88A8) + features |= NETIF_F_HW_VLAN_STAG_RX; + } else if (stripping_support->inner != + VIRTCHNL_VLAN_UNSUPPORTED) { + if (stripping_support->inner & + VIRTCHNL_VLAN_ETHERTYPE_8100 && + ethertype_init & VIRTCHNL_VLAN_ETHERTYPE_8100) + features |= NETIF_F_HW_VLAN_CTAG_RX; + } + + /* give priority to outer insertion and don't support both outer + * and inner insertion + */ + if (insertion_support->outer != VIRTCHNL_VLAN_UNSUPPORTED) { + if (insertion_support->outer & + VIRTCHNL_VLAN_ETHERTYPE_8100 && + ethertype_init & VIRTCHNL_VLAN_ETHERTYPE_8100) + features |= NETIF_F_HW_VLAN_CTAG_TX; + else if (insertion_support->outer & + VIRTCHNL_VLAN_ETHERTYPE_88A8 && + ethertype_init & VIRTCHNL_VLAN_ETHERTYPE_88A8) + features |= NETIF_F_HW_VLAN_STAG_TX; + } else if (insertion_support->inner != + VIRTCHNL_VLAN_UNSUPPORTED) { + if (insertion_support->inner & + VIRTCHNL_VLAN_ETHERTYPE_8100 && + ethertype_init & VIRTCHNL_VLAN_ETHERTYPE_8100) + features |= NETIF_F_HW_VLAN_CTAG_TX; + } + + /* give priority to outer filtering and don't bother if both + * outer and inner filtering are enabled + */ + ethertype_init = vlan_v2_caps->filtering.ethertype_init; + if (filtering_support->outer != VIRTCHNL_VLAN_UNSUPPORTED) { + if (filtering_support->outer & + VIRTCHNL_VLAN_ETHERTYPE_8100 && + ethertype_init & VIRTCHNL_VLAN_ETHERTYPE_8100) + features |= NETIF_F_HW_VLAN_CTAG_FILTER; + if (filtering_support->outer & + VIRTCHNL_VLAN_ETHERTYPE_88A8 && + ethertype_init & VIRTCHNL_VLAN_ETHERTYPE_88A8) + features |= NETIF_F_HW_VLAN_STAG_FILTER; + } else if (filtering_support->inner != + VIRTCHNL_VLAN_UNSUPPORTED) { + if (filtering_support->inner & + VIRTCHNL_VLAN_ETHERTYPE_8100 && + ethertype_init & VIRTCHNL_VLAN_ETHERTYPE_8100) + features |= NETIF_F_HW_VLAN_CTAG_FILTER; + if (filtering_support->inner & + VIRTCHNL_VLAN_ETHERTYPE_88A8 && + ethertype_init & VIRTCHNL_VLAN_ETHERTYPE_88A8) + features |= NETIF_F_HW_VLAN_STAG_FILTER; + } + } + + return features; +} + +#define IAVF_NETDEV_VLAN_FEATURE_ALLOWED(requested, allowed, feature_bit) \ + (!(((requested) & (feature_bit)) && \ + !((allowed) & (feature_bit)))) + +/** + * iavf_fix_netdev_vlan_features - fix NETDEV VLAN features based on support + * @adapter: board private structure + * @requested_features: stack requested NETDEV 
features + **/ +static netdev_features_t +iavf_fix_netdev_vlan_features(struct iavf_adapter *adapter, + netdev_features_t requested_features) +{ + netdev_features_t allowed_features; + + allowed_features = iavf_get_netdev_vlan_hw_features(adapter) | + iavf_get_netdev_vlan_features(adapter); + + if (!IAVF_NETDEV_VLAN_FEATURE_ALLOWED(requested_features, + allowed_features, + NETIF_F_HW_VLAN_CTAG_TX)) + requested_features &= ~NETIF_F_HW_VLAN_CTAG_TX; + + if (!IAVF_NETDEV_VLAN_FEATURE_ALLOWED(requested_features, + allowed_features, + NETIF_F_HW_VLAN_CTAG_RX)) + requested_features &= ~NETIF_F_HW_VLAN_CTAG_RX; + + if (!IAVF_NETDEV_VLAN_FEATURE_ALLOWED(requested_features, + allowed_features, + NETIF_F_HW_VLAN_STAG_TX)) + requested_features &= ~NETIF_F_HW_VLAN_STAG_TX; + if (!IAVF_NETDEV_VLAN_FEATURE_ALLOWED(requested_features, + allowed_features, + NETIF_F_HW_VLAN_STAG_RX)) + requested_features &= ~NETIF_F_HW_VLAN_STAG_RX; + + if (!IAVF_NETDEV_VLAN_FEATURE_ALLOWED(requested_features, + allowed_features, + NETIF_F_HW_VLAN_CTAG_FILTER)) + requested_features &= ~NETIF_F_HW_VLAN_CTAG_FILTER; + + if (!IAVF_NETDEV_VLAN_FEATURE_ALLOWED(requested_features, + allowed_features, + NETIF_F_HW_VLAN_STAG_FILTER)) + requested_features &= ~NETIF_F_HW_VLAN_STAG_FILTER; + + if ((requested_features & + (NETIF_F_HW_VLAN_CTAG_RX | NETIF_F_HW_VLAN_CTAG_TX)) && + (requested_features & + (NETIF_F_HW_VLAN_STAG_RX | NETIF_F_HW_VLAN_STAG_TX)) && + adapter->vlan_v2_caps.offloads.ethertype_match == + VIRTCHNL_ETHERTYPE_STRIPPING_MATCHES_INSERTION) { + netdev_warn(adapter->netdev, "cannot support CTAG and STAG VLAN stripping and/or insertion simultaneously since CTAG and STAG offloads are mutually exclusive, clearing STAG offload settings\n"); + requested_features &= ~(NETIF_F_HW_VLAN_STAG_RX | + NETIF_F_HW_VLAN_STAG_TX); + } + + return requested_features; +} + /** * iavf_fix_features - fix up the netdev feature bits * @netdev: our net device @@ -3697,13 +3933,7 @@ static netdev_features_t iavf_fix_features(struct net_device *netdev, { struct iavf_adapter *adapter = netdev_priv(netdev); - if (adapter->vf_res && - !(adapter->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN)) - features &= ~(NETIF_F_HW_VLAN_CTAG_TX | - NETIF_F_HW_VLAN_CTAG_RX | - NETIF_F_HW_VLAN_CTAG_FILTER); - - return features; + return iavf_fix_netdev_vlan_features(adapter, features); } static const struct net_device_ops iavf_netdev_ops = { @@ -3755,6 +3985,7 @@ static int iavf_check_reset_complete(struct iavf_hw *hw) int iavf_process_config(struct iavf_adapter *adapter) { struct virtchnl_vf_resource *vfres = adapter->vf_res; + netdev_features_t hw_vlan_features, vlan_features; struct net_device *netdev = adapter->netdev; netdev_features_t hw_enc_features; netdev_features_t hw_features; @@ -3802,19 +4033,19 @@ int iavf_process_config(struct iavf_adapter *adapter) */ hw_features = hw_enc_features; - /* Enable VLAN features if supported */ - if (vfres->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN) - hw_features |= (NETIF_F_HW_VLAN_CTAG_TX | - NETIF_F_HW_VLAN_CTAG_RX); + /* get HW VLAN features that can be toggled */ + hw_vlan_features = iavf_get_netdev_vlan_hw_features(adapter); + /* Enable cloud filter if ADQ is supported */ if (vfres->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_ADQ) hw_features |= NETIF_F_HW_TC; if (vfres->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_USO) hw_features |= NETIF_F_GSO_UDP_L4; - netdev->hw_features |= hw_features; + netdev->hw_features |= hw_features | hw_vlan_features; + vlan_features = iavf_get_netdev_vlan_features(adapter); - netdev->features |= 
hw_features; + netdev->features |= hw_features | vlan_features; if (vfres->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN) netdev->features |= NETIF_F_HW_VLAN_CTAG_FILTER; diff --git a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c index 2ad426f13462..2dc1c435223c 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c +++ b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c @@ -642,7 +642,6 @@ static void iavf_mac_add_reject(struct iavf_adapter *adapter) **/ void iavf_add_vlans(struct iavf_adapter *adapter) { - struct virtchnl_vlan_filter_list *vvfl; int len, i = 0, count = 0; struct iavf_vlan_filter *f; bool more = false; @@ -660,48 +659,105 @@ void iavf_add_vlans(struct iavf_adapter *adapter) if (f->add) count++; } - if (!count || !VLAN_ALLOWED(adapter)) { + if (!count || !VLAN_FILTERING_ALLOWED(adapter)) { adapter->aq_required &= ~IAVF_FLAG_AQ_ADD_VLAN_FILTER; spin_unlock_bh(&adapter->mac_vlan_list_lock); return; } - adapter->current_op = VIRTCHNL_OP_ADD_VLAN; - len = sizeof(struct virtchnl_vlan_filter_list) + - (count * sizeof(u16)); - if (len > IAVF_MAX_AQ_BUF_SIZE) { - dev_warn(&adapter->pdev->dev, "Too many add VLAN changes in one request\n"); - count = (IAVF_MAX_AQ_BUF_SIZE - - sizeof(struct virtchnl_vlan_filter_list)) / - sizeof(u16); - len = sizeof(struct virtchnl_vlan_filter_list) + - (count * sizeof(u16)); - more = true; - } - vvfl = kzalloc(len, GFP_ATOMIC); - if (!vvfl) { + if (VLAN_ALLOWED(adapter)) { + struct virtchnl_vlan_filter_list *vvfl; + + adapter->current_op = VIRTCHNL_OP_ADD_VLAN; + + len = sizeof(*vvfl) + (count * sizeof(u16)); + if (len > IAVF_MAX_AQ_BUF_SIZE) { + dev_warn(&adapter->pdev->dev, "Too many add VLAN changes in one request\n"); + count = (IAVF_MAX_AQ_BUF_SIZE - sizeof(*vvfl)) / + sizeof(u16); + len = sizeof(*vvfl) + (count * sizeof(u16)); + more = true; + } + vvfl = kzalloc(len, GFP_ATOMIC); + if (!vvfl) { + spin_unlock_bh(&adapter->mac_vlan_list_lock); + return; + } + + vvfl->vsi_id = adapter->vsi_res->vsi_id; + vvfl->num_elements = count; + list_for_each_entry(f, &adapter->vlan_filter_list, list) { + if (f->add) { + vvfl->vlan_id[i] = f->vlan.vid; + i++; + f->add = false; + if (i == count) + break; + } + } + if (!more) + adapter->aq_required &= ~IAVF_FLAG_AQ_ADD_VLAN_FILTER; + spin_unlock_bh(&adapter->mac_vlan_list_lock); - return; - } - vvfl->vsi_id = adapter->vsi_res->vsi_id; - vvfl->num_elements = count; - list_for_each_entry(f, &adapter->vlan_filter_list, list) { - if (f->add) { - vvfl->vlan_id[i] = f->vlan; - i++; - f->add = false; - if (i == count) - break; + iavf_send_pf_msg(adapter, VIRTCHNL_OP_ADD_VLAN, (u8 *)vvfl, len); + kfree(vvfl); + } else { + struct virtchnl_vlan_filter_list_v2 *vvfl_v2; + + adapter->current_op = VIRTCHNL_OP_ADD_VLAN_V2; + + len = sizeof(*vvfl_v2) + ((count - 1) * + sizeof(struct virtchnl_vlan_filter)); + if (len > IAVF_MAX_AQ_BUF_SIZE) { + dev_warn(&adapter->pdev->dev, "Too many add VLAN changes in one request\n"); + count = (IAVF_MAX_AQ_BUF_SIZE - sizeof(*vvfl_v2)) / + sizeof(struct virtchnl_vlan_filter); + len = sizeof(*vvfl_v2) + + ((count - 1) * + sizeof(struct virtchnl_vlan_filter)); + more = true; } - } - if (!more) - adapter->aq_required &= ~IAVF_FLAG_AQ_ADD_VLAN_FILTER; - spin_unlock_bh(&adapter->mac_vlan_list_lock); + vvfl_v2 = kzalloc(len, GFP_ATOMIC); + if (!vvfl_v2) { + spin_unlock_bh(&adapter->mac_vlan_list_lock); + return; + } - iavf_send_pf_msg(adapter, VIRTCHNL_OP_ADD_VLAN, (u8 *)vvfl, len); - kfree(vvfl); + vvfl_v2->vport_id = adapter->vsi_res->vsi_id; + 
vvfl_v2->num_elements = count; + list_for_each_entry(f, &adapter->vlan_filter_list, list) { + if (f->add) { + struct virtchnl_vlan_supported_caps *filtering_support = + &adapter->vlan_v2_caps.filtering.filtering_support; + struct virtchnl_vlan *vlan; + + /* give priority over outer if it's enabled */ + if (filtering_support->outer) + vlan = &vvfl_v2->filters[i].outer; + else + vlan = &vvfl_v2->filters[i].inner; + + vlan->tci = f->vlan.vid; + vlan->tpid = f->vlan.tpid; + + i++; + f->add = false; + if (i == count) + break; + } + } + + if (!more) + adapter->aq_required &= ~IAVF_FLAG_AQ_ADD_VLAN_FILTER; + + spin_unlock_bh(&adapter->mac_vlan_list_lock); + + iavf_send_pf_msg(adapter, VIRTCHNL_OP_ADD_VLAN_V2, + (u8 *)vvfl_v2, len); + kfree(vvfl_v2); + } } /** @@ -712,7 +768,6 @@ void iavf_add_vlans(struct iavf_adapter *adapter) **/ void iavf_del_vlans(struct iavf_adapter *adapter) { - struct virtchnl_vlan_filter_list *vvfl; struct iavf_vlan_filter *f, *ftmp; int len, i = 0, count = 0; bool more = false; @@ -733,56 +788,116 @@ void iavf_del_vlans(struct iavf_adapter *adapter) * filters marked for removal to enable bailing out before * sending a virtchnl message */ - if (f->remove && !VLAN_ALLOWED(adapter)) { + if (f->remove && !VLAN_FILTERING_ALLOWED(adapter)) { list_del(&f->list); kfree(f); } else if (f->remove) { count++; } } - if (!count) { + if (!count || !VLAN_FILTERING_ALLOWED(adapter)) { adapter->aq_required &= ~IAVF_FLAG_AQ_DEL_VLAN_FILTER; spin_unlock_bh(&adapter->mac_vlan_list_lock); return; } - adapter->current_op = VIRTCHNL_OP_DEL_VLAN; - len = sizeof(struct virtchnl_vlan_filter_list) + - (count * sizeof(u16)); - if (len > IAVF_MAX_AQ_BUF_SIZE) { - dev_warn(&adapter->pdev->dev, "Too many delete VLAN changes in one request\n"); - count = (IAVF_MAX_AQ_BUF_SIZE - - sizeof(struct virtchnl_vlan_filter_list)) / - sizeof(u16); - len = sizeof(struct virtchnl_vlan_filter_list) + - (count * sizeof(u16)); - more = true; - } - vvfl = kzalloc(len, GFP_ATOMIC); - if (!vvfl) { + if (VLAN_ALLOWED(adapter)) { + struct virtchnl_vlan_filter_list *vvfl; + + adapter->current_op = VIRTCHNL_OP_DEL_VLAN; + + len = sizeof(*vvfl) + (count * sizeof(u16)); + if (len > IAVF_MAX_AQ_BUF_SIZE) { + dev_warn(&adapter->pdev->dev, "Too many delete VLAN changes in one request\n"); + count = (IAVF_MAX_AQ_BUF_SIZE - sizeof(*vvfl)) / + sizeof(u16); + len = sizeof(*vvfl) + (count * sizeof(u16)); + more = true; + } + vvfl = kzalloc(len, GFP_ATOMIC); + if (!vvfl) { + spin_unlock_bh(&adapter->mac_vlan_list_lock); + return; + } + + vvfl->vsi_id = adapter->vsi_res->vsi_id; + vvfl->num_elements = count; + list_for_each_entry_safe(f, ftmp, &adapter->vlan_filter_list, list) { + if (f->remove) { + vvfl->vlan_id[i] = f->vlan.vid; + i++; + list_del(&f->list); + kfree(f); + if (i == count) + break; + } + } + + if (!more) + adapter->aq_required &= ~IAVF_FLAG_AQ_DEL_VLAN_FILTER; + spin_unlock_bh(&adapter->mac_vlan_list_lock); - return; - } - vvfl->vsi_id = adapter->vsi_res->vsi_id; - vvfl->num_elements = count; - list_for_each_entry_safe(f, ftmp, &adapter->vlan_filter_list, list) { - if (f->remove) { - vvfl->vlan_id[i] = f->vlan; - i++; - list_del(&f->list); - kfree(f); - if (i == count) - break; + iavf_send_pf_msg(adapter, VIRTCHNL_OP_DEL_VLAN, (u8 *)vvfl, len); + kfree(vvfl); + } else { + struct virtchnl_vlan_filter_list_v2 *vvfl_v2; + + adapter->current_op = VIRTCHNL_OP_DEL_VLAN_V2; + + len = sizeof(*vvfl_v2) + + ((count - 1) * sizeof(struct virtchnl_vlan_filter)); + if (len > IAVF_MAX_AQ_BUF_SIZE) { + dev_warn(&adapter->pdev->dev, "Too 
many add VLAN changes in one request\n"); + count = (IAVF_MAX_AQ_BUF_SIZE - + sizeof(*vvfl_v2)) / + sizeof(struct virtchnl_vlan_filter); + len = sizeof(*vvfl_v2) + + ((count - 1) * + sizeof(struct virtchnl_vlan_filter)); + more = true; } - } - if (!more) - adapter->aq_required &= ~IAVF_FLAG_AQ_DEL_VLAN_FILTER; - spin_unlock_bh(&adapter->mac_vlan_list_lock); + vvfl_v2 = kzalloc(len, GFP_ATOMIC); + if (!vvfl_v2) { + spin_unlock_bh(&adapter->mac_vlan_list_lock); + return; + } + + vvfl_v2->vport_id = adapter->vsi_res->vsi_id; + vvfl_v2->num_elements = count; + list_for_each_entry_safe(f, ftmp, &adapter->vlan_filter_list, list) { + if (f->remove) { + struct virtchnl_vlan_supported_caps *filtering_support = + &adapter->vlan_v2_caps.filtering.filtering_support; + struct virtchnl_vlan *vlan; + + /* give priority over outer if it's enabled */ + if (filtering_support->outer) + vlan = &vvfl_v2->filters[i].outer; + else + vlan = &vvfl_v2->filters[i].inner; + + vlan->tci = f->vlan.vid; + vlan->tpid = f->vlan.tpid; + + list_del(&f->list); + kfree(f); + i++; + if (i == count) + break; + } + } - iavf_send_pf_msg(adapter, VIRTCHNL_OP_DEL_VLAN, (u8 *)vvfl, len); - kfree(vvfl); + if (!more) + adapter->aq_required &= ~IAVF_FLAG_AQ_DEL_VLAN_FILTER; + + spin_unlock_bh(&adapter->mac_vlan_list_lock); + + iavf_send_pf_msg(adapter, VIRTCHNL_OP_DEL_VLAN_V2, + (u8 *)vvfl_v2, len); + kfree(vvfl_v2); + } } /** -- 2.20.1 From anthony.l.nguyen at intel.com Tue Nov 30 00:16:03 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Mon, 29 Nov 2021 16:16:03 -0800 Subject: [Intel-wired-lan] [PATCH net-next v2 5/6] iavf: Add support for VIRTCHNL_VF_OFFLOAD_VLAN_V2 offload enable/disable In-Reply-To: <20211130001604.22112-1-anthony.l.nguyen@intel.com> References: <20211130001604.22112-1-anthony.l.nguyen@intel.com> Message-ID: <20211130001604.22112-6-anthony.l.nguyen@intel.com> From: Brett Creeley The new VIRTCHNL_VF_OFFLOAD_VLAN_V2 capability added support that allows the VF to support 802.1Q and 802.1ad VLAN insertion and stripping if successfully negotiated via VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS. Multiple changes were needed to support this new functionality. 1. Added new aq_required flags to support any kind of VLAN stripping and insertion offload requests via virtchnl. 2. Added the new method iavf_set_vlan_offload_features() that's used during VF initialization, VF reset, and iavf_set_features() to set the aq_required bits based on the current VLAN offload configuration of the VF's netdev. 3. Added virtchnl handling for VIRTCHNL_OP_ENABLE_STRIPPING_V2, VIRTCHNL_OP_DISABLE_STRIPPING_V2, VIRTCHNL_OP_ENABLE_INSERTION_V2, and VIRTCHNL_OP_ENABLE_INSERTION_V2. 
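Hint for testers: the offloads described above surface as the standard netdev VLAN features, so once the V2 capabilities have been negotiated (and the corresponding features appear in `ethtool -k DEV`) they can be toggled from user space. A minimal usage sketch, assuming the stock feature-name strings (names can vary slightly between ethtool versions, so check `ethtool -k DEV` for the exact spelling; DEV is the VF netdev):

    # 802.1Q (CTAG) stripping / insertion
    ethtool -K DEV rx-vlan-offload off
    ethtool -K DEV tx-vlan-offload on

    # 802.1ad (STAG) stripping / insertion, if advertised
    ethtool -K DEV rx-vlan-stag-hw-parse on
    ethtool -K DEV tx-vlan-stag-hw-insert on

Each toggle goes through iavf_set_features(), which hands the feature delta to iavf_set_vlan_offload_features() and kicks the watchdog so the matching VIRTCHNL_OP_*_V2 request is sent promptly rather than waiting for the next service-task pass.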
Signed-off-by: Brett Creeley --- drivers/net/ethernet/intel/iavf/iavf.h | 80 ++++--- drivers/net/ethernet/intel/iavf/iavf_main.c | 151 +++++++++++-- .../net/ethernet/intel/iavf/iavf_virtchnl.c | 203 ++++++++++++++++++ 3 files changed, 383 insertions(+), 51 deletions(-) diff --git a/drivers/net/ethernet/intel/iavf/iavf.h b/drivers/net/ethernet/intel/iavf/iavf.h index 2660d46da1b5..59806d1f7e79 100644 --- a/drivers/net/ethernet/intel/iavf/iavf.h +++ b/drivers/net/ethernet/intel/iavf/iavf.h @@ -287,39 +287,47 @@ struct iavf_adapter { /* duplicates for common code */ #define IAVF_FLAG_DCB_ENABLED 0 /* flags for admin queue service task */ - u32 aq_required; -#define IAVF_FLAG_AQ_ENABLE_QUEUES BIT(0) -#define IAVF_FLAG_AQ_DISABLE_QUEUES BIT(1) -#define IAVF_FLAG_AQ_ADD_MAC_FILTER BIT(2) -#define IAVF_FLAG_AQ_ADD_VLAN_FILTER BIT(3) -#define IAVF_FLAG_AQ_DEL_MAC_FILTER BIT(4) -#define IAVF_FLAG_AQ_DEL_VLAN_FILTER BIT(5) -#define IAVF_FLAG_AQ_CONFIGURE_QUEUES BIT(6) -#define IAVF_FLAG_AQ_MAP_VECTORS BIT(7) -#define IAVF_FLAG_AQ_HANDLE_RESET BIT(8) -#define IAVF_FLAG_AQ_CONFIGURE_RSS BIT(9) /* direct AQ config */ -#define IAVF_FLAG_AQ_GET_CONFIG BIT(10) + u64 aq_required; +#define IAVF_FLAG_AQ_ENABLE_QUEUES BIT_ULL(0) +#define IAVF_FLAG_AQ_DISABLE_QUEUES BIT_ULL(1) +#define IAVF_FLAG_AQ_ADD_MAC_FILTER BIT_ULL(2) +#define IAVF_FLAG_AQ_ADD_VLAN_FILTER BIT_ULL(3) +#define IAVF_FLAG_AQ_DEL_MAC_FILTER BIT_ULL(4) +#define IAVF_FLAG_AQ_DEL_VLAN_FILTER BIT_ULL(5) +#define IAVF_FLAG_AQ_CONFIGURE_QUEUES BIT_ULL(6) +#define IAVF_FLAG_AQ_MAP_VECTORS BIT_ULL(7) +#define IAVF_FLAG_AQ_HANDLE_RESET BIT_ULL(8) +#define IAVF_FLAG_AQ_CONFIGURE_RSS BIT_ULL(9) /* direct AQ config */ +#define IAVF_FLAG_AQ_GET_CONFIG BIT_ULL(10) /* Newer style, RSS done by the PF so we can ignore hardware vagaries. 
*/ -#define IAVF_FLAG_AQ_GET_HENA BIT(11) -#define IAVF_FLAG_AQ_SET_HENA BIT(12) -#define IAVF_FLAG_AQ_SET_RSS_KEY BIT(13) -#define IAVF_FLAG_AQ_SET_RSS_LUT BIT(14) -#define IAVF_FLAG_AQ_REQUEST_PROMISC BIT(15) -#define IAVF_FLAG_AQ_RELEASE_PROMISC BIT(16) -#define IAVF_FLAG_AQ_REQUEST_ALLMULTI BIT(17) -#define IAVF_FLAG_AQ_RELEASE_ALLMULTI BIT(18) -#define IAVF_FLAG_AQ_ENABLE_VLAN_STRIPPING BIT(19) -#define IAVF_FLAG_AQ_DISABLE_VLAN_STRIPPING BIT(20) -#define IAVF_FLAG_AQ_ENABLE_CHANNELS BIT(21) -#define IAVF_FLAG_AQ_DISABLE_CHANNELS BIT(22) -#define IAVF_FLAG_AQ_ADD_CLOUD_FILTER BIT(23) -#define IAVF_FLAG_AQ_DEL_CLOUD_FILTER BIT(24) -#define IAVF_FLAG_AQ_ADD_FDIR_FILTER BIT(25) -#define IAVF_FLAG_AQ_DEL_FDIR_FILTER BIT(26) -#define IAVF_FLAG_AQ_ADD_ADV_RSS_CFG BIT(27) -#define IAVF_FLAG_AQ_DEL_ADV_RSS_CFG BIT(28) -#define IAVF_FLAG_AQ_REQUEST_STATS BIT(29) -#define IAVF_FLAG_AQ_GET_OFFLOAD_VLAN_V2_CAPS BIT(30) +#define IAVF_FLAG_AQ_GET_HENA BIT_ULL(11) +#define IAVF_FLAG_AQ_SET_HENA BIT_ULL(12) +#define IAVF_FLAG_AQ_SET_RSS_KEY BIT_ULL(13) +#define IAVF_FLAG_AQ_SET_RSS_LUT BIT_ULL(14) +#define IAVF_FLAG_AQ_REQUEST_PROMISC BIT_ULL(15) +#define IAVF_FLAG_AQ_RELEASE_PROMISC BIT_ULL(16) +#define IAVF_FLAG_AQ_REQUEST_ALLMULTI BIT_ULL(17) +#define IAVF_FLAG_AQ_RELEASE_ALLMULTI BIT_ULL(18) +#define IAVF_FLAG_AQ_ENABLE_VLAN_STRIPPING BIT_ULL(19) +#define IAVF_FLAG_AQ_DISABLE_VLAN_STRIPPING BIT_ULL(20) +#define IAVF_FLAG_AQ_ENABLE_CHANNELS BIT_ULL(21) +#define IAVF_FLAG_AQ_DISABLE_CHANNELS BIT_ULL(22) +#define IAVF_FLAG_AQ_ADD_CLOUD_FILTER BIT_ULL(23) +#define IAVF_FLAG_AQ_DEL_CLOUD_FILTER BIT_ULL(24) +#define IAVF_FLAG_AQ_ADD_FDIR_FILTER BIT_ULL(25) +#define IAVF_FLAG_AQ_DEL_FDIR_FILTER BIT_ULL(26) +#define IAVF_FLAG_AQ_ADD_ADV_RSS_CFG BIT_ULL(27) +#define IAVF_FLAG_AQ_DEL_ADV_RSS_CFG BIT_ULL(28) +#define IAVF_FLAG_AQ_REQUEST_STATS BIT_ULL(29) +#define IAVF_FLAG_AQ_GET_OFFLOAD_VLAN_V2_CAPS BIT_ULL(30) +#define IAVF_FLAG_AQ_ENABLE_CTAG_VLAN_STRIPPING BIT_ULL(31) +#define IAVF_FLAG_AQ_DISABLE_CTAG_VLAN_STRIPPING BIT_ULL(32) +#define IAVF_FLAG_AQ_ENABLE_STAG_VLAN_STRIPPING BIT_ULL(33) +#define IAVF_FLAG_AQ_DISABLE_STAG_VLAN_STRIPPING BIT_ULL(34) +#define IAVF_FLAG_AQ_ENABLE_CTAG_VLAN_INSERTION BIT_ULL(35) +#define IAVF_FLAG_AQ_DISABLE_CTAG_VLAN_INSERTION BIT_ULL(36) +#define IAVF_FLAG_AQ_ENABLE_STAG_VLAN_INSERTION BIT_ULL(37) +#define IAVF_FLAG_AQ_DISABLE_STAG_VLAN_INSERTION BIT_ULL(38) /* OS defined structs */ struct net_device *netdev; @@ -524,6 +532,14 @@ void iavf_enable_channels(struct iavf_adapter *adapter); void iavf_disable_channels(struct iavf_adapter *adapter); void iavf_add_cloud_filter(struct iavf_adapter *adapter); void iavf_del_cloud_filter(struct iavf_adapter *adapter); +void iavf_enable_vlan_stripping_v2(struct iavf_adapter *adapter, u16 tpid); +void iavf_disable_vlan_stripping_v2(struct iavf_adapter *adapter, u16 tpid); +void iavf_enable_vlan_insertion_v2(struct iavf_adapter *adapter, u16 tpid); +void iavf_disable_vlan_insertion_v2(struct iavf_adapter *adapter, u16 tpid); +void +iavf_set_vlan_offload_features(struct iavf_adapter *adapter, + netdev_features_t prev_features, + netdev_features_t features); void iavf_add_fdir_filter(struct iavf_adapter *adapter); void iavf_del_fdir_filter(struct iavf_adapter *adapter); void iavf_add_adv_rss_cfg(struct iavf_adapter *adapter); diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c index 945369bbe04a..8bdadf6a3c0c 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_main.c +++ 
b/drivers/net/ethernet/intel/iavf/iavf_main.c @@ -1815,6 +1815,39 @@ static int iavf_process_aq_command(struct iavf_adapter *adapter) iavf_del_adv_rss_cfg(adapter); return 0; } + if (adapter->aq_required & IAVF_FLAG_AQ_DISABLE_CTAG_VLAN_STRIPPING) { + iavf_disable_vlan_stripping_v2(adapter, ETH_P_8021Q); + return 0; + } + if (adapter->aq_required & IAVF_FLAG_AQ_DISABLE_STAG_VLAN_STRIPPING) { + iavf_disable_vlan_stripping_v2(adapter, ETH_P_8021AD); + return 0; + } + if (adapter->aq_required & IAVF_FLAG_AQ_ENABLE_CTAG_VLAN_STRIPPING) { + iavf_enable_vlan_stripping_v2(adapter, ETH_P_8021Q); + return 0; + } + if (adapter->aq_required & IAVF_FLAG_AQ_ENABLE_STAG_VLAN_STRIPPING) { + iavf_enable_vlan_stripping_v2(adapter, ETH_P_8021AD); + return 0; + } + if (adapter->aq_required & IAVF_FLAG_AQ_DISABLE_CTAG_VLAN_INSERTION) { + iavf_disable_vlan_insertion_v2(adapter, ETH_P_8021Q); + return 0; + } + if (adapter->aq_required & IAVF_FLAG_AQ_DISABLE_STAG_VLAN_INSERTION) { + iavf_disable_vlan_insertion_v2(adapter, ETH_P_8021AD); + return 0; + } + if (adapter->aq_required & IAVF_FLAG_AQ_ENABLE_CTAG_VLAN_INSERTION) { + iavf_enable_vlan_insertion_v2(adapter, ETH_P_8021Q); + return 0; + } + if (adapter->aq_required & IAVF_FLAG_AQ_ENABLE_STAG_VLAN_INSERTION) { + iavf_enable_vlan_insertion_v2(adapter, ETH_P_8021AD); + return 0; + } + if (adapter->aq_required & IAVF_FLAG_AQ_REQUEST_STATS) { iavf_request_stats(adapter); return 0; @@ -1823,6 +1856,91 @@ static int iavf_process_aq_command(struct iavf_adapter *adapter) return -EAGAIN; } +/** + * iavf_set_vlan_offload_features - set VLAN offload configuration + * @adapter: board private structure + * @prev_features: previous features used for comparison + * @features: updated features used for configuration + * + * Set the aq_required bit(s) based on the requested features passed in to + * configure VLAN stripping and/or VLAN insertion if supported. Also, schedule + * the watchdog if any changes are requested to expedite the request via + * virtchnl. + **/ +void +iavf_set_vlan_offload_features(struct iavf_adapter *adapter, + netdev_features_t prev_features, + netdev_features_t features) +{ + bool enable_stripping = true, enable_insertion = true; + u16 vlan_ethertype = 0; + u64 aq_required = 0; + + /* keep cases separate because one ethertype for offloads can be + * disabled at the same time as another is disabled, so check for an + * enabled ethertype first, then check for disabled. Default to + * ETH_P_8021Q so an ethertype is specified if disabling insertion and + * stripping. + */ + if (features & (NETIF_F_HW_VLAN_STAG_RX | NETIF_F_HW_VLAN_STAG_TX)) + vlan_ethertype = ETH_P_8021AD; + else if (features & (NETIF_F_HW_VLAN_CTAG_RX | NETIF_F_HW_VLAN_CTAG_TX)) + vlan_ethertype = ETH_P_8021Q; + else if (prev_features & (NETIF_F_HW_VLAN_STAG_RX | NETIF_F_HW_VLAN_STAG_TX)) + vlan_ethertype = ETH_P_8021AD; + else if (prev_features & (NETIF_F_HW_VLAN_CTAG_RX | NETIF_F_HW_VLAN_CTAG_TX)) + vlan_ethertype = ETH_P_8021Q; + else + vlan_ethertype = ETH_P_8021Q; + + if (!(features & (NETIF_F_HW_VLAN_STAG_RX | NETIF_F_HW_VLAN_CTAG_RX))) + enable_stripping = false; + if (!(features & (NETIF_F_HW_VLAN_STAG_TX | NETIF_F_HW_VLAN_CTAG_TX))) + enable_insertion = false; + + if (VLAN_ALLOWED(adapter)) { + /* VIRTCHNL_VF_OFFLOAD_VLAN only has support for toggling VLAN + * stripping via virtchnl. 
VLAN insertion can be toggled on the + * netdev, but it doesn't require a virtchnl message + */ + if (enable_stripping) + aq_required |= IAVF_FLAG_AQ_ENABLE_VLAN_STRIPPING; + else + aq_required |= IAVF_FLAG_AQ_DISABLE_VLAN_STRIPPING; + + } else if (VLAN_V2_ALLOWED(adapter)) { + switch (vlan_ethertype) { + case ETH_P_8021Q: + if (enable_stripping) + aq_required |= IAVF_FLAG_AQ_ENABLE_CTAG_VLAN_STRIPPING; + else + aq_required |= IAVF_FLAG_AQ_DISABLE_CTAG_VLAN_STRIPPING; + + if (enable_insertion) + aq_required |= IAVF_FLAG_AQ_ENABLE_CTAG_VLAN_INSERTION; + else + aq_required |= IAVF_FLAG_AQ_DISABLE_CTAG_VLAN_INSERTION; + break; + case ETH_P_8021AD: + if (enable_stripping) + aq_required |= IAVF_FLAG_AQ_ENABLE_STAG_VLAN_STRIPPING; + else + aq_required |= IAVF_FLAG_AQ_DISABLE_STAG_VLAN_STRIPPING; + + if (enable_insertion) + aq_required |= IAVF_FLAG_AQ_ENABLE_STAG_VLAN_INSERTION; + else + aq_required |= IAVF_FLAG_AQ_DISABLE_STAG_VLAN_INSERTION; + break; + } + } + + if (aq_required) { + adapter->aq_required |= aq_required; + mod_delayed_work(iavf_wq, &adapter->watchdog_task, 0); + } +} + /** * iavf_startup - first step of driver startup * @adapter: board private structure @@ -2179,6 +2297,10 @@ static void iavf_init_config_adapter(struct iavf_adapter *adapter) else iavf_init_rss(adapter); + if (VLAN_V2_ALLOWED(adapter)) + /* request initial VLAN offload settings */ + iavf_set_vlan_offload_features(adapter, 0, netdev->features); + return; err_mem: iavf_free_rss(adapter); @@ -3689,6 +3811,11 @@ static int iavf_change_mtu(struct net_device *netdev, int new_mtu) return 0; } +#define NETIF_VLAN_OFFLOAD_FEATURES (NETIF_F_HW_VLAN_CTAG_RX | \ + NETIF_F_HW_VLAN_CTAG_TX | \ + NETIF_F_HW_VLAN_STAG_RX | \ + NETIF_F_HW_VLAN_STAG_TX) + /** * iavf_set_features - set the netdev feature flags * @netdev: ptr to the netdev being adjusted @@ -3700,25 +3827,11 @@ static int iavf_set_features(struct net_device *netdev, { struct iavf_adapter *adapter = netdev_priv(netdev); - /* Don't allow enabling VLAN features when adapter is not capable - * of VLAN offload/filtering - */ - if (!VLAN_ALLOWED(adapter)) { - netdev->hw_features &= ~(NETIF_F_HW_VLAN_CTAG_RX | - NETIF_F_HW_VLAN_CTAG_TX | - NETIF_F_HW_VLAN_CTAG_FILTER); - if (features & (NETIF_F_HW_VLAN_CTAG_RX | - NETIF_F_HW_VLAN_CTAG_TX | - NETIF_F_HW_VLAN_CTAG_FILTER)) - return -EINVAL; - } else if ((netdev->features ^ features) & NETIF_F_HW_VLAN_CTAG_RX) { - if (features & NETIF_F_HW_VLAN_CTAG_RX) - adapter->aq_required |= - IAVF_FLAG_AQ_ENABLE_VLAN_STRIPPING; - else - adapter->aq_required |= - IAVF_FLAG_AQ_DISABLE_VLAN_STRIPPING; - } + /* trigger update on any VLAN feature change */ + if ((netdev->features & NETIF_VLAN_OFFLOAD_FEATURES) ^ + (features & NETIF_VLAN_OFFLOAD_FEATURES)) + iavf_set_vlan_offload_features(adapter, netdev->features, + features); return 0; } diff --git a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c index 613fcc491fd7..1fe6ab40409a 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c +++ b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c @@ -1122,6 +1122,204 @@ void iavf_disable_vlan_stripping(struct iavf_adapter *adapter) iavf_send_pf_msg(adapter, VIRTCHNL_OP_DISABLE_VLAN_STRIPPING, NULL, 0); } +/** + * iavf_tpid_to_vc_ethertype - transform from VLAN TPID to virtchnl ethertype + * @tpid: VLAN TPID (i.e. 0x8100, 0x88a8, etc.) 
+ */ +static u32 iavf_tpid_to_vc_ethertype(u16 tpid) +{ + switch (tpid) { + case ETH_P_8021Q: + return VIRTCHNL_VLAN_ETHERTYPE_8100; + case ETH_P_8021AD: + return VIRTCHNL_VLAN_ETHERTYPE_88A8; + } + + return 0; +} + +/** + * iavf_set_vc_offload_ethertype - set virtchnl ethertype for offload message + * @adapter: adapter structure + * @msg: message structure used for updating offloads over virtchnl to update + * @tpid: VLAN TPID (i.e. 0x8100, 0x88a8, etc.) + * @offload_op: opcode used to determine which support structure to check + */ +static int +iavf_set_vc_offload_ethertype(struct iavf_adapter *adapter, + struct virtchnl_vlan_setting *msg, u16 tpid, + enum virtchnl_ops offload_op) +{ + struct virtchnl_vlan_supported_caps *offload_support; + u16 vc_ethertype = iavf_tpid_to_vc_ethertype(tpid); + + /* reference the correct offload support structure */ + switch (offload_op) { + case VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2: + case VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2: + offload_support = + &adapter->vlan_v2_caps.offloads.stripping_support; + break; + case VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2: + case VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2: + offload_support = + &adapter->vlan_v2_caps.offloads.insertion_support; + break; + default: + dev_err(&adapter->pdev->dev, "Invalid opcode %d for setting virtchnl ethertype to enable/disable VLAN offloads\n", + offload_op); + return -EINVAL; + } + + /* make sure ethertype is supported */ + if (offload_support->outer & vc_ethertype && + offload_support->outer & VIRTCHNL_VLAN_TOGGLE) { + msg->outer_ethertype_setting = vc_ethertype; + } else if (offload_support->inner & vc_ethertype && + offload_support->inner & VIRTCHNL_VLAN_TOGGLE) { + msg->inner_ethertype_setting = vc_ethertype; + } else { + dev_dbg(&adapter->pdev->dev, "opcode %d unsupported for VLAN TPID 0x%04x\n", + offload_op, tpid); + return -EINVAL; + } + + return 0; +} + +/** + * iavf_clear_offload_v2_aq_required - clear AQ required bit for offload request + * @adapter: adapter structure + * @tpid: VLAN TPID + * @offload_op: opcode used to determine which AQ required bit to clear + */ +static void +iavf_clear_offload_v2_aq_required(struct iavf_adapter *adapter, u16 tpid, + enum virtchnl_ops offload_op) +{ + switch (offload_op) { + case VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2: + if (tpid == ETH_P_8021Q) + adapter->aq_required &= + ~IAVF_FLAG_AQ_ENABLE_CTAG_VLAN_STRIPPING; + else if (tpid == ETH_P_8021AD) + adapter->aq_required &= + ~IAVF_FLAG_AQ_ENABLE_STAG_VLAN_STRIPPING; + break; + case VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2: + if (tpid == ETH_P_8021Q) + adapter->aq_required &= + ~IAVF_FLAG_AQ_DISABLE_CTAG_VLAN_STRIPPING; + else if (tpid == ETH_P_8021AD) + adapter->aq_required &= + ~IAVF_FLAG_AQ_DISABLE_STAG_VLAN_STRIPPING; + break; + case VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2: + if (tpid == ETH_P_8021Q) + adapter->aq_required &= + ~IAVF_FLAG_AQ_ENABLE_CTAG_VLAN_INSERTION; + else if (tpid == ETH_P_8021AD) + adapter->aq_required &= + ~IAVF_FLAG_AQ_ENABLE_STAG_VLAN_INSERTION; + break; + case VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2: + if (tpid == ETH_P_8021Q) + adapter->aq_required &= + ~IAVF_FLAG_AQ_DISABLE_CTAG_VLAN_INSERTION; + else if (tpid == ETH_P_8021AD) + adapter->aq_required &= + ~IAVF_FLAG_AQ_DISABLE_STAG_VLAN_INSERTION; + break; + default: + dev_err(&adapter->pdev->dev, "Unsupported opcode %d specified for clearing aq_required bits for VIRTCHNL_VF_OFFLOAD_VLAN_V2 offload request\n", + offload_op); + } +} + +/** + * iavf_send_vlan_offload_v2 - send offload enable/disable over virtchnl + * @adapter: 
adapter structure + * @tpid: VLAN TPID used for the command (i.e. 0x8100 or 0x88a8) + * @offload_op: offload_op used to make the request over virtchnl + */ +static void +iavf_send_vlan_offload_v2(struct iavf_adapter *adapter, u16 tpid, + enum virtchnl_ops offload_op) +{ + struct virtchnl_vlan_setting *msg; + int len = sizeof(*msg); + + if (adapter->current_op != VIRTCHNL_OP_UNKNOWN) { + /* bail because we already have a command pending */ + dev_err(&adapter->pdev->dev, "Cannot send %d, command %d pending\n", + offload_op, adapter->current_op); + return; + } + + adapter->current_op = offload_op; + + msg = kzalloc(len, GFP_KERNEL); + if (!msg) + return; + + msg->vport_id = adapter->vsi_res->vsi_id; + + /* always clear to prevent unsupported and endless requests */ + iavf_clear_offload_v2_aq_required(adapter, tpid, offload_op); + + /* only send valid offload requests */ + if (!iavf_set_vc_offload_ethertype(adapter, msg, tpid, offload_op)) + iavf_send_pf_msg(adapter, offload_op, (u8 *)msg, len); + else + adapter->current_op = VIRTCHNL_OP_UNKNOWN; + + kfree(msg); +} + +/** + * iavf_enable_vlan_stripping_v2 - enable VLAN stripping + * @adapter: adapter structure + * @tpid: VLAN TPID used to enable VLAN stripping + */ +void iavf_enable_vlan_stripping_v2(struct iavf_adapter *adapter, u16 tpid) +{ + iavf_send_vlan_offload_v2(adapter, tpid, + VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2); +} + +/** + * iavf_disable_vlan_stripping_v2 - disable VLAN stripping + * @adapter: adapter structure + * @tpid: VLAN TPID used to disable VLAN stripping + */ +void iavf_disable_vlan_stripping_v2(struct iavf_adapter *adapter, u16 tpid) +{ + iavf_send_vlan_offload_v2(adapter, tpid, + VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2); +} + +/** + * iavf_enable_vlan_insertion_v2 - enable VLAN insertion + * @adapter: adapter structure + * @tpid: VLAN TPID used to enable VLAN insertion + */ +void iavf_enable_vlan_insertion_v2(struct iavf_adapter *adapter, u16 tpid) +{ + iavf_send_vlan_offload_v2(adapter, tpid, + VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2); +} + +/** + * iavf_disable_vlan_insertion_v2 - disable VLAN insertion + * @adapter: adapter structure + * @tpid: VLAN TPID used to disable VLAN insertion + */ +void iavf_disable_vlan_insertion_v2(struct iavf_adapter *adapter, u16 tpid) +{ + iavf_send_vlan_offload_v2(adapter, tpid, + VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2); +} + #define IAVF_MAX_SPEED_STRLEN 13 /** @@ -1962,6 +2160,11 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter, dev_warn(&adapter->pdev->dev, "failed to acquire crit_lock in %s\n", __FUNCTION__); + /* Request VLAN offload settings */ + if (VLAN_V2_ALLOWED(adapter)) + iavf_set_vlan_offload_features(adapter, 0, + netdev->features); + iavf_set_queue_vlan_tag_loc(adapter); } -- 2.20.1 From anthony.l.nguyen at intel.com Tue Nov 30 00:15:59 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Mon, 29 Nov 2021 16:15:59 -0800 Subject: [Intel-wired-lan] [PATCH net-next v2 1/6] virtchnl: Add support for new VLAN capabilities In-Reply-To: <20211130001604.22112-1-anthony.l.nguyen@intel.com> References: <20211130001604.22112-1-anthony.l.nguyen@intel.com> Message-ID: <20211130001604.22112-2-anthony.l.nguyen@intel.com> From: Brett Creeley Currently VIRTCHNL only allows for VLAN filtering and offloads to happen on a single 802.1Q VLAN. Add support to filter and offload on inner, outer, and/or inner + outer VLANs. This is done by introducing the new capability VIRTCHNL_VF_OFFLOAD_VLAN_V2. The flow to negotiate this new capability is shown below. 1. 
VF - sets the VIRTCHNL_VF_OFFLOAD_VLAN_V2 bit in the virtchnl_vf_resource.vf_caps_flags during the VIRTCHNL_OP_GET_VF_RESOURCES request message. The VF should also set the VIRTCHNL_VF_OFFLOAD_VLAN bit in case the PF driver doesn't support the new capability. 2. PF - sets the VLAN capability bit it supports in the VIRTCHNL_OP_GET_VF_RESOURCES response message. This will either be VIRTCHNL_VF_OFFLOAD_VLAN_V2, VIRTCHNL_VF_OFFLOAD_VLAN, or none. 3. VF - If the VIRTCHNL_VF_OFFLOAD_VLAN_V2 capability was ACK'd by the PF, then the VF needs to request the VLAN capabilities of the PF/Device by issuing a VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS request. If the VIRTCHNL_VF_OFFLOAD_VLAN capability was ACK'd then the VF knows only single 802.1Q VLAN filtering/offloads are supported. If no VLAN capability is ACK'd then the PF/Device doesn't support hardware VLAN filtering/offloads for this VF. 4. PF - Populates the virtchnl_vlan_caps structure based on what it allows/supports for that VF and sends that response via VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS. After VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS is successfully negotiated the VF driver needs to interpret the capabilities supported by the underlying PF/Device. The VF will be allowed to filter/offload the inner 802.1Q, outer (various ethertype), inner 802.1Q + outer (various ethertypes), or none based on which fields are set. The VF will also need to interpret where the VLAN tag should be inserted and/or stripped based on the negotiated capabilities. Signed-off-by: Brett Creeley --- include/linux/avf/virtchnl.h | 377 +++++++++++++++++++++++++++++++++++ 1 file changed, 377 insertions(+) diff --git a/include/linux/avf/virtchnl.h b/include/linux/avf/virtchnl.h index b30a1bc74fc7..2ce27e8e4f19 100644 --- a/include/linux/avf/virtchnl.h +++ b/include/linux/avf/virtchnl.h @@ -141,6 +141,13 @@ enum virtchnl_ops { VIRTCHNL_OP_DEL_RSS_CFG = 46, VIRTCHNL_OP_ADD_FDIR_FILTER = 47, VIRTCHNL_OP_DEL_FDIR_FILTER = 48, + VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS = 51, + VIRTCHNL_OP_ADD_VLAN_V2 = 52, + VIRTCHNL_OP_DEL_VLAN_V2 = 53, + VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 = 54, + VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2 = 55, + VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2 = 56, + VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2 = 57, VIRTCHNL_OP_MAX, }; @@ -246,6 +253,7 @@ VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_vsi_resource); #define VIRTCHNL_VF_OFFLOAD_REQ_QUEUES BIT(6) /* used to negotiate communicating link speeds in Mbps */ #define VIRTCHNL_VF_CAP_ADV_LINK_SPEED BIT(7) +#define VIRTCHNL_VF_OFFLOAD_VLAN_V2 BIT(15) #define VIRTCHNL_VF_OFFLOAD_VLAN BIT(16) #define VIRTCHNL_VF_OFFLOAD_RX_POLLING BIT(17) #define VIRTCHNL_VF_OFFLOAD_RSS_PCTYPE_V2 BIT(18) @@ -475,6 +483,351 @@ struct virtchnl_vlan_filter_list { VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_vlan_filter_list); +/* This enum is used for all of the VIRTCHNL_VF_OFFLOAD_VLAN_V2_CAPS related + * structures and opcodes. + * + * VIRTCHNL_VLAN_UNSUPPORTED - This field is not supported and if a VF driver + * populates it the PF should return VIRTCHNL_STATUS_ERR_NOT_SUPPORTED. + * + * VIRTCHNL_VLAN_ETHERTYPE_8100 - This field supports 0x8100 ethertype. + * VIRTCHNL_VLAN_ETHERTYPE_88A8 - This field supports 0x88A8 ethertype. + * VIRTCHNL_VLAN_ETHERTYPE_9100 - This field supports 0x9100 ethertype. + * + * VIRTCHNL_VLAN_ETHERTYPE_AND - Used when multiple ethertypes can be supported + * by the PF concurrently. 
For example, if the PF can support + * VIRTCHNL_VLAN_ETHERTYPE_8100 AND VIRTCHNL_VLAN_ETHERTYPE_88A8 filters it + * would OR the following bits: + * + * VIRTHCNL_VLAN_ETHERTYPE_8100 | + * VIRTCHNL_VLAN_ETHERTYPE_88A8 | + * VIRTCHNL_VLAN_ETHERTYPE_AND; + * + * The VF would interpret this as VLAN filtering can be supported on both 0x8100 + * and 0x88A8 VLAN ethertypes. + * + * VIRTCHNL_ETHERTYPE_XOR - Used when only a single ethertype can be supported + * by the PF concurrently. For example if the PF can support + * VIRTCHNL_VLAN_ETHERTYPE_8100 XOR VIRTCHNL_VLAN_ETHERTYPE_88A8 stripping + * offload it would OR the following bits: + * + * VIRTCHNL_VLAN_ETHERTYPE_8100 | + * VIRTCHNL_VLAN_ETHERTYPE_88A8 | + * VIRTCHNL_VLAN_ETHERTYPE_XOR; + * + * The VF would interpret this as VLAN stripping can be supported on either + * 0x8100 or 0x88a8 VLAN ethertypes. So when requesting VLAN stripping via + * VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 the specified ethertype will override + * the previously set value. + * + * VIRTCHNL_VLAN_TAG_LOCATION_L2TAG1 - Used to tell the VF to insert and/or + * strip the VLAN tag using the L2TAG1 field of the Tx/Rx descriptors. + * + * VIRTCHNL_VLAN_TAG_LOCATION_L2TAG2 - Used to tell the VF to insert hardware + * offloaded VLAN tags using the L2TAG2 field of the Tx descriptor. + * + * VIRTCHNL_VLAN_TAG_LOCATION_L2TAG2 - Used to tell the VF to strip hardware + * offloaded VLAN tags using the L2TAG2_2 field of the Rx descriptor. + * + * VIRTCHNL_VLAN_PRIO - This field supports VLAN priority bits. This is used for + * VLAN filtering if the underlying PF supports it. + * + * VIRTCHNL_VLAN_TOGGLE_ALLOWED - This field is used to say whether a + * certain VLAN capability can be toggled. For example if the underlying PF/CP + * allows the VF to toggle VLAN filtering, stripping, and/or insertion it should + * set this bit along with the supported ethertypes. + */ +enum virtchnl_vlan_support { + VIRTCHNL_VLAN_UNSUPPORTED = 0, + VIRTCHNL_VLAN_ETHERTYPE_8100 = BIT(0), + VIRTCHNL_VLAN_ETHERTYPE_88A8 = BIT(1), + VIRTCHNL_VLAN_ETHERTYPE_9100 = BIT(2), + VIRTCHNL_VLAN_TAG_LOCATION_L2TAG1 = BIT(8), + VIRTCHNL_VLAN_TAG_LOCATION_L2TAG2 = BIT(9), + VIRTCHNL_VLAN_TAG_LOCATION_L2TAG2_2 = BIT(10), + VIRTCHNL_VLAN_PRIO = BIT(24), + VIRTCHNL_VLAN_FILTER_MASK = BIT(28), + VIRTCHNL_VLAN_ETHERTYPE_AND = BIT(29), + VIRTCHNL_VLAN_ETHERTYPE_XOR = BIT(30), + VIRTCHNL_VLAN_TOGGLE = BIT(31), +}; + +/* This structure is used as part of the VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS + * for filtering, insertion, and stripping capabilities. + * + * If only outer capabilities are supported (for filtering, insertion, and/or + * stripping) then this refers to the outer most or single VLAN from the VF's + * perspective. + * + * If only inner capabilities are supported (for filtering, insertion, and/or + * stripping) then this refers to the outer most or single VLAN from the VF's + * perspective. Functionally this is the same as if only outer capabilities are + * supported. The VF driver is just forced to use the inner fields when + * adding/deleting filters and enabling/disabling offloads (if supported). + * + * If both outer and inner capabilities are supported (for filtering, insertion, + * and/or stripping) then outer refers to the outer most or single VLAN and + * inner refers to the second VLAN, if it exists, in the packet. + * + * There is no support for tunneled VLAN offloads, so outer or inner are never + * referring to a tunneled packet from the VF's perspective. 
+ */ +struct virtchnl_vlan_supported_caps { + u32 outer; + u32 inner; +}; + +/* The PF populates these fields based on the supported VLAN filtering. If a + * field is VIRTCHNL_VLAN_UNSUPPORTED then it's not supported and the PF will + * reject any VIRTCHNL_OP_ADD_VLAN_V2 or VIRTCHNL_OP_DEL_VLAN_V2 messages using + * the unsupported fields. + * + * Also, a VF is only allowed to toggle its VLAN filtering setting if the + * VIRTCHNL_VLAN_TOGGLE bit is set. + * + * The ethertype(s) specified in the ethertype_init field are the ethertypes + * enabled for VLAN filtering. VLAN filtering in this case refers to the outer + * most VLAN from the VF's perspective. If both inner and outer filtering are + * allowed then ethertype_init only refers to the outer most VLAN as only + * VLAN ethertype supported for inner VLAN filtering is + * VIRTCHNL_VLAN_ETHERTYPE_8100. By default, inner VLAN filtering is disabled + * when both inner and outer filtering are allowed. + * + * The max_filters field tells the VF how many VLAN filters it's allowed to have + * at any one time. If it exceeds this amount and tries to add another filter, + * then the request will be rejected by the PF. To prevent failures, the VF + * should keep track of how many VLAN filters it has added and not attempt to + * add more than max_filters. + */ +struct virtchnl_vlan_filtering_caps { + struct virtchnl_vlan_supported_caps filtering_support; + u32 ethertype_init; + u16 max_filters; + u8 pad[2]; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_vlan_filtering_caps); + +/* This enum is used for the virtchnl_vlan_offload_caps structure to specify + * if the PF supports a different ethertype for stripping and insertion. + * + * VIRTCHNL_ETHERTYPE_STRIPPING_MATCHES_INSERTION - The ethertype(s) specified + * for stripping affect the ethertype(s) specified for insertion and visa versa + * as well. If the VF tries to configure VLAN stripping via + * VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 with VIRTCHNL_VLAN_ETHERTYPE_8100 then + * that will be the ethertype for both stripping and insertion. + * + * VIRTCHNL_ETHERTYPE_MATCH_NOT_REQUIRED - The ethertype(s) specified for + * stripping do not affect the ethertype(s) specified for insertion and visa + * versa. + */ +enum virtchnl_vlan_ethertype_match { + VIRTCHNL_ETHERTYPE_STRIPPING_MATCHES_INSERTION = 0, + VIRTCHNL_ETHERTYPE_MATCH_NOT_REQUIRED = 1, +}; + +/* The PF populates these fields based on the supported VLAN offloads. If a + * field is VIRTCHNL_VLAN_UNSUPPORTED then it's not supported and the PF will + * reject any VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 or + * VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2 messages using the unsupported fields. + * + * Also, a VF is only allowed to toggle its VLAN offload setting if the + * VIRTCHNL_VLAN_TOGGLE_ALLOWED bit is set. + * + * The VF driver needs to be aware of how the tags are stripped by hardware and + * inserted by the VF driver based on the level of offload support. The PF will + * populate these fields based on where the VLAN tags are expected to be + * offloaded via the VIRTHCNL_VLAN_TAG_LOCATION_* bits. The VF will need to + * interpret these fields. See the definition of the + * VIRTCHNL_VLAN_TAG_LOCATION_* bits above the virtchnl_vlan_support + * enumeration. 
+ */ +struct virtchnl_vlan_offload_caps { + struct virtchnl_vlan_supported_caps stripping_support; + struct virtchnl_vlan_supported_caps insertion_support; + u32 ethertype_init; + u8 ethertype_match; + u8 pad[3]; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(24, virtchnl_vlan_offload_caps); + +/* VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS + * VF sends this message to determine its VLAN capabilities. + * + * PF will mark which capabilities it supports based on hardware support and + * current configuration. For example, if a port VLAN is configured the PF will + * not allow outer VLAN filtering, stripping, or insertion to be configured so + * it will block these features from the VF. + * + * The VF will need to cross reference its capabilities with the PFs + * capabilities in the response message from the PF to determine the VLAN + * support. + */ +struct virtchnl_vlan_caps { + struct virtchnl_vlan_filtering_caps filtering; + struct virtchnl_vlan_offload_caps offloads; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(40, virtchnl_vlan_caps); + +struct virtchnl_vlan { + u16 tci; /* tci[15:13] = PCP and tci[11:0] = VID */ + u16 tci_mask; /* only valid if VIRTCHNL_VLAN_FILTER_MASK set in + * filtering caps + */ + u16 tpid; /* 0x8100, 0x88a8, etc. and only type(s) set in + * filtering caps. Note that tpid here does not refer to + * VIRTCHNL_VLAN_ETHERTYPE_*, but it refers to the + * actual 2-byte VLAN TPID + */ + u8 pad[2]; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_vlan); + +struct virtchnl_vlan_filter { + struct virtchnl_vlan inner; + struct virtchnl_vlan outer; + u8 pad[16]; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(32, virtchnl_vlan_filter); + +/* VIRTCHNL_OP_ADD_VLAN_V2 + * VIRTCHNL_OP_DEL_VLAN_V2 + * + * VF sends these messages to add/del one or more VLAN tag filters for Rx + * traffic. + * + * The PF attempts to add the filters and returns status. + * + * The VF should only ever attempt to add/del virtchnl_vlan_filter(s) using the + * supported fields negotiated via VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS. + */ +struct virtchnl_vlan_filter_list_v2 { + u16 vport_id; + u16 num_elements; + u8 pad[4]; + struct virtchnl_vlan_filter filters[1]; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(40, virtchnl_vlan_filter_list_v2); + +/* VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 + * VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2 + * VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2 + * VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2 + * + * VF sends this message to enable or disable VLAN stripping or insertion. It + * also needs to specify an ethertype. The VF knows which VLAN ethertypes are + * allowed and whether or not it's allowed to enable/disable the specific + * offload via the VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS message. The VF needs to + * parse the virtchnl_vlan_caps.offloads fields to determine which offload + * messages are allowed. + * + * For example, if the PF populates the virtchnl_vlan_caps.offloads in the + * following manner the VF will be allowed to enable and/or disable 0x8100 inner + * VLAN insertion and/or stripping via the opcodes listed above. Inner in this + * case means the outer most or single VLAN from the VF's perspective. This is + * because no outer offloads are supported. See the comments above the + * virtchnl_vlan_supported_caps structure for more details. 
+ * + * virtchnl_vlan_caps.offloads.stripping_support.inner = + * VIRTCHNL_VLAN_TOGGLE | + * VIRTCHNL_VLAN_ETHERTYPE_8100; + * + * virtchnl_vlan_caps.offloads.insertion_support.inner = + * VIRTCHNL_VLAN_TOGGLE | + * VIRTCHNL_VLAN_ETHERTYPE_8100; + * + * In order to enable inner (again note that in this case inner is the outer + * most or single VLAN from the VF's perspective) VLAN stripping for 0x8100 + * VLANs, the VF would populate the virtchnl_vlan_setting structure in the + * following manner and send the VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 message. + * + * virtchnl_vlan_setting.inner_ethertype_setting = + * VIRTCHNL_VLAN_ETHERTYPE_8100; + * + * virtchnl_vlan_setting.vport_id = vport_id or vsi_id assigned to the VF on + * initialization. + * + * The reason that VLAN TPID(s) are not being used for the + * outer_ethertype_setting and inner_ethertype_setting fields is because it's + * possible a device could support VLAN insertion and/or stripping offload on + * multiple ethertypes concurrently, so this method allows a VF to request + * multiple ethertypes in one message using the virtchnl_vlan_support + * enumeration. + * + * For example, if the PF populates the virtchnl_vlan_caps.offloads in the + * following manner the VF will be allowed to enable 0x8100 and 0x88a8 outer + * VLAN insertion and stripping simultaneously. The + * virtchnl_vlan_caps.offloads.ethertype_match field will also have to be + * populated based on what the PF can support. + * + * virtchnl_vlan_caps.offloads.stripping_support.outer = + * VIRTCHNL_VLAN_TOGGLE | + * VIRTCHNL_VLAN_ETHERTYPE_8100 | + * VIRTCHNL_VLAN_ETHERTYPE_88A8 | + * VIRTCHNL_VLAN_ETHERTYPE_AND; + * + * virtchnl_vlan_caps.offloads.insertion_support.outer = + * VIRTCHNL_VLAN_TOGGLE | + * VIRTCHNL_VLAN_ETHERTYPE_8100 | + * VIRTCHNL_VLAN_ETHERTYPE_88A8 | + * VIRTCHNL_VLAN_ETHERTYPE_AND; + * + * In order to enable outer VLAN stripping for 0x8100 and 0x88a8 VLANs, the VF + * would populate the virthcnl_vlan_offload_structure in the following manner + * and send the VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 message. + * + * virtchnl_vlan_setting.outer_ethertype_setting = + * VIRTHCNL_VLAN_ETHERTYPE_8100 | + * VIRTHCNL_VLAN_ETHERTYPE_88A8; + * + * virtchnl_vlan_setting.vport_id = vport_id or vsi_id assigned to the VF on + * initialization. + * + * There is also the case where a PF and the underlying hardware can support + * VLAN offloads on multiple ethertypes, but not concurrently. For example, if + * the PF populates the virtchnl_vlan_caps.offloads in the following manner the + * VF will be allowed to enable and/or disable 0x8100 XOR 0x88a8 outer VLAN + * offloads. The ethertypes must match for stripping and insertion. + * + * virtchnl_vlan_caps.offloads.stripping_support.outer = + * VIRTCHNL_VLAN_TOGGLE | + * VIRTCHNL_VLAN_ETHERTYPE_8100 | + * VIRTCHNL_VLAN_ETHERTYPE_88A8 | + * VIRTCHNL_VLAN_ETHERTYPE_XOR; + * + * virtchnl_vlan_caps.offloads.insertion_support.outer = + * VIRTCHNL_VLAN_TOGGLE | + * VIRTCHNL_VLAN_ETHERTYPE_8100 | + * VIRTCHNL_VLAN_ETHERTYPE_88A8 | + * VIRTCHNL_VLAN_ETHERTYPE_XOR; + * + * virtchnl_vlan_caps.offloads.ethertype_match = + * VIRTCHNL_ETHERTYPE_STRIPPING_MATCHES_INSERTION; + * + * In order to enable outer VLAN stripping for 0x88a8 VLANs, the VF would + * populate the virtchnl_vlan_setting structure in the following manner and send + * the VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2. Also, this will change the + * ethertype for VLAN insertion if it's enabled. 
So, for completeness, a + * VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2 with the same ethertype should be sent. + * + * virtchnl_vlan_setting.outer_ethertype_setting = VIRTHCNL_VLAN_ETHERTYPE_88A8; + * + * virtchnl_vlan_setting.vport_id = vport_id or vsi_id assigned to the VF on + * initialization. + */ +struct virtchnl_vlan_setting { + u32 outer_ethertype_setting; + u32 inner_ethertype_setting; + u16 vport_id; + u8 pad[6]; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_vlan_setting); + /* VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE * VF sends VSI id and flags. * PF returns status code in retval. @@ -1156,6 +1509,30 @@ virtchnl_vc_validate_vf_msg(struct virtchnl_version_info *ver, u32 v_opcode, case VIRTCHNL_OP_DEL_FDIR_FILTER: valid_len = sizeof(struct virtchnl_fdir_del); break; + case VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS: + break; + case VIRTCHNL_OP_ADD_VLAN_V2: + case VIRTCHNL_OP_DEL_VLAN_V2: + valid_len = sizeof(struct virtchnl_vlan_filter_list_v2); + if (msglen >= valid_len) { + struct virtchnl_vlan_filter_list_v2 *vfl = + (struct virtchnl_vlan_filter_list_v2 *)msg; + + valid_len += (vfl->num_elements - 1) * + sizeof(struct virtchnl_vlan_filter); + + if (vfl->num_elements == 0) { + err_msg_format = true; + break; + } + } + break; + case VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2: + case VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2: + case VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2: + case VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2: + valid_len = sizeof(struct virtchnl_vlan_setting); + break; /* These are always errors coming from the VF. */ case VIRTCHNL_OP_EVENT: case VIRTCHNL_OP_UNKNOWN: -- 2.20.1 From anthony.l.nguyen at intel.com Tue Nov 30 00:16:00 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Mon, 29 Nov 2021 16:16:00 -0800 Subject: [Intel-wired-lan] [PATCH net-next v2 2/6] iavf: Add support for VIRTCHNL_VF_OFFLOAD_VLAN_V2 negotiation In-Reply-To: <20211130001604.22112-1-anthony.l.nguyen@intel.com> References: <20211130001604.22112-1-anthony.l.nguyen@intel.com> Message-ID: <20211130001604.22112-3-anthony.l.nguyen@intel.com> From: Brett Creeley In order to support the new VIRTCHNL_VF_OFFLOAD_VLAN_V2 capability the VF driver needs to rework it's initialization state machine and reset flow. This has to be done because successful negotiation of VIRTCHNL_VF_OFFLOAD_VLAN_V2 requires the VF driver to perform a second capability request via VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS before configuring the adapter and its netdev. Add the VIRTCHNL_VF_OFFLOAD_VLAN_V2 bit when sending the VIRTHCNL_OP_GET_VF_RESOURECES message. The underlying PF will either support VIRTCHNL_VF_OFFLOAD_VLAN or VIRTCHNL_VF_OFFLOAD_VLAN_V2 or neither. Both of these offloads should never be supported together. Based on this, add 2 new states to the initialization state machine: __IAVF_INIT_GET_OFFLOAD_VLAN_V2_CAPS __IAVF_INIT_CONFIG_ADAPTER The __IAVF_INIT_GET_OFFLOAD_VLAN_V2_CAPS state is used to request/store the new VLAN capabilities if and only if VIRTCHNL_VLAN_OFFLOAD_VLAN_V2 was successfully negotiated in the __IAVF_INIT_GET_RESOURCES state. The __IAVF_INIT_CONFIG_ADAPTER state is used to configure the adapter/netdev after the resource requests have finished. The VF will move into this state regardless of whether it successfully negotiated VIRTCHNL_VF_OFFLOAD_VLAN or VIRTCHNL_VF_OFFLOAD_VLAN_V2. Also, add a the new flag IAVF_FLAG_AQ_GET_OFFLOAD_VLAN_V2_CAPS and set it during VF reset. 
If VIRTCHNL_VF_OFFLOAD_VLAN_V2 was successfully negotiated then the VF will request its VLAN capabilities via VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS during the reset. This is needed because the PF may change/modify the VF's configuration during VF reset (i.e. modifying the VF's port VLAN configuration). This also, required the VF to call netdev_update_features() since its VLAN features may change during VF reset. Make sure to call this under rtnl_lock(). Signed-off-by: Brett Creeley --- drivers/net/ethernet/intel/iavf/iavf.h | 9 + drivers/net/ethernet/intel/iavf/iavf_main.c | 205 +++++++++++++----- .../net/ethernet/intel/iavf/iavf_virtchnl.c | 78 ++++++- 3 files changed, 240 insertions(+), 52 deletions(-) diff --git a/drivers/net/ethernet/intel/iavf/iavf.h b/drivers/net/ethernet/intel/iavf/iavf.h index b5728bdbcf33..edb139834437 100644 --- a/drivers/net/ethernet/intel/iavf/iavf.h +++ b/drivers/net/ethernet/intel/iavf/iavf.h @@ -181,6 +181,8 @@ enum iavf_state_t { __IAVF_REMOVE, /* driver is being unloaded */ __IAVF_INIT_VERSION_CHECK, /* aq msg sent, awaiting reply */ __IAVF_INIT_GET_RESOURCES, /* aq msg sent, awaiting reply */ + __IAVF_INIT_GET_OFFLOAD_VLAN_V2_CAPS, + __IAVF_INIT_CONFIG_ADAPTER, __IAVF_INIT_SW, /* got resources, setting up structs */ __IAVF_INIT_FAILED, /* init failed, restarting procedure */ __IAVF_RESETTING, /* in reset */ @@ -310,6 +312,7 @@ struct iavf_adapter { #define IAVF_FLAG_AQ_ADD_ADV_RSS_CFG BIT(27) #define IAVF_FLAG_AQ_DEL_ADV_RSS_CFG BIT(28) #define IAVF_FLAG_AQ_REQUEST_STATS BIT(29) +#define IAVF_FLAG_AQ_GET_OFFLOAD_VLAN_V2_CAPS BIT(30) /* OS defined structs */ struct net_device *netdev; @@ -349,6 +352,8 @@ struct iavf_adapter { VIRTCHNL_VF_OFFLOAD_RSS_PF))) #define VLAN_ALLOWED(_a) ((_a)->vf_res->vf_cap_flags & \ VIRTCHNL_VF_OFFLOAD_VLAN) +#define VLAN_V2_ALLOWED(_a) ((_a)->vf_res->vf_cap_flags & \ + VIRTCHNL_VF_OFFLOAD_VLAN_V2) #define ADV_LINK_SUPPORT(_a) ((_a)->vf_res->vf_cap_flags & \ VIRTCHNL_VF_CAP_ADV_LINK_SPEED) #define FDIR_FLTR_SUPPORT(_a) ((_a)->vf_res->vf_cap_flags & \ @@ -360,6 +365,7 @@ struct iavf_adapter { struct virtchnl_version_info pf_version; #define PF_IS_V11(_a) (((_a)->pf_version.major == 1) && \ ((_a)->pf_version.minor == 1)) + struct virtchnl_vlan_caps vlan_v2_caps; u16 msg_enable; struct iavf_eth_stats current_stats; struct iavf_vsi vsi; @@ -448,6 +454,7 @@ static inline void iavf_change_state(struct iavf_adapter *adapter, int iavf_up(struct iavf_adapter *adapter); void iavf_down(struct iavf_adapter *adapter); int iavf_process_config(struct iavf_adapter *adapter); +int iavf_parse_vf_resource_msg(struct iavf_adapter *adapter); void iavf_schedule_reset(struct iavf_adapter *adapter); void iavf_schedule_request_stats(struct iavf_adapter *adapter); void iavf_reset(struct iavf_adapter *adapter); @@ -466,6 +473,8 @@ int iavf_send_api_ver(struct iavf_adapter *adapter); int iavf_verify_api_ver(struct iavf_adapter *adapter); int iavf_send_vf_config_msg(struct iavf_adapter *adapter); int iavf_get_vf_config(struct iavf_adapter *adapter); +int iavf_get_vf_vlan_v2_caps(struct iavf_adapter *adapter); +int iavf_send_vf_offload_vlan_v2_msg(struct iavf_adapter *adapter); void iavf_irq_enable(struct iavf_adapter *adapter, bool flush); void iavf_configure_queues(struct iavf_adapter *adapter); void iavf_deconfigure_queues(struct iavf_adapter *adapter); diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c index f5ac2390d8ce..714709a28ad8 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_main.c +++ 
b/drivers/net/ethernet/intel/iavf/iavf_main.c @@ -1584,6 +1584,8 @@ static int iavf_process_aq_command(struct iavf_adapter *adapter) { if (adapter->aq_required & IAVF_FLAG_AQ_GET_CONFIG) return iavf_send_vf_config_msg(adapter); + if (adapter->aq_required & IAVF_FLAG_AQ_GET_OFFLOAD_VLAN_V2_CAPS) + return iavf_send_vf_offload_vlan_v2_msg(adapter); if (adapter->aq_required & IAVF_FLAG_AQ_DISABLE_QUEUES) { iavf_disable_queues(adapter); return 0; @@ -1826,6 +1828,59 @@ static void iavf_init_version_check(struct iavf_adapter *adapter) iavf_change_state(adapter, __IAVF_INIT_FAILED); } +/** + * iavf_parse_vf_resource_msg - parse response from VIRTCHNL_OP_GET_VF_RESOURCES + * @adapter: board private structure + */ +int iavf_parse_vf_resource_msg(struct iavf_adapter *adapter) +{ + int i, num_req_queues = adapter->num_req_queues; + struct iavf_vsi *vsi = &adapter->vsi; + + for (i = 0; i < adapter->vf_res->num_vsis; i++) { + if (adapter->vf_res->vsi_res[i].vsi_type == VIRTCHNL_VSI_SRIOV) + adapter->vsi_res = &adapter->vf_res->vsi_res[i]; + } + if (!adapter->vsi_res) { + dev_err(&adapter->pdev->dev, "No LAN VSI found\n"); + return -ENODEV; + } + + if (num_req_queues && + num_req_queues > adapter->vsi_res->num_queue_pairs) { + /* Problem. The PF gave us fewer queues than what we had + * negotiated in our request. Need a reset to see if we can't + * get back to a working state. + */ + dev_err(&adapter->pdev->dev, + "Requested %d queues, but PF only gave us %d.\n", + num_req_queues, + adapter->vsi_res->num_queue_pairs); + adapter->flags |= IAVF_FLAG_REINIT_ITR_NEEDED; + adapter->num_req_queues = adapter->vsi_res->num_queue_pairs; + iavf_schedule_reset(adapter); + + return -EAGAIN; + } + adapter->num_req_queues = 0; + adapter->vsi.id = adapter->vsi_res->vsi_id; + + adapter->vsi.back = adapter; + adapter->vsi.base_vector = 1; + adapter->vsi.work_limit = IAVF_DEFAULT_IRQ_WORK; + vsi->netdev = adapter->netdev; + vsi->qs_handle = adapter->vsi_res->qset_handle; + if (adapter->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF) { + adapter->rss_key_size = adapter->vf_res->rss_key_size; + adapter->rss_lut_size = adapter->vf_res->rss_lut_size; + } else { + adapter->rss_key_size = IAVF_HKEY_ARRAY_SIZE; + adapter->rss_lut_size = IAVF_HLUT_ARRAY_SIZE; + } + + return 0; +} + /** * iavf_init_get_resources - third step of driver startup * @adapter: board private structure @@ -1837,7 +1892,6 @@ static void iavf_init_version_check(struct iavf_adapter *adapter) **/ static void iavf_init_get_resources(struct iavf_adapter *adapter) { - struct net_device *netdev = adapter->netdev; struct pci_dev *pdev = adapter->pdev; struct iavf_hw *hw = &adapter->hw; int err; @@ -1855,7 +1909,7 @@ static void iavf_init_get_resources(struct iavf_adapter *adapter) err = iavf_get_vf_config(adapter); if (err == IAVF_ERR_ADMIN_QUEUE_NO_WORK) { err = iavf_send_vf_config_msg(adapter); - goto err; + goto err_alloc; } else if (err == IAVF_ERR_PARAM) { /* We only get ERR_PARAM if the device is in a very bad * state or if we've been disabled for previous bad @@ -1870,9 +1924,83 @@ static void iavf_init_get_resources(struct iavf_adapter *adapter) goto err_alloc; } - err = iavf_process_config(adapter); + err = iavf_parse_vf_resource_msg(adapter); if (err) goto err_alloc; + + err = iavf_send_vf_offload_vlan_v2_msg(adapter); + if (err == -EOPNOTSUPP) { + /* underlying PF doesn't support VIRTCHNL_VF_OFFLOAD_VLAN_V2, so + * go directly to finishing initialization + */ + iavf_change_state(adapter, __IAVF_INIT_CONFIG_ADAPTER); + return; + } else if (err) { + 
dev_err(&pdev->dev, "Unable to send offload vlan v2 request (%d)\n", + err); + goto err_alloc; + } + + /* underlying PF supports VIRTCHNL_VF_OFFLOAD_VLAN_V2, so update the + * state accordingly + */ + iavf_change_state(adapter, __IAVF_INIT_GET_OFFLOAD_VLAN_V2_CAPS); + return; + +err_alloc: + kfree(adapter->vf_res); + adapter->vf_res = NULL; +err: + iavf_change_state(adapter, __IAVF_INIT_FAILED); +} + +/** + * iavf_init_get_offload_vlan_v2_caps - part of driver startup + * @adapter: board private structure + * + * Function processes __IAVF_INIT_GET_OFFLOAD_VLAN_V2_CAPS driver state if the + * VF negotiates VIRTCHNL_VF_OFFLOAD_VLAN_V2. If VIRTCHNL_VF_OFFLOAD_VLAN_V2 is + * not negotiated, then this state will never be entered. + **/ +static void iavf_init_get_offload_vlan_v2_caps(struct iavf_adapter *adapter) +{ + int ret; + + WARN_ON(adapter->state != __IAVF_INIT_GET_OFFLOAD_VLAN_V2_CAPS); + + memset(&adapter->vlan_v2_caps, 0, sizeof(adapter->vlan_v2_caps)); + + ret = iavf_get_vf_vlan_v2_caps(adapter); + if (ret) { + if (ret == IAVF_ERR_ADMIN_QUEUE_NO_WORK) + iavf_send_vf_offload_vlan_v2_msg(adapter); + goto err; + } + + iavf_change_state(adapter, __IAVF_INIT_CONFIG_ADAPTER); + return; +err: + iavf_change_state(adapter, __IAVF_INIT_FAILED); +} + +/** + * iavf_init_config_adapter - last part of driver startup + * @adapter: board private structure + * + * After all the supported capabilities are negotiated, then the + * __IAVF_INIT_CONFIG_ADAPTER state will finish driver initialization. + */ +static void iavf_init_config_adapter(struct iavf_adapter *adapter) +{ + struct net_device *netdev = adapter->netdev; + struct pci_dev *pdev = adapter->pdev; + int err; + + WARN_ON(adapter->state != __IAVF_INIT_CONFIG_ADAPTER); + + if (iavf_process_config(adapter)) + goto err; + adapter->current_op = VIRTCHNL_OP_UNKNOWN; adapter->flags |= IAVF_FLAG_RX_CSUM_ENABLED; @@ -1962,9 +2090,6 @@ static void iavf_init_get_resources(struct iavf_adapter *adapter) iavf_free_misc_irq(adapter); err_sw_init: iavf_reset_interrupt_capability(adapter); -err_alloc: - kfree(adapter->vf_res); - adapter->vf_res = NULL; err: iavf_change_state(adapter, __IAVF_INIT_FAILED); } @@ -2013,6 +2138,18 @@ static void iavf_watchdog_task(struct work_struct *work) queue_delayed_work(iavf_wq, &adapter->watchdog_task, msecs_to_jiffies(1)); return; + case __IAVF_INIT_GET_OFFLOAD_VLAN_V2_CAPS: + iavf_init_get_offload_vlan_v2_caps(adapter); + mutex_unlock(&adapter->crit_lock); + queue_delayed_work(iavf_wq, &adapter->watchdog_task, + msecs_to_jiffies(1)); + return; + case __IAVF_INIT_CONFIG_ADAPTER: + iavf_init_config_adapter(adapter); + mutex_unlock(&adapter->crit_lock); + queue_delayed_work(iavf_wq, &adapter->watchdog_task, + msecs_to_jiffies(1)); + return; case __IAVF_INIT_FAILED: if (++adapter->aq_wait_count > IAVF_AQ_MAX_ERR) { dev_err(&adapter->pdev->dev, @@ -2066,10 +2203,13 @@ static void iavf_watchdog_task(struct work_struct *work) iavf_send_api_ver(adapter); } } else { + int ret = iavf_process_aq_command(adapter); + /* An error will be returned if no commands were * processed; use this opportunity to update stats + * if the error isn't -ENOTSUPP */ - if (iavf_process_aq_command(adapter) && + if (ret && ret != -EOPNOTSUPP && adapter->state == __IAVF_RUNNING) iavf_request_stats(adapter); } @@ -2309,6 +2449,13 @@ static void iavf_reset_task(struct work_struct *work) } adapter->aq_required |= IAVF_FLAG_AQ_GET_CONFIG; + /* always set since VIRTCHNL_OP_GET_VF_RESOURCES has not been + * sent/received yet, so VLAN_V2_ALLOWED() cannot is not 
reliable here, + * however the VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS won't be sent until + * VIRTCHNL_OP_GET_VF_RESOURCES and VIRTCHNL_VF_OFFLOAD_VLAN_V2 have + * been successfully sent and negotiated + */ + adapter->aq_required |= IAVF_FLAG_AQ_GET_OFFLOAD_VLAN_V2_CAPS; adapter->aq_required |= IAVF_FLAG_AQ_MAP_VECTORS; spin_lock_bh(&adapter->mac_vlan_list_lock); @@ -3608,39 +3755,10 @@ static int iavf_check_reset_complete(struct iavf_hw *hw) int iavf_process_config(struct iavf_adapter *adapter) { struct virtchnl_vf_resource *vfres = adapter->vf_res; - int i, num_req_queues = adapter->num_req_queues; struct net_device *netdev = adapter->netdev; - struct iavf_vsi *vsi = &adapter->vsi; netdev_features_t hw_enc_features; netdev_features_t hw_features; - /* got VF config message back from PF, now we can parse it */ - for (i = 0; i < vfres->num_vsis; i++) { - if (vfres->vsi_res[i].vsi_type == VIRTCHNL_VSI_SRIOV) - adapter->vsi_res = &vfres->vsi_res[i]; - } - if (!adapter->vsi_res) { - dev_err(&adapter->pdev->dev, "No LAN VSI found\n"); - return -ENODEV; - } - - if (num_req_queues && - num_req_queues > adapter->vsi_res->num_queue_pairs) { - /* Problem. The PF gave us fewer queues than what we had - * negotiated in our request. Need a reset to see if we can't - * get back to a working state. - */ - dev_err(&adapter->pdev->dev, - "Requested %d queues, but PF only gave us %d.\n", - num_req_queues, - adapter->vsi_res->num_queue_pairs); - adapter->flags |= IAVF_FLAG_REINIT_ITR_NEEDED; - adapter->num_req_queues = adapter->vsi_res->num_queue_pairs; - iavf_schedule_reset(adapter); - return -ENODEV; - } - adapter->num_req_queues = 0; - hw_enc_features = NETIF_F_SG | NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM | @@ -3721,21 +3839,6 @@ int iavf_process_config(struct iavf_adapter *adapter) netdev->features &= ~NETIF_F_GSO; } - adapter->vsi.id = adapter->vsi_res->vsi_id; - - adapter->vsi.back = adapter; - adapter->vsi.base_vector = 1; - adapter->vsi.work_limit = IAVF_DEFAULT_IRQ_WORK; - vsi->netdev = adapter->netdev; - vsi->qs_handle = adapter->vsi_res->qset_handle; - if (vfres->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF) { - adapter->rss_key_size = vfres->rss_key_size; - adapter->rss_lut_size = vfres->rss_lut_size; - } else { - adapter->rss_key_size = IAVF_HKEY_ARRAY_SIZE; - adapter->rss_lut_size = IAVF_HLUT_ARRAY_SIZE; - } - return 0; } diff --git a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c index 52bfe2a853f0..2ad426f13462 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c +++ b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c @@ -137,6 +137,7 @@ int iavf_send_vf_config_msg(struct iavf_adapter *adapter) VIRTCHNL_VF_OFFLOAD_WB_ON_ITR | VIRTCHNL_VF_OFFLOAD_RSS_PCTYPE_V2 | VIRTCHNL_VF_OFFLOAD_ENCAP | + VIRTCHNL_VF_OFFLOAD_VLAN_V2 | VIRTCHNL_VF_OFFLOAD_ENCAP_CSUM | VIRTCHNL_VF_OFFLOAD_REQ_QUEUES | VIRTCHNL_VF_OFFLOAD_ADQ | @@ -155,6 +156,19 @@ int iavf_send_vf_config_msg(struct iavf_adapter *adapter) NULL, 0); } +int iavf_send_vf_offload_vlan_v2_msg(struct iavf_adapter *adapter) +{ + adapter->aq_required &= ~IAVF_FLAG_AQ_GET_OFFLOAD_VLAN_V2_CAPS; + + if (!VLAN_V2_ALLOWED(adapter)) + return -EOPNOTSUPP; + + adapter->current_op = VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS; + + return iavf_send_pf_msg(adapter, VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS, + NULL, 0); +} + /** * iavf_validate_num_queues * @adapter: adapter structure @@ -235,6 +249,45 @@ int iavf_get_vf_config(struct iavf_adapter *adapter) return err; } +int iavf_get_vf_vlan_v2_caps(struct iavf_adapter 
*adapter) +{ + struct iavf_hw *hw = &adapter->hw; + struct iavf_arq_event_info event; + enum virtchnl_ops op; + enum iavf_status err; + u16 len; + + len = sizeof(struct virtchnl_vlan_caps); + event.buf_len = len; + event.msg_buf = kzalloc(event.buf_len, GFP_KERNEL); + if (!event.msg_buf) { + err = -ENOMEM; + goto out; + } + + while (1) { + /* When the AQ is empty, iavf_clean_arq_element will return + * nonzero and this loop will terminate. + */ + err = iavf_clean_arq_element(hw, &event, NULL); + if (err) + goto out_alloc; + op = (enum virtchnl_ops)le32_to_cpu(event.desc.cookie_high); + if (op == VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS) + break; + } + + err = (enum iavf_status)le32_to_cpu(event.desc.cookie_low); + if (err) + goto out_alloc; + + memcpy(&adapter->vlan_v2_caps, event.msg_buf, min(event.msg_len, len)); +out_alloc: + kfree(event.msg_buf); +out: + return err; +} + /** * iavf_configure_queues * @adapter: adapter structure @@ -1757,6 +1810,26 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter, } spin_unlock_bh(&adapter->mac_vlan_list_lock); + + iavf_parse_vf_resource_msg(adapter); + + /* negotiated VIRTCHNL_VF_OFFLOAD_VLAN_V2, so wait for the + * response to VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS to finish + * configuration + */ + if (VLAN_V2_ALLOWED(adapter)) + break; + /* fallthrough and finish config if VIRTCHNL_VF_OFFLOAD_VLAN_V2 + * wasn't successfully negotiated with the PF + */ + } + fallthrough; + case VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS: { + if (v_opcode == VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS) + memcpy(&adapter->vlan_v2_caps, msg, + min_t(u16, msglen, + sizeof(adapter->vlan_v2_caps))); + iavf_process_config(adapter); /* unlock crit_lock before acquiring rtnl_lock as other @@ -1764,8 +1837,11 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter, * crit_lock */ mutex_unlock(&adapter->crit_lock); + /* VLAN capabilities can change during VFR, so make sure to + * update the netdev features with the new capabilities + */ rtnl_lock(); - netdev_update_features(adapter->netdev); + netdev_update_features(netdev); rtnl_unlock(); if (iavf_lock_timeout(&adapter->crit_lock, 10000)) dev_warn(&adapter->pdev->dev, "failed to acquire crit_lock in %s\n", -- 2.20.1 From lkp at intel.com Tue Nov 30 00:47:41 2021 From: lkp at intel.com (kernel test robot) Date: Tue, 30 Nov 2021 08:47:41 +0800 Subject: [Intel-wired-lan] [PATCH net-next 2/6] iavf: Add support for VIRTCHNL_VF_OFFLOAD_VLAN_V2 negotiation In-Reply-To: <20211129192300.14188-3-anthony.l.nguyen@intel.com> References: <20211129192300.14188-3-anthony.l.nguyen@intel.com> Message-ID: <202111300803.mYiFKuhQ-lkp@intel.com> Hi Tony, Thank you for the patch! Yet something to improve: [auto build test ERROR on tnguy-next-queue/dev-queue] [also build test ERROR on v5.16-rc3 next-20211129] [cannot apply to net-next/master] [If your patch is applied to the wrong git tree, kindly drop us a note. 
And when submitting patch, we suggest to use '--base' as documented in https://git-scm.com/docs/git-format-patch] url: https://github.com/0day-ci/linux/commits/Tony-Nguyen/iavf-Add-support-for-VIRTCHNL_VF_OFFLOAD_VLAN_V2/20211130-032607 base: https://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue.git dev-queue config: arc-allyesconfig (https://download.01.org/0day-ci/archive/20211130/202111300803.mYiFKuhQ-lkp at intel.com/config) compiler: arceb-elf-gcc (GCC) 11.2.0 reproduce (this is a W=1 build): wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross chmod +x ~/bin/make.cross # https://github.com/0day-ci/linux/commit/7764feeed253d22b477b98db13e41782ae11a902 git remote add linux-review https://github.com/0day-ci/linux git fetch --no-tags linux-review Tony-Nguyen/iavf-Add-support-for-VIRTCHNL_VF_OFFLOAD_VLAN_V2/20211130-032607 git checkout 7764feeed253d22b477b98db13e41782ae11a902 # save the config file to linux build tree mkdir build_dir COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0 make.cross O=build_dir ARCH=arc SHELL=/bin/bash If you fix the issue, kindly add following tag as appropriate Reported-by: kernel test robot All errors (new ones prefixed by >>): drivers/net/ethernet/intel/iavf/iavf_main.c: In function 'iavf_parse_vf_resource_msg': >> drivers/net/ethernet/intel/iavf/iavf_main.c:1859:35: error: 'IAVF_FLAG_REINIT_MSIX_NEEDED' undeclared (first use in this function); did you mean 'IAVF_FLAG_REINIT_ITR_NEEDED'? 1859 | adapter->flags |= IAVF_FLAG_REINIT_MSIX_NEEDED; | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~ | IAVF_FLAG_REINIT_ITR_NEEDED drivers/net/ethernet/intel/iavf/iavf_main.c:1859:35: note: each undeclared identifier is reported only once for each function it appears in In file included from include/linux/perf_event.h:25, from include/linux/trace_events.h:10, from include/trace/trace_events.h:21, from include/trace/define_trace.h:102, from drivers/net/ethernet/intel/iavf/iavf_trace.h:209, from drivers/net/ethernet/intel/iavf/iavf_main.c:12: At top level: arch/arc/include/asm/perf_event.h:126:27: warning: 'arc_pmu_cache_map' defined but not used [-Wunused-const-variable=] 126 | static const unsigned int arc_pmu_cache_map[C(MAX)][C(OP_MAX)][C(RESULT_MAX)] = { | ^~~~~~~~~~~~~~~~~ arch/arc/include/asm/perf_event.h:91:27: warning: 'arc_pmu_ev_hw_map' defined but not used [-Wunused-const-variable=] 91 | static const char * const arc_pmu_ev_hw_map[] = { | ^~~~~~~~~~~~~~~~~ vim +1859 drivers/net/ethernet/intel/iavf/iavf_main.c 1830 1831 /** 1832 * iavf_parse_vf_resource_msg - parse response from VIRTCHNL_OP_GET_VF_RESOURCES 1833 * @adapter: board private structure 1834 */ 1835 int iavf_parse_vf_resource_msg(struct iavf_adapter *adapter) 1836 { 1837 int i, num_req_queues = adapter->num_req_queues; 1838 struct iavf_vsi *vsi = &adapter->vsi; 1839 1840 for (i = 0; i < adapter->vf_res->num_vsis; i++) { 1841 if (adapter->vf_res->vsi_res[i].vsi_type == VIRTCHNL_VSI_SRIOV) 1842 adapter->vsi_res = &adapter->vf_res->vsi_res[i]; 1843 } 1844 if (!adapter->vsi_res) { 1845 dev_err(&adapter->pdev->dev, "No LAN VSI found\n"); 1846 return -ENODEV; 1847 } 1848 1849 if (num_req_queues && 1850 num_req_queues > adapter->vsi_res->num_queue_pairs) { 1851 /* Problem. The PF gave us fewer queues than what we had 1852 * negotiated in our request. Need a reset to see if we can't 1853 * get back to a working state. 
1854 */ 1855 dev_err(&adapter->pdev->dev, 1856 "Requested %d queues, but PF only gave us %d.\n", 1857 num_req_queues, 1858 adapter->vsi_res->num_queue_pairs); > 1859 adapter->flags |= IAVF_FLAG_REINIT_MSIX_NEEDED; 1860 adapter->num_req_queues = adapter->vsi_res->num_queue_pairs; 1861 iavf_schedule_reset(adapter); 1862 1863 return -EAGAIN; 1864 } 1865 adapter->num_req_queues = 0; 1866 adapter->vsi.id = adapter->vsi_res->vsi_id; 1867 1868 adapter->vsi.back = adapter; 1869 adapter->vsi.base_vector = 1; 1870 adapter->vsi.work_limit = IAVF_DEFAULT_IRQ_WORK; 1871 vsi->netdev = adapter->netdev; 1872 vsi->qs_handle = adapter->vsi_res->qset_handle; 1873 if (adapter->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF) { 1874 adapter->rss_key_size = adapter->vf_res->rss_key_size; 1875 adapter->rss_lut_size = adapter->vf_res->rss_lut_size; 1876 } else { 1877 adapter->rss_key_size = IAVF_HKEY_ARRAY_SIZE; 1878 adapter->rss_lut_size = IAVF_HLUT_ARRAY_SIZE; 1879 } 1880 1881 return 0; 1882 } 1883 --- 0-DAY CI Kernel Test Service, Intel Corporation https://lists.01.org/hyperkitty/list/kbuild-all at lists.01.org From kuba at kernel.org Tue Nov 30 01:17:12 2021 From: kuba at kernel.org (Jakub Kicinski) Date: Mon, 29 Nov 2021 17:17:12 -0800 Subject: [Intel-wired-lan] [PATCH net] igb: fix deadlock caused by taking RTNL in RPM resume path In-Reply-To: <6bb28d2f-4884-7696-0582-c26c35534bae@gmail.com> References: <6bb28d2f-4884-7696-0582-c26c35534bae@gmail.com> Message-ID: <20211129171712.500e37cb@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com> On Mon, 29 Nov 2021 22:14:06 +0100 Heiner Kallweit wrote: > - rtnl_lock(); > + if (!rpm) > + rtnl_lock(); Is there an ASSERT_RTNL() hidden in any of the below? Can we add one? Unless we're 100% confident nobody will RPM resume without rtnl held.. 
> if (!err && netif_running(netdev)) > err = __igb_open(netdev, true); > > if (!err) > netif_device_attach(netdev); > - rtnl_unlock(); > + if (!rpm) > + rtnl_unlock(); From hkallweit1 at gmail.com Tue Nov 30 06:33:55 2021 From: hkallweit1 at gmail.com (Heiner Kallweit) Date: Tue, 30 Nov 2021 07:33:55 +0100 Subject: [Intel-wired-lan] [PATCH net] igb: fix deadlock caused by taking RTNL in RPM resume path In-Reply-To: <20211129150920.4a400828@hermes.local> References: <6bb28d2f-4884-7696-0582-c26c35534bae@gmail.com> <20211129150920.4a400828@hermes.local> Message-ID: <5675a5ef-5aa0-3f05-1c44-a91ce90d5f38@gmail.com> On 30.11.2021 00:09, Stephen Hemminger wrote: > On Mon, 29 Nov 2021 22:14:06 +0100 > Heiner Kallweit wrote: > >> diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c >> index dd208930f..8073cce73 100644 >> --- a/drivers/net/ethernet/intel/igb/igb_main.c >> +++ b/drivers/net/ethernet/intel/igb/igb_main.c >> @@ -9254,7 +9254,7 @@ static int __maybe_unused igb_suspend(struct device *dev) >> return __igb_shutdown(to_pci_dev(dev), NULL, 0); >> } >> >> -static int __maybe_unused igb_resume(struct device *dev) >> +static int __maybe_unused __igb_resume(struct device *dev, bool rpm) >> { >> struct pci_dev *pdev = to_pci_dev(dev); >> struct net_device *netdev = pci_get_drvdata(pdev); >> @@ -9297,17 +9297,24 @@ static int __maybe_unused igb_resume(struct device *dev) >> >> wr32(E1000_WUS, ~0); >> >> - rtnl_lock(); >> + if (!rpm) >> + rtnl_lock(); >> if (!err && netif_running(netdev)) >> err = __igb_open(netdev, true); >> >> if (!err) >> netif_device_attach(netdev); >> - rtnl_unlock(); >> + if (!rpm) >> + rtnl_unlock(); >> >> return err; >> } >> >> +static int __maybe_unused igb_resume(struct device *dev) >> +{ >> + return __igb_resume(dev, false); >> +} >> + >> static int __maybe_unused igb_runtime_idle(struct device *dev) >> { >> struct net_device *netdev = dev_get_drvdata(dev); >> @@ -9326,7 +9333,7 @@ static int __maybe_unused igb_runtime_suspend(struct device *dev) >> >> static int __maybe_unused igb_runtime_resume(struct device *dev) >> { >> - return igb_resume(dev); >> + return __igb_resume(dev, true); >> } > > Rather than conditional locking which is one of the seven deadly sins of SMP, > why not just have __igb_resume() be the locked version where lock is held by caller? > In this case we'd have to duplicate quite a bit of code from igb_resume(). An even simpler alternative would be to remove RTNL from igb_resume(); then we'd remove RTNL from both the RPM and the system resume path. That should be OK as well. I just didn't want to change two paths at once.
There the device is runtime-suspended (D3hot) w/o link. Once cable is plugged in the PHY triggers a PME, and PCI core runtime-resumes the device (MAC). In this case RTNL isn't held by the caller. Therefore I don't think it's safe to assume that all callers hold RTNL. >> if (!err && netif_running(netdev)) >> err = __igb_open(netdev, true); >> >> if (!err) >> netif_device_attach(netdev); >> - rtnl_unlock(); >> + if (!rpm) >> + rtnl_unlock(); From karen.sornek at intel.com Tue Nov 30 07:32:11 2021 From: karen.sornek at intel.com (Sornek, Karen) Date: Tue, 30 Nov 2021 08:32:11 +0100 Subject: [Intel-wired-lan] [PATCH net v1] i40e: Fix for failed to init adminq while VF reset Message-ID: <20211130073211.1114232-1-karen.sornek@intel.com> From: Karen Sornek Fix for failed to init adminq: -53 while VF is resetting via MAC address changing procedure. Added sync module to avoid reading deadbeef value in reinit adminq during software reset. Without this patch it is possible to trigger VF reset procedure during reinit adminq. This resulted in an incorrect reading of value from the AQP registers and generated the -53 error. Signed-off-by: Grzegorz Szczurek Signed-off-by: Karen Sornek --- .../net/ethernet/intel/i40e/i40e_register.h | 3 ++ .../ethernet/intel/i40e/i40e_virtchnl_pf.c | 44 ++++++++++++++++++- .../ethernet/intel/i40e/i40e_virtchnl_pf.h | 1 + 3 files changed, 46 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/intel/i40e/i40e_register.h b/drivers/net/ethernet/intel/i40e/i40e_register.h index 8d0588a27a05..1908eed4fa5e 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_register.h +++ b/drivers/net/ethernet/intel/i40e/i40e_register.h @@ -413,6 +413,9 @@ #define I40E_VFINT_DYN_CTLN(_INTVF) (0x00024800 + ((_INTVF) * 4)) /* _i=0...511 */ /* Reset: VFR */ #define I40E_VFINT_DYN_CTLN_CLEARPBA_SHIFT 1 #define I40E_VFINT_DYN_CTLN_CLEARPBA_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTLN_CLEARPBA_SHIFT) +#define I40E_VFINT_ICR0_ADMINQ_SHIFT 30 +#define I40E_VFINT_ICR0_ADMINQ_MASK I40E_MASK(0x1, I40E_VFINT_ICR0_ADMINQ_SHIFT) +#define I40E_VFINT_ICR0_ENA(_VF) (0x0002C000 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: CORER */ #define I40E_VPINT_AEQCTL(_VF) (0x0002B800 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: CORER */ #define I40E_VPINT_AEQCTL_MSIX_INDX_SHIFT 0 #define I40E_VPINT_AEQCTL_ITR_INDX_SHIFT 11 diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c index 3efc6926d308..d4c6914d2347 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c +++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c @@ -1379,6 +1379,32 @@ static i40e_status i40e_config_vf_promiscuous_mode(struct i40e_vf *vf, return aq_ret; } +/** + * i40e_sync_vfr_reset + * @hw: pointer to hw struct + * @vf_id: VF identifier + * + * Before trigger hardware reset, we need to know if no other process has + * reserved the hardware for any reset operations. This check is done by + * examining the status of the RSTAT1 register used to signal the reset. 
+ **/ +static int i40e_sync_vfr_reset(struct i40e_hw *hw, int vf_id) +{ + u32 reg; + int i; + + for (i = 0; i < I40E_VFR_WAIT_COUNT; i++) { + reg = rd32(hw, I40E_VFINT_ICR0_ENA(vf_id)) & + I40E_VFINT_ICR0_ADMINQ_MASK; + if (reg) + return 0; + + usleep_range(100, 200); + } + + return -EAGAIN; +} + /** * i40e_trigger_vf_reset * @vf: pointer to the VF structure @@ -1393,9 +1419,11 @@ static void i40e_trigger_vf_reset(struct i40e_vf *vf, bool flr) struct i40e_pf *pf = vf->pf; struct i40e_hw *hw = &pf->hw; u32 reg, reg_idx, bit_idx; + bool vf_active; + u32 radq; /* warn the VF */ - clear_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states); + vf_active = test_and_clear_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states); /* Disable VF's configuration API during reset. The flag is re-enabled * in i40e_alloc_vf_res(), when it's safe again to access VF's VSI. @@ -1409,7 +1437,19 @@ static void i40e_trigger_vf_reset(struct i40e_vf *vf, bool flr) * just need to clean up, so don't hit the VFRTRIG register. */ if (!flr) { - /* reset VF using VPGEN_VFRTRIG reg */ + /* Sync VFR reset before trigger next one */ + radq = rd32(hw, I40E_VFINT_ICR0_ENA(vf->vf_id)) & + I40E_VFINT_ICR0_ADMINQ_MASK; + if (vf_active && !radq) + /* waiting for finish reset by virtual driver */ + if (i40e_sync_vfr_reset(hw, vf->vf_id)) + dev_info(&pf->pdev->dev, + "Reset VF %d never finished\n", + vf->vf_id); + + /* Reset VF using VPGEN_VFRTRIG reg. It is also setting + * in progress state in rstat1 register. + */ reg = rd32(hw, I40E_VPGEN_VFRTRIG(vf->vf_id)); reg |= I40E_VPGEN_VFRTRIG_VFSWR_MASK; wr32(hw, I40E_VPGEN_VFRTRIG(vf->vf_id), reg); diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h index 6aa35c8c9091..8135bd6a1c0a 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h +++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h @@ -19,6 +19,7 @@ #define I40E_MAX_VF_PROMISC_FLAGS 3 #define I40E_VF_STATE_WAIT_COUNT 20 +#define I40E_VFR_WAIT_COUNT 100 #define I40E_VF_RESET_TIME_MIN 30000000 /* time in nsec */ /* Various queue ctrls */ -- 2.27.0 From lkp at intel.com Tue Nov 30 09:05:14 2021 From: lkp at intel.com (kernel test robot) Date: Tue, 30 Nov 2021 17:05:14 +0800 Subject: [Intel-wired-lan] [tnguy-next-queue:dev-queue] BUILD SUCCESS 008b13e86219f60cc0c4b490e4bb8ec098f375f9 Message-ID: <61a5e94a.rmrWxMtgNp3QKiMQ%lkp@intel.com> tree/branch: https://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue.git dev-queue branch HEAD: 008b13e86219f60cc0c4b490e4bb8ec098f375f9 i40e: Fix queues reservation for XDP elapsed time: 727m configs tested: 139 configs skipped: 3 The following configs have been built successfully. More configs may be tested in the coming days. 
gcc tested configs: arm defconfig arm64 allyesconfig arm64 defconfig arm allyesconfig arm allmodconfig i386 randconfig-c001-20211128 i386 randconfig-c001-20211130 powerpc tqm8555_defconfig arm mv78xx0_defconfig mips capcella_defconfig arm multi_v5_defconfig m68k sun3_defconfig powerpc tqm8540_defconfig um defconfig powerpc akebono_defconfig powerpc xes_mpc85xx_defconfig powerpc ebony_defconfig mips rm200_defconfig sh sh7770_generic_defconfig arm lpd270_defconfig arm realview_defconfig powerpc taishan_defconfig arm palmz72_defconfig powerpc wii_defconfig mips rbtx49xx_defconfig sh rsk7201_defconfig mips ip28_defconfig arm bcm2835_defconfig sh j2_defconfig powerpc tqm8548_defconfig sh sh7785lcr_32bit_defconfig arm pcm027_defconfig mips decstation_r4k_defconfig openrisc defconfig powerpc warp_defconfig powerpc mpc834x_itx_defconfig m68k atari_defconfig arm ixp4xx_defconfig sh se7712_defconfig mips gcw0_defconfig powerpc ep88xc_defconfig powerpc redwood_defconfig i386 defconfig arm simpad_defconfig arm randconfig-c002-20211129 arm randconfig-c002-20211128 ia64 defconfig ia64 allmodconfig ia64 allyesconfig m68k allmodconfig m68k defconfig m68k allyesconfig nios2 defconfig arc allyesconfig nds32 allnoconfig nds32 defconfig nios2 allyesconfig csky defconfig alpha defconfig alpha allyesconfig arc defconfig sh allmodconfig h8300 allyesconfig xtensa allyesconfig parisc defconfig s390 allyesconfig s390 allmodconfig parisc allyesconfig s390 defconfig i386 allyesconfig sparc allyesconfig sparc defconfig i386 debian-10.3-kselftests i386 debian-10.3 mips allyesconfig mips allmodconfig powerpc allyesconfig powerpc allmodconfig powerpc allnoconfig x86_64 randconfig-a001-20211130 x86_64 randconfig-a006-20211130 x86_64 randconfig-a003-20211130 x86_64 randconfig-a004-20211130 x86_64 randconfig-a005-20211130 x86_64 randconfig-a002-20211130 i386 randconfig-a001-20211129 i386 randconfig-a002-20211129 i386 randconfig-a006-20211129 i386 randconfig-a005-20211129 i386 randconfig-a004-20211129 i386 randconfig-a003-20211129 x86_64 randconfig-a011-20211128 x86_64 randconfig-a014-20211128 x86_64 randconfig-a012-20211128 x86_64 randconfig-a013-20211128 x86_64 randconfig-a015-20211128 x86_64 randconfig-a016-20211128 i386 randconfig-a015-20211128 i386 randconfig-a016-20211128 i386 randconfig-a013-20211128 i386 randconfig-a012-20211128 i386 randconfig-a014-20211128 i386 randconfig-a011-20211128 arc randconfig-r043-20211128 s390 randconfig-r044-20211128 riscv randconfig-r042-20211128 riscv nommu_k210_defconfig riscv allyesconfig riscv nommu_virt_defconfig riscv allnoconfig riscv defconfig riscv rv32_defconfig riscv allmodconfig um x86_64_defconfig um i386_defconfig x86_64 defconfig x86_64 kexec x86_64 rhel-8.3 x86_64 allyesconfig x86_64 rhel-8.3-func x86_64 rhel-8.3-kselftests clang tested configs: x86_64 randconfig-a001-20211128 x86_64 randconfig-a006-20211128 x86_64 randconfig-a003-20211128 x86_64 randconfig-a005-20211128 x86_64 randconfig-a004-20211128 x86_64 randconfig-a002-20211128 i386 randconfig-a001-20211128 i386 randconfig-a002-20211128 i386 randconfig-a006-20211128 i386 randconfig-a005-20211128 i386 randconfig-a004-20211128 i386 randconfig-a003-20211128 hexagon randconfig-r045-20211129 hexagon randconfig-r041-20211129 s390 randconfig-r044-20211129 riscv randconfig-r042-20211129 hexagon randconfig-r045-20211128 hexagon randconfig-r041-20211128 --- 0-DAY CI Kernel Test Service, Intel Corporation https://lists.01.org/hyperkitty/list/kbuild-all at lists.01.org From lkp at intel.com Tue Nov 30 09:05:19 2021 From: lkp at 
intel.com (kernel test robot) Date: Tue, 30 Nov 2021 17:05:19 +0800 Subject: [Intel-wired-lan] [tnguy-net-queue:dev-queue] BUILD SUCCESS 3f24bdd5e0d7bf9772ccb9dbed39ec790ae16486 Message-ID: <61a5e94f.OOad/D2LgfITEEbj%lkp@intel.com> tree/branch: https://git.kernel.org/pub/scm/linux/kernel/git/tnguy/net-queue.git dev-queue branch HEAD: 3f24bdd5e0d7bf9772ccb9dbed39ec790ae16486 i40e: Fix queues reservation for XDP elapsed time: 728m configs tested: 190 configs skipped: 3 The following configs have been built successfully. More configs may be tested in the coming days. gcc tested configs: arm defconfig arm64 allyesconfig arm64 defconfig arm allyesconfig arm allmodconfig i386 randconfig-c001-20211128 i386 randconfig-c001-20211130 powerpc pasemi_defconfig powerpc mpc832x_mds_defconfig xtensa smp_lx200_defconfig arm iop32x_defconfig mips vocore2_defconfig mips capcella_defconfig powerpc tqm8555_defconfig i386 defconfig arm mv78xx0_defconfig arm davinci_all_defconfig mips malta_qemu_32r6_defconfig powerpc cm5200_defconfig mips qi_lb60_defconfig mips cobalt_defconfig powerpc akebono_defconfig powerpc xes_mpc85xx_defconfig powerpc ebony_defconfig mips rm200_defconfig arm lpd270_defconfig arm xcep_defconfig mips loongson3_defconfig arm integrator_defconfig arm multi_v7_defconfig powerpc mpc836x_rdk_defconfig arm exynos_defconfig powerpc canyonlands_defconfig sh sh7710voipgw_defconfig mips tb0219_defconfig arm qcom_defconfig mips xway_defconfig parisc generic-64bit_defconfig powerpc makalu_defconfig arm omap2plus_defconfig powerpc mpc834x_itx_defconfig m68k m5208evb_defconfig powerpc linkstation_defconfig mips ci20_defconfig arm realview_defconfig powerpc taishan_defconfig arm palmz72_defconfig powerpc wii_defconfig mips rbtx49xx_defconfig sh rsk7201_defconfig arm mvebu_v7_defconfig arm versatile_defconfig microblaze mmu_defconfig arm orion5x_defconfig mips ip22_defconfig s390 zfcpdump_defconfig powerpc socrates_defconfig riscv nommu_k210_sdcard_defconfig sparc sparc32_defconfig mips ip28_defconfig mips maltasmvp_eva_defconfig m68k m5407c3_defconfig arc nsim_700_defconfig arm lubbock_defconfig sh magicpanelr2_defconfig arm bcm2835_defconfig sh j2_defconfig powerpc tqm8548_defconfig mips gcw0_defconfig arm imx_v6_v7_defconfig mips omega2p_defconfig mips rs90_defconfig arm h5000_defconfig m68k allyesconfig nds32 defconfig openrisc defconfig arm pcm027_defconfig powerpc warp_defconfig mips decstation_r4k_defconfig arm imx_v4_v5_defconfig arm simpad_defconfig m68k m5249evb_defconfig mips bmips_stb_defconfig sh sh7770_generic_defconfig arm ep93xx_defconfig arm neponset_defconfig powerpc ep88xc_defconfig powerpc redwood_defconfig sh sh7785lcr_32bit_defconfig arm shannon_defconfig sh rsk7203_defconfig arm u8500_defconfig arm cerfcube_defconfig m68k m5272c3_defconfig arm randconfig-c002-20211128 arm randconfig-c002-20211129 ia64 defconfig ia64 allmodconfig ia64 allyesconfig m68k allmodconfig m68k defconfig nios2 defconfig arc allyesconfig nds32 allnoconfig nios2 allyesconfig csky defconfig alpha defconfig alpha allyesconfig xtensa allyesconfig h8300 allyesconfig arc defconfig sh allmodconfig parisc defconfig s390 allyesconfig s390 allmodconfig parisc allyesconfig s390 defconfig i386 allyesconfig sparc allyesconfig sparc defconfig i386 debian-10.3-kselftests i386 debian-10.3 mips allyesconfig mips allmodconfig powerpc allyesconfig powerpc allmodconfig powerpc allnoconfig i386 randconfig-a001-20211129 i386 randconfig-a002-20211129 i386 randconfig-a006-20211129 i386 randconfig-a005-20211129 i386 
randconfig-a004-20211129 i386 randconfig-a003-20211129 x86_64 randconfig-a011-20211128 x86_64 randconfig-a014-20211128 x86_64 randconfig-a012-20211128 x86_64 randconfig-a016-20211128 x86_64 randconfig-a013-20211128 x86_64 randconfig-a015-20211128 i386 randconfig-a015-20211128 i386 randconfig-a016-20211128 i386 randconfig-a013-20211128 i386 randconfig-a012-20211128 i386 randconfig-a014-20211128 i386 randconfig-a011-20211128 arc randconfig-r043-20211128 s390 randconfig-r044-20211128 riscv randconfig-r042-20211128 riscv nommu_k210_defconfig riscv allyesconfig riscv nommu_virt_defconfig riscv allnoconfig riscv defconfig riscv rv32_defconfig riscv allmodconfig x86_64 rhel-8.3-kselftests um x86_64_defconfig um i386_defconfig x86_64 allyesconfig x86_64 defconfig x86_64 rhel-8.3 x86_64 rhel-8.3-func x86_64 kexec clang tested configs: s390 randconfig-c005-20211128 i386 randconfig-c001-20211128 riscv randconfig-c006-20211128 arm randconfig-c002-20211128 powerpc randconfig-c003-20211128 x86_64 randconfig-c007-20211128 mips randconfig-c004-20211128 x86_64 randconfig-a001-20211128 x86_64 randconfig-a006-20211128 x86_64 randconfig-a003-20211128 x86_64 randconfig-a005-20211128 x86_64 randconfig-a004-20211128 x86_64 randconfig-a002-20211128 i386 randconfig-a001-20211128 i386 randconfig-a002-20211128 i386 randconfig-a006-20211128 i386 randconfig-a005-20211128 i386 randconfig-a004-20211128 i386 randconfig-a003-20211128 i386 randconfig-a015-20211129 i386 randconfig-a016-20211129 i386 randconfig-a013-20211129 i386 randconfig-a012-20211129 i386 randconfig-a014-20211129 i386 randconfig-a011-20211129 hexagon randconfig-r045-20211129 hexagon randconfig-r041-20211129 s390 randconfig-r044-20211129 riscv randconfig-r042-20211129 --- 0-DAY CI Kernel Test Service, Intel Corporation https://lists.01.org/hyperkitty/list/kbuild-all at lists.01.org From bindushree.p at intel.com Tue Nov 30 09:37:53 2021 From: bindushree.p at intel.com (P, Bindushree) Date: Tue, 30 Nov 2021 09:37:53 +0000 Subject: [Intel-wired-lan] [PATCH net v3] i40e: Fix pre-set max number of queues for VF In-Reply-To: <20210716093356.7800-1-mateusz.palczewski@intel.com> References: <20210716093356.7800-1-mateusz.palczewski@intel.com> Message-ID: > -----Original Message----- > From: Intel-wired-lan On Behalf Of > Palczewski, Mateusz > Sent: Friday, July 16, 2021 3:04 PM > To: intel-wired-lan at lists.osuosl.org > Cc: Palczewski, Mateusz > Subject: [Intel-wired-lan] [PATCH net v3] i40e: Fix pre-set max number of > queues for VF > > After setting pre-set combined to 16 queues and reserving 16 queues by tc > qdisc, pre-set maximum combined queues returned to default value after VF > reset being 4 and this generated errors during removing tc. > Fixed by removing clear num_req_queues before reset VF. 
> > Fixes: e284fc280473 (i40e: Add and delete cloud filter) > Change-Id: Ib2db315e4b04eeb15e12301edf833014a929e914 > Signed-off-by: Grzegorz Szczurek > Signed-off-by: Mateusz Palczewski > --- > v2: Refactored commit message > v3: Fixed wrong e-mail address in commit message > --- > drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c | 5 ----- > 1 file changed, 5 deletions(-) > Tested-by: P, Bindushree From jbrouer at redhat.com Tue Nov 30 11:25:47 2021 From: jbrouer at redhat.com (Jesper Dangaard Brouer) Date: Tue, 30 Nov 2021 12:25:47 +0100 Subject: [Intel-wired-lan] [PATCH net-next 2/2] igc: enable XDP metadata in driver In-Reply-To: <9948428f33d013105108872d51f7e6ebec21203c.camel@intel.com> References: <163700856423.565980.10162564921347693758.stgit@firesoul> <163700859087.565980.3578855072170209153.stgit@firesoul> <20211126161649.151100-1-alexandr.lobakin@intel.com> <6de05aea-9cf4-c938-eff2-9e3b138512a4@redhat.com> <20211129145303.10507-1-alexandr.lobakin@intel.com> <20211129181320.579477-1-alexandr.lobakin@intel.com> <9948428f33d013105108872d51f7e6ebec21203c.camel@intel.com> Message-ID: On 29/11/2021 20.03, Nguyen, Anthony L wrote: > On Mon, 2021-11-29 at 19:13 +0100, Alexander Lobakin wrote: >> From: Alexander Lobakin >> Date: Mon, 29 Nov 2021 15:53:03 +0100 >> >>> From: Jesper Dangaard Brouer >>> Date: Mon, 29 Nov 2021 15:39:04 +0100 >>> >>>> On 26/11/2021 17.16, Alexander Lobakin wrote: >>>>> From: Jesper Dangaard Brouer >>>>> Date: Mon, 15 Nov 2021 21:36:30 +0100 >>>>> >>>>>> Enabling the XDP bpf_prog access to data_meta area is a very >>>>>> small >>>>>> change. Hint passing 'true' to xdp_prepare_buff(). >> >> [ snip ] [ snip ] >> >>> >>>> Tony is it worth resending a V2 of this patch? >>> >>> Tony, you can take it as it is if you want, I'll correct it later >>> in >>> mine. Up to you. >> >> My "fixup" looks like (in case of v2 needed or so): > > Thanks Al. If Jesper is ok with this, I'll incorporate it in before > sending the pull request to netdev. Otherwise, you can do it as follow > on in the other series you previously referenced. I'm fine with you incorporating this change. Thanks! :-) --Jesper From markpearson at lenovo.com Tue Nov 30 15:52:20 2021 From: markpearson at lenovo.com (Mark Pearson) Date: Tue, 30 Nov 2021 10:52:20 -0500 Subject: [Intel-wired-lan] [External] Re: [PATCH 3/3] Revert "e1000e: Add handshake with the CSME to support S0ix" In-Reply-To: <0ba36a30-95d3-a5f4-93c2-443cf2259756@intel.com> References: <20211122161927.874291-1-kai.heng.feng@canonical.com> <20211122161927.874291-3-kai.heng.feng@canonical.com> <0ba36a30-95d3-a5f4-93c2-443cf2259756@intel.com> Message-ID: <3fad0b95-fe97-8c4a-3ca9-3ed2a9fa2134@lenovo.com> Hi Sasha On 2021-11-28 08:23, Sasha Neftin wrote: > On 11/22/2021 18:19, Kai-Heng Feng wrote: >> This reverts commit 3e55d231716ea361b1520b801c6778c4c48de102. >> >> Bugzilla: >> https://bugzilla.kernel.org/show_bug.cgi?id=214821>>> >> Signed-off-by: Kai-Heng Feng >> --- >> > Hello Kai-Heng, > I believe this is the wrong approach. Reverting this patch will put > corporate systems in an unpredictable state. SW will perform the s0ix flow > independently of the CSME. (The CSME firmware will continue to run > independently.) The LAN controller could be in an unknown state. > Please allow us to continue debugging the problem (it could be > incredibly complex). > > You can always skip the s0ix flow on problematic corporate systems by > using the private flag: ethtool --set-priv-flags enp0s31f6 s0ix-enabled off > > Also, there is no impact on consumer systems. 
> Sasha I know we've discussed this offline, and your team are working on the correct fix but I wanted to check based on your comments above that "it was complex". I thought, and maybe misunderstood, that it was going to be relatively simple to disable the change for older CPUs - which is the biggest problem caused by the patch. Right now it's breaking networking for folk who happen to have a vPro Tigerlake (and I believe even potentially Cometlake or older) system. I think the impact of that could potentially be quite severe. I understand not wanting to revert the change for the ADL platforms I believe this is targeting and to fix this instead - but your comment made me nervous that Linux users on older Intel based platforms are in for a long and painful wait - it is likely a lot of users.... Can you or Dima confirm the fix for older platforms will be available soon? I appreciate the ADL platform might take a bit more work and time to get right. Thanks Mark From kuba at kernel.org Tue Nov 30 17:12:06 2021 From: kuba at kernel.org (Jakub Kicinski) Date: Tue, 30 Nov 2021 09:12:06 -0800 Subject: [Intel-wired-lan] [PATCH net] igb: fix deadlock caused by taking RTNL in RPM resume path In-Reply-To: <6edc23a1-5907-3a41-7b46-8d53c5664a56@gmail.com> References: <6bb28d2f-4884-7696-0582-c26c35534bae@gmail.com> <20211129171712.500e37cb@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com> <6edc23a1-5907-3a41-7b46-8d53c5664a56@gmail.com> Message-ID: <20211130091206.488a541f@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com> On Tue, 30 Nov 2021 07:46:22 +0100 Heiner Kallweit wrote: > On 30.11.2021 02:17, Jakub Kicinski wrote: > > On Mon, 29 Nov 2021 22:14:06 +0100 Heiner Kallweit wrote: > >> - rtnl_lock(); > >> + if (!rpm) > >> + rtnl_lock(); > > > > Is there an ASSERT_RTNL() hidden in any of the below? Can we add one? > > Unless we're 100% confident nobody will RPM resume without rtnl held.. > > > > Not sure whether igb uses RPM the same way as r8169. There the device > is runtime-suspended (D3hot) w/o link. Once cable is plugged in the PHY > triggers a PME, and PCI core runtime-resumes the device (MAC). > In this case RTNL isn't held by the caller. Therefore I don't think > it's safe to assume that all callers hold RTNL. No, no - I meant to leave the locking in but add ASSERT_RTNL() to catch if rpm == true && rtnl_held() == false. From alexandr.lobakin at intel.com Tue Nov 30 18:36:48 2021 From: alexandr.lobakin at intel.com (Alexander Lobakin) Date: Tue, 30 Nov 2021 19:36:48 +0100 Subject: [Intel-wired-lan] [PATCH net-next 1/2] i40e: remove dead stores on XSK hotpath Message-ID: <20211130183649.1166842-1-alexandr.lobakin@intel.com> The 'if (ntu == rx_ring->count)' block in i40e_alloc_rx_buffers_zc() was previously residing in the loop, but after introducing the batched interface it is used only to wrap-around the NTU descriptor, thus no more need to assign 'xdp'. 'cleaned_count' in i40e_clean_rx_irq_zc() was previously being incremented in the loop, but after commit f12738b6ec06 ("i40e: remove unnecessary cleaned_count updates") it gets assigned only once after it, so the initialization can be dropped. 
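For illustration, after the batched allocation rework the fill path has roughly the following shape (a simplified sketch, not the exact driver code; the xsk_buff_alloc_batch() call and the descriptor fill loop are elided):

	/* shape of i40e_alloc_rx_buffers_zc() after the batched rework */
	rx_desc = I40E_RX_DESC(rx_ring, ntu);
	xdp = i40e_rx_bi(rx_ring, ntu);
	/* ... xsk_buff_alloc_batch() fills 'nb_buffs' descriptors here ... */
	ntu += nb_buffs;
	if (ntu == rx_ring->count) {
		rx_desc = I40E_RX_DESC(rx_ring, 0);
		/* 'xdp = i40e_rx_bi(rx_ring, 0);' sat here; nothing reads
		 * 'xdp' past this point anymore, so the store is dead
		 */
		ntu = 0;
	}
	/* the remaining status clear and tail bump use only 'rx_desc' and 'ntu' */
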
Fixes: 6aab0bb0c5cd ("i40e: Use the xsk batched rx allocation interface") Fixes: f12738b6ec06 ("i40e: remove unnecessary cleaned_count updates") Signed-off-by: Alexander Lobakin Acked-by: Maciej Fijalkowski --- drivers/net/ethernet/intel/i40e/i40e_xsk.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/drivers/net/ethernet/intel/i40e/i40e_xsk.c b/drivers/net/ethernet/intel/i40e/i40e_xsk.c index ea06e957393e..f08d19b8c554 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_xsk.c +++ b/drivers/net/ethernet/intel/i40e/i40e_xsk.c @@ -218,7 +218,6 @@ bool i40e_alloc_rx_buffers_zc(struct i40e_ring *rx_ring, u16 count) ntu += nb_buffs; if (ntu == rx_ring->count) { rx_desc = I40E_RX_DESC(rx_ring, 0); - xdp = i40e_rx_bi(rx_ring, 0); ntu = 0; } @@ -324,11 +323,11 @@ static void i40e_handle_xdp_result_zc(struct i40e_ring *rx_ring, int i40e_clean_rx_irq_zc(struct i40e_ring *rx_ring, int budget) { unsigned int total_rx_bytes = 0, total_rx_packets = 0; - u16 cleaned_count = I40E_DESC_UNUSED(rx_ring); u16 next_to_clean = rx_ring->next_to_clean; u16 count_mask = rx_ring->count - 1; unsigned int xdp_res, xdp_xmit = 0; bool failure = false; + u16 cleaned_count; while (likely(total_rx_packets < (unsigned int)budget)) { union i40e_rx_desc *rx_desc; -- 2.33.1 From alexandr.lobakin at intel.com Tue Nov 30 18:36:49 2021 From: alexandr.lobakin at intel.com (Alexander Lobakin) Date: Tue, 30 Nov 2021 19:36:49 +0100 Subject: [Intel-wired-lan] [PATCH net-next 2/2] ice: remove dead store on XSK hotpath In-Reply-To: <20211130183649.1166842-1-alexandr.lobakin@intel.com> References: <20211130183649.1166842-1-alexandr.lobakin@intel.com> Message-ID: <20211130183649.1166842-2-alexandr.lobakin@intel.com> The 'if (ntu == rx_ring->count)' block in ice_alloc_rx_buffers_zc() was previously residing in the loop, but after introducing the batched interface it is used only to wrap-around the NTU descriptor, thus no more need to assign 'xdp'. Fixes: db804cfc21e9 ("ice: Use the xsk batched rx allocation interface") Signed-off-by: Alexander Lobakin Acked-by: Maciej Fijalkowski --- drivers/net/ethernet/intel/ice/ice_xsk.c | 1 - 1 file changed, 1 deletion(-) diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c index ff55cb415b11..8573d2a3d873 100644 --- a/drivers/net/ethernet/intel/ice/ice_xsk.c +++ b/drivers/net/ethernet/intel/ice/ice_xsk.c @@ -391,7 +391,6 @@ bool ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count) ntu += nb_buffs; if (ntu == rx_ring->count) { rx_desc = ICE_RX_DESC(rx_ring, 0); - xdp = rx_ring->xdp_buf; ntu = 0; } -- 2.33.1 From anthony.l.nguyen at intel.com Tue Nov 30 21:21:43 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Tue, 30 Nov 2021 13:21:43 -0800 Subject: [Intel-wired-lan] [PATCH net-next 02/14] ice: Add helper function for adding VLAN 0 In-Reply-To: <20211130212155.27852-1-anthony.l.nguyen@intel.com> References: <20211130212155.27852-1-anthony.l.nguyen@intel.com> Message-ID: <20211130212155.27852-2-anthony.l.nguyen@intel.com> From: Brett Creeley There are multiple places where VLAN 0 is being added. Create a function to be called in order to minimize changes as the implementation is expanded to support double VLAN and avoid duplicated code. 
Signed-off-by: Brett Creeley Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/ice/ice_eswitch.c | 4 ++-- drivers/net/ethernet/intel/ice/ice_lib.c | 11 ++++++++++- drivers/net/ethernet/intel/ice/ice_lib.h | 2 +- drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c | 2 +- 4 files changed, 14 insertions(+), 5 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch.c b/drivers/net/ethernet/intel/ice/ice_eswitch.c index a737c54c4895..291748553800 100644 --- a/drivers/net/ethernet/intel/ice/ice_eswitch.c +++ b/drivers/net/ethernet/intel/ice/ice_eswitch.c @@ -127,7 +127,7 @@ static int ice_eswitch_setup_env(struct ice_pf *pf) __dev_mc_unsync(uplink_netdev, NULL); netif_addr_unlock_bh(uplink_netdev); - if (ice_vsi_add_vlan(uplink_vsi, 0, ICE_FWD_TO_VSI)) + if (ice_vsi_add_vlan_zero(uplink_vsi)) goto err_def_rx; if (!ice_is_dflt_vsi_in_use(uplink_vsi->vsw)) { @@ -231,7 +231,7 @@ static int ice_eswitch_setup_reprs(struct ice_pf *pf) goto err; } - if (ice_vsi_add_vlan(vsi, 0, ICE_FWD_TO_VSI)) { + if (ice_vsi_add_vlan_zero(vsi)) { ice_fltr_add_mac_and_broadcast(vsi, vf->hw_lan_addr.addr, ICE_FWD_TO_VSI); diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index 2db3cd6d8907..cc135792834e 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -2621,7 +2621,7 @@ ice_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi, * so this handles those cases (i.e. adding the PF to a bridge * without the 8021q module loaded). */ - ret = ice_vsi_add_vlan(vsi, 0, ICE_FWD_TO_VSI); + ret = ice_vsi_add_vlan_zero(vsi); if (ret) goto unroll_clear_rings; @@ -4069,6 +4069,15 @@ int ice_set_link(struct ice_vsi *vsi, bool ena) return 0; } +/** + * ice_vsi_add_vlan_zero - add VLAN 0 filter(s) for this VSI + * @vsi: VSI used to add VLAN filters + */ +int ice_vsi_add_vlan_zero(struct ice_vsi *vsi) +{ + return ice_vsi_add_vlan(vsi, 0, ICE_FWD_TO_VSI); +} + /** * ice_is_feature_supported * @pf: pointer to the struct ice_pf instance diff --git a/drivers/net/ethernet/intel/ice/ice_lib.h b/drivers/net/ethernet/intel/ice/ice_lib.h index 9fdd95dd5a14..28e0f1147c82 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.h +++ b/drivers/net/ethernet/intel/ice/ice_lib.h @@ -133,7 +133,7 @@ void ice_vsi_ctx_clear_antispoof(struct ice_vsi_ctx *ctx); void ice_vsi_ctx_set_allow_override(struct ice_vsi_ctx *ctx); void ice_vsi_ctx_clear_allow_override(struct ice_vsi_ctx *ctx); - +int ice_vsi_add_vlan_zero(struct ice_vsi *vsi); bool ice_is_feature_supported(struct ice_pf *pf, enum ice_feature f); void ice_clear_feature_support(struct ice_pf *pf, enum ice_feature f); void ice_init_feature_support(struct ice_pf *pf); diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c index f947d936def3..ab03010c822d 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c @@ -1855,7 +1855,7 @@ static int ice_init_vf_vsi_res(struct ice_vf *vf) if (!vsi) return -ENOMEM; - err = ice_vsi_add_vlan(vsi, 0, ICE_FWD_TO_VSI); + err = ice_vsi_add_vlan_zero(vsi); if (err) { dev_warn(dev, "Failed to add VLAN 0 filter for VF %d\n", vf->vf_id); -- 2.20.1 From anthony.l.nguyen at intel.com Tue Nov 30 21:21:42 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Tue, 30 Nov 2021 13:21:42 -0800 Subject: [Intel-wired-lan] [PATCH net-next 01/14] ice: Refactor spoofcheck configuration functions Message-ID: 
<20211130212155.27852-1-anthony.l.nguyen@intel.com> From: Brett Creeley Add functions to configure Tx VLAN antispoof based on iproute configuration and/or VLAN mode and VF driver support. This is needed later so the driver can control when it can be configured. Also, add functions that can be used to enable and disable MAC and VLAN spoofcheck. Move spoofchk configuration during VSI setup into the SR-IOV initialization path and into the post VSI rebuild flow for VF VSIs. Signed-off-by: Brett Creeley Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/ice/ice_lib.c | 19 --- .../net/ethernet/intel/ice/ice_virtchnl_pf.c | 159 ++++++++++++++---- 2 files changed, 128 insertions(+), 50 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index 5ef959769104..2db3cd6d8907 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -1125,25 +1125,6 @@ static int ice_vsi_init(struct ice_vsi *vsi, bool init_vsi) cpu_to_le16(ICE_AQ_VSI_PROP_RXQ_MAP_VALID); } - /* enable/disable MAC and VLAN anti-spoof when spoofchk is on/off - * respectively - */ - if (vsi->type == ICE_VSI_VF) { - ctxt->info.valid_sections |= - cpu_to_le16(ICE_AQ_VSI_PROP_SECURITY_VALID); - if (pf->vf[vsi->vf_id].spoofchk) { - ctxt->info.sec_flags |= - ICE_AQ_VSI_SEC_FLAG_ENA_MAC_ANTI_SPOOF | - (ICE_AQ_VSI_SEC_TX_VLAN_PRUNE_ENA << - ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S); - } else { - ctxt->info.sec_flags &= - ~(ICE_AQ_VSI_SEC_FLAG_ENA_MAC_ANTI_SPOOF | - (ICE_AQ_VSI_SEC_TX_VLAN_PRUNE_ENA << - ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S)); - } - } - /* Allow control frames out of main VSI */ if (vsi->type == ICE_VSI_PF) { ctxt->info.sec_flags |= ICE_AQ_VSI_SEC_FLAG_ALLOW_DEST_OVRD; diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c index 8f2045b7c29f..f947d936def3 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c @@ -837,6 +837,114 @@ static int ice_vf_rebuild_host_vlan_cfg(struct ice_vf *vf) return 0; } +static int ice_cfg_vlan_antispoof(struct ice_vsi *vsi, bool enable) +{ + struct ice_vsi_ctx *ctx; + int err; + + ctx = kzalloc(sizeof(*ctx), GFP_KERNEL); + if (!ctx) + return -ENOMEM; + + ctx->info.sec_flags = vsi->info.sec_flags; + ctx->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_SECURITY_VALID); + + if (enable) + ctx->info.sec_flags |= ICE_AQ_VSI_SEC_TX_VLAN_PRUNE_ENA << + ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S; + else + ctx->info.sec_flags &= ~(ICE_AQ_VSI_SEC_TX_VLAN_PRUNE_ENA << + ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S); + + err = ice_update_vsi(&vsi->back->hw, vsi->idx, ctx, NULL); + if (err) + dev_err(ice_pf_to_dev(vsi->back), "Failed to configure Tx VLAN anti-spoof %s for VSI %d, error %d\n", + enable ? 
"ON" : "OFF", vsi->vsi_num, err); + else + vsi->info.sec_flags = ctx->info.sec_flags; + + kfree(ctx); + + return err; +} + +static int ice_cfg_mac_antispoof(struct ice_vsi *vsi, bool enable) +{ + struct ice_vsi_ctx *ctx; + int err; + + ctx = kzalloc(sizeof(*ctx), GFP_KERNEL); + if (!ctx) + return -ENOMEM; + + ctx->info.sec_flags = vsi->info.sec_flags; + ctx->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_SECURITY_VALID); + + if (enable) + ctx->info.sec_flags |= ICE_AQ_VSI_SEC_FLAG_ENA_MAC_ANTI_SPOOF; + else + ctx->info.sec_flags &= ~ICE_AQ_VSI_SEC_FLAG_ENA_MAC_ANTI_SPOOF; + + err = ice_update_vsi(&vsi->back->hw, vsi->idx, ctx, NULL); + if (err) + dev_err(ice_pf_to_dev(vsi->back), "Failed to configure Tx MAC anti-spoof %s for VSI %d, error %d\n", + enable ? "ON" : "OFF", vsi->vsi_num, err); + else + vsi->info.sec_flags = ctx->info.sec_flags; + + kfree(ctx); + + return err; +} + +/** + * ice_vsi_ena_spoofchk - enable Tx spoof checking for this VSI + * @vsi: VSI to enable Tx spoof checking for + */ +static int ice_vsi_ena_spoofchk(struct ice_vsi *vsi) +{ + int err; + + err = ice_cfg_vlan_antispoof(vsi, true); + if (err) + return err; + + return ice_cfg_mac_antispoof(vsi, true); +} + +/** + * ice_vsi_dis_spoofchk - disable Tx spoof checking for this VSI + * @vsi: VSI to disable Tx spoof checking for + */ +static int ice_vsi_dis_spoofchk(struct ice_vsi *vsi) +{ + int err; + + err = ice_cfg_vlan_antispoof(vsi, false); + if (err) + return err; + + return ice_cfg_mac_antispoof(vsi, false); +} + +/** + * ice_vf_set_spoofchk_cfg - apply Tx spoof checking setting + * @vf: VF set spoofchk for + * @vsi: VSI associated to the VF + */ +static int +ice_vf_set_spoofchk_cfg(struct ice_vf *vf, struct ice_vsi *vsi) +{ + int err; + + if (vf->spoofchk) + err = ice_vsi_ena_spoofchk(vsi); + else + err = ice_vsi_dis_spoofchk(vsi); + + return err; +} + /** * ice_vf_rebuild_host_mac_cfg - add broadcast and the VF's perm_addr/LAA * @vf: VF to add MAC filters for @@ -1344,6 +1452,10 @@ static void ice_vf_rebuild_host_cfg(struct ice_vf *vf) dev_err(dev, "failed to rebuild Tx rate limiting configuration for VF %u\n", vf->vf_id); + if (ice_vf_set_spoofchk_cfg(vf, vsi)) + dev_err(dev, "failed to rebuild spoofchk configuration for VF %d\n", + vf->vf_id); + /* rebuild aggregator node config for main VF VSI */ ice_vf_rebuild_aggregator_node_cfg(vsi); } @@ -1758,6 +1870,13 @@ static int ice_init_vf_vsi_res(struct ice_vf *vf) goto release_vsi; } + err = ice_vf_set_spoofchk_cfg(vf, vsi); + if (err) { + dev_warn(dev, "Failed to initialize spoofchk setting for VF %d\n", + vf->vf_id); + goto release_vsi; + } + vf->num_mac = 1; return 0; @@ -2891,7 +3010,6 @@ int ice_set_vf_spoofchk(struct net_device *netdev, int vf_id, bool ena) { struct ice_netdev_priv *np = netdev_priv(netdev); struct ice_pf *pf = np->vsi->back; - struct ice_vsi_ctx *ctx; struct ice_vsi *vf_vsi; struct device *dev; struct ice_vf *vf; @@ -2924,37 +3042,16 @@ int ice_set_vf_spoofchk(struct net_device *netdev, int vf_id, bool ena) return 0; } - ctx = kzalloc(sizeof(*ctx), GFP_KERNEL); - if (!ctx) - return -ENOMEM; - - ctx->info.sec_flags = vf_vsi->info.sec_flags; - ctx->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_SECURITY_VALID); - if (ena) { - ctx->info.sec_flags |= - ICE_AQ_VSI_SEC_FLAG_ENA_MAC_ANTI_SPOOF | - (ICE_AQ_VSI_SEC_TX_VLAN_PRUNE_ENA << - ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S); - } else { - ctx->info.sec_flags &= - ~(ICE_AQ_VSI_SEC_FLAG_ENA_MAC_ANTI_SPOOF | - (ICE_AQ_VSI_SEC_TX_VLAN_PRUNE_ENA << - ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S)); - } - - ret = 
ice_update_vsi(&pf->hw, vf_vsi->idx, ctx, NULL); - if (ret) { - dev_err(dev, "Failed to %sable spoofchk on VF %d VSI %d\n error %d\n", - ena ? "en" : "dis", vf->vf_id, vf_vsi->vsi_num, ret); - goto out; - } - - /* only update spoofchk state and VSI context on success */ - vf_vsi->info.sec_flags = ctx->info.sec_flags; - vf->spoofchk = ena; + if (ena) + ret = ice_vsi_ena_spoofchk(vf_vsi); + else + ret = ice_vsi_dis_spoofchk(vf_vsi); + if (ret) + dev_err(dev, "Failed to set spoofchk %s for VF %d VSI %d\n error %d\n", + ena ? "ON" : "OFF", vf->vf_id, vf_vsi->vsi_num, ret); + else + vf->spoofchk = ena; -out: - kfree(ctx); return ret; } -- 2.20.1 From anthony.l.nguyen at intel.com Tue Nov 30 21:21:47 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Tue, 30 Nov 2021 13:21:47 -0800 Subject: [Intel-wired-lan] [PATCH net-next 06/14] ice: Use the proto argument for VLAN ops In-Reply-To: <20211130212155.27852-1-anthony.l.nguyen@intel.com> References: <20211130212155.27852-1-anthony.l.nguyen@intel.com> Message-ID: <20211130212155.27852-6-anthony.l.nguyen@intel.com> From: Brett Creeley Currently the proto argument is unused. This is because the driver only supports 802.1Q VLAN filtering. This policy is enforced via netdev features that the driver sets up when configuring the netdev, so the proto argument won't ever be anything other than 802.1Q. However, this will allow for future iterations of the driver to seemlessly support 802.1ad filtering. Begin using the proto argument and extend the related structures to support its use. Signed-off-by: Brett Creeley Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/ice/ice_fltr.c | 2 + drivers/net/ethernet/intel/ice/ice_lib.c | 2 +- drivers/net/ethernet/intel/ice/ice_main.c | 22 ++++----- drivers/net/ethernet/intel/ice/ice_switch.c | 5 ++ drivers/net/ethernet/intel/ice/ice_switch.h | 2 + .../net/ethernet/intel/ice/ice_virtchnl_pf.c | 10 ++-- .../net/ethernet/intel/ice/ice_virtchnl_pf.h | 2 +- drivers/net/ethernet/intel/ice/ice_vlan.h | 3 +- .../net/ethernet/intel/ice/ice_vsi_vlan_lib.c | 48 ++++++++++++++++++- .../net/ethernet/intel/ice/ice_vsi_vlan_lib.h | 4 +- .../net/ethernet/intel/ice/ice_vsi_vlan_ops.h | 4 +- 11 files changed, 78 insertions(+), 26 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_fltr.c b/drivers/net/ethernet/intel/ice/ice_fltr.c index 8f543851e39f..67044556b5bd 100644 --- a/drivers/net/ethernet/intel/ice/ice_fltr.c +++ b/drivers/net/ethernet/intel/ice/ice_fltr.c @@ -220,6 +220,8 @@ ice_fltr_add_vlan_to_list(struct ice_vsi *vsi, struct list_head *list, info.fltr_act = ICE_FWD_TO_VSI; info.vsi_handle = vsi->idx; info.l_data.vlan.vlan_id = vlan->vid; + info.l_data.vlan.tpid = vlan->tpid; + info.l_data.vlan.tpid_valid = true; return ice_fltr_add_entry_to_list(ice_pf_to_dev(vsi->back), &info, list); diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index 55a2aef54922..0fff5ec897c9 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -3880,7 +3880,7 @@ int ice_vsi_add_vlan_zero(struct ice_vsi *vsi) { struct ice_vlan vlan; - vlan = ICE_VLAN(0, 0); + vlan = ICE_VLAN(0, 0, 0); return vsi->vlan_ops.add_vlan(vsi, &vlan); } diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c index 8669858d104c..8a0684c0ebd0 100644 --- a/drivers/net/ethernet/intel/ice/ice_main.c +++ b/drivers/net/ethernet/intel/ice/ice_main.c @@ -3410,14 +3410,13 @@ ice_lb_vsi_setup(struct ice_pf *pf, struct 
ice_port_info *pi) /** * ice_vlan_rx_add_vid - Add a VLAN ID filter to HW offload * @netdev: network interface to be adjusted - * @proto: unused protocol + * @proto: VLAN TPID * @vid: VLAN ID to be added * * net_device_ops implementation for adding VLAN IDs */ static int -ice_vlan_rx_add_vid(struct net_device *netdev, __always_unused __be16 proto, - u16 vid) +ice_vlan_rx_add_vid(struct net_device *netdev, __be16 proto, u16 vid) { struct ice_netdev_priv *np = netdev_priv(netdev); struct ice_vsi *vsi = np->vsi; @@ -3438,7 +3437,7 @@ ice_vlan_rx_add_vid(struct net_device *netdev, __always_unused __be16 proto, /* Add a switch rule for this VLAN ID so its corresponding VLAN tagged * packets aren't pruned by the device's internal switch on Rx */ - vlan = ICE_VLAN(vid, 0); + vlan = ICE_VLAN(be16_to_cpu(proto), vid, 0); ret = vsi->vlan_ops.add_vlan(vsi, &vlan); if (!ret) set_bit(ICE_VSI_VLAN_FLTR_CHANGED, vsi->state); @@ -3449,14 +3448,13 @@ ice_vlan_rx_add_vid(struct net_device *netdev, __always_unused __be16 proto, /** * ice_vlan_rx_kill_vid - Remove a VLAN ID filter from HW offload * @netdev: network interface to be adjusted - * @proto: unused protocol + * @proto: VLAN TPID * @vid: VLAN ID to be removed * * net_device_ops implementation for removing VLAN IDs */ static int -ice_vlan_rx_kill_vid(struct net_device *netdev, __always_unused __be16 proto, - u16 vid) +ice_vlan_rx_kill_vid(struct net_device *netdev, __be16 proto, u16 vid) { struct ice_netdev_priv *np = netdev_priv(netdev); struct ice_vsi *vsi = np->vsi; @@ -3470,7 +3468,7 @@ ice_vlan_rx_kill_vid(struct net_device *netdev, __always_unused __be16 proto, /* Make sure VLAN delete is successful before updating VLAN * information */ - vlan = ICE_VLAN(vid, 0); + vlan = ICE_VLAN(be16_to_cpu(proto), vid, 0); ret = vsi->vlan_ops.del_vlan(vsi, &vlan); if (ret) return ret; @@ -5621,14 +5619,14 @@ ice_set_features(struct net_device *netdev, netdev_features_t features) if ((features & NETIF_F_HW_VLAN_CTAG_RX) && !(netdev->features & NETIF_F_HW_VLAN_CTAG_RX)) - ret = vsi->vlan_ops.ena_stripping(vsi); + ret = vsi->vlan_ops.ena_stripping(vsi, ETH_P_8021Q); else if (!(features & NETIF_F_HW_VLAN_CTAG_RX) && (netdev->features & NETIF_F_HW_VLAN_CTAG_RX)) ret = vsi->vlan_ops.dis_stripping(vsi); if ((features & NETIF_F_HW_VLAN_CTAG_TX) && !(netdev->features & NETIF_F_HW_VLAN_CTAG_TX)) - ret = vsi->vlan_ops.ena_insertion(vsi); + ret = vsi->vlan_ops.ena_insertion(vsi, ETH_P_8021Q); else if (!(features & NETIF_F_HW_VLAN_CTAG_TX) && (netdev->features & NETIF_F_HW_VLAN_CTAG_TX)) ret = vsi->vlan_ops.dis_insertion(vsi); @@ -5674,9 +5672,9 @@ static int ice_vsi_vlan_setup(struct ice_vsi *vsi) int ret = 0; if (vsi->netdev->features & NETIF_F_HW_VLAN_CTAG_RX) - ret = vsi->vlan_ops.ena_stripping(vsi); + ret = vsi->vlan_ops.ena_stripping(vsi, ETH_P_8021Q); if (vsi->netdev->features & NETIF_F_HW_VLAN_CTAG_TX) - ret = vsi->vlan_ops.ena_insertion(vsi); + ret = vsi->vlan_ops.ena_insertion(vsi, ETH_P_8021Q); return ret; } diff --git a/drivers/net/ethernet/intel/ice/ice_switch.c b/drivers/net/ethernet/intel/ice/ice_switch.c index f998fcddc789..f851a81a7240 100644 --- a/drivers/net/ethernet/intel/ice/ice_switch.c +++ b/drivers/net/ethernet/intel/ice/ice_switch.c @@ -1539,6 +1539,7 @@ ice_fill_sw_rule(struct ice_hw *hw, struct ice_fltr_info *f_info, struct ice_aqc_sw_rules_elem *s_rule, enum ice_adminq_opc opc) { u16 vlan_id = ICE_MAX_VLAN_ID + 1; + u16 vlan_tpid = ETH_P_8021Q; void *daddr = NULL; u16 eth_hdr_sz; u8 *eth_hdr; @@ -1611,6 +1612,8 @@ ice_fill_sw_rule(struct ice_hw 
*hw, struct ice_fltr_info *f_info, break; case ICE_SW_LKUP_VLAN: vlan_id = f_info->l_data.vlan.vlan_id; + if (f_info->l_data.vlan.tpid_valid) + vlan_tpid = f_info->l_data.vlan.tpid; if (f_info->fltr_act == ICE_FWD_TO_VSI || f_info->fltr_act == ICE_FWD_TO_VSI_LIST) { act |= ICE_SINGLE_ACT_PRUNE; @@ -1653,6 +1656,8 @@ ice_fill_sw_rule(struct ice_hw *hw, struct ice_fltr_info *f_info, if (!(vlan_id > ICE_MAX_VLAN_ID)) { off = (__force __be16 *)(eth_hdr + ICE_ETH_VLAN_TCI_OFFSET); *off = cpu_to_be16(vlan_id); + off = (__force __be16 *)(eth_hdr + ICE_ETH_ETHTYPE_OFFSET); + *off = cpu_to_be16(vlan_tpid); } /* Create the switch rule with the final dummy Ethernet header */ diff --git a/drivers/net/ethernet/intel/ice/ice_switch.h b/drivers/net/ethernet/intel/ice/ice_switch.h index 4fb1a7ae5dbb..5000cc8276cd 100644 --- a/drivers/net/ethernet/intel/ice/ice_switch.h +++ b/drivers/net/ethernet/intel/ice/ice_switch.h @@ -77,6 +77,8 @@ struct ice_fltr_info { } mac_vlan; struct { u16 vlan_id; + u16 tpid; + u8 tpid_valid; } vlan; /* Set lkup_type as ICE_SW_LKUP_ETHERTYPE * if just using ethertype as filter. Set lkup_type as diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c index 4971e547432c..e576cd201a48 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c @@ -4139,7 +4139,7 @@ ice_set_vf_port_vlan(struct net_device *netdev, int vf_id, u16 vlan_id, u8 qos, mutex_lock(&vf->cfg_lock); - vf->port_vlan_info = ICE_VLAN(vlan_id, qos); + vf->port_vlan_info = ICE_VLAN(ETH_P_8021Q, vlan_id, qos); if (ice_vf_is_port_vlan_ena(vf)) dev_info(dev, "Setting VLAN %u, QoS %u on VF %d\n", vlan_id, qos, vf_id); @@ -4260,7 +4260,7 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) if (!vid) continue; - vlan = ICE_VLAN(vid, 0); + vlan = ICE_VLAN(ETH_P_8021Q, vid, 0); status = vsi->vlan_ops.add_vlan(vsi, &vlan); if (status) { v_ret = VIRTCHNL_STATUS_ERR_PARAM; @@ -4313,7 +4313,7 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) if (!vid) continue; - vlan = ICE_VLAN(vid, 0); + vlan = ICE_VLAN(ETH_P_8021Q, vid, 0); status = vsi->vlan_ops.del_vlan(vsi, &vlan); if (status) { v_ret = VIRTCHNL_STATUS_ERR_PARAM; @@ -4392,7 +4392,7 @@ static int ice_vc_ena_vlan_stripping(struct ice_vf *vf) } vsi = ice_get_vf_vsi(vf); - if (vsi->vlan_ops.ena_stripping(vsi)) + if (vsi->vlan_ops.ena_stripping(vsi, ETH_P_8021Q)) v_ret = VIRTCHNL_STATUS_ERR_PARAM; error_param: @@ -4457,7 +4457,7 @@ static int ice_vf_init_vlan_stripping(struct ice_vf *vf) return 0; if (ice_vf_vlan_offload_ena(vf->driver_caps)) - return vsi->vlan_ops.ena_stripping(vsi); + return vsi->vlan_ops.ena_stripping(vsi, ETH_P_8021Q); else return vsi->vlan_ops.dis_stripping(vsi); } diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h index 5079a3b72698..b06ca1f97833 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h @@ -120,7 +120,7 @@ struct ice_vf { struct ice_time_mac legacy_last_added_umac; DECLARE_BITMAP(txq_ena, ICE_MAX_RSS_QS_PER_VF); DECLARE_BITMAP(rxq_ena, ICE_MAX_RSS_QS_PER_VF); - struct ice_vlan port_vlan_info; /* Port VLAN ID and QoS */ + struct ice_vlan port_vlan_info; /* Port VLAN ID, QoS, and TPID */ u8 pf_set_mac:1; /* VF MAC address set by VMM admin */ u8 trusted:1; u8 spoofchk:1; diff --git a/drivers/net/ethernet/intel/ice/ice_vlan.h 
b/drivers/net/ethernet/intel/ice/ice_vlan.h index 3fad0cba2da6..bc4550a03173 100644 --- a/drivers/net/ethernet/intel/ice/ice_vlan.h +++ b/drivers/net/ethernet/intel/ice/ice_vlan.h @@ -8,10 +8,11 @@ #include "ice_type.h" struct ice_vlan { + u16 tpid; u16 vid; u8 prio; }; -#define ICE_VLAN(vid, prio) ((struct ice_vlan){ vid, prio }) +#define ICE_VLAN(tpid, vid, prio) ((struct ice_vlan){ tpid, vid, prio }) #endif /* _ICE_VLAN_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c index 74b6dec0744b..6b7feab0b2a1 100644 --- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c @@ -6,6 +6,31 @@ #include "ice_fltr.h" #include "ice.h" +static void print_invalid_tpid(struct ice_vsi *vsi, u16 tpid) +{ + dev_err(ice_pf_to_dev(vsi->back), "%s %d specified invalid VLAN tpid 0x%04x\n", + ice_vsi_type_str(vsi->type), vsi->idx, tpid); +} + +/** + * validate_vlan - check if the ice_vlan passed in is valid + * @vsi: VSI used for printing error message + * @vlan: ice_vlan structure to validate + * + * Return true if the VLAN TPID is valid or if the VLAN TPID is 0 and the VLAN + * VID is 0, which allows for non-zero VLAN filters with the specified VLAN TPID + * and untagged VLAN 0 filters to be added to the prune list respectively. + */ +static bool validate_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) +{ + if (vlan->tpid != ETH_P_8021Q && (vlan->tpid || vlan->vid)) { + print_invalid_tpid(vsi, vlan->tpid); + return false; + } + + return true; +} + /** * ice_vsi_add_vlan - default add VLAN implementation for all VSI types * @vsi: VSI being configured @@ -15,6 +40,9 @@ int ice_vsi_add_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) { int err = 0; + if (!validate_vlan(vsi, vlan)) + return -EINVAL; + if (!ice_fltr_add_vlan(vsi, vlan)) { vsi->num_vlan++; } else { @@ -37,6 +65,9 @@ int ice_vsi_del_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) struct device *dev; int err; + if (!validate_vlan(vsi, vlan)) + return -EINVAL; + dev = ice_pf_to_dev(pf); err = ice_fltr_remove_vlan(vsi, vlan); @@ -143,8 +174,13 @@ static int ice_vsi_manage_vlan_stripping(struct ice_vsi *vsi, bool ena) return err; } -int ice_vsi_ena_stripping(struct ice_vsi *vsi) +int ice_vsi_ena_stripping(struct ice_vsi *vsi, const u16 tpid) { + if (tpid != ETH_P_8021Q) { + print_invalid_tpid(vsi, tpid); + return -EINVAL; + } + return ice_vsi_manage_vlan_stripping(vsi, true); } @@ -153,8 +189,13 @@ int ice_vsi_dis_stripping(struct ice_vsi *vsi) return ice_vsi_manage_vlan_stripping(vsi, false); } -int ice_vsi_ena_insertion(struct ice_vsi *vsi) +int ice_vsi_ena_insertion(struct ice_vsi *vsi, const u16 tpid) { + if (tpid != ETH_P_8021Q) { + print_invalid_tpid(vsi, tpid); + return -EINVAL; + } + return ice_vsi_manage_vlan_insertion(vsi); } @@ -216,6 +257,9 @@ int ice_vsi_set_port_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) { u16 port_vlan_info; + if (vlan->tpid != ETH_P_8021Q) + return -EINVAL; + if (vlan->prio > 7) return -EINVAL; diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h index a0305007896c..1bdbf585db7d 100644 --- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h @@ -12,9 +12,9 @@ struct ice_vsi; int ice_vsi_add_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan); int ice_vsi_del_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan); -int ice_vsi_ena_stripping(struct ice_vsi *vsi); +int 
ice_vsi_ena_stripping(struct ice_vsi *vsi, u16 tpid); int ice_vsi_dis_stripping(struct ice_vsi *vsi); -int ice_vsi_ena_insertion(struct ice_vsi *vsi); +int ice_vsi_ena_insertion(struct ice_vsi *vsi, u16 tpid); int ice_vsi_dis_insertion(struct ice_vsi *vsi); int ice_vsi_set_port_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan); diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h index c944f04acd3c..76e55b259bc8 100644 --- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h @@ -12,9 +12,9 @@ struct ice_vsi; struct ice_vsi_vlan_ops { int (*add_vlan)(struct ice_vsi *vsi, struct ice_vlan *vlan); int (*del_vlan)(struct ice_vsi *vsi, struct ice_vlan *vlan); - int (*ena_stripping)(struct ice_vsi *vsi); + int (*ena_stripping)(struct ice_vsi *vsi, const u16 tpid); int (*dis_stripping)(struct ice_vsi *vsi); - int (*ena_insertion)(struct ice_vsi *vsi); + int (*ena_insertion)(struct ice_vsi *vsi, const u16 tpid); int (*dis_insertion)(struct ice_vsi *vsi); int (*ena_rx_filtering)(struct ice_vsi *vsi); int (*dis_rx_filtering)(struct ice_vsi *vsi); -- 2.20.1 From anthony.l.nguyen at intel.com Tue Nov 30 21:21:44 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Tue, 30 Nov 2021 13:21:44 -0800 Subject: [Intel-wired-lan] [PATCH net-next 03/14] ice: Add new VSI VLAN ops In-Reply-To: <20211130212155.27852-1-anthony.l.nguyen@intel.com> References: <20211130212155.27852-1-anthony.l.nguyen@intel.com> Message-ID: <20211130212155.27852-3-anthony.l.nguyen@intel.com> From: Brett Creeley Incoming changes to support 802.1Q and/or 802.1ad VLAN filtering and offloads require more flexibility when configuring VLANs. The VSI VLAN interface will allow flexibility for configuring VLANs for all VSI types. Add new files to separate the VSI VLAN ops and move functions to make the code more organized. 
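For readers skimming the diff below: the practical effect of this patch is that callers stop using the ice_lib.c VLAN helpers directly and instead dispatch through per-VSI function pointers that ice_vsi_init_vlan_ops() fills in during ice_vsi_setup()/ice_vsi_rebuild(). A minimal sketch of the resulting call pattern (illustrative only, not part of the patch; example_set_rx_stripping() is a made-up wrapper name):

    /* Sketch: toggle Rx VLAN stripping through the new ops table.
     * Assumes vsi->vlan_ops has already been populated by
     * ice_vsi_init_vlan_ops().
     */
    static int example_set_rx_stripping(struct ice_vsi *vsi, bool ena)
    {
            return ena ? vsi->vlan_ops.ena_stripping(vsi) :
                         vsi->vlan_ops.dis_stripping(vsi);
    }

This is the same pattern ice_set_features() uses further down for the NETIF_F_HW_VLAN_CTAG_RX bits.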
Signed-off-by: Brett Creeley Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/ice/Makefile | 2 + drivers/net/ethernet/intel/ice/ice.h | 2 + drivers/net/ethernet/intel/ice/ice_eswitch.c | 2 +- drivers/net/ethernet/intel/ice/ice_lib.c | 207 +---------- drivers/net/ethernet/intel/ice/ice_lib.h | 11 - drivers/net/ethernet/intel/ice/ice_main.c | 30 +- drivers/net/ethernet/intel/ice/ice_osdep.h | 1 + drivers/net/ethernet/intel/ice/ice_switch.h | 9 - drivers/net/ethernet/intel/ice/ice_type.h | 9 + .../net/ethernet/intel/ice/ice_virtchnl_pf.c | 111 +----- .../net/ethernet/intel/ice/ice_vsi_vlan_lib.c | 326 ++++++++++++++++++ .../net/ethernet/intel/ice/ice_vsi_vlan_lib.h | 27 ++ .../net/ethernet/intel/ice/ice_vsi_vlan_ops.c | 20 ++ .../net/ethernet/intel/ice/ice_vsi_vlan_ops.h | 28 ++ 14 files changed, 450 insertions(+), 335 deletions(-) create mode 100644 drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c create mode 100644 drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h create mode 100644 drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.c create mode 100644 drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h diff --git a/drivers/net/ethernet/intel/ice/Makefile b/drivers/net/ethernet/intel/ice/Makefile index c22434a3ec4d..c40b3aa1d195 100644 --- a/drivers/net/ethernet/intel/ice/Makefile +++ b/drivers/net/ethernet/intel/ice/Makefile @@ -18,6 +18,8 @@ ice-y := ice_main.o \ ice_txrx_lib.o \ ice_txrx.o \ ice_fltr.o \ + ice_vsi_vlan_ops.o \ + ice_vsi_vlan_lib.o \ ice_fdir.o \ ice_ethtool_fdir.o \ ice_flex_pipe.o \ diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h index 6fa06b00c268..efcc713ba287 100644 --- a/drivers/net/ethernet/intel/ice/ice.h +++ b/drivers/net/ethernet/intel/ice/ice.h @@ -73,6 +73,7 @@ #include "ice_eswitch.h" #include "ice_lag.h" #include "ice_gnss.h" +#include "ice_vsi_vlan_ops.h" #define ICE_BAR0 0 #define ICE_REQ_DESC_MULTIPLE 32 @@ -370,6 +371,7 @@ struct ice_vsi { u8 irqs_ready:1; u8 current_isup:1; /* Sync 'link up' logging */ u8 stat_offsets_loaded:1; + struct ice_vsi_vlan_ops vlan_ops; u16 num_vlan; /* queue information */ diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch.c b/drivers/net/ethernet/intel/ice/ice_eswitch.c index 291748553800..0ff1a375f2aa 100644 --- a/drivers/net/ethernet/intel/ice/ice_eswitch.c +++ b/drivers/net/ethernet/intel/ice/ice_eswitch.c @@ -118,7 +118,7 @@ static int ice_eswitch_setup_env(struct ice_pf *pf) struct ice_vsi *ctrl_vsi = pf->switchdev.control_vsi; bool rule_added = false; - ice_vsi_manage_vlan_stripping(ctrl_vsi, false); + ctrl_vsi->vlan_ops.dis_stripping(ctrl_vsi); ice_remove_vsi_fltr(&pf->hw, uplink_vsi->idx); diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index cc135792834e..b50509584b31 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -1694,62 +1694,6 @@ void ice_update_eth_stats(struct ice_vsi *vsi) vsi->stat_offsets_loaded = true; } -/** - * ice_vsi_add_vlan - Add VSI membership for given VLAN - * @vsi: the VSI being configured - * @vid: VLAN ID to be added - * @action: filter action to be performed on match - */ -int -ice_vsi_add_vlan(struct ice_vsi *vsi, u16 vid, enum ice_sw_fwd_act_type action) -{ - struct ice_pf *pf = vsi->back; - struct device *dev; - int err = 0; - - dev = ice_pf_to_dev(pf); - - if (!ice_fltr_add_vlan(vsi, vid, action)) { - vsi->num_vlan++; - } else { - err = -ENODEV; - dev_err(dev, "Failure Adding VLAN %d on VSI %i\n", vid, - vsi->vsi_num); - } - - return 
err; -} - -/** - * ice_vsi_kill_vlan - Remove VSI membership for a given VLAN - * @vsi: the VSI being configured - * @vid: VLAN ID to be removed - * - * Returns 0 on success and negative on failure - */ -int ice_vsi_kill_vlan(struct ice_vsi *vsi, u16 vid) -{ - struct ice_pf *pf = vsi->back; - struct device *dev; - int err; - - dev = ice_pf_to_dev(pf); - - err = ice_fltr_remove_vlan(vsi, vid, ICE_FWD_TO_VSI); - if (!err) { - vsi->num_vlan--; - } else if (err == -ENOENT) { - dev_dbg(dev, "Failed to remove VLAN %d on VSI %i, it does not exist, error: %d\n", - vid, vsi->vsi_num, err); - err = 0; - } else { - dev_err(dev, "Error removing VLAN %d on vsi %i error: %d\n", - vid, vsi->vsi_num, err); - } - - return err; -} - /** * ice_vsi_cfg_frame_size - setup max frame size and Rx buffer length * @vsi: VSI @@ -2077,96 +2021,6 @@ void ice_vsi_cfg_msix(struct ice_vsi *vsi) } } -/** - * ice_vsi_manage_vlan_insertion - Manage VLAN insertion for the VSI for Tx - * @vsi: the VSI being changed - */ -int ice_vsi_manage_vlan_insertion(struct ice_vsi *vsi) -{ - struct ice_hw *hw = &vsi->back->hw; - struct ice_vsi_ctx *ctxt; - int ret; - - ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL); - if (!ctxt) - return -ENOMEM; - - /* Here we are configuring the VSI to let the driver add VLAN tags by - * setting vlan_flags to ICE_AQ_VSI_VLAN_MODE_ALL. The actual VLAN tag - * insertion happens in the Tx hot path, in ice_tx_map. - */ - ctxt->info.vlan_flags = ICE_AQ_VSI_VLAN_MODE_ALL; - - /* Preserve existing VLAN strip setting */ - ctxt->info.vlan_flags |= (vsi->info.vlan_flags & - ICE_AQ_VSI_VLAN_EMOD_M); - - ctxt->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_VLAN_VALID); - - ret = ice_update_vsi(hw, vsi->idx, ctxt, NULL); - if (ret) { - dev_err(ice_pf_to_dev(vsi->back), "update VSI for VLAN insert failed, err %d aq_err %s\n", - ret, ice_aq_str(hw->adminq.sq_last_status)); - goto out; - } - - vsi->info.vlan_flags = ctxt->info.vlan_flags; -out: - kfree(ctxt); - return ret; -} - -/** - * ice_vsi_manage_vlan_stripping - Manage VLAN stripping for the VSI for Rx - * @vsi: the VSI being changed - * @ena: boolean value indicating if this is a enable or disable request - */ -int ice_vsi_manage_vlan_stripping(struct ice_vsi *vsi, bool ena) -{ - struct ice_hw *hw = &vsi->back->hw; - struct ice_vsi_ctx *ctxt; - int ret; - - /* do not allow modifying VLAN stripping when a port VLAN is configured - * on this VSI - */ - if (vsi->info.pvid) - return 0; - - ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL); - if (!ctxt) - return -ENOMEM; - - /* Here we are configuring what the VSI should do with the VLAN tag in - * the Rx packet. We can either leave the tag in the packet or put it in - * the Rx descriptor. - */ - if (ena) - /* Strip VLAN tag from Rx packet and put it in the desc */ - ctxt->info.vlan_flags = ICE_AQ_VSI_VLAN_EMOD_STR_BOTH; - else - /* Disable stripping. 
Leave tag in packet */ - ctxt->info.vlan_flags = ICE_AQ_VSI_VLAN_EMOD_NOTHING; - - /* Allow all packets untagged/tagged */ - ctxt->info.vlan_flags |= ICE_AQ_VSI_VLAN_MODE_ALL; - - ctxt->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_VLAN_VALID); - - ret = ice_update_vsi(hw, vsi->idx, ctxt, NULL); - if (ret) { - dev_err(ice_pf_to_dev(vsi->back), "update VSI for VLAN strip failed, ena = %d err %d aq_err %s\n", - ena, ret, ice_aq_str(hw->adminq.sq_last_status)); - ret = -EIO; - goto out; - } - - vsi->info.vlan_flags = ctxt->info.vlan_flags; -out: - kfree(ctxt); - return ret; -} - /** * ice_vsi_start_all_rx_rings - start/enable all of a VSI's Rx rings * @vsi: the VSI whose rings are to be enabled @@ -2260,61 +2114,6 @@ bool ice_vsi_is_vlan_pruning_ena(struct ice_vsi *vsi) return (vsi->info.sw_flags2 & ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA); } -/** - * ice_cfg_vlan_pruning - enable or disable VLAN pruning on the VSI - * @vsi: VSI to enable or disable VLAN pruning on - * @ena: set to true to enable VLAN pruning and false to disable it - * - * returns 0 if VSI is updated, negative otherwise - */ -int ice_cfg_vlan_pruning(struct ice_vsi *vsi, bool ena) -{ - struct ice_vsi_ctx *ctxt; - struct ice_pf *pf; - int status; - - if (!vsi) - return -EINVAL; - - /* Don't enable VLAN pruning if the netdev is currently in promiscuous - * mode. VLAN pruning will be enabled when the interface exits - * promiscuous mode if any VLAN filters are active. - */ - if (vsi->netdev && vsi->netdev->flags & IFF_PROMISC && ena) - return 0; - - pf = vsi->back; - ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL); - if (!ctxt) - return -ENOMEM; - - ctxt->info = vsi->info; - - if (ena) - ctxt->info.sw_flags2 |= ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA; - else - ctxt->info.sw_flags2 &= ~ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA; - - ctxt->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_SW_VALID); - - status = ice_update_vsi(&pf->hw, vsi->idx, ctxt, NULL); - if (status) { - netdev_err(vsi->netdev, "%sabling VLAN pruning on VSI handle: %d, VSI HW ID: %d failed, err = %d, aq_err = %s\n", - ena ? 
"En" : "Dis", vsi->idx, vsi->vsi_num, - status, ice_aq_str(pf->hw.adminq.sq_last_status)); - goto err_out; - } - - vsi->info.sw_flags2 = ctxt->info.sw_flags2; - - kfree(ctxt); - return 0; - -err_out: - kfree(ctxt); - return -EIO; -} - static void ice_vsi_set_tc_cfg(struct ice_vsi *vsi) { if (!test_bit(ICE_FLAG_DCB_ENA, vsi->back->flags)) { @@ -2594,6 +2393,8 @@ ice_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi, if (ret) goto unroll_get_qs; + ice_vsi_init_vlan_ops(vsi); + switch (vsi->type) { case ICE_VSI_CTRL: case ICE_VSI_SWITCHDEV_CTRL: @@ -3257,6 +3058,8 @@ int ice_vsi_rebuild(struct ice_vsi *vsi, bool init_vsi) if (vtype == ICE_VSI_VF) vf = &pf->vf[vsi->vf_id]; + ice_vsi_init_vlan_ops(vsi); + coalesce = kcalloc(vsi->num_q_vectors, sizeof(struct ice_coalesce_stored), GFP_KERNEL); if (!coalesce) @@ -4075,7 +3878,7 @@ int ice_set_link(struct ice_vsi *vsi, bool ena) */ int ice_vsi_add_vlan_zero(struct ice_vsi *vsi) { - return ice_vsi_add_vlan(vsi, 0, ICE_FWD_TO_VSI); + return vsi->vlan_ops.add_vlan(vsi, 0, ICE_FWD_TO_VSI); } /** diff --git a/drivers/net/ethernet/intel/ice/ice_lib.h b/drivers/net/ethernet/intel/ice/ice_lib.h index 28e0f1147c82..427e5e4e9f17 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.h +++ b/drivers/net/ethernet/intel/ice/ice_lib.h @@ -22,15 +22,6 @@ int ice_vsi_cfg_lan_txqs(struct ice_vsi *vsi); void ice_vsi_cfg_msix(struct ice_vsi *vsi); -int -ice_vsi_add_vlan(struct ice_vsi *vsi, u16 vid, enum ice_sw_fwd_act_type action); - -int ice_vsi_kill_vlan(struct ice_vsi *vsi, u16 vid); - -int ice_vsi_manage_vlan_insertion(struct ice_vsi *vsi); - -int ice_vsi_manage_vlan_stripping(struct ice_vsi *vsi, bool ena); - int ice_vsi_start_all_rx_rings(struct ice_vsi *vsi); int ice_vsi_stop_all_rx_rings(struct ice_vsi *vsi); @@ -45,8 +36,6 @@ int ice_vsi_stop_xdp_tx_rings(struct ice_vsi *vsi); bool ice_vsi_is_vlan_pruning_ena(struct ice_vsi *vsi); -int ice_cfg_vlan_pruning(struct ice_vsi *vsi, bool ena); - void ice_cfg_sw_lldp(struct ice_vsi *vsi, bool tx, bool create); int ice_set_link(struct ice_vsi *vsi, bool ena); diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c index 18ecb1eb85a6..904571527e27 100644 --- a/drivers/net/ethernet/intel/ice/ice_main.c +++ b/drivers/net/ethernet/intel/ice/ice_main.c @@ -401,7 +401,7 @@ static int ice_vsi_sync_fltr(struct ice_vsi *vsi) ~IFF_PROMISC; goto out_promisc; } - ice_cfg_vlan_pruning(vsi, false); + vsi->vlan_ops.dis_rx_filtering(vsi); } } else { /* Clear Rx filter to remove traffic from wire */ @@ -415,7 +415,7 @@ static int ice_vsi_sync_fltr(struct ice_vsi *vsi) goto out_promisc; } if (vsi->num_vlan > 1) - ice_cfg_vlan_pruning(vsi, true); + vsi->vlan_ops.ena_rx_filtering(vsi); } } } @@ -3429,7 +3429,7 @@ ice_vlan_rx_add_vid(struct net_device *netdev, __always_unused __be16 proto, /* Enable VLAN pruning when a VLAN other than 0 is added */ if (!ice_vsi_is_vlan_pruning_ena(vsi)) { - ret = ice_cfg_vlan_pruning(vsi, true); + ret = vsi->vlan_ops.ena_rx_filtering(vsi); if (ret) return ret; } @@ -3437,7 +3437,7 @@ ice_vlan_rx_add_vid(struct net_device *netdev, __always_unused __be16 proto, /* Add a switch rule for this VLAN ID so its corresponding VLAN tagged * packets aren't pruned by the device's internal switch on Rx */ - ret = ice_vsi_add_vlan(vsi, vid, ICE_FWD_TO_VSI); + ret = vsi->vlan_ops.add_vlan(vsi, vid, ICE_FWD_TO_VSI); if (!ret) set_bit(ICE_VSI_VLAN_FLTR_CHANGED, vsi->state); @@ -3464,16 +3464,16 @@ ice_vlan_rx_kill_vid(struct net_device *netdev, __always_unused __be16 proto, 
if (!vid) return 0; - /* Make sure ice_vsi_kill_vlan is successful before updating VLAN + /* Make sure VLAN delete is successful before updating VLAN * information */ - ret = ice_vsi_kill_vlan(vsi, vid); + ret = vsi->vlan_ops.del_vlan(vsi, vid); if (ret) return ret; /* Disable pruning when VLAN 0 is the only VLAN rule */ if (vsi->num_vlan == 1 && ice_vsi_is_vlan_pruning_ena(vsi)) - ret = ice_cfg_vlan_pruning(vsi, false); + vsi->vlan_ops.dis_rx_filtering(vsi); set_bit(ICE_VSI_VLAN_FLTR_CHANGED, vsi->state); return ret; @@ -5617,24 +5617,24 @@ ice_set_features(struct net_device *netdev, netdev_features_t features) if ((features & NETIF_F_HW_VLAN_CTAG_RX) && !(netdev->features & NETIF_F_HW_VLAN_CTAG_RX)) - ret = ice_vsi_manage_vlan_stripping(vsi, true); + ret = vsi->vlan_ops.ena_stripping(vsi); else if (!(features & NETIF_F_HW_VLAN_CTAG_RX) && (netdev->features & NETIF_F_HW_VLAN_CTAG_RX)) - ret = ice_vsi_manage_vlan_stripping(vsi, false); + ret = vsi->vlan_ops.dis_stripping(vsi); if ((features & NETIF_F_HW_VLAN_CTAG_TX) && !(netdev->features & NETIF_F_HW_VLAN_CTAG_TX)) - ret = ice_vsi_manage_vlan_insertion(vsi); + ret = vsi->vlan_ops.ena_insertion(vsi); else if (!(features & NETIF_F_HW_VLAN_CTAG_TX) && (netdev->features & NETIF_F_HW_VLAN_CTAG_TX)) - ret = ice_vsi_manage_vlan_insertion(vsi); + ret = vsi->vlan_ops.dis_insertion(vsi); if ((features & NETIF_F_HW_VLAN_CTAG_FILTER) && !(netdev->features & NETIF_F_HW_VLAN_CTAG_FILTER)) - ret = ice_cfg_vlan_pruning(vsi, true); + ret = vsi->vlan_ops.ena_rx_filtering(vsi); else if (!(features & NETIF_F_HW_VLAN_CTAG_FILTER) && (netdev->features & NETIF_F_HW_VLAN_CTAG_FILTER)) - ret = ice_cfg_vlan_pruning(vsi, false); + ret = vsi->vlan_ops.dis_rx_filtering(vsi); if ((features & NETIF_F_NTUPLE) && !(netdev->features & NETIF_F_NTUPLE)) { @@ -5670,9 +5670,9 @@ static int ice_vsi_vlan_setup(struct ice_vsi *vsi) int ret = 0; if (vsi->netdev->features & NETIF_F_HW_VLAN_CTAG_RX) - ret = ice_vsi_manage_vlan_stripping(vsi, true); + ret = vsi->vlan_ops.ena_stripping(vsi); if (vsi->netdev->features & NETIF_F_HW_VLAN_CTAG_TX) - ret = ice_vsi_manage_vlan_insertion(vsi); + ret = vsi->vlan_ops.ena_insertion(vsi); return ret; } diff --git a/drivers/net/ethernet/intel/ice/ice_osdep.h b/drivers/net/ethernet/intel/ice/ice_osdep.h index f57c414bc0a9..380e8ae94fc9 100644 --- a/drivers/net/ethernet/intel/ice/ice_osdep.h +++ b/drivers/net/ethernet/intel/ice/ice_osdep.h @@ -9,6 +9,7 @@ #ifndef CONFIG_64BIT #include #endif +#include #define wr32(a, reg, value) writel((value), ((a)->hw_addr + (reg))) #define rd32(a, reg) readl((a)->hw_addr + (reg)) diff --git a/drivers/net/ethernet/intel/ice/ice_switch.h b/drivers/net/ethernet/intel/ice/ice_switch.h index d8334beaaa8a..4fb1a7ae5dbb 100644 --- a/drivers/net/ethernet/intel/ice/ice_switch.h +++ b/drivers/net/ethernet/intel/ice/ice_switch.h @@ -33,15 +33,6 @@ struct ice_vsi_ctx { struct ice_q_ctx *rdma_q_ctx[ICE_MAX_TRAFFIC_CLASS]; }; -enum ice_sw_fwd_act_type { - ICE_FWD_TO_VSI = 0, - ICE_FWD_TO_VSI_LIST, /* Do not use this when adding filter */ - ICE_FWD_TO_Q, - ICE_FWD_TO_QGRP, - ICE_DROP_PACKET, - ICE_INVAL_ACT -}; - /* Switch recipe ID enum values are specific to hardware */ enum ice_sw_lkup_type { ICE_SW_LKUP_ETHERTYPE = 0, diff --git a/drivers/net/ethernet/intel/ice/ice_type.h b/drivers/net/ethernet/intel/ice/ice_type.h index caf0a02b25f5..ef2ef064a74c 100644 --- a/drivers/net/ethernet/intel/ice/ice_type.h +++ b/drivers/net/ethernet/intel/ice/ice_type.h @@ -1007,6 +1007,15 @@ struct ice_hw_port_stats { u64 fd_sb_match; }; 
+enum ice_sw_fwd_act_type { + ICE_FWD_TO_VSI = 0, + ICE_FWD_TO_VSI_LIST, /* Do not use this when adding filter */ + ICE_FWD_TO_Q, + ICE_FWD_TO_QGRP, + ICE_DROP_PACKET, + ICE_INVAL_ACT +}; + struct ice_aq_get_set_rss_lut_params { u16 vsi_handle; /* software VSI handle */ u16 lut_size; /* size of the LUT buffer */ diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c index ab03010c822d..6fa0968f0912 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c @@ -642,55 +642,6 @@ static void ice_trigger_vf_reset(struct ice_vf *vf, bool is_vflr, bool is_pfr) } } -/** - * ice_vsi_manage_pvid - Enable or disable port VLAN for VSI - * @vsi: the VSI to update - * @pvid_info: VLAN ID and QoS used to set the PVID VSI context field - * @enable: true for enable PVID false for disable - */ -static int ice_vsi_manage_pvid(struct ice_vsi *vsi, u16 pvid_info, bool enable) -{ - struct ice_hw *hw = &vsi->back->hw; - struct ice_aqc_vsi_props *info; - struct ice_vsi_ctx *ctxt; - int ret; - - ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL); - if (!ctxt) - return -ENOMEM; - - ctxt->info = vsi->info; - info = &ctxt->info; - if (enable) { - info->vlan_flags = ICE_AQ_VSI_VLAN_MODE_UNTAGGED | - ICE_AQ_VSI_PVLAN_INSERT_PVID | - ICE_AQ_VSI_VLAN_EMOD_STR; - info->sw_flags2 |= ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA; - } else { - info->vlan_flags = ICE_AQ_VSI_VLAN_EMOD_NOTHING | - ICE_AQ_VSI_VLAN_MODE_ALL; - info->sw_flags2 &= ~ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA; - } - - info->pvid = cpu_to_le16(pvid_info); - info->valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_VLAN_VALID | - ICE_AQ_VSI_PROP_SW_VALID); - - ret = ice_update_vsi(hw, vsi->idx, ctxt, NULL); - if (ret) { - dev_info(ice_hw_to_dev(hw), "update VSI for port VLAN failed, err %d aq_err %s\n", - ret, ice_aq_str(hw->adminq.sq_last_status)); - goto out; - } - - vsi->info.vlan_flags = info->vlan_flags; - vsi->info.sw_flags2 = info->sw_flags2; - vsi->info.pvid = info->pvid; -out: - kfree(ctxt); - return ret; -} - /** * ice_vf_get_port_info - Get the VF's port info structure * @vf: VF used to get the port info structure for @@ -815,7 +766,7 @@ static int ice_vf_rebuild_host_vlan_cfg(struct ice_vf *vf) int err; if (vf->port_vlan_info) { - err = ice_vsi_manage_pvid(vsi, vf->port_vlan_info, true); + err = vsi->vlan_ops.set_port_vlan(vsi, vf->port_vlan_info); if (err) { dev_err(dev, "failed to configure port VLAN via VSI parameters for VF %u, error %d\n", vf->vf_id, err); @@ -826,7 +777,7 @@ static int ice_vf_rebuild_host_vlan_cfg(struct ice_vf *vf) } /* vlan_id will either be 0 or the port VLAN number */ - err = ice_vsi_add_vlan(vsi, vlan_id, ICE_FWD_TO_VSI); + err = vsi->vlan_ops.add_vlan(vsi, vlan_id, ICE_FWD_TO_VSI); if (err) { dev_err(dev, "failed to add %s VLAN %u filter for VF %u, error %d\n", vf->port_vlan_info ? 
"port" : "", vlan_id, vf->vf_id, @@ -837,37 +788,6 @@ static int ice_vf_rebuild_host_vlan_cfg(struct ice_vf *vf) return 0; } -static int ice_cfg_vlan_antispoof(struct ice_vsi *vsi, bool enable) -{ - struct ice_vsi_ctx *ctx; - int err; - - ctx = kzalloc(sizeof(*ctx), GFP_KERNEL); - if (!ctx) - return -ENOMEM; - - ctx->info.sec_flags = vsi->info.sec_flags; - ctx->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_SECURITY_VALID); - - if (enable) - ctx->info.sec_flags |= ICE_AQ_VSI_SEC_TX_VLAN_PRUNE_ENA << - ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S; - else - ctx->info.sec_flags &= ~(ICE_AQ_VSI_SEC_TX_VLAN_PRUNE_ENA << - ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S); - - err = ice_update_vsi(&vsi->back->hw, vsi->idx, ctx, NULL); - if (err) - dev_err(ice_pf_to_dev(vsi->back), "Failed to configure Tx VLAN anti-spoof %s for VSI %d, error %d\n", - enable ? "ON" : "OFF", vsi->vsi_num, err); - else - vsi->info.sec_flags = ctx->info.sec_flags; - - kfree(ctx); - - return err; -} - static int ice_cfg_mac_antispoof(struct ice_vsi *vsi, bool enable) { struct ice_vsi_ctx *ctx; @@ -905,7 +825,7 @@ static int ice_vsi_ena_spoofchk(struct ice_vsi *vsi) { int err; - err = ice_cfg_vlan_antispoof(vsi, true); + err = vsi->vlan_ops.ena_tx_filtering(vsi); if (err) return err; @@ -920,7 +840,7 @@ static int ice_vsi_dis_spoofchk(struct ice_vsi *vsi) { int err; - err = ice_cfg_vlan_antispoof(vsi, false); + err = vsi->vlan_ops.dis_tx_filtering(vsi); if (err) return err; @@ -3131,9 +3051,9 @@ static int ice_vc_cfg_promiscuous_mode_msg(struct ice_vf *vf, u8 *msg) if (vsi->num_vlan || vf->port_vlan_info) { if (rm_promisc) - ret = ice_cfg_vlan_pruning(vsi, true); + ret = vsi->vlan_ops.ena_rx_filtering(vsi); else - ret = ice_cfg_vlan_pruning(vsi, false); + ret = vsi->vlan_ops.dis_rx_filtering(vsi); if (ret) { dev_err(dev, "Failed to configure VLAN pruning in promiscuous mode\n"); v_ret = VIRTCHNL_STATUS_ERR_PARAM; @@ -4330,7 +4250,7 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) if (!vid) continue; - status = ice_vsi_add_vlan(vsi, vid, ICE_FWD_TO_VSI); + status = vsi->vlan_ops.add_vlan(vsi, vid, ICE_FWD_TO_VSI); if (status) { v_ret = VIRTCHNL_STATUS_ERR_PARAM; goto error_param; @@ -4339,7 +4259,7 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) /* Enable VLAN pruning when non-zero VLAN is added */ if (!vlan_promisc && vid && !ice_vsi_is_vlan_pruning_ena(vsi)) { - status = ice_cfg_vlan_pruning(vsi, true); + status = vsi->vlan_ops.ena_rx_filtering(vsi); if (status) { v_ret = VIRTCHNL_STATUS_ERR_PARAM; dev_err(dev, "Enable VLAN pruning on VLAN ID: %d failed error-%d\n", @@ -4381,10 +4301,7 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) if (!vid) continue; - /* Make sure ice_vsi_kill_vlan is successful before - * updating VLAN information - */ - status = ice_vsi_kill_vlan(vsi, vid); + status = vsi->vlan_ops.del_vlan(vsi, vid); if (status) { v_ret = VIRTCHNL_STATUS_ERR_PARAM; goto error_param; @@ -4393,7 +4310,7 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) /* Disable VLAN pruning when only VLAN 0 is left */ if (vsi->num_vlan == 1 && ice_vsi_is_vlan_pruning_ena(vsi)) - ice_cfg_vlan_pruning(vsi, false); + status = vsi->vlan_ops.dis_rx_filtering(vsi); /* Disable Unicast/Multicast VLAN promiscuous mode */ if (vlan_promisc) { @@ -4462,7 +4379,7 @@ static int ice_vc_ena_vlan_stripping(struct ice_vf *vf) } vsi = ice_get_vf_vsi(vf); - if (ice_vsi_manage_vlan_stripping(vsi, true)) + if (vsi->vlan_ops.ena_stripping(vsi)) v_ret = 
VIRTCHNL_STATUS_ERR_PARAM; error_param: @@ -4497,7 +4414,7 @@ static int ice_vc_dis_vlan_stripping(struct ice_vf *vf) goto error_param; } - if (ice_vsi_manage_vlan_stripping(vsi, false)) + if (vsi->vlan_ops.dis_stripping(vsi)) v_ret = VIRTCHNL_STATUS_ERR_PARAM; error_param: @@ -4527,9 +4444,9 @@ static int ice_vf_init_vlan_stripping(struct ice_vf *vf) return 0; if (ice_vf_vlan_offload_ena(vf->driver_caps)) - return ice_vsi_manage_vlan_stripping(vsi, true); + return vsi->vlan_ops.ena_stripping(vsi); else - return ice_vsi_manage_vlan_stripping(vsi, false); + return vsi->vlan_ops.dis_stripping(vsi); } static struct ice_vc_vf_ops ice_vc_vf_dflt_ops = { diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c new file mode 100644 index 000000000000..6b0a4bf28305 --- /dev/null +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c @@ -0,0 +1,326 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (C) 2019-2021, Intel Corporation. */ + +#include "ice_vsi_vlan_lib.h" +#include "ice_lib.h" +#include "ice_fltr.h" +#include "ice.h" + +/** + * ice_vsi_add_vlan - default add VLAN implementation for all VSI types + * @vsi: VSI being configured + * @vid: VLAN ID to be added + * @action: filter action to be performed on match + */ +int +ice_vsi_add_vlan(struct ice_vsi *vsi, u16 vid, enum ice_sw_fwd_act_type action) +{ + int err = 0; + + if (!ice_fltr_add_vlan(vsi, vid, action)) { + vsi->num_vlan++; + } else { + err = -ENODEV; + dev_err(ice_pf_to_dev(vsi->back), "Failure Adding VLAN %d on VSI %i\n", + vid, vsi->vsi_num); + } + + return err; +} + +/** + * ice_vsi_del_vlan - default del VLAN implementation for all VSI types + * @vsi: VSI being configured + * @vid: VLAN ID to be removed + */ +int ice_vsi_del_vlan(struct ice_vsi *vsi, u16 vid) +{ + struct ice_pf *pf = vsi->back; + struct device *dev; + int err; + + dev = ice_pf_to_dev(pf); + + err = ice_fltr_remove_vlan(vsi, vid, ICE_FWD_TO_VSI); + if (!err) { + vsi->num_vlan--; + } else if (err == -ENOENT) { + dev_dbg(dev, "Failed to remove VLAN %d on VSI %i, it does not exist\n", + vid, vsi->vsi_num); + err = 0; + } else { + dev_err(dev, "Error removing VLAN %d on VSI %i error: %d\n", + vid, vsi->vsi_num, err); + } + + return err; +} + +/** + * ice_vsi_manage_vlan_insertion - Manage VLAN insertion for the VSI for Tx + * @vsi: the VSI being changed + */ +static int ice_vsi_manage_vlan_insertion(struct ice_vsi *vsi) +{ + struct ice_hw *hw = &vsi->back->hw; + struct ice_vsi_ctx *ctxt; + int err; + + ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL); + if (!ctxt) + return -ENOMEM; + + /* Here we are configuring the VSI to let the driver add VLAN tags by + * setting vlan_flags to ICE_AQ_VSI_VLAN_MODE_ALL. The actual VLAN tag + * insertion happens in the Tx hot path, in ice_tx_map. 
+ */ + ctxt->info.vlan_flags = ICE_AQ_VSI_VLAN_MODE_ALL; + + /* Preserve existing VLAN strip setting */ + ctxt->info.vlan_flags |= (vsi->info.vlan_flags & + ICE_AQ_VSI_VLAN_EMOD_M); + + ctxt->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_VLAN_VALID); + + err = ice_update_vsi(hw, vsi->idx, ctxt, NULL); + if (err) { + dev_err(ice_pf_to_dev(vsi->back), "update VSI for VLAN insert failed, err %d aq_err %s\n", + err, ice_aq_str(hw->adminq.sq_last_status)); + goto out; + } + + vsi->info.vlan_flags = ctxt->info.vlan_flags; +out: + kfree(ctxt); + return err; +} + +/** + * ice_vsi_manage_vlan_stripping - Manage VLAN stripping for the VSI for Rx + * @vsi: the VSI being changed + * @ena: boolean value indicating if this is a enable or disable request + */ +static int ice_vsi_manage_vlan_stripping(struct ice_vsi *vsi, bool ena) +{ + struct ice_hw *hw = &vsi->back->hw; + struct ice_vsi_ctx *ctxt; + int err; + + /* do not allow modifying VLAN stripping when a port VLAN is configured + * on this VSI + */ + if (vsi->info.pvid) + return 0; + + ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL); + if (!ctxt) + return -ENOMEM; + + /* Here we are configuring what the VSI should do with the VLAN tag in + * the Rx packet. We can either leave the tag in the packet or put it in + * the Rx descriptor. + */ + if (ena) + /* Strip VLAN tag from Rx packet and put it in the desc */ + ctxt->info.vlan_flags = ICE_AQ_VSI_VLAN_EMOD_STR_BOTH; + else + /* Disable stripping. Leave tag in packet */ + ctxt->info.vlan_flags = ICE_AQ_VSI_VLAN_EMOD_NOTHING; + + /* Allow all packets untagged/tagged */ + ctxt->info.vlan_flags |= ICE_AQ_VSI_VLAN_MODE_ALL; + + ctxt->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_VLAN_VALID); + + err = ice_update_vsi(hw, vsi->idx, ctxt, NULL); + if (err) { + dev_err(ice_pf_to_dev(vsi->back), "update VSI for VLAN strip failed, ena = %d err %d aq_err %s\n", + ena, err, ice_aq_str(hw->adminq.sq_last_status)); + goto out; + } + + vsi->info.vlan_flags = ctxt->info.vlan_flags; +out: + kfree(ctxt); + return err; +} + +int ice_vsi_ena_stripping(struct ice_vsi *vsi) +{ + return ice_vsi_manage_vlan_stripping(vsi, true); +} + +int ice_vsi_dis_stripping(struct ice_vsi *vsi) +{ + return ice_vsi_manage_vlan_stripping(vsi, false); +} + +int ice_vsi_ena_insertion(struct ice_vsi *vsi) +{ + return ice_vsi_manage_vlan_insertion(vsi); +} + +int ice_vsi_dis_insertion(struct ice_vsi *vsi) +{ + return ice_vsi_manage_vlan_insertion(vsi); +} + +/** + * ice_vsi_manage_pvid - Enable or disable port VLAN for VSI + * @vsi: the VSI to update + * @pvid_info: VLAN ID and QoS used to set the PVID VSI context field + * @enable: true for enable PVID false for disable + */ +static int ice_vsi_manage_pvid(struct ice_vsi *vsi, u16 pvid_info, bool enable) +{ + struct ice_hw *hw = &vsi->back->hw; + struct ice_aqc_vsi_props *info; + struct ice_vsi_ctx *ctxt; + int ret; + + ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL); + if (!ctxt) + return -ENOMEM; + + ctxt->info = vsi->info; + info = &ctxt->info; + if (enable) { + info->vlan_flags = ICE_AQ_VSI_VLAN_MODE_UNTAGGED | + ICE_AQ_VSI_PVLAN_INSERT_PVID | + ICE_AQ_VSI_VLAN_EMOD_STR; + info->sw_flags2 |= ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA; + } else { + info->vlan_flags = ICE_AQ_VSI_VLAN_EMOD_NOTHING | + ICE_AQ_VSI_VLAN_MODE_ALL; + info->sw_flags2 &= ~ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA; + } + + info->pvid = cpu_to_le16(pvid_info); + info->valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_VLAN_VALID | + ICE_AQ_VSI_PROP_SW_VALID); + + ret = ice_update_vsi(hw, vsi->idx, ctxt, NULL); + if (ret) { + 
dev_info(ice_hw_to_dev(hw), "update VSI for port VLAN failed, err %d aq_err %s\n", + ret, ice_aq_str(hw->adminq.sq_last_status)); + goto out; + } + + vsi->info.vlan_flags = info->vlan_flags; + vsi->info.sw_flags2 = info->sw_flags2; + vsi->info.pvid = info->pvid; +out: + kfree(ctxt); + return ret; +} + +int ice_vsi_set_port_vlan(struct ice_vsi *vsi, u16 pvid_info) +{ + return ice_vsi_manage_pvid(vsi, pvid_info, true); +} + +/** + * ice_cfg_vlan_pruning - enable or disable VLAN pruning on the VSI + * @vsi: VSI to enable or disable VLAN pruning on + * @ena: set to true to enable VLAN pruning and false to disable it + * + * returns 0 if VSI is updated, negative otherwise + */ +static int ice_cfg_vlan_pruning(struct ice_vsi *vsi, bool ena) +{ + struct ice_vsi_ctx *ctxt; + struct ice_pf *pf; + int status; + + if (!vsi) + return -EINVAL; + + /* Don't enable VLAN pruning if the netdev is currently in promiscuous + * mode. VLAN pruning will be enabled when the interface exits + * promiscuous mode if any VLAN filters are active. + */ + if (vsi->netdev && vsi->netdev->flags & IFF_PROMISC && ena) + return 0; + + pf = vsi->back; + ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL); + if (!ctxt) + return -ENOMEM; + + ctxt->info = vsi->info; + + if (ena) + ctxt->info.sw_flags2 |= ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA; + else + ctxt->info.sw_flags2 &= ~ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA; + + ctxt->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_SW_VALID); + + status = ice_update_vsi(&pf->hw, vsi->idx, ctxt, NULL); + if (status) { + netdev_err(vsi->netdev, "%sabling VLAN pruning on VSI handle: %d, VSI HW ID: %d failed, err = %d, aq_err = %s\n", + ena ? "En" : "Dis", vsi->idx, vsi->vsi_num, status, + ice_aq_str(pf->hw.adminq.sq_last_status)); + goto err_out; + } + + vsi->info.sw_flags2 = ctxt->info.sw_flags2; + + kfree(ctxt); + return 0; + +err_out: + kfree(ctxt); + return status; +} + +int ice_vsi_ena_rx_vlan_filtering(struct ice_vsi *vsi) +{ + return ice_cfg_vlan_pruning(vsi, true); +} + +int ice_vsi_dis_rx_vlan_filtering(struct ice_vsi *vsi) +{ + return ice_cfg_vlan_pruning(vsi, false); +} + +static int ice_cfg_vlan_antispoof(struct ice_vsi *vsi, bool enable) +{ + struct ice_vsi_ctx *ctx; + int err; + + ctx = kzalloc(sizeof(*ctx), GFP_KERNEL); + if (!ctx) + return -ENOMEM; + + ctx->info.sec_flags = vsi->info.sec_flags; + ctx->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_SECURITY_VALID); + + if (enable) + ctx->info.sec_flags |= ICE_AQ_VSI_SEC_TX_VLAN_PRUNE_ENA << + ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S; + else + ctx->info.sec_flags &= ~(ICE_AQ_VSI_SEC_TX_VLAN_PRUNE_ENA << + ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S); + + err = ice_update_vsi(&vsi->back->hw, vsi->idx, ctx, NULL); + if (err) + dev_err(ice_pf_to_dev(vsi->back), "Failed to configure Tx VLAN anti-spoof %s for VSI %d, error %d\n", + enable ? "ON" : "OFF", vsi->vsi_num, err); + else + vsi->info.sec_flags = ctx->info.sec_flags; + + kfree(ctx); + + return err; +} + +int ice_vsi_ena_tx_vlan_filtering(struct ice_vsi *vsi) +{ + return ice_cfg_vlan_antispoof(vsi, true); +} + +int ice_vsi_dis_tx_vlan_filtering(struct ice_vsi *vsi) +{ + return ice_cfg_vlan_antispoof(vsi, false); +} diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h new file mode 100644 index 000000000000..f9fe33026306 --- /dev/null +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h @@ -0,0 +1,27 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2019-2021, Intel Corporation. 
*/ + +#ifndef _ICE_VSI_VLAN_LIB_H_ +#define _ICE_VSI_VLAN_LIB_H_ + +#include +#include "ice_type.h" + +struct ice_vsi; + +int +ice_vsi_add_vlan(struct ice_vsi *vsi, u16 vid, enum ice_sw_fwd_act_type action); +int ice_vsi_del_vlan(struct ice_vsi *vsi, u16 vid); + +int ice_vsi_ena_stripping(struct ice_vsi *vsi); +int ice_vsi_dis_stripping(struct ice_vsi *vsi); +int ice_vsi_ena_insertion(struct ice_vsi *vsi); +int ice_vsi_dis_insertion(struct ice_vsi *vsi); +int ice_vsi_set_port_vlan(struct ice_vsi *vsi, u16 pvid_info); + +int ice_vsi_ena_rx_vlan_filtering(struct ice_vsi *vsi); +int ice_vsi_dis_rx_vlan_filtering(struct ice_vsi *vsi); +int ice_vsi_ena_tx_vlan_filtering(struct ice_vsi *vsi); +int ice_vsi_dis_tx_vlan_filtering(struct ice_vsi *vsi); + +#endif /* _ICE_VSI_VLAN_LIB_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.c b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.c new file mode 100644 index 000000000000..3bab6c025856 --- /dev/null +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.c @@ -0,0 +1,20 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (C) 2019-2021, Intel Corporation. */ + +#include "ice_vsi_vlan_ops.h" +#include "ice.h" + +void ice_vsi_init_vlan_ops(struct ice_vsi *vsi) +{ + vsi->vlan_ops.add_vlan = ice_vsi_add_vlan; + vsi->vlan_ops.del_vlan = ice_vsi_del_vlan; + vsi->vlan_ops.ena_stripping = ice_vsi_ena_stripping; + vsi->vlan_ops.dis_stripping = ice_vsi_dis_stripping; + vsi->vlan_ops.ena_insertion = ice_vsi_ena_insertion; + vsi->vlan_ops.dis_insertion = ice_vsi_dis_insertion; + vsi->vlan_ops.ena_rx_filtering = ice_vsi_ena_rx_vlan_filtering; + vsi->vlan_ops.dis_rx_filtering = ice_vsi_dis_rx_vlan_filtering; + vsi->vlan_ops.ena_tx_filtering = ice_vsi_ena_tx_vlan_filtering; + vsi->vlan_ops.dis_tx_filtering = ice_vsi_dis_tx_vlan_filtering; + vsi->vlan_ops.set_port_vlan = ice_vsi_set_port_vlan; +} diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h new file mode 100644 index 000000000000..522169742661 --- /dev/null +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h @@ -0,0 +1,28 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2019-2021, Intel Corporation. 
*/ + +#ifndef _ICE_VSI_VLAN_OPS_H_ +#define _ICE_VSI_VLAN_OPS_H_ + +#include "ice_type.h" +#include "ice_vsi_vlan_lib.h" + +struct ice_vsi; + +struct ice_vsi_vlan_ops { + int (*add_vlan)(struct ice_vsi *vsi, u16 vid, enum ice_sw_fwd_act_type action); + int (*del_vlan)(struct ice_vsi *vsi, u16 vid); + int (*ena_stripping)(struct ice_vsi *vsi); + int (*dis_stripping)(struct ice_vsi *vsi); + int (*ena_insertion)(struct ice_vsi *vsi); + int (*dis_insertion)(struct ice_vsi *vsi); + int (*ena_rx_filtering)(struct ice_vsi *vsi); + int (*dis_rx_filtering)(struct ice_vsi *vsi); + int (*ena_tx_filtering)(struct ice_vsi *vsi); + int (*dis_tx_filtering)(struct ice_vsi *vsi); + int (*set_port_vlan)(struct ice_vsi *vsi, u16 pvid_info); +}; + +void ice_vsi_init_vlan_ops(struct ice_vsi *vsi); + +#endif /* _ICE_VSI_VLAN_OPS_H_ */ -- 2.20.1 From anthony.l.nguyen at intel.com Tue Nov 30 21:21:46 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Tue, 30 Nov 2021 13:21:46 -0800 Subject: [Intel-wired-lan] [PATCH net-next 05/14] ice: Refactor vf->port_vlan_info to use ice_vlan In-Reply-To: <20211130212155.27852-1-anthony.l.nguyen@intel.com> References: <20211130212155.27852-1-anthony.l.nguyen@intel.com> Message-ID: <20211130212155.27852-5-anthony.l.nguyen@intel.com> From: Brett Creeley The current vf->port_vlan_info variable is a packed u16 that contains the port VLAN ID and QoS/prio value. This is fine, but changes are incoming that allow for an 802.1ad port VLAN. Add flexibility by changing the vf->port_vlan_info member to be an ice_vlan structure. Signed-off-by: Brett Creeley Signed-off-by: Tony Nguyen --- .../net/ethernet/intel/ice/ice_virtchnl_pf.c | 76 ++++++++++--------- .../net/ethernet/intel/ice/ice_virtchnl_pf.h | 3 +- 2 files changed, 44 insertions(+), 35 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c index d580120dbb93..4971e547432c 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c @@ -751,6 +751,21 @@ static int ice_vf_rebuild_host_tx_rate_cfg(struct ice_vf *vf) return 0; } +static u16 ice_vf_get_port_vlan_id(struct ice_vf *vf) +{ + return vf->port_vlan_info.vid; +} + +static u8 ice_vf_get_port_vlan_prio(struct ice_vf *vf) +{ + return vf->port_vlan_info.prio; +} + +static bool ice_vf_is_port_vlan_ena(struct ice_vf *vf) +{ + return (ice_vf_get_port_vlan_id(vf) || ice_vf_get_port_vlan_prio(vf)); +} + /** * ice_vf_rebuild_host_vlan_cfg - add VLAN 0 filter or rebuild the Port VLAN * @vf: VF to add MAC filters for @@ -760,16 +775,12 @@ static int ice_vf_rebuild_host_tx_rate_cfg(struct ice_vf *vf) */ static int ice_vf_rebuild_host_vlan_cfg(struct ice_vf *vf) { - u8 vlan_prio = (vf->port_vlan_info & VLAN_PRIO_MASK) >> VLAN_PRIO_SHIFT; - u16 vlan_id = vf->port_vlan_info & VLAN_VID_MASK; struct device *dev = ice_pf_to_dev(vf->pf); struct ice_vsi *vsi = ice_get_vf_vsi(vf); - struct ice_vlan vlan; int err; - vlan = ICE_VLAN(vlan_id, vlan_prio); - if (vf->port_vlan_info) { - err = vsi->vlan_ops.set_port_vlan(vsi, &vlan); + if (ice_vf_is_port_vlan_ena(vf)) { + err = vsi->vlan_ops.set_port_vlan(vsi, &vf->port_vlan_info); if (err) { dev_err(dev, "failed to configure port VLAN via VSI parameters for VF %u, error %d\n", vf->vf_id, err); @@ -777,12 +788,11 @@ static int ice_vf_rebuild_host_vlan_cfg(struct ice_vf *vf) } } - /* vlan_id will either be 0 or the port VLAN number */ - err = vsi->vlan_ops.add_vlan(vsi, &vlan); + err = vsi->vlan_ops.add_vlan(vsi, 
&vf->port_vlan_info); if (err) { - dev_err(dev, "failed to add %s VLAN %u filter for VF %u, error %d\n", - vf->port_vlan_info ? "port" : "", vlan_id, vf->vf_id, - err); + dev_err(dev, "failed to add VLAN %u filter for VF %u during VF rebuild, error %d\n", + ice_vf_is_port_vlan_ena(vf) ? + ice_vf_get_port_vlan_id(vf) : 0, vf->vf_id, err); return err; } @@ -1255,9 +1265,9 @@ static int ice_vf_set_vsi_promisc(struct ice_vf *vf, struct ice_vsi *vsi, u8 pro struct ice_hw *hw = &vsi->back->hw; int status; - if (vf->port_vlan_info) + if (ice_vf_is_port_vlan_ena(vf)) status = ice_fltr_set_vsi_promisc(hw, vsi->idx, promisc_m, - vf->port_vlan_info & VLAN_VID_MASK); + ice_vf_get_port_vlan_id(vf)); else if (vsi->num_vlan > 1) status = ice_fltr_set_vlan_vsi_promisc(hw, vsi, promisc_m); else @@ -1277,9 +1287,9 @@ static int ice_vf_clear_vsi_promisc(struct ice_vf *vf, struct ice_vsi *vsi, u8 p struct ice_hw *hw = &vsi->back->hw; int status; - if (vf->port_vlan_info) + if (ice_vf_is_port_vlan_ena(vf)) status = ice_fltr_clear_vsi_promisc(hw, vsi->idx, promisc_m, - vf->port_vlan_info & VLAN_VID_MASK); + ice_vf_get_port_vlan_id(vf)); else if (vsi->num_vlan > 1) status = ice_fltr_clear_vlan_vsi_promisc(hw, vsi, promisc_m); else @@ -1654,7 +1664,7 @@ bool ice_reset_vf(struct ice_vf *vf, bool is_vflr) */ if (test_bit(ICE_VF_STATE_UC_PROMISC, vf->vf_states) || test_bit(ICE_VF_STATE_MC_PROMISC, vf->vf_states)) { - if (vf->port_vlan_info || vsi->num_vlan) + if (ice_vf_is_port_vlan_ena(vf) || vsi->num_vlan) promisc_m = ICE_UCAST_VLAN_PROMISC_BITS; else promisc_m = ICE_UCAST_PROMISC_BITS; @@ -2277,7 +2287,7 @@ static u16 ice_vc_get_max_frame_size(struct ice_vf *vf) max_frame_size = pi->phy.link_info.max_frame_size; - if (vf->port_vlan_info) + if (ice_vf_is_port_vlan_ena(vf)) max_frame_size -= VLAN_HLEN; return max_frame_size; @@ -2326,7 +2336,7 @@ static int ice_vc_get_vf_res_msg(struct ice_vf *vf, u8 *msg) goto err; } - if (!vsi->info.pvid) + if (!ice_vf_is_port_vlan_ena(vf)) vfres->vf_cap_flags |= VIRTCHNL_VF_OFFLOAD_VLAN; if (vf->driver_caps & VIRTCHNL_VF_OFFLOAD_RSS_PF) { @@ -3050,7 +3060,7 @@ static int ice_vc_cfg_promiscuous_mode_msg(struct ice_vf *vf, u8 *msg) rm_promisc = !allmulti && !alluni; - if (vsi->num_vlan || vf->port_vlan_info) { + if (vsi->num_vlan || ice_vf_is_port_vlan_ena(vf)) { if (rm_promisc) ret = vsi->vlan_ops.ena_rx_filtering(vsi); else @@ -3086,7 +3096,7 @@ static int ice_vc_cfg_promiscuous_mode_msg(struct ice_vf *vf, u8 *msg) } else { u8 mcast_m, ucast_m; - if (vf->port_vlan_info || vsi->num_vlan > 1) { + if (ice_vf_is_port_vlan_ena(vf) || vsi->num_vlan > 1) { mcast_m = ICE_MCAST_VLAN_PROMISC_BITS; ucast_m = ICE_UCAST_VLAN_PROMISC_BITS; } else { @@ -3669,7 +3679,7 @@ static int ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg) /* add space for the port VLAN since the VF driver is not * expected to account for it in the MTU calculation */ - if (vf->port_vlan_info) + if (ice_vf_is_port_vlan_ena(vf)) vsi->max_frame += VLAN_HLEN; if (ice_vsi_cfg_single_rxq(vsi, q_idx)) { @@ -4097,7 +4107,6 @@ ice_set_vf_port_vlan(struct net_device *netdev, int vf_id, u16 vlan_id, u8 qos, struct ice_pf *pf = ice_netdev_to_pf(netdev); struct device *dev; struct ice_vf *vf; - u16 vlanprio; int ret; dev = ice_pf_to_dev(pf); @@ -4120,20 +4129,19 @@ ice_set_vf_port_vlan(struct net_device *netdev, int vf_id, u16 vlan_id, u8 qos, if (ret) return ret; - vlanprio = vlan_id | (qos << VLAN_PRIO_SHIFT); - - if (vf->port_vlan_info == vlanprio) { + if (ice_vf_get_port_vlan_prio(vf) == qos && + ice_vf_get_port_vlan_id(vf) == 
vlan_id) { /* duplicate request, so just return success */ - dev_dbg(dev, "Duplicate pvid %d request\n", vlanprio); + dev_dbg(dev, "Duplicate port VLAN %u, QoS %u request\n", + vlan_id, qos); return 0; } mutex_lock(&vf->cfg_lock); - vf->port_vlan_info = vlanprio; - - if (vf->port_vlan_info) - dev_info(dev, "Setting VLAN %d, QoS 0x%x on VF %d\n", + vf->port_vlan_info = ICE_VLAN(vlan_id, qos); + if (ice_vf_is_port_vlan_ena(vf)) + dev_info(dev, "Setting VLAN %u, QoS %u on VF %d\n", vlan_id, qos, vf_id); else dev_info(dev, "Clearing port VLAN on VF %d\n", vf_id); @@ -4219,7 +4227,7 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) goto error_param; } - if (vsi->info.pvid) { + if (ice_vf_is_port_vlan_ena(vf)) { v_ret = VIRTCHNL_STATUS_ERR_PARAM; goto error_param; } @@ -4445,7 +4453,7 @@ static int ice_vf_init_vlan_stripping(struct ice_vf *vf) return -EINVAL; /* don't modify stripping if port VLAN is configured */ - if (vsi->info.pvid) + if (ice_vf_is_port_vlan_ena(vf)) return 0; if (ice_vf_vlan_offload_ena(vf->driver_caps)) @@ -4815,8 +4823,8 @@ ice_get_vf_cfg(struct net_device *netdev, int vf_id, struct ifla_vf_info *ivi) ether_addr_copy(ivi->mac, vf->hw_lan_addr.addr); /* VF configuration for VLAN and applicable QoS */ - ivi->vlan = vf->port_vlan_info & VLAN_VID_MASK; - ivi->qos = (vf->port_vlan_info & VLAN_PRIO_MASK) >> VLAN_PRIO_SHIFT; + ivi->vlan = ice_vf_get_port_vlan_id(vf); + ivi->qos = ice_vf_get_port_vlan_prio(vf); ivi->trusted = vf->trusted; ivi->spoofchk = vf->spoofchk; diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h index 752487a1bdd6..5079a3b72698 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h @@ -5,6 +5,7 @@ #define _ICE_VIRTCHNL_PF_H_ #include "ice.h" #include "ice_virtchnl_fdir.h" +#include "ice_vsi_vlan_ops.h" /* Restrict number of MAC Addr and VLAN that non-trusted VF can programmed */ #define ICE_MAX_VLAN_PER_VF 8 @@ -119,7 +120,7 @@ struct ice_vf { struct ice_time_mac legacy_last_added_umac; DECLARE_BITMAP(txq_ena, ICE_MAX_RSS_QS_PER_VF); DECLARE_BITMAP(rxq_ena, ICE_MAX_RSS_QS_PER_VF); - u16 port_vlan_info; /* Port VLAN ID and QoS */ + struct ice_vlan port_vlan_info; /* Port VLAN ID and QoS */ u8 pf_set_mac:1; /* VF MAC address set by VMM admin */ u8 trusted:1; u8 spoofchk:1; -- 2.20.1 From anthony.l.nguyen at intel.com Tue Nov 30 21:21:45 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Tue, 30 Nov 2021 13:21:45 -0800 Subject: [Intel-wired-lan] [PATCH net-next 04/14] ice: Introduce ice_vlan struct In-Reply-To: <20211130212155.27852-1-anthony.l.nguyen@intel.com> References: <20211130212155.27852-1-anthony.l.nguyen@intel.com> Message-ID: <20211130212155.27852-4-anthony.l.nguyen@intel.com> From: Brett Creeley Add a new struct for VLAN related information. Currently this holds VLAN ID and priority values, but will be expanded to hold TPID value. This reduces the changes necessary if any other values are added in future. Remove the action argument from these calls as it's always ICE_FWD_VSI. 
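A rough usage sketch of the new struct (illustrative only; at this point in the series ICE_VLAN() takes only a VID and a priority, the TPID member arrives in a later patch of the thread):

    struct ice_vlan vlan;
    int err;

    /* describe the filter on the stack and pass it by pointer */
    vlan = ICE_VLAN(vid, 0);
    err = vsi->vlan_ops.add_vlan(vsi, &vlan);
    if (err)
            return err;

The dropped action argument is ICE_FWD_TO_VSI, which the default implementation now hard-codes in ice_fltr_add_vlan_to_list() (see the ice_fltr.c hunk below).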
Signed-off-by: Brett Creeley Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/ice/ice_fltr.c | 35 +++++++------------ drivers/net/ethernet/intel/ice/ice_fltr.h | 10 +++--- drivers/net/ethernet/intel/ice/ice_lib.c | 5 ++- drivers/net/ethernet/intel/ice/ice_lib.h | 1 + drivers/net/ethernet/intel/ice/ice_main.c | 8 +++-- .../net/ethernet/intel/ice/ice_virtchnl_pf.c | 19 ++++++---- drivers/net/ethernet/intel/ice/ice_vlan.h | 17 +++++++++ .../net/ethernet/intel/ice/ice_vsi_vlan_lib.c | 31 +++++++++------- .../net/ethernet/intel/ice/ice_vsi_vlan_lib.h | 9 +++-- .../net/ethernet/intel/ice/ice_vsi_vlan_ops.h | 6 ++-- 10 files changed, 82 insertions(+), 59 deletions(-) create mode 100644 drivers/net/ethernet/intel/ice/ice_vlan.h diff --git a/drivers/net/ethernet/intel/ice/ice_fltr.c b/drivers/net/ethernet/intel/ice/ice_fltr.c index cf07eef39e9d..8f543851e39f 100644 --- a/drivers/net/ethernet/intel/ice/ice_fltr.c +++ b/drivers/net/ethernet/intel/ice/ice_fltr.c @@ -206,21 +206,20 @@ ice_fltr_add_mac_to_list(struct ice_vsi *vsi, struct list_head *list, * ice_fltr_add_vlan_to_list - add VLAN filter info to exsisting list * @vsi: pointer to VSI struct * @list: list to add filter info to - * @vlan_id: VLAN ID to add - * @action: filter action + * @vlan: VLAN filter details */ static int ice_fltr_add_vlan_to_list(struct ice_vsi *vsi, struct list_head *list, - u16 vlan_id, enum ice_sw_fwd_act_type action) + struct ice_vlan *vlan) { struct ice_fltr_info info = { 0 }; info.flag = ICE_FLTR_TX; info.src_id = ICE_SRC_ID_VSI; info.lkup_type = ICE_SW_LKUP_VLAN; - info.fltr_act = action; + info.fltr_act = ICE_FWD_TO_VSI; info.vsi_handle = vsi->idx; - info.l_data.vlan.vlan_id = vlan_id; + info.l_data.vlan.vlan_id = vlan->vid; return ice_fltr_add_entry_to_list(ice_pf_to_dev(vsi->back), &info, list); @@ -313,19 +312,17 @@ ice_fltr_prepare_mac_and_broadcast(struct ice_vsi *vsi, const u8 *mac, /** * ice_fltr_prepare_vlan - add or remove VLAN filter * @vsi: pointer to VSI struct - * @vlan_id: VLAN ID to add - * @action: action to be performed on filter match + * @vlan: VLAN filter details * @vlan_action: pointer to add or remove VLAN function */ static int -ice_fltr_prepare_vlan(struct ice_vsi *vsi, u16 vlan_id, - enum ice_sw_fwd_act_type action, +ice_fltr_prepare_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan, int (*vlan_action)(struct ice_vsi *, struct list_head *)) { LIST_HEAD(tmp_list); int result; - if (ice_fltr_add_vlan_to_list(vsi, &tmp_list, vlan_id, action)) + if (ice_fltr_add_vlan_to_list(vsi, &tmp_list, vlan)) return -ENOMEM; result = vlan_action(vsi, &tmp_list); @@ -398,27 +395,21 @@ int ice_fltr_remove_mac(struct ice_vsi *vsi, const u8 *mac, /** * ice_fltr_add_vlan - add single VLAN filter * @vsi: pointer to VSI struct - * @vlan_id: VLAN ID to add - * @action: action to be performed on filter match + * @vlan: VLAN filter details */ -int ice_fltr_add_vlan(struct ice_vsi *vsi, u16 vlan_id, - enum ice_sw_fwd_act_type action) +int ice_fltr_add_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) { - return ice_fltr_prepare_vlan(vsi, vlan_id, action, - ice_fltr_add_vlan_list); + return ice_fltr_prepare_vlan(vsi, vlan, ice_fltr_add_vlan_list); } /** * ice_fltr_remove_vlan - remove VLAN filter * @vsi: pointer to VSI struct - * @vlan_id: filter VLAN to remove - * @action: action to remove + * @vlan: VLAN filter details */ -int ice_fltr_remove_vlan(struct ice_vsi *vsi, u16 vlan_id, - enum ice_sw_fwd_act_type action) +int ice_fltr_remove_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) { - return 
ice_fltr_prepare_vlan(vsi, vlan_id, action, - ice_fltr_remove_vlan_list); + return ice_fltr_prepare_vlan(vsi, vlan, ice_fltr_remove_vlan_list); } /** diff --git a/drivers/net/ethernet/intel/ice/ice_fltr.h b/drivers/net/ethernet/intel/ice/ice_fltr.h index d271f61e0d34..4f7fe09d10e9 100644 --- a/drivers/net/ethernet/intel/ice/ice_fltr.h +++ b/drivers/net/ethernet/intel/ice/ice_fltr.h @@ -4,6 +4,8 @@ #ifndef _ICE_FLTR_H_ #define _ICE_FLTR_H_ +#include "ice_vlan.h" + void ice_fltr_free_list(struct device *dev, struct list_head *h); int ice_fltr_set_vlan_vsi_promisc(struct ice_hw *hw, struct ice_vsi *vsi, u8 promisc_mask); @@ -32,12 +34,8 @@ ice_fltr_remove_mac(struct ice_vsi *vsi, const u8 *mac, int ice_fltr_remove_mac_list(struct ice_vsi *vsi, struct list_head *list); -int -ice_fltr_add_vlan(struct ice_vsi *vsi, u16 vid, - enum ice_sw_fwd_act_type action); -int -ice_fltr_remove_vlan(struct ice_vsi *vsi, u16 vid, - enum ice_sw_fwd_act_type action); +int ice_fltr_add_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan); +int ice_fltr_remove_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan); int ice_fltr_add_eth(struct ice_vsi *vsi, u16 ethertype, u16 flag, diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index b50509584b31..55a2aef54922 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -3878,7 +3878,10 @@ int ice_set_link(struct ice_vsi *vsi, bool ena) */ int ice_vsi_add_vlan_zero(struct ice_vsi *vsi) { - return vsi->vlan_ops.add_vlan(vsi, 0, ICE_FWD_TO_VSI); + struct ice_vlan vlan; + + vlan = ICE_VLAN(0, 0); + return vsi->vlan_ops.add_vlan(vsi, &vlan); } /** diff --git a/drivers/net/ethernet/intel/ice/ice_lib.h b/drivers/net/ethernet/intel/ice/ice_lib.h index 427e5e4e9f17..8f42a3f3a949 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.h +++ b/drivers/net/ethernet/intel/ice/ice_lib.h @@ -5,6 +5,7 @@ #define _ICE_LIB_H_ #include "ice.h" +#include "ice_vlan.h" const char *ice_vsi_type_str(enum ice_vsi_type vsi_type); diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c index 904571527e27..8669858d104c 100644 --- a/drivers/net/ethernet/intel/ice/ice_main.c +++ b/drivers/net/ethernet/intel/ice/ice_main.c @@ -3421,6 +3421,7 @@ ice_vlan_rx_add_vid(struct net_device *netdev, __always_unused __be16 proto, { struct ice_netdev_priv *np = netdev_priv(netdev); struct ice_vsi *vsi = np->vsi; + struct ice_vlan vlan; int ret; /* VLAN 0 is added by default during load/reset */ @@ -3437,7 +3438,8 @@ ice_vlan_rx_add_vid(struct net_device *netdev, __always_unused __be16 proto, /* Add a switch rule for this VLAN ID so its corresponding VLAN tagged * packets aren't pruned by the device's internal switch on Rx */ - ret = vsi->vlan_ops.add_vlan(vsi, vid, ICE_FWD_TO_VSI); + vlan = ICE_VLAN(vid, 0); + ret = vsi->vlan_ops.add_vlan(vsi, &vlan); if (!ret) set_bit(ICE_VSI_VLAN_FLTR_CHANGED, vsi->state); @@ -3458,6 +3460,7 @@ ice_vlan_rx_kill_vid(struct net_device *netdev, __always_unused __be16 proto, { struct ice_netdev_priv *np = netdev_priv(netdev); struct ice_vsi *vsi = np->vsi; + struct ice_vlan vlan; int ret; /* don't allow removal of VLAN 0 */ @@ -3467,7 +3470,8 @@ ice_vlan_rx_kill_vid(struct net_device *netdev, __always_unused __be16 proto, /* Make sure VLAN delete is successful before updating VLAN * information */ - ret = vsi->vlan_ops.del_vlan(vsi, vid); + vlan = ICE_VLAN(vid, 0); + ret = vsi->vlan_ops.del_vlan(vsi, &vlan); if (ret) return ret; diff --git 
a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c index 6fa0968f0912..d580120dbb93 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c @@ -760,24 +760,25 @@ static int ice_vf_rebuild_host_tx_rate_cfg(struct ice_vf *vf) */ static int ice_vf_rebuild_host_vlan_cfg(struct ice_vf *vf) { + u8 vlan_prio = (vf->port_vlan_info & VLAN_PRIO_MASK) >> VLAN_PRIO_SHIFT; + u16 vlan_id = vf->port_vlan_info & VLAN_VID_MASK; struct device *dev = ice_pf_to_dev(vf->pf); struct ice_vsi *vsi = ice_get_vf_vsi(vf); - u16 vlan_id = 0; + struct ice_vlan vlan; int err; + vlan = ICE_VLAN(vlan_id, vlan_prio); if (vf->port_vlan_info) { - err = vsi->vlan_ops.set_port_vlan(vsi, vf->port_vlan_info); + err = vsi->vlan_ops.set_port_vlan(vsi, &vlan); if (err) { dev_err(dev, "failed to configure port VLAN via VSI parameters for VF %u, error %d\n", vf->vf_id, err); return err; } - - vlan_id = vf->port_vlan_info & VLAN_VID_MASK; } /* vlan_id will either be 0 or the port VLAN number */ - err = vsi->vlan_ops.add_vlan(vsi, vlan_id, ICE_FWD_TO_VSI); + err = vsi->vlan_ops.add_vlan(vsi, &vlan); if (err) { dev_err(dev, "failed to add %s VLAN %u filter for VF %u, error %d\n", vf->port_vlan_info ? "port" : "", vlan_id, vf->vf_id, @@ -4231,6 +4232,7 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) if (add_v) { for (i = 0; i < vfl->num_elements; i++) { u16 vid = vfl->vlan_id[i]; + struct ice_vlan vlan; if (!ice_is_vf_trusted(vf) && vsi->num_vlan >= ICE_MAX_VLAN_PER_VF) { @@ -4250,7 +4252,8 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) if (!vid) continue; - status = vsi->vlan_ops.add_vlan(vsi, vid, ICE_FWD_TO_VSI); + vlan = ICE_VLAN(vid, 0); + status = vsi->vlan_ops.add_vlan(vsi, &vlan); if (status) { v_ret = VIRTCHNL_STATUS_ERR_PARAM; goto error_param; @@ -4293,6 +4296,7 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) num_vf_vlan = vsi->num_vlan; for (i = 0; i < vfl->num_elements && i < num_vf_vlan; i++) { u16 vid = vfl->vlan_id[i]; + struct ice_vlan vlan; /* we add VLAN 0 by default for each VF so we can enable * Tx VLAN anti-spoof without triggering MDD events so @@ -4301,7 +4305,8 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) if (!vid) continue; - status = vsi->vlan_ops.del_vlan(vsi, vid); + vlan = ICE_VLAN(vid, 0); + status = vsi->vlan_ops.del_vlan(vsi, &vlan); if (status) { v_ret = VIRTCHNL_STATUS_ERR_PARAM; goto error_param; diff --git a/drivers/net/ethernet/intel/ice/ice_vlan.h b/drivers/net/ethernet/intel/ice/ice_vlan.h new file mode 100644 index 000000000000..3fad0cba2da6 --- /dev/null +++ b/drivers/net/ethernet/intel/ice/ice_vlan.h @@ -0,0 +1,17 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2019-2021, Intel Corporation. 
*/ + +#ifndef _ICE_VLAN_H_ +#define _ICE_VLAN_H_ + +#include +#include "ice_type.h" + +struct ice_vlan { + u16 vid; + u8 prio; +}; + +#define ICE_VLAN(vid, prio) ((struct ice_vlan){ vid, prio }) + +#endif /* _ICE_VLAN_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c index 6b0a4bf28305..74b6dec0744b 100644 --- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c @@ -9,20 +9,18 @@ /** * ice_vsi_add_vlan - default add VLAN implementation for all VSI types * @vsi: VSI being configured - * @vid: VLAN ID to be added - * @action: filter action to be performed on match + * @vlan: VLAN filter to add */ -int -ice_vsi_add_vlan(struct ice_vsi *vsi, u16 vid, enum ice_sw_fwd_act_type action) +int ice_vsi_add_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) { int err = 0; - if (!ice_fltr_add_vlan(vsi, vid, action)) { + if (!ice_fltr_add_vlan(vsi, vlan)) { vsi->num_vlan++; } else { err = -ENODEV; dev_err(ice_pf_to_dev(vsi->back), "Failure Adding VLAN %d on VSI %i\n", - vid, vsi->vsi_num); + vlan->vid, vsi->vsi_num); } return err; @@ -31,9 +29,9 @@ ice_vsi_add_vlan(struct ice_vsi *vsi, u16 vid, enum ice_sw_fwd_act_type action) /** * ice_vsi_del_vlan - default del VLAN implementation for all VSI types * @vsi: VSI being configured - * @vid: VLAN ID to be removed + * @vlan: VLAN filter to delete */ -int ice_vsi_del_vlan(struct ice_vsi *vsi, u16 vid) +int ice_vsi_del_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) { struct ice_pf *pf = vsi->back; struct device *dev; @@ -41,16 +39,16 @@ int ice_vsi_del_vlan(struct ice_vsi *vsi, u16 vid) dev = ice_pf_to_dev(pf); - err = ice_fltr_remove_vlan(vsi, vid, ICE_FWD_TO_VSI); + err = ice_fltr_remove_vlan(vsi, vlan); if (!err) { vsi->num_vlan--; } else if (err == -ENOENT) { dev_dbg(dev, "Failed to remove VLAN %d on VSI %i, it does not exist\n", - vid, vsi->vsi_num); + vlan->vid, vsi->vsi_num); err = 0; } else { dev_err(dev, "Error removing VLAN %d on VSI %i error: %d\n", - vid, vsi->vsi_num, err); + vlan->vid, vsi->vsi_num, err); } return err; @@ -214,9 +212,16 @@ static int ice_vsi_manage_pvid(struct ice_vsi *vsi, u16 pvid_info, bool enable) return ret; } -int ice_vsi_set_port_vlan(struct ice_vsi *vsi, u16 pvid_info) +int ice_vsi_set_port_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) { - return ice_vsi_manage_pvid(vsi, pvid_info, true); + u16 port_vlan_info; + + if (vlan->prio > 7) + return -EINVAL; + + port_vlan_info = vlan->vid | (vlan->prio << VLAN_PRIO_SHIFT); + + return ice_vsi_manage_pvid(vsi, port_vlan_info, true); } /** diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h index f9fe33026306..a0305007896c 100644 --- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h @@ -5,19 +5,18 @@ #define _ICE_VSI_VLAN_LIB_H_ #include -#include "ice_type.h" +#include "ice_vlan.h" struct ice_vsi; -int -ice_vsi_add_vlan(struct ice_vsi *vsi, u16 vid, enum ice_sw_fwd_act_type action); -int ice_vsi_del_vlan(struct ice_vsi *vsi, u16 vid); +int ice_vsi_add_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan); +int ice_vsi_del_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan); int ice_vsi_ena_stripping(struct ice_vsi *vsi); int ice_vsi_dis_stripping(struct ice_vsi *vsi); int ice_vsi_ena_insertion(struct ice_vsi *vsi); int ice_vsi_dis_insertion(struct ice_vsi *vsi); -int ice_vsi_set_port_vlan(struct ice_vsi *vsi, u16 pvid_info); +int 
ice_vsi_set_port_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan); int ice_vsi_ena_rx_vlan_filtering(struct ice_vsi *vsi); int ice_vsi_dis_rx_vlan_filtering(struct ice_vsi *vsi); diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h index 522169742661..c944f04acd3c 100644 --- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h @@ -10,8 +10,8 @@ struct ice_vsi; struct ice_vsi_vlan_ops { - int (*add_vlan)(struct ice_vsi *vsi, u16 vid, enum ice_sw_fwd_act_type action); - int (*del_vlan)(struct ice_vsi *vsi, u16 vid); + int (*add_vlan)(struct ice_vsi *vsi, struct ice_vlan *vlan); + int (*del_vlan)(struct ice_vsi *vsi, struct ice_vlan *vlan); int (*ena_stripping)(struct ice_vsi *vsi); int (*dis_stripping)(struct ice_vsi *vsi); int (*ena_insertion)(struct ice_vsi *vsi); @@ -20,7 +20,7 @@ struct ice_vsi_vlan_ops { int (*dis_rx_filtering)(struct ice_vsi *vsi); int (*ena_tx_filtering)(struct ice_vsi *vsi); int (*dis_tx_filtering)(struct ice_vsi *vsi); - int (*set_port_vlan)(struct ice_vsi *vsi, u16 pvid_info); + int (*set_port_vlan)(struct ice_vsi *vsi, struct ice_vlan *vlan); }; void ice_vsi_init_vlan_ops(struct ice_vsi *vsi); -- 2.20.1 From anthony.l.nguyen at intel.com Tue Nov 30 21:21:55 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Tue, 30 Nov 2021 13:21:55 -0800 Subject: [Intel-wired-lan] [PATCH net-next 14/14] ice: Add ability for PF admin to enable VF VLAN pruning In-Reply-To: <20211130212155.27852-1-anthony.l.nguyen@intel.com> References: <20211130212155.27852-1-anthony.l.nguyen@intel.com> Message-ID: <20211130212155.27852-14-anthony.l.nguyen@intel.com> From: Brett Creeley VFs by default are able to see all tagged traffic regardless of trust and VLAN filters. Based on legacy devices (i.e. ixgbe, i40e), customers expect VFs to receive all VLAN tagged traffic with a matching destination MAC. Add an ethtool private flag 'vf-vlan-pruning' and set the default to off so VFs will receive all VLAN traffic directed towards them. When the flag is turned on, VF will only be able to receive untagged traffic or traffic with VLAN tags it has created interfaces for. Also, the flag cannot be changed while any VFs are allocated. This was done to simplify the implementation. So, if this flag is needed, then the PF admin must enable it. If the user tries to enable the flag while VFs are active, then print an unsupported message with the vf-vlan-pruning flag included. In case multiple flags were specified, this makes it clear to the user which flag failed. 
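For testers, a possible way to exercise the flag (the interface name and VF count below are placeholders, not taken from the patch) is to toggle it before any VFs are created; once VFs are allocated the driver is expected to reject the change with the unsupported message and -EOPNOTSUPP described above:

  # no VFs allocated yet, so the flag can be changed
  ethtool --set-priv-flags enp2s0f0 vf-vlan-pruning on
  echo 2 > /sys/class/net/enp2s0f0/device/sriov_numvfs
  # VFs are now active, so this should fail with -EOPNOTSUPP
  ethtool --set-priv-flags enp2s0f0 vf-vlan-pruning off
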
Signed-off-by: Brett Creeley Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/ice/ice.h | 1 + drivers/net/ethernet/intel/ice/ice_ethtool.c | 9 +++++++++ .../ethernet/intel/ice/ice_vf_vsi_vlan_ops.c | 18 ++++++++++++++++-- .../net/ethernet/intel/ice/ice_virtchnl_pf.c | 14 ++++++++++++++ 4 files changed, 40 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h index 14aaca8dbbb7..dc86f2562e0f 100644 --- a/drivers/net/ethernet/intel/ice/ice.h +++ b/drivers/net/ethernet/intel/ice/ice.h @@ -486,6 +486,7 @@ enum ice_pf_flags { ICE_FLAG_LEGACY_RX, ICE_FLAG_VF_TRUE_PROMISC_ENA, ICE_FLAG_MDD_AUTO_RESET_VF, + ICE_FLAG_VF_VLAN_PRUNING, ICE_FLAG_LINK_LENIENT_MODE_ENA, ICE_FLAG_GNSS, /* GNSS successfully initialized */ ICE_PF_FLAGS_NBITS /* must be last */ diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c index e2e3ef7fba7f..28ead0b4712f 100644 --- a/drivers/net/ethernet/intel/ice/ice_ethtool.c +++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c @@ -164,6 +164,7 @@ static const struct ice_priv_flag ice_gstrings_priv_flags[] = { ICE_PRIV_FLAG("vf-true-promisc-support", ICE_FLAG_VF_TRUE_PROMISC_ENA), ICE_PRIV_FLAG("mdd-auto-reset-vf", ICE_FLAG_MDD_AUTO_RESET_VF), + ICE_PRIV_FLAG("vf-vlan-pruning", ICE_FLAG_VF_VLAN_PRUNING), ICE_PRIV_FLAG("legacy-rx", ICE_FLAG_LEGACY_RX), }; @@ -1295,6 +1296,14 @@ static int ice_set_priv_flags(struct net_device *netdev, u32 flags) change_bit(ICE_FLAG_VF_TRUE_PROMISC_ENA, pf->flags); ret = -EAGAIN; } + + if (test_bit(ICE_FLAG_VF_VLAN_PRUNING, change_flags) && + pf->num_alloc_vfs) { + dev_err(dev, "vf-vlan-pruning: VLAN pruning cannot be changed while VFs are active.\n"); + /* toggle bit back to previous state */ + change_bit(ICE_FLAG_VF_VLAN_PRUNING, pf->flags); + ret = -EOPNOTSUPP; + } ethtool_exit: clear_bit(ICE_FLAG_ETHTOOL_CTXT, pf->flags); return ret; diff --git a/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c b/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c index 4be29f97365c..39f2d36cabba 100644 --- a/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c +++ b/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c @@ -43,7 +43,6 @@ void ice_vf_vsi_init_vlan_ops(struct ice_vsi *vsi) /* outer VLAN ops regardless of port VLAN config */ vlan_ops->add_vlan = ice_vsi_add_vlan; - vlan_ops->ena_rx_filtering = ice_vsi_ena_rx_vlan_filtering; vlan_ops->dis_rx_filtering = ice_vsi_dis_rx_vlan_filtering; vlan_ops->ena_tx_filtering = ice_vsi_ena_tx_vlan_filtering; vlan_ops->dis_tx_filtering = ice_vsi_dis_tx_vlan_filtering; @@ -51,6 +50,8 @@ void ice_vf_vsi_init_vlan_ops(struct ice_vsi *vsi) if (ice_vf_is_port_vlan_ena(vf)) { /* setup outer VLAN ops */ vlan_ops->set_port_vlan = ice_vsi_set_outer_port_vlan; + vlan_ops->ena_rx_filtering = + ice_vsi_ena_rx_vlan_filtering; /* setup inner VLAN ops */ vlan_ops = &vsi->inner_vlan_ops; @@ -61,6 +62,12 @@ void ice_vf_vsi_init_vlan_ops(struct ice_vsi *vsi) vlan_ops->ena_insertion = ice_vsi_ena_inner_insertion; vlan_ops->dis_insertion = ice_vsi_dis_inner_insertion; } else { + if (!test_bit(ICE_FLAG_VF_VLAN_PRUNING, pf->flags)) + vlan_ops->ena_rx_filtering = noop_vlan; + else + vlan_ops->ena_rx_filtering = + ice_vsi_ena_rx_vlan_filtering; + vlan_ops->del_vlan = ice_vsi_del_vlan; vlan_ops->ena_stripping = ice_vsi_ena_outer_stripping; vlan_ops->dis_stripping = ice_vsi_dis_outer_stripping; @@ -80,14 +87,21 @@ void ice_vf_vsi_init_vlan_ops(struct ice_vsi *vsi) /* inner VLAN ops regardless of port VLAN config */ 
vlan_ops->add_vlan = ice_vsi_add_vlan; - vlan_ops->ena_rx_filtering = ice_vsi_ena_rx_vlan_filtering; vlan_ops->dis_rx_filtering = ice_vsi_dis_rx_vlan_filtering; vlan_ops->ena_tx_filtering = ice_vsi_ena_tx_vlan_filtering; vlan_ops->dis_tx_filtering = ice_vsi_dis_tx_vlan_filtering; if (ice_vf_is_port_vlan_ena(vf)) { vlan_ops->set_port_vlan = ice_vsi_set_inner_port_vlan; + vlan_ops->ena_rx_filtering = + ice_vsi_ena_rx_vlan_filtering; } else { + if (!test_bit(ICE_FLAG_VF_VLAN_PRUNING, pf->flags)) + vlan_ops->ena_rx_filtering = noop_vlan; + else + vlan_ops->ena_rx_filtering = + ice_vsi_ena_rx_vlan_filtering; + vlan_ops->del_vlan = ice_vsi_del_vlan; vlan_ops->ena_stripping = ice_vsi_ena_inner_stripping; vlan_ops->dis_stripping = ice_vsi_dis_inner_stripping; diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c index f1802de98b82..674d27c1a81d 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c @@ -807,6 +807,11 @@ static int ice_vf_rebuild_host_vlan_cfg(struct ice_vf *vf, struct ice_vsi *vsi) return err; } + err = vlan_ops->ena_rx_filtering(vsi); + if (err) + dev_warn(dev, "failed to enable Rx VLAN filtering for VF %d VSI %d during VF rebuild, error %d\n", + vf->vf_id, vsi->idx, err); + return 0; } @@ -1791,6 +1796,7 @@ static void ice_vc_notify_vf_reset(struct ice_vf *vf) */ static int ice_init_vf_vsi_res(struct ice_vf *vf) { + struct ice_vsi_vlan_ops *vlan_ops; struct ice_pf *pf = vf->pf; u8 broadcast[ETH_ALEN]; struct ice_vsi *vsi; @@ -1811,6 +1817,14 @@ static int ice_init_vf_vsi_res(struct ice_vf *vf) goto release_vsi; } + vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); + err = vlan_ops->ena_rx_filtering(vsi); + if (err) { + dev_warn(dev, "Failed to enable Rx VLAN filtering for VF %d\n", + vf->vf_id); + goto release_vsi; + } + eth_broadcast_addr(broadcast); err = ice_fltr_add_mac(vsi, broadcast, ICE_FWD_TO_VSI); if (err) { -- 2.20.1 From anthony.l.nguyen at intel.com Tue Nov 30 21:21:54 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Tue, 30 Nov 2021 13:21:54 -0800 Subject: [Intel-wired-lan] [PATCH net-next 13/14] ice: Add support for 802.1ad port VLANs VF In-Reply-To: <20211130212155.27852-1-anthony.l.nguyen@intel.com> References: <20211130212155.27852-1-anthony.l.nguyen@intel.com> Message-ID: <20211130212155.27852-13-anthony.l.nguyen@intel.com> From: Brett Creeley Currently there is only support for 802.1Q port VLANs on SR-IOV VFs. Add support to also allow 802.1ad port VLANs when double VLAN mode is enabled. 
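As a usage note (not part of the patch; interface names are placeholders), the port VLAN protocol is selected through the existing iproute2 'proto' argument. In Single VLAN Mode the 802.1ad variant is expected to be rejected with -EPROTONOSUPPORT per ice_is_supported_port_vlan_proto() below:

  # 802.1Q port VLAN, accepted in both SVM and DVM
  ip link set enp2s0f0 vf 0 vlan 100 qos 3 proto 802.1Q
  # 802.1ad port VLAN, only accepted when double VLAN mode is enabled
  ip link set enp2s0f0 vf 0 vlan 100 qos 3 proto 802.1ad
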
Signed-off-by: Brett Creeley Signed-off-by: Tony Nguyen --- .../net/ethernet/intel/ice/ice_virtchnl_pf.c | 51 ++++++++++++++++--- 1 file changed, 44 insertions(+), 7 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c index de74a2b4f846..f1802de98b82 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c @@ -768,6 +768,11 @@ bool ice_vf_is_port_vlan_ena(struct ice_vf *vf) return (ice_vf_get_port_vlan_id(vf) || ice_vf_get_port_vlan_prio(vf)); } +static u16 ice_vf_get_port_vlan_tpid(struct ice_vf *vf) +{ + return vf->port_vlan_info.tpid; +} + /** * ice_vf_rebuild_host_vlan_cfg - add VLAN 0 filter or rebuild the Port VLAN * @vf: VF to add MAC filters for @@ -4129,6 +4134,33 @@ static int ice_vc_request_qs_msg(struct ice_vf *vf, u8 *msg) v_ret, (u8 *)vfres, sizeof(*vfres)); } +/** + * ice_is_supported_port_vlan_proto - make sure the vlan_proto is supported + * @hw: hardware structure used to check the VLAN mode + * @vlan_proto: VLAN TPID being checked + * + * If the device is configured in Double VLAN Mode (DVM), then both ETH_P_8021Q + * and ETH_P_8021AD are supported. If the device is configured in Single VLAN + * Mode (SVM), then only ETH_P_8021Q is supported. + */ +static bool +ice_is_supported_port_vlan_proto(struct ice_hw *hw, u16 vlan_proto) +{ + bool is_supported = false; + + switch (vlan_proto) { + case ETH_P_8021Q: + is_supported = true; + break; + case ETH_P_8021AD: + if (ice_is_dvm_ena(hw)) + is_supported = true; + break; + } + + return is_supported; +} + /** * ice_set_vf_port_vlan * @netdev: network interface device structure @@ -4144,6 +4176,7 @@ ice_set_vf_port_vlan(struct net_device *netdev, int vf_id, u16 vlan_id, u8 qos, __be16 vlan_proto) { struct ice_pf *pf = ice_netdev_to_pf(netdev); + u16 local_vlan_proto = ntohs(vlan_proto); struct device *dev; struct ice_vf *vf; int ret; @@ -4158,8 +4191,9 @@ ice_set_vf_port_vlan(struct net_device *netdev, int vf_id, u16 vlan_id, u8 qos, return -EINVAL; } - if (vlan_proto != htons(ETH_P_8021Q)) { - dev_err(dev, "VF VLAN protocol is not supported\n"); + if (!ice_is_supported_port_vlan_proto(&pf->hw, local_vlan_proto)) { + dev_err(dev, "VF VLAN protocol 0x%04x is not supported\n", + local_vlan_proto); return -EPROTONOSUPPORT; } @@ -4169,19 +4203,20 @@ ice_set_vf_port_vlan(struct net_device *netdev, int vf_id, u16 vlan_id, u8 qos, return ret; if (ice_vf_get_port_vlan_prio(vf) == qos && + ice_vf_get_port_vlan_tpid(vf) == local_vlan_proto && ice_vf_get_port_vlan_id(vf) == vlan_id) { /* duplicate request, so just return success */ - dev_dbg(dev, "Duplicate port VLAN %u, QoS %u request\n", - vlan_id, qos); + dev_dbg(dev, "Duplicate port VLAN %u, QoS %u, TPID 0x%04x request\n", + vlan_id, qos, local_vlan_proto); return 0; } mutex_lock(&vf->cfg_lock); - vf->port_vlan_info = ICE_VLAN(ETH_P_8021Q, vlan_id, qos); + vf->port_vlan_info = ICE_VLAN(local_vlan_proto, vlan_id, qos); if (ice_vf_is_port_vlan_ena(vf)) - dev_info(dev, "Setting VLAN %u, QoS %u on VF %d\n", - vlan_id, qos, vf_id); + dev_info(dev, "Setting VLAN %u, QoS %u, TPID 0x%04x on VF %d\n", + vlan_id, qos, local_vlan_proto, vf_id); else dev_info(dev, "Clearing port VLAN on VF %d\n", vf_id); @@ -5904,6 +5939,8 @@ ice_get_vf_cfg(struct net_device *netdev, int vf_id, struct ifla_vf_info *ivi) /* VF configuration for VLAN and applicable QoS */ ivi->vlan = ice_vf_get_port_vlan_id(vf); ivi->qos = ice_vf_get_port_vlan_prio(vf); + if 
(ice_vf_is_port_vlan_ena(vf)) + ivi->vlan_proto = cpu_to_be16(ice_vf_get_port_vlan_tpid(vf)); ivi->trusted = vf->trusted; ivi->spoofchk = vf->spoofchk; -- 2.20.1 From anthony.l.nguyen at intel.com Tue Nov 30 21:21:49 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Tue, 30 Nov 2021 13:21:49 -0800 Subject: [Intel-wired-lan] [PATCH net-next 08/14] ice: Add outer_vlan_ops and VSI specific VLAN ops implementations In-Reply-To: <20211130212155.27852-1-anthony.l.nguyen@intel.com> References: <20211130212155.27852-1-anthony.l.nguyen@intel.com> Message-ID: <20211130212155.27852-8-anthony.l.nguyen@intel.com> From: Brett Creeley Add a new outer_vlan_ops member to the ice_vsi structure as outer VLAN ops are only available when the device is in Double VLAN Mode (DVM). Depending on the VSI type, the requirements for what operations to use/allow differ. By default all VSI's have unsupported inner and outer VSI VLAN ops. This implementation was chosen to prevent unexpected crashes due to null pointer dereferences. Instead, if a VSI calls an unsupported op, it will just return -EOPNOTSUPP. Add implementations to support modifying outer VLAN fields for VSI context. This includes the ability to modify VLAN stripping, insertion, and the port VLAN based on the outer VLAN handling fields of the VSI context. These functions should only ever be used if DVM is enabled because that means the firmware supports the outer VLAN fields in the VSI context. If the device is in DVM, then always use the outer_vlan_ops, else use the vlan_ops since the device is in Single VLAN Mode (SVM). Also, move adding the untagged VLAN 0 filter from ice_vsi_setup() to ice_vsi_vlan_setup() as the latter function is specific to the PF and all other VSI types that need an untagged VLAN 0 filter already do this in their specific flows. Without this change, Flow Director is failing to initialize because it does not implement any VSI VLAN ops. 
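To illustrate the calling convention this introduces (a minimal sketch with a hypothetical caller, not code from the patch): callers that should not care whether the device is in SVM or DVM resolve the ops table once via ice_get_compat_vsi_vlan_ops() and call through it, and any op a VSI type did not fill in simply returns -EOPNOTSUPP instead of dereferencing a NULL pointer:

	struct ice_vsi_vlan_ops *vlan_ops = ice_get_compat_vsi_vlan_ops(vsi);
	int err;

	/* dispatches to inner or outer ops depending on the VLAN mode */
	err = vlan_ops->ena_rx_filtering(vsi);
	if (err == -EOPNOTSUPP)
		dev_dbg(ice_pf_to_dev(vsi->back),
			"Rx VLAN filtering not supported on this VSI type\n");
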
Signed-off-by: Brett Creeley Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/ice/Makefile | 3 +- drivers/net/ethernet/intel/ice/ice.h | 3 +- drivers/net/ethernet/intel/ice/ice_eswitch.c | 5 +- drivers/net/ethernet/intel/ice/ice_lib.c | 111 +++++- drivers/net/ethernet/intel/ice/ice_lib.h | 3 + drivers/net/ethernet/intel/ice/ice_main.c | 60 +-- .../ethernet/intel/ice/ice_pf_vsi_vlan_ops.c | 37 ++ .../ethernet/intel/ice/ice_pf_vsi_vlan_ops.h | 13 + .../ethernet/intel/ice/ice_vf_vsi_vlan_ops.c | 72 ++++ .../ethernet/intel/ice/ice_vf_vsi_vlan_ops.h | 16 + .../net/ethernet/intel/ice/ice_virtchnl_pf.c | 101 +++-- .../net/ethernet/intel/ice/ice_virtchnl_pf.h | 6 + .../net/ethernet/intel/ice/ice_vsi_vlan_lib.c | 344 +++++++++++++++++- .../net/ethernet/intel/ice/ice_vsi_vlan_lib.h | 6 + .../net/ethernet/intel/ice/ice_vsi_vlan_ops.c | 107 +++++- .../net/ethernet/intel/ice/ice_vsi_vlan_ops.h | 6 + 16 files changed, 808 insertions(+), 85 deletions(-) create mode 100644 drivers/net/ethernet/intel/ice/ice_pf_vsi_vlan_ops.c create mode 100644 drivers/net/ethernet/intel/ice/ice_pf_vsi_vlan_ops.h create mode 100644 drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c create mode 100644 drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.h diff --git a/drivers/net/ethernet/intel/ice/Makefile b/drivers/net/ethernet/intel/ice/Makefile index c40b3aa1d195..3ece1df919f8 100644 --- a/drivers/net/ethernet/intel/ice/Makefile +++ b/drivers/net/ethernet/intel/ice/Makefile @@ -18,6 +18,7 @@ ice-y := ice_main.o \ ice_txrx_lib.o \ ice_txrx.o \ ice_fltr.o \ + ice_pf_vsi_vlan_ops.o \ ice_vsi_vlan_ops.o \ ice_vsi_vlan_lib.o \ ice_fdir.o \ @@ -32,7 +33,7 @@ ice-y := ice_main.o \ ice_repr.o \ ice_tc_lib.o ice-$(CONFIG_PCI_IOV) += ice_virtchnl_allowlist.o -ice-$(CONFIG_PCI_IOV) += ice_virtchnl_pf.o ice_sriov.o ice_virtchnl_fdir.o +ice-$(CONFIG_PCI_IOV) += ice_virtchnl_pf.o ice_sriov.o ice_virtchnl_fdir.o ice_vf_vsi_vlan_ops.o ice-$(CONFIG_PTP_1588_CLOCK) += ice_ptp.o ice_ptp_hw.o ice_gnss.o ice-$(CONFIG_DCB) += ice_dcb.o ice_dcb_nl.o ice_dcb_lib.o ice-$(CONFIG_RFS_ACCEL) += ice_arfs.o diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h index efcc713ba287..14aaca8dbbb7 100644 --- a/drivers/net/ethernet/intel/ice/ice.h +++ b/drivers/net/ethernet/intel/ice/ice.h @@ -371,7 +371,8 @@ struct ice_vsi { u8 irqs_ready:1; u8 current_isup:1; /* Sync 'link up' logging */ u8 stat_offsets_loaded:1; - struct ice_vsi_vlan_ops vlan_ops; + struct ice_vsi_vlan_ops inner_vlan_ops; + struct ice_vsi_vlan_ops outer_vlan_ops; u16 num_vlan; /* queue information */ diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch.c b/drivers/net/ethernet/intel/ice/ice_eswitch.c index 0ff1a375f2aa..30a00fe59c52 100644 --- a/drivers/net/ethernet/intel/ice/ice_eswitch.c +++ b/drivers/net/ethernet/intel/ice/ice_eswitch.c @@ -116,9 +116,12 @@ static int ice_eswitch_setup_env(struct ice_pf *pf) struct ice_vsi *uplink_vsi = pf->switchdev.uplink_vsi; struct net_device *uplink_netdev = uplink_vsi->netdev; struct ice_vsi *ctrl_vsi = pf->switchdev.control_vsi; + struct ice_vsi_vlan_ops *vlan_ops; bool rule_added = false; - ctrl_vsi->vlan_ops.dis_stripping(ctrl_vsi); + vlan_ops = ice_get_compat_vsi_vlan_ops(ctrl_vsi); + if (vlan_ops->dis_stripping(ctrl_vsi)) + return -ENODEV; ice_remove_vsi_fltr(&pf->hw, uplink_vsi->idx); diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index c8991711b754..6a7f107a43c5 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ 
b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -8,6 +8,7 @@ #include "ice_fltr.h" #include "ice_dcb_lib.h" #include "ice_devlink.h" +#include "ice_vsi_vlan_ops.h" /** * ice_vsi_type_str - maps VSI type enum to string equivalents @@ -2415,17 +2416,6 @@ ice_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi, if (ret) goto unroll_vector_base; - /* Always add VLAN ID 0 switch rule by default. This is needed - * in order to allow all untagged and 0 tagged priority traffic - * if Rx VLAN pruning is enabled. Also there are cases where we - * don't get the call to add VLAN 0 via ice_vlan_rx_add_vid() - * so this handles those cases (i.e. adding the PF to a bridge - * without the 8021q module loaded). - */ - ret = ice_vsi_add_vlan_zero(vsi); - if (ret) - goto unroll_clear_rings; - ice_vsi_map_rings_to_vectors(vsi); /* ICE_VSI_CTRL does not need RSS so skip RSS processing */ @@ -3875,13 +3865,110 @@ int ice_set_link(struct ice_vsi *vsi, bool ena) /** * ice_vsi_add_vlan_zero - add VLAN 0 filter(s) for this VSI * @vsi: VSI used to add VLAN filters + * + * In Single VLAN Mode (SVM), single VLAN filters via ICE_SW_LKUP_VLAN are based + * on the inner VLAN ID, so the VLAN TPID (i.e. 0x8100 or 0x888a8) doesn't + * matter. In Double VLAN Mode (DVM), outer/single VLAN filters via + * ICE_SW_LKUP_VLAN are based on the outer/single VLAN ID + VLAN TPID. + * + * For both modes add a VLAN 0 + no VLAN TPID filter to handle untagged traffic + * when VLAN pruning is enabled. Also, this handles VLAN 0 priority tagged + * traffic in SVM, since the VLAN TPID isn't part of filtering. + * + * If DVM is enabled then an explicit VLAN 0 + VLAN TPID filter needs to be + * added to allow VLAN 0 priority tagged traffic in DVM, since the VLAN TPID is + * part of filtering. */ int ice_vsi_add_vlan_zero(struct ice_vsi *vsi) { + struct ice_vsi_vlan_ops *vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); + struct ice_vlan vlan; + int err; + + vlan = ICE_VLAN(0, 0, 0); + err = vlan_ops->add_vlan(vsi, &vlan); + if (err && err != -EEXIST) + return err; + + /* in SVM both VLAN 0 filters are identical */ + if (!ice_is_dvm_ena(&vsi->back->hw)) + return 0; + + vlan = ICE_VLAN(ETH_P_8021Q, 0, 0); + err = vlan_ops->add_vlan(vsi, &vlan); + if (err && err != -EEXIST) + return err; + + return 0; +} + +/** + * ice_vsi_del_vlan_zero - delete VLAN 0 filter(s) for this VSI + * @vsi: VSI used to add VLAN filters + * + * Delete the VLAN 0 filters in the same manner that they were added in + * ice_vsi_add_vlan_zero. + */ +int ice_vsi_del_vlan_zero(struct ice_vsi *vsi) +{ + struct ice_vsi_vlan_ops *vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); struct ice_vlan vlan; + int err; vlan = ICE_VLAN(0, 0, 0); - return vsi->vlan_ops.add_vlan(vsi, &vlan); + err = vlan_ops->del_vlan(vsi, &vlan); + if (err && err != -EEXIST) + return err; + + /* in SVM both VLAN 0 filters are identical */ + if (!ice_is_dvm_ena(&vsi->back->hw)) + return 0; + + vlan = ICE_VLAN(ETH_P_8021Q, 0, 0); + err = vlan_ops->del_vlan(vsi, &vlan); + if (err && err != -EEXIST) + return err; + + return 0; +} + +/** + * ice_vsi_num_zero_vlans - get number of VLAN 0 filters based on VLAN mode + * @vsi: VSI used to get the VLAN mode + * + * If DVM is enabled then 2 VLAN 0 filters are added, else if SVM is enabled + * then 1 VLAN 0 filter is added. See ice_vsi_add_vlan_zero for more details. 
+ */ +static u16 ice_vsi_num_zero_vlans(struct ice_vsi *vsi) +{ +#define ICE_DVM_NUM_ZERO_VLAN_FLTRS 2 +#define ICE_SVM_NUM_ZERO_VLAN_FLTRS 1 + /* no VLAN 0 filter is created when a port VLAN is active */ + if (vsi->type == ICE_VSI_VF && + ice_vf_is_port_vlan_ena(&vsi->back->vf[vsi->vf_id])) + return 0; + if (ice_is_dvm_ena(&vsi->back->hw)) + return ICE_DVM_NUM_ZERO_VLAN_FLTRS; + else + return ICE_SVM_NUM_ZERO_VLAN_FLTRS; +} + +/** + * ice_vsi_has_non_zero_vlans - check if VSI has any non-zero VLANs + * @vsi: VSI used to determine if any non-zero VLANs have been added + */ +bool ice_vsi_has_non_zero_vlans(struct ice_vsi *vsi) +{ + return (vsi->num_vlan > ice_vsi_num_zero_vlans(vsi)); +} + +/** + * ice_vsi_num_non_zero_vlans - get the number of non-zero VLANs for this VSI + * @vsi: VSI used to get the number of non-zero VLANs added + */ +u16 ice_vsi_num_non_zero_vlans(struct ice_vsi *vsi) +{ + return (vsi->num_vlan - ice_vsi_num_zero_vlans(vsi)); } /** diff --git a/drivers/net/ethernet/intel/ice/ice_lib.h b/drivers/net/ethernet/intel/ice/ice_lib.h index 8f42a3f3a949..0d61f1772ae3 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.h +++ b/drivers/net/ethernet/intel/ice/ice_lib.h @@ -124,6 +124,9 @@ void ice_vsi_ctx_set_allow_override(struct ice_vsi_ctx *ctx); void ice_vsi_ctx_clear_allow_override(struct ice_vsi_ctx *ctx); int ice_vsi_add_vlan_zero(struct ice_vsi *vsi); +int ice_vsi_del_vlan_zero(struct ice_vsi *vsi); +bool ice_vsi_has_non_zero_vlans(struct ice_vsi *vsi); +u16 ice_vsi_num_non_zero_vlans(struct ice_vsi *vsi); bool ice_is_feature_supported(struct ice_pf *pf, enum ice_feature f); void ice_clear_feature_support(struct ice_pf *pf, enum ice_feature f); void ice_init_feature_support(struct ice_pf *pf); diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c index 6843b8e87441..ff2b721e0e45 100644 --- a/drivers/net/ethernet/intel/ice/ice_main.c +++ b/drivers/net/ethernet/intel/ice/ice_main.c @@ -21,6 +21,7 @@ #include "ice_trace.h" #include "ice_eswitch.h" #include "ice_tc_lib.h" +#include "ice_vsi_vlan_ops.h" #define DRV_SUMMARY "Intel(R) Ethernet Connection E800 Series Linux Driver" static const char ice_driver_string[] = DRV_SUMMARY; @@ -249,7 +250,7 @@ static int ice_set_promisc(struct ice_vsi *vsi, u8 promisc_m) if (vsi->type != ICE_VSI_PF) return 0; - if (vsi->num_vlan > 1) + if (ice_vsi_has_non_zero_vlans(vsi)) status = ice_fltr_set_vlan_vsi_promisc(&vsi->back->hw, vsi, promisc_m); else status = ice_fltr_set_vsi_promisc(&vsi->back->hw, vsi->idx, promisc_m, 0); @@ -270,7 +271,7 @@ static int ice_clear_promisc(struct ice_vsi *vsi, u8 promisc_m) if (vsi->type != ICE_VSI_PF) return 0; - if (vsi->num_vlan > 1) + if (ice_vsi_has_non_zero_vlans(vsi)) status = ice_fltr_clear_vlan_vsi_promisc(&vsi->back->hw, vsi, promisc_m); else status = ice_fltr_clear_vsi_promisc(&vsi->back->hw, vsi->idx, promisc_m, 0); @@ -286,6 +287,7 @@ static int ice_clear_promisc(struct ice_vsi *vsi, u8 promisc_m) */ static int ice_vsi_sync_fltr(struct ice_vsi *vsi) { + struct ice_vsi_vlan_ops *vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); struct device *dev = ice_pf_to_dev(vsi->back); struct net_device *netdev = vsi->netdev; bool promisc_forced_on = false; @@ -358,7 +360,7 @@ static int ice_vsi_sync_fltr(struct ice_vsi *vsi) /* check for changes in promiscuous modes */ if (changed_flags & IFF_ALLMULTI) { if (vsi->current_netdev_flags & IFF_ALLMULTI) { - if (vsi->num_vlan > 1) + if (ice_vsi_has_non_zero_vlans(vsi)) promisc_m = ICE_MCAST_VLAN_PROMISC_BITS; else promisc_m 
= ICE_MCAST_PROMISC_BITS; @@ -372,7 +374,7 @@ static int ice_vsi_sync_fltr(struct ice_vsi *vsi) } } else { /* !(vsi->current_netdev_flags & IFF_ALLMULTI) */ - if (vsi->num_vlan > 1) + if (ice_vsi_has_non_zero_vlans(vsi)) promisc_m = ICE_MCAST_VLAN_PROMISC_BITS; else promisc_m = ICE_MCAST_PROMISC_BITS; @@ -401,7 +403,7 @@ static int ice_vsi_sync_fltr(struct ice_vsi *vsi) ~IFF_PROMISC; goto out_promisc; } - vsi->vlan_ops.dis_rx_filtering(vsi); + vlan_ops->dis_rx_filtering(vsi); } } else { /* Clear Rx filter to remove traffic from wire */ @@ -415,7 +417,7 @@ static int ice_vsi_sync_fltr(struct ice_vsi *vsi) goto out_promisc; } if (vsi->num_vlan > 1) - vsi->vlan_ops.ena_rx_filtering(vsi); + vlan_ops->ena_rx_filtering(vsi); } } } @@ -3419,6 +3421,7 @@ static int ice_vlan_rx_add_vid(struct net_device *netdev, __be16 proto, u16 vid) { struct ice_netdev_priv *np = netdev_priv(netdev); + struct ice_vsi_vlan_ops *vlan_ops; struct ice_vsi *vsi = np->vsi; struct ice_vlan vlan; int ret; @@ -3427,9 +3430,11 @@ ice_vlan_rx_add_vid(struct net_device *netdev, __be16 proto, u16 vid) if (!vid) return 0; + vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); + /* Enable VLAN pruning when a VLAN other than 0 is added */ if (!ice_vsi_is_vlan_pruning_ena(vsi)) { - ret = vsi->vlan_ops.ena_rx_filtering(vsi); + ret = vlan_ops->ena_rx_filtering(vsi); if (ret) return ret; } @@ -3438,7 +3443,7 @@ ice_vlan_rx_add_vid(struct net_device *netdev, __be16 proto, u16 vid) * packets aren't pruned by the device's internal switch on Rx */ vlan = ICE_VLAN(be16_to_cpu(proto), vid, 0); - ret = vsi->vlan_ops.add_vlan(vsi, &vlan); + ret = vlan_ops->add_vlan(vsi, &vlan); if (!ret) set_bit(ICE_VSI_VLAN_FLTR_CHANGED, vsi->state); @@ -3457,6 +3462,7 @@ static int ice_vlan_rx_kill_vid(struct net_device *netdev, __be16 proto, u16 vid) { struct ice_netdev_priv *np = netdev_priv(netdev); + struct ice_vsi_vlan_ops *vlan_ops; struct ice_vsi *vsi = np->vsi; struct ice_vlan vlan; int ret; @@ -3465,17 +3471,19 @@ ice_vlan_rx_kill_vid(struct net_device *netdev, __be16 proto, u16 vid) if (!vid) return 0; + vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); + /* Make sure VLAN delete is successful before updating VLAN * information */ vlan = ICE_VLAN(be16_to_cpu(proto), vid, 0); - ret = vsi->vlan_ops.del_vlan(vsi, &vlan); + ret = vlan_ops->del_vlan(vsi, &vlan); if (ret) return ret; /* Disable pruning when VLAN 0 is the only VLAN rule */ if (vsi->num_vlan == 1 && ice_vsi_is_vlan_pruning_ena(vsi)) - vsi->vlan_ops.dis_rx_filtering(vsi); + vlan_ops->dis_rx_filtering(vsi); set_bit(ICE_VSI_VLAN_FLTR_CHANGED, vsi->state); return ret; @@ -5592,6 +5600,7 @@ static int ice_set_features(struct net_device *netdev, netdev_features_t features) { struct ice_netdev_priv *np = netdev_priv(netdev); + struct ice_vsi_vlan_ops *vlan_ops; struct ice_vsi *vsi = np->vsi; struct ice_pf *pf = vsi->back; int ret = 0; @@ -5608,6 +5617,8 @@ ice_set_features(struct net_device *netdev, netdev_features_t features) return -EBUSY; } + vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); + /* Multiple features can be changed in one call so keep features in * separate if/else statements to guarantee each feature is checked */ @@ -5619,24 +5630,24 @@ ice_set_features(struct net_device *netdev, netdev_features_t features) if ((features & NETIF_F_HW_VLAN_CTAG_RX) && !(netdev->features & NETIF_F_HW_VLAN_CTAG_RX)) - ret = vsi->vlan_ops.ena_stripping(vsi, ETH_P_8021Q); + ret = vlan_ops->ena_stripping(vsi, ETH_P_8021Q); else if (!(features & NETIF_F_HW_VLAN_CTAG_RX) && (netdev->features & 
NETIF_F_HW_VLAN_CTAG_RX)) - ret = vsi->vlan_ops.dis_stripping(vsi); + ret = vlan_ops->dis_stripping(vsi); if ((features & NETIF_F_HW_VLAN_CTAG_TX) && !(netdev->features & NETIF_F_HW_VLAN_CTAG_TX)) - ret = vsi->vlan_ops.ena_insertion(vsi, ETH_P_8021Q); + ret = vlan_ops->ena_insertion(vsi, ETH_P_8021Q); else if (!(features & NETIF_F_HW_VLAN_CTAG_TX) && (netdev->features & NETIF_F_HW_VLAN_CTAG_TX)) - ret = vsi->vlan_ops.dis_insertion(vsi); + ret = vlan_ops->dis_insertion(vsi); if ((features & NETIF_F_HW_VLAN_CTAG_FILTER) && !(netdev->features & NETIF_F_HW_VLAN_CTAG_FILTER)) - ret = vsi->vlan_ops.ena_rx_filtering(vsi); + ret = vlan_ops->ena_rx_filtering(vsi); else if (!(features & NETIF_F_HW_VLAN_CTAG_FILTER) && (netdev->features & NETIF_F_HW_VLAN_CTAG_FILTER)) - ret = vsi->vlan_ops.dis_rx_filtering(vsi); + ret = vlan_ops->dis_rx_filtering(vsi); if ((features & NETIF_F_NTUPLE) && !(netdev->features & NETIF_F_NTUPLE)) { @@ -5664,19 +5675,21 @@ ice_set_features(struct net_device *netdev, netdev_features_t features) } /** - * ice_vsi_vlan_setup - Setup VLAN offload properties on a VSI + * ice_vsi_vlan_setup - Setup VLAN offload properties on a PF VSI * @vsi: VSI to setup VLAN properties for */ static int ice_vsi_vlan_setup(struct ice_vsi *vsi) { - int ret = 0; + struct ice_vsi_vlan_ops *vlan_ops; + + vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); if (vsi->netdev->features & NETIF_F_HW_VLAN_CTAG_RX) - ret = vsi->vlan_ops.ena_stripping(vsi, ETH_P_8021Q); + vlan_ops->ena_stripping(vsi, ETH_P_8021Q); if (vsi->netdev->features & NETIF_F_HW_VLAN_CTAG_TX) - ret = vsi->vlan_ops.ena_insertion(vsi, ETH_P_8021Q); + vlan_ops->ena_insertion(vsi, ETH_P_8021Q); - return ret; + return ice_vsi_add_vlan_zero(vsi); } /** @@ -6279,11 +6292,12 @@ static void ice_napi_disable_all(struct ice_vsi *vsi) */ int ice_down(struct ice_vsi *vsi) { - int i, tx_err, rx_err, link_err = 0; + int i, tx_err, rx_err, link_err = 0, vlan_err = 0; WARN_ON(!test_bit(ICE_VSI_DOWN, vsi->state)); if (vsi->netdev && vsi->type == ICE_VSI_PF) { + vlan_err = ice_vsi_del_vlan_zero(vsi); if (!ice_is_e810(&vsi->back->hw)) ice_ptp_link_change(vsi->back, vsi->back->hw.pf_id, false); netif_carrier_off(vsi->netdev); @@ -6325,7 +6339,7 @@ int ice_down(struct ice_vsi *vsi) ice_for_each_rxq(vsi, i) ice_clean_rx_ring(vsi->rx_rings[i]); - if (tx_err || rx_err || link_err) { + if (tx_err || rx_err || link_err || vlan_err) { netdev_err(vsi->netdev, "Failed to close VSI 0x%04X on switch 0x%04X\n", vsi->vsi_num, vsi->vsw->sw_id); return -EIO; diff --git a/drivers/net/ethernet/intel/ice/ice_pf_vsi_vlan_ops.c b/drivers/net/ethernet/intel/ice/ice_pf_vsi_vlan_ops.c new file mode 100644 index 000000000000..b00360ca6e92 --- /dev/null +++ b/drivers/net/ethernet/intel/ice/ice_pf_vsi_vlan_ops.c @@ -0,0 +1,37 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (C) 2019-2021, Intel Corporation. 
*/ + +#include "ice_vsi_vlan_ops.h" +#include "ice_vsi_vlan_lib.h" +#include "ice.h" +#include "ice_pf_vsi_vlan_ops.h" + +void ice_pf_vsi_init_vlan_ops(struct ice_vsi *vsi) +{ + struct ice_vsi_vlan_ops *vlan_ops; + + if (ice_is_dvm_ena(&vsi->back->hw)) { + vlan_ops = &vsi->outer_vlan_ops; + + vlan_ops->add_vlan = ice_vsi_add_vlan; + vlan_ops->del_vlan = ice_vsi_del_vlan; + vlan_ops->ena_stripping = ice_vsi_ena_outer_stripping; + vlan_ops->dis_stripping = ice_vsi_dis_outer_stripping; + vlan_ops->ena_insertion = ice_vsi_ena_outer_insertion; + vlan_ops->dis_insertion = ice_vsi_dis_outer_insertion; + vlan_ops->ena_rx_filtering = ice_vsi_ena_rx_vlan_filtering; + vlan_ops->dis_rx_filtering = ice_vsi_dis_rx_vlan_filtering; + } else { + vlan_ops = &vsi->inner_vlan_ops; + + vlan_ops->add_vlan = ice_vsi_add_vlan; + vlan_ops->del_vlan = ice_vsi_del_vlan; + vlan_ops->ena_stripping = ice_vsi_ena_inner_stripping; + vlan_ops->dis_stripping = ice_vsi_dis_inner_stripping; + vlan_ops->ena_insertion = ice_vsi_ena_inner_insertion; + vlan_ops->dis_insertion = ice_vsi_dis_inner_insertion; + vlan_ops->ena_rx_filtering = ice_vsi_ena_rx_vlan_filtering; + vlan_ops->dis_rx_filtering = ice_vsi_dis_rx_vlan_filtering; + } +} + diff --git a/drivers/net/ethernet/intel/ice/ice_pf_vsi_vlan_ops.h b/drivers/net/ethernet/intel/ice/ice_pf_vsi_vlan_ops.h new file mode 100644 index 000000000000..6741ec8c5f6b --- /dev/null +++ b/drivers/net/ethernet/intel/ice/ice_pf_vsi_vlan_ops.h @@ -0,0 +1,13 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2019-2021, Intel Corporation. */ + +#ifndef _ICE_PF_VSI_VLAN_OPS_H_ +#define _ICE_PF_VSI_VLAN_OPS_H_ + +#include "ice_vsi_vlan_ops.h" + +struct ice_vsi; + +void ice_pf_vsi_init_vlan_ops(struct ice_vsi *vsi); + +#endif /* _ICE_PF_VSI_VLAN_OPS_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c b/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c new file mode 100644 index 000000000000..741b041606a2 --- /dev/null +++ b/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c @@ -0,0 +1,72 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (C) 2019-2021, Intel Corporation. 
*/ + +#include "ice_vsi_vlan_ops.h" +#include "ice_vsi_vlan_lib.h" +#include "ice.h" +#include "ice_vf_vsi_vlan_ops.h" +#include "ice_virtchnl_pf.h" + +static int +noop_vlan_arg(struct ice_vsi __always_unused *vsi, + struct ice_vlan __always_unused *vlan) +{ + return 0; +} + +/** + * ice_vf_vsi_init_vlan_ops - Initialize default VSI VLAN ops for VF VSI + * @vsi: VF's VSI being configured + */ +void ice_vf_vsi_init_vlan_ops(struct ice_vsi *vsi) +{ + struct ice_vsi_vlan_ops *vlan_ops; + struct ice_pf *pf = vsi->back; + struct ice_vf *vf; + + vf = &pf->vf[vsi->vf_id]; + + if (ice_is_dvm_ena(&pf->hw)) { + vlan_ops = &vsi->outer_vlan_ops; + + /* outer VLAN ops regardless of port VLAN config */ + vlan_ops->add_vlan = ice_vsi_add_vlan; + vlan_ops->ena_rx_filtering = ice_vsi_ena_rx_vlan_filtering; + vlan_ops->dis_rx_filtering = ice_vsi_dis_rx_vlan_filtering; + vlan_ops->ena_tx_filtering = ice_vsi_ena_tx_vlan_filtering; + vlan_ops->dis_tx_filtering = ice_vsi_dis_tx_vlan_filtering; + + if (ice_vf_is_port_vlan_ena(vf)) { + /* setup outer VLAN ops */ + vlan_ops->set_port_vlan = ice_vsi_set_outer_port_vlan; + + /* setup inner VLAN ops */ + vlan_ops = &vsi->inner_vlan_ops; + vlan_ops->add_vlan = noop_vlan_arg; + vlan_ops->del_vlan = noop_vlan_arg; + vlan_ops->ena_stripping = ice_vsi_ena_inner_stripping; + vlan_ops->dis_stripping = ice_vsi_dis_inner_stripping; + vlan_ops->ena_insertion = ice_vsi_ena_inner_insertion; + vlan_ops->dis_insertion = ice_vsi_dis_inner_insertion; + } + } else { + vlan_ops = &vsi->inner_vlan_ops; + + /* inner VLAN ops regardless of port VLAN config */ + vlan_ops->add_vlan = ice_vsi_add_vlan; + vlan_ops->ena_rx_filtering = ice_vsi_ena_rx_vlan_filtering; + vlan_ops->dis_rx_filtering = ice_vsi_dis_rx_vlan_filtering; + vlan_ops->ena_tx_filtering = ice_vsi_ena_tx_vlan_filtering; + vlan_ops->dis_tx_filtering = ice_vsi_dis_tx_vlan_filtering; + + if (ice_vf_is_port_vlan_ena(vf)) { + vlan_ops->set_port_vlan = ice_vsi_set_inner_port_vlan; + } else { + vlan_ops->del_vlan = ice_vsi_del_vlan; + vlan_ops->ena_stripping = ice_vsi_ena_inner_stripping; + vlan_ops->dis_stripping = ice_vsi_dis_inner_stripping; + vlan_ops->ena_insertion = ice_vsi_ena_inner_insertion; + vlan_ops->dis_insertion = ice_vsi_dis_inner_insertion; + } + } +} diff --git a/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.h b/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.h new file mode 100644 index 000000000000..8ea13628a5e1 --- /dev/null +++ b/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.h @@ -0,0 +1,16 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2019-2021, Intel Corporation. 
*/ + +#ifndef _ICE_VF_VSI_VLAN_OPS_H_ +#define _ICE_VF_VSI_VLAN_OPS_H_ + +#include "ice_vsi_vlan_ops.h" + +struct ice_vsi; + +#ifdef CONFIG_PCI_IOV +void ice_vf_vsi_init_vlan_ops(struct ice_vsi *vsi); +#else +static inline void ice_vf_vsi_init_vlan_ops(struct ice_vsi *vsi) { } +#endif /* CONFIG_PCI_IOV */ +#endif /* _ICE_PF_VSI_VLAN_OPS_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c index e576cd201a48..100c86c8ad9a 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c @@ -10,6 +10,7 @@ #include "ice_eswitch.h" #include "ice_virtchnl_allowlist.h" #include "ice_flex_pipe.h" +#include "ice_vf_vsi_vlan_ops.h" #define FIELD_SELECTOR(proto_hdr_field) \ BIT((proto_hdr_field) & PROTO_HDR_FIELD_MASK) @@ -761,7 +762,7 @@ static u8 ice_vf_get_port_vlan_prio(struct ice_vf *vf) return vf->port_vlan_info.prio; } -static bool ice_vf_is_port_vlan_ena(struct ice_vf *vf) +bool ice_vf_is_port_vlan_ena(struct ice_vf *vf) { return (ice_vf_get_port_vlan_id(vf) || ice_vf_get_port_vlan_prio(vf)); } @@ -769,26 +770,30 @@ static bool ice_vf_is_port_vlan_ena(struct ice_vf *vf) /** * ice_vf_rebuild_host_vlan_cfg - add VLAN 0 filter or rebuild the Port VLAN * @vf: VF to add MAC filters for + * @vsi: Pointer to VSI * * Called after a VF VSI has been re-added/rebuilt during reset. The PF driver * always re-adds either a VLAN 0 or port VLAN based filter after reset. */ -static int ice_vf_rebuild_host_vlan_cfg(struct ice_vf *vf) +static int ice_vf_rebuild_host_vlan_cfg(struct ice_vf *vf, struct ice_vsi *vsi) { + struct ice_vsi_vlan_ops *vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); struct device *dev = ice_pf_to_dev(vf->pf); - struct ice_vsi *vsi = ice_get_vf_vsi(vf); int err; if (ice_vf_is_port_vlan_ena(vf)) { - err = vsi->vlan_ops.set_port_vlan(vsi, &vf->port_vlan_info); + err = vlan_ops->set_port_vlan(vsi, &vf->port_vlan_info); if (err) { dev_err(dev, "failed to configure port VLAN via VSI parameters for VF %u, error %d\n", vf->vf_id, err); return err; } + + err = vlan_ops->add_vlan(vsi, &vf->port_vlan_info); + } else { + err = ice_vsi_add_vlan_zero(vsi); } - err = vsi->vlan_ops.add_vlan(vsi, &vf->port_vlan_info); if (err) { dev_err(dev, "failed to add VLAN %u filter for VF %u during VF rebuild, error %d\n", ice_vf_is_port_vlan_ena(vf) ? 
@@ -834,9 +839,12 @@ static int ice_cfg_mac_antispoof(struct ice_vsi *vsi, bool enable) */ static int ice_vsi_ena_spoofchk(struct ice_vsi *vsi) { + struct ice_vsi_vlan_ops *vlan_ops; int err; - err = vsi->vlan_ops.ena_tx_filtering(vsi); + vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); + + err = vlan_ops->ena_tx_filtering(vsi); if (err) return err; @@ -849,9 +857,12 @@ static int ice_vsi_ena_spoofchk(struct ice_vsi *vsi) */ static int ice_vsi_dis_spoofchk(struct ice_vsi *vsi) { + struct ice_vsi_vlan_ops *vlan_ops; int err; - err = vsi->vlan_ops.dis_tx_filtering(vsi); + vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); + + err = vlan_ops->dis_tx_filtering(vsi); if (err) return err; @@ -1268,7 +1279,7 @@ static int ice_vf_set_vsi_promisc(struct ice_vf *vf, struct ice_vsi *vsi, u8 pro if (ice_vf_is_port_vlan_ena(vf)) status = ice_fltr_set_vsi_promisc(hw, vsi->idx, promisc_m, ice_vf_get_port_vlan_id(vf)); - else if (vsi->num_vlan > 1) + else if (ice_vsi_has_non_zero_vlans(vsi)) status = ice_fltr_set_vlan_vsi_promisc(hw, vsi, promisc_m); else status = ice_fltr_set_vsi_promisc(hw, vsi->idx, promisc_m, 0); @@ -1290,7 +1301,7 @@ static int ice_vf_clear_vsi_promisc(struct ice_vf *vf, struct ice_vsi *vsi, u8 p if (ice_vf_is_port_vlan_ena(vf)) status = ice_fltr_clear_vsi_promisc(hw, vsi->idx, promisc_m, ice_vf_get_port_vlan_id(vf)); - else if (vsi->num_vlan > 1) + else if (ice_vsi_has_non_zero_vlans(vsi)) status = ice_fltr_clear_vlan_vsi_promisc(hw, vsi, promisc_m); else status = ice_fltr_clear_vsi_promisc(hw, vsi->idx, promisc_m, 0); @@ -1375,7 +1386,7 @@ static void ice_vf_rebuild_host_cfg(struct ice_vf *vf) dev_err(dev, "failed to rebuild default MAC configuration for VF %d\n", vf->vf_id); - if (ice_vf_rebuild_host_vlan_cfg(vf)) + if (ice_vf_rebuild_host_vlan_cfg(vf, vsi)) dev_err(dev, "failed to rebuild VLAN configuration for VF %u\n", vf->vf_id); @@ -3022,6 +3033,7 @@ static int ice_vc_cfg_promiscuous_mode_msg(struct ice_vf *vf, u8 *msg) bool rm_promisc, alluni = false, allmulti = false; struct virtchnl_promisc_info *info = (struct virtchnl_promisc_info *)msg; + struct ice_vsi_vlan_ops *vlan_ops; int mcast_err = 0, ucast_err = 0; struct ice_pf *pf = vf->pf; struct ice_vsi *vsi; @@ -3060,16 +3072,15 @@ static int ice_vc_cfg_promiscuous_mode_msg(struct ice_vf *vf, u8 *msg) rm_promisc = !allmulti && !alluni; - if (vsi->num_vlan || ice_vf_is_port_vlan_ena(vf)) { - if (rm_promisc) - ret = vsi->vlan_ops.ena_rx_filtering(vsi); - else - ret = vsi->vlan_ops.dis_rx_filtering(vsi); - if (ret) { - dev_err(dev, "Failed to configure VLAN pruning in promiscuous mode\n"); - v_ret = VIRTCHNL_STATUS_ERR_PARAM; - goto error_param; - } + vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); + if (rm_promisc) + ret = vlan_ops->ena_rx_filtering(vsi); + else + ret = vlan_ops->dis_rx_filtering(vsi); + if (ret) { + dev_err(dev, "Failed to configure VLAN pruning in promiscuous mode\n"); + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto error_param; } if (!test_bit(ICE_FLAG_VF_TRUE_PROMISC_ENA, pf->flags)) { @@ -3096,7 +3107,8 @@ static int ice_vc_cfg_promiscuous_mode_msg(struct ice_vf *vf, u8 *msg) } else { u8 mcast_m, ucast_m; - if (ice_vf_is_port_vlan_ena(vf) || vsi->num_vlan > 1) { + if (ice_vf_is_port_vlan_ena(vf) || + ice_vsi_has_non_zero_vlans(vsi)) { mcast_m = ICE_MCAST_VLAN_PROMISC_BITS; ucast_m = ICE_UCAST_VLAN_PROMISC_BITS; } else { @@ -4163,6 +4175,27 @@ static bool ice_vf_vlan_offload_ena(u32 caps) return !!(caps & VIRTCHNL_VF_OFFLOAD_VLAN); } +/** + * ice_vf_has_max_vlans - check if VF already has the max allowed VLAN filters + * 
@vf: VF to check against + * @vsi: VF's VSI + * + * If the VF is trusted then the VF is allowed to add as many VLANs as it + * wants to, so return false. + * + * When the VF is untrusted compare the number of non-zero VLANs + 1 to the max + * allowed VLANs for an untrusted VF. Return the result of this comparison. + */ +static bool ice_vf_has_max_vlans(struct ice_vf *vf, struct ice_vsi *vsi) +{ + if (ice_is_vf_trusted(vf)) + return false; + +#define ICE_VF_ADDED_VLAN_ZERO_FLTRS 1 + return ((ice_vsi_num_non_zero_vlans(vsi) + + ICE_VF_ADDED_VLAN_ZERO_FLTRS) >= ICE_MAX_VLAN_PER_VF); +} + /** * ice_vc_process_vlan_msg * @vf: pointer to the VF info @@ -4176,6 +4209,7 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS; struct virtchnl_vlan_filter_list *vfl = (struct virtchnl_vlan_filter_list *)msg; + struct ice_vsi_vlan_ops *vlan_ops; struct ice_pf *pf = vf->pf; bool vlan_promisc = false; struct ice_vsi *vsi; @@ -4217,8 +4251,7 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) goto error_param; } - if (add_v && !ice_is_vf_trusted(vf) && - vsi->num_vlan >= ICE_MAX_VLAN_PER_VF) { + if (add_v && ice_vf_has_max_vlans(vf, vsi)) { dev_info(dev, "VF-%d is not trusted, switch the VF to trusted mode, in order to add more VLAN addresses\n", vf->vf_id); /* There is no need to let VF know about being not trusted, @@ -4237,13 +4270,13 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) test_bit(ICE_FLAG_VF_TRUE_PROMISC_ENA, pf->flags)) vlan_promisc = true; + vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); if (add_v) { for (i = 0; i < vfl->num_elements; i++) { u16 vid = vfl->vlan_id[i]; struct ice_vlan vlan; - if (!ice_is_vf_trusted(vf) && - vsi->num_vlan >= ICE_MAX_VLAN_PER_VF) { + if (ice_vf_has_max_vlans(vf, vsi)) { dev_info(dev, "VF-%d is not trusted, switch the VF to trusted mode, in order to add more VLAN addresses\n", vf->vf_id); /* There is no need to let VF know about being @@ -4261,7 +4294,7 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) continue; vlan = ICE_VLAN(ETH_P_8021Q, vid, 0); - status = vsi->vlan_ops.add_vlan(vsi, &vlan); + status = vsi->inner_vlan_ops.add_vlan(vsi, &vlan); if (status) { v_ret = VIRTCHNL_STATUS_ERR_PARAM; goto error_param; @@ -4270,7 +4303,7 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) /* Enable VLAN pruning when non-zero VLAN is added */ if (!vlan_promisc && vid && !ice_vsi_is_vlan_pruning_ena(vsi)) { - status = vsi->vlan_ops.ena_rx_filtering(vsi); + status = vlan_ops->ena_rx_filtering(vsi); if (status) { v_ret = VIRTCHNL_STATUS_ERR_PARAM; dev_err(dev, "Enable VLAN pruning on VLAN ID: %d failed error-%d\n", @@ -4314,16 +4347,16 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) continue; vlan = ICE_VLAN(ETH_P_8021Q, vid, 0); - status = vsi->vlan_ops.del_vlan(vsi, &vlan); + status = vsi->inner_vlan_ops.del_vlan(vsi, &vlan); if (status) { v_ret = VIRTCHNL_STATUS_ERR_PARAM; goto error_param; } /* Disable VLAN pruning when only VLAN 0 is left */ - if (vsi->num_vlan == 1 && + if (!ice_vsi_has_non_zero_vlans(vsi) && ice_vsi_is_vlan_pruning_ena(vsi)) - status = vsi->vlan_ops.dis_rx_filtering(vsi); + status = vlan_ops->dis_rx_filtering(vsi); /* Disable Unicast/Multicast VLAN promiscuous mode */ if (vlan_promisc) { @@ -4392,7 +4425,7 @@ static int ice_vc_ena_vlan_stripping(struct ice_vf *vf) } vsi = ice_get_vf_vsi(vf); - if (vsi->vlan_ops.ena_stripping(vsi, 
ETH_P_8021Q)) + if (vsi->inner_vlan_ops.ena_stripping(vsi, ETH_P_8021Q)) v_ret = VIRTCHNL_STATUS_ERR_PARAM; error_param: @@ -4427,7 +4460,7 @@ static int ice_vc_dis_vlan_stripping(struct ice_vf *vf) goto error_param; } - if (vsi->vlan_ops.dis_stripping(vsi)) + if (vsi->inner_vlan_ops.dis_stripping(vsi)) v_ret = VIRTCHNL_STATUS_ERR_PARAM; error_param: @@ -4457,9 +4490,9 @@ static int ice_vf_init_vlan_stripping(struct ice_vf *vf) return 0; if (ice_vf_vlan_offload_ena(vf->driver_caps)) - return vsi->vlan_ops.ena_stripping(vsi, ETH_P_8021Q); + return vsi->inner_vlan_ops.ena_stripping(vsi, ETH_P_8021Q); else - return vsi->vlan_ops.dis_stripping(vsi); + return vsi->inner_vlan_ops.dis_stripping(vsi); } static struct ice_vc_vf_ops ice_vc_vf_dflt_ops = { diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h index b06ca1f97833..4110847e0699 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h @@ -211,6 +211,7 @@ int ice_vc_send_msg_to_vf(struct ice_vf *vf, u32 v_opcode, enum virtchnl_status_code v_retval, u8 *msg, u16 msglen); bool ice_vc_isvalid_vsi_id(struct ice_vf *vf, u16 vsi_id); +bool ice_vf_is_port_vlan_ena(struct ice_vf *vf); #else /* CONFIG_PCI_IOV */ static inline void ice_process_vflr_event(struct ice_pf *pf) { } static inline void ice_free_vfs(struct ice_pf *pf) { } @@ -343,5 +344,10 @@ static inline bool ice_is_any_vf_in_promisc(struct ice_pf __always_unused *pf) { return false; } + +static inline bool ice_vf_is_port_vlan_ena(struct ice_vf __always_unused *vf) +{ + return false; +} #endif /* CONFIG_PCI_IOV */ #endif /* _ICE_VIRTCHNL_PF_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c index 0b130505b68a..62a2630d6fab 100644 --- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c @@ -23,7 +23,8 @@ static void print_invalid_tpid(struct ice_vsi *vsi, u16 tpid) */ static bool validate_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) { - if (vlan->tpid != ETH_P_8021Q && (vlan->tpid || vlan->vid)) { + if (vlan->tpid != ETH_P_8021Q && vlan->tpid != ETH_P_8021AD && + vlan->tpid != ETH_P_QINQ1 && (vlan->tpid || vlan->vid)) { print_invalid_tpid(vsi, vlan->tpid); return false; } @@ -366,3 +367,344 @@ int ice_vsi_dis_tx_vlan_filtering(struct ice_vsi *vsi) { return ice_cfg_vlan_antispoof(vsi, false); } + +/** + * tpid_to_vsi_outer_vlan_type - convert from TPID to VSI context based tag_type + * @tpid: tpid used to translate into VSI context based tag_type + * @tag_type: output variable to hold the VSI context based tag type + */ +static int tpid_to_vsi_outer_vlan_type(u16 tpid, u8 *tag_type) +{ + switch (tpid) { + case ETH_P_8021Q: + *tag_type = ICE_AQ_VSI_OUTER_TAG_VLAN_8100; + break; + case ETH_P_8021AD: + *tag_type = ICE_AQ_VSI_OUTER_TAG_STAG; + break; + case ETH_P_QINQ1: + *tag_type = ICE_AQ_VSI_OUTER_TAG_VLAN_9100; + break; + default: + *tag_type = 0; + return -EINVAL; + } + + return 0; +} + +/** + * ice_vsi_ena_outer_stripping - enable outer VLAN stripping + * @vsi: VSI to configure + * @tpid: TPID to enable outer VLAN stripping for + * + * Enable outer VLAN stripping via VSI context. This function should only be + * used if DVM is supported. Also, this function should never be called directly + * as it should be part of ice_vsi_vlan_ops if it's needed. 
+ * + * Since the VSI context only supports a single TPID for insertion and + * stripping, setting the TPID for stripping will affect the TPID for insertion. + * Callers need to be aware of this limitation. + * + * Only modify outer VLAN stripping settings and the VLAN TPID. Outer VLAN + * insertion settings are unmodified. + * + * This enables hardware to strip a VLAN tag with the specified TPID to be + * stripped from the packet and placed in the receive descriptor. + */ +int ice_vsi_ena_outer_stripping(struct ice_vsi *vsi, u16 tpid) +{ + struct ice_hw *hw = &vsi->back->hw; + struct ice_vsi_ctx *ctxt; + u8 tag_type; + int err; + + /* do not allow modifying VLAN stripping when a port VLAN is configured + * on this VSI + */ + if (vsi->info.port_based_outer_vlan) + return 0; + + if (tpid_to_vsi_outer_vlan_type(tpid, &tag_type)) + return -EINVAL; + + ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL); + if (!ctxt) + return -ENOMEM; + + ctxt->info.valid_sections = + cpu_to_le16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID); + /* clear current outer VLAN strip settings */ + ctxt->info.outer_vlan_flags = vsi->info.outer_vlan_flags & + ~(ICE_AQ_VSI_OUTER_VLAN_EMODE_M | ICE_AQ_VSI_OUTER_TAG_TYPE_M); + ctxt->info.outer_vlan_flags |= + ((ICE_AQ_VSI_OUTER_VLAN_EMODE_SHOW_BOTH << + ICE_AQ_VSI_OUTER_VLAN_EMODE_S) | + ((tag_type << ICE_AQ_VSI_OUTER_TAG_TYPE_S) & + ICE_AQ_VSI_OUTER_TAG_TYPE_M)); + + err = ice_update_vsi(hw, vsi->idx, ctxt, NULL); + if (err) + dev_err(ice_pf_to_dev(vsi->back), "update VSI for enabling outer VLAN stripping failed, err %d aq_err %s\n", + err, ice_aq_str(hw->adminq.sq_last_status)); + else + vsi->info.outer_vlan_flags = ctxt->info.outer_vlan_flags; + + kfree(ctxt); + return err; +} + +/** + * ice_vsi_dis_outer_stripping - disable outer VLAN stripping + * @vsi: VSI to configure + * + * Disable outer VLAN stripping via VSI context. This function should only be + * used if DVM is supported. Also, this function should never be called directly + * as it should be part of ice_vsi_vlan_ops if it's needed. + * + * Only modify the outer VLAN stripping settings. The VLAN TPID and outer VLAN + * insertion settings are unmodified. + * + * This tells the hardware to not strip any VLAN tagged packets, thus leaving + * them in the packet. This enables software offloaded VLAN stripping and + * disables hardware offloaded VLAN stripping. + */ +int ice_vsi_dis_outer_stripping(struct ice_vsi *vsi) +{ + struct ice_hw *hw = &vsi->back->hw; + struct ice_vsi_ctx *ctxt; + int err; + + if (vsi->info.port_based_outer_vlan) + return 0; + + ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL); + if (!ctxt) + return -ENOMEM; + + ctxt->info.valid_sections = + cpu_to_le16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID); + /* clear current outer VLAN strip settings */ + ctxt->info.outer_vlan_flags = vsi->info.outer_vlan_flags & + ~ICE_AQ_VSI_OUTER_VLAN_EMODE_M; + ctxt->info.outer_vlan_flags |= ICE_AQ_VSI_OUTER_VLAN_EMODE_NOTHING << + ICE_AQ_VSI_OUTER_VLAN_EMODE_S; + + err = ice_update_vsi(hw, vsi->idx, ctxt, NULL); + if (err) + dev_err(ice_pf_to_dev(vsi->back), "update VSI for disabling outer VLAN stripping failed, err %d aq_err %s\n", + err, ice_aq_str(hw->adminq.sq_last_status)); + else + vsi->info.outer_vlan_flags = ctxt->info.outer_vlan_flags; + + kfree(ctxt); + return err; +} + +/** + * ice_vsi_ena_outer_insertion - enable outer VLAN insertion + * @vsi: VSI to configure + * @tpid: TPID to enable outer VLAN insertion for + * + * Enable outer VLAN insertion via VSI context. This function should only be + * used if DVM is supported. 
Also, this function should never be called directly + * as it should be part of ice_vsi_vlan_ops if it's needed. + * + * Since the VSI context only supports a single TPID for insertion and + * stripping, setting the TPID for insertion will affect the TPID for stripping. + * Callers need to be aware of this limitation. + * + * Only modify outer VLAN insertion settings and the VLAN TPID. Outer VLAN + * stripping settings are unmodified. + * + * This allows a VLAN tag with the specified TPID to be inserted in the transmit + * descriptor. + */ +int ice_vsi_ena_outer_insertion(struct ice_vsi *vsi, u16 tpid) +{ + struct ice_hw *hw = &vsi->back->hw; + struct ice_vsi_ctx *ctxt; + u8 tag_type; + int err; + + if (vsi->info.port_based_outer_vlan) + return 0; + + if (tpid_to_vsi_outer_vlan_type(tpid, &tag_type)) + return -EINVAL; + + ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL); + if (!ctxt) + return -ENOMEM; + + ctxt->info.valid_sections = + cpu_to_le16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID); + /* clear current outer VLAN insertion settings */ + ctxt->info.outer_vlan_flags = vsi->info.outer_vlan_flags & + ~(ICE_AQ_VSI_OUTER_VLAN_PORT_BASED_INSERT | + ICE_AQ_VSI_OUTER_VLAN_BLOCK_TX_DESC | + ICE_AQ_VSI_OUTER_VLAN_TX_MODE_M | + ICE_AQ_VSI_OUTER_TAG_TYPE_M); + ctxt->info.outer_vlan_flags |= + ((ICE_AQ_VSI_OUTER_VLAN_TX_MODE_ALL << + ICE_AQ_VSI_OUTER_VLAN_TX_MODE_S) & + ICE_AQ_VSI_OUTER_VLAN_TX_MODE_M) | + ((tag_type << ICE_AQ_VSI_OUTER_TAG_TYPE_S) & + ICE_AQ_VSI_OUTER_TAG_TYPE_M); + + err = ice_update_vsi(hw, vsi->idx, ctxt, NULL); + if (err) + dev_err(ice_pf_to_dev(vsi->back), "update VSI for enabling outer VLAN insertion failed, err %d aq_err %s\n", + err, ice_aq_str(hw->adminq.sq_last_status)); + else + vsi->info.outer_vlan_flags = ctxt->info.outer_vlan_flags; + + kfree(ctxt); + return err; +} + +/** + * ice_vsi_dis_outer_insertion - disable outer VLAN insertion + * @vsi: VSI to configure + * + * Disable outer VLAN insertion via VSI context. This function should only be + * used if DVM is supported. Also, this function should never be called directly + * as it should be part of ice_vsi_vlan_ops if it's needed. + * + * Only modify the outer VLAN insertion settings. The VLAN TPID and outer VLAN + * settings are unmodified. + * + * This tells the hardware to not allow any VLAN tagged packets in the transmit + * descriptor. This enables software offloaded VLAN insertion and disables + * hardware offloaded VLAN insertion. 
+ */ +int ice_vsi_dis_outer_insertion(struct ice_vsi *vsi) +{ + struct ice_hw *hw = &vsi->back->hw; + struct ice_vsi_ctx *ctxt; + int err; + + if (vsi->info.port_based_outer_vlan) + return 0; + + ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL); + if (!ctxt) + return -ENOMEM; + + ctxt->info.valid_sections = + cpu_to_le16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID); + /* clear current outer VLAN insertion settings */ + ctxt->info.outer_vlan_flags = vsi->info.outer_vlan_flags & + ~(ICE_AQ_VSI_OUTER_VLAN_PORT_BASED_INSERT | + ICE_AQ_VSI_OUTER_VLAN_TX_MODE_M); + ctxt->info.outer_vlan_flags |= + ICE_AQ_VSI_OUTER_VLAN_BLOCK_TX_DESC | + ((ICE_AQ_VSI_OUTER_VLAN_TX_MODE_ALL << + ICE_AQ_VSI_OUTER_VLAN_TX_MODE_S) & + ICE_AQ_VSI_OUTER_VLAN_TX_MODE_M); + + err = ice_update_vsi(hw, vsi->idx, ctxt, NULL); + if (err) + dev_err(ice_pf_to_dev(vsi->back), "update VSI for disabling outer VLAN insertion failed, err %d aq_err %s\n", + err, ice_aq_str(hw->adminq.sq_last_status)); + else + vsi->info.outer_vlan_flags = ctxt->info.outer_vlan_flags; + + kfree(ctxt); + return err; +} + +/** + * __ice_vsi_set_outer_port_vlan - set the outer port VLAN and related settings + * @vsi: VSI to configure + * @vlan_info: packed u16 that contains the VLAN prio and ID + * @tpid: TPID of the port VLAN + * + * Set the port VLAN prio, ID, and TPID. + * + * Enable VLAN pruning so the VSI doesn't receive any traffic that doesn't match + * a VLAN prune rule. The caller should take care to add a VLAN prune rule that + * matches the port VLAN ID and TPID. + * + * Tell hardware to strip outer VLAN tagged packets on receive and don't put + * them in the receive descriptor. VSI(s) in port VLANs should not be aware of + * the port VLAN ID or TPID they are assigned to. + * + * Tell hardware to prevent outer VLAN tag insertion on transmit and only allow + * untagged outer packets from the transmit descriptor. + * + * Also, tell the hardware to insert the port VLAN on transmit. 
+ */ +static int +__ice_vsi_set_outer_port_vlan(struct ice_vsi *vsi, u16 vlan_info, u16 tpid) +{ + struct ice_hw *hw = &vsi->back->hw; + struct ice_vsi_ctx *ctxt; + u8 tag_type; + int err; + + if (tpid_to_vsi_outer_vlan_type(tpid, &tag_type)) + return -EINVAL; + + ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL); + if (!ctxt) + return -ENOMEM; + + ctxt->info = vsi->info; + + ctxt->info.sw_flags2 |= ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA; + + ctxt->info.port_based_outer_vlan = cpu_to_le16(vlan_info); + ctxt->info.outer_vlan_flags = + (ICE_AQ_VSI_OUTER_VLAN_EMODE_SHOW << + ICE_AQ_VSI_OUTER_VLAN_EMODE_S) | + ((tag_type << ICE_AQ_VSI_OUTER_TAG_TYPE_S) & + ICE_AQ_VSI_OUTER_TAG_TYPE_M) | + ICE_AQ_VSI_OUTER_VLAN_BLOCK_TX_DESC | + (ICE_AQ_VSI_OUTER_VLAN_TX_MODE_ACCEPTUNTAGGED << + ICE_AQ_VSI_OUTER_VLAN_TX_MODE_S) | + ICE_AQ_VSI_OUTER_VLAN_PORT_BASED_INSERT; + + ctxt->info.valid_sections = + cpu_to_le16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID | + ICE_AQ_VSI_PROP_SW_VALID); + + err = ice_update_vsi(hw, vsi->idx, ctxt, NULL); + if (err) { + dev_err(ice_pf_to_dev(vsi->back), "update VSI for setting outer port based VLAN failed, err %d aq_err %s\n", + err, ice_aq_str(hw->adminq.sq_last_status)); + } else { + vsi->info.port_based_outer_vlan = ctxt->info.port_based_outer_vlan; + vsi->info.outer_vlan_flags = ctxt->info.outer_vlan_flags; + vsi->info.sw_flags2 = ctxt->info.sw_flags2; + } + + kfree(ctxt); + return err; +} + +/** + * ice_vsi_set_outer_port_vlan - public version of __ice_vsi_set_outer_port_vlan + * @vsi: VSI to configure + * @vlan: ice_vlan structure used to set the port VLAN + * + * Set the outer port VLAN via VSI context. This function should only be + * used if DVM is supported. Also, this function should never be called directly + * as it should be part of ice_vsi_vlan_ops if it's needed. + * + * This function does not support clearing the port VLAN as there is currently + * no use case for this. + * + * Use the ice_vlan structure passed in to set this VSI in a port VLAN. + */ +int ice_vsi_set_outer_port_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) +{ + u16 port_vlan_info; + + if (vlan->prio > (VLAN_PRIO_MASK >> VLAN_PRIO_SHIFT)) + return -EINVAL; + + port_vlan_info = vlan->vid | (vlan->prio << VLAN_PRIO_SHIFT); + + return __ice_vsi_set_outer_port_vlan(vsi, port_vlan_info, vlan->tpid); +} diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h index a10671133e36..f459909490ec 100644 --- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h @@ -23,4 +23,10 @@ int ice_vsi_dis_rx_vlan_filtering(struct ice_vsi *vsi); int ice_vsi_ena_tx_vlan_filtering(struct ice_vsi *vsi); int ice_vsi_dis_tx_vlan_filtering(struct ice_vsi *vsi); +int ice_vsi_ena_outer_stripping(struct ice_vsi *vsi, u16 tpid); +int ice_vsi_dis_outer_stripping(struct ice_vsi *vsi); +int ice_vsi_ena_outer_insertion(struct ice_vsi *vsi, u16 tpid); +int ice_vsi_dis_outer_insertion(struct ice_vsi *vsi); +int ice_vsi_set_outer_port_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan); + #endif /* _ICE_VSI_VLAN_LIB_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.c b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.c index 6a6b49581c70..4a6c850d83ac 100644 --- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.c +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.c @@ -1,20 +1,103 @@ // SPDX-License-Identifier: GPL-2.0 /* Copyright (C) 2019-2021, Intel Corporation. 
*/ -#include "ice_vsi_vlan_ops.h" +#include "ice_pf_vsi_vlan_ops.h" +#include "ice_vf_vsi_vlan_ops.h" +#include "ice_lib.h" #include "ice.h" +static int +op_unsupported_vlan_arg(struct ice_vsi * __always_unused vsi, + struct ice_vlan * __always_unused vlan) +{ + return -EOPNOTSUPP; +} + +static int +op_unsupported_tpid_arg(struct ice_vsi *__always_unused vsi, + u16 __always_unused tpid) +{ + return -EOPNOTSUPP; +} + +static int op_unsupported(struct ice_vsi *__always_unused vsi) +{ + return -EOPNOTSUPP; +} + +/* If any new ops are added to the VSI VLAN ops interface then an unsupported + * implementation should be set here. + */ +static struct ice_vsi_vlan_ops ops_unsupported = { + .add_vlan = op_unsupported_vlan_arg, + .del_vlan = op_unsupported_vlan_arg, + .ena_stripping = op_unsupported_tpid_arg, + .dis_stripping = op_unsupported, + .ena_insertion = op_unsupported_tpid_arg, + .dis_insertion = op_unsupported, + .ena_rx_filtering = op_unsupported, + .dis_rx_filtering = op_unsupported, + .ena_tx_filtering = op_unsupported, + .dis_tx_filtering = op_unsupported, + .set_port_vlan = op_unsupported_vlan_arg, +}; + +/** + * ice_vsi_init_unsupported_vlan_ops - init all VSI VLAN ops to unsupported + * @vsi: VSI to initialize VSI VLAN ops to unsupported for + * + * By default all inner and outer VSI VLAN ops return -EOPNOTSUPP. This was done, + * as opposed to leaving the ops NULL, to prevent unexpected crashes. Instead, if + * an unsupported VSI VLAN op is called it will just return -EOPNOTSUPP. + * + */ +static void ice_vsi_init_unsupported_vlan_ops(struct ice_vsi *vsi) +{ + vsi->outer_vlan_ops = ops_unsupported; + vsi->inner_vlan_ops = ops_unsupported; +} + +/** + * ice_vsi_init_vlan_ops - initialize type specific VSI VLAN ops + * @vsi: VSI to initialize ops for + * + * If any VSI types are added and/or require different ops than the PF or VF VSI + * then they will have to add a case here to handle that. Also, VSI type + * specific files should be added in the same manner that was done for PF VSI. + */ void ice_vsi_init_vlan_ops(struct ice_vsi *vsi) { - vsi->vlan_ops.add_vlan = ice_vsi_add_vlan; - vsi->vlan_ops.del_vlan = ice_vsi_del_vlan; - vsi->vlan_ops.ena_stripping = ice_vsi_ena_inner_stripping; - vsi->vlan_ops.dis_stripping = ice_vsi_dis_inner_stripping; - vsi->vlan_ops.ena_insertion = ice_vsi_ena_inner_insertion; - vsi->vlan_ops.dis_insertion = ice_vsi_dis_inner_insertion; - vsi->vlan_ops.ena_rx_filtering = ice_vsi_ena_rx_vlan_filtering; - vsi->vlan_ops.dis_rx_filtering = ice_vsi_dis_rx_vlan_filtering; - vsi->vlan_ops.ena_tx_filtering = ice_vsi_ena_tx_vlan_filtering; - vsi->vlan_ops.dis_tx_filtering = ice_vsi_dis_tx_vlan_filtering; - vsi->vlan_ops.set_port_vlan = ice_vsi_set_inner_port_vlan; + /* Initialize all VSI types to have unsupported VSI VLAN ops */ + ice_vsi_init_unsupported_vlan_ops(vsi); + + switch (vsi->type) { + case ICE_VSI_PF: + case ICE_VSI_SWITCHDEV_CTRL: + ice_pf_vsi_init_vlan_ops(vsi); + break; + case ICE_VSI_VF: + ice_vf_vsi_init_vlan_ops(vsi); + break; + default: + dev_dbg(ice_pf_to_dev(vsi->back), "%s does not support VLAN operations\n", + ice_vsi_type_str(vsi->type)); + break; + } +} + +/** + * ice_get_compat_vsi_vlan_ops - Get VSI VLAN ops based on VLAN mode + * @vsi: VSI used to get the VSI VLAN ops + * + * This function is meant to be used when the caller doesn't know which VLAN ops + * to use (i.e. inner or outer).
This allows backward compatibility for VLANs + * since most of the Outer VSI VLAN functions are not supported when + * the device is configured in Single VLAN Mode (SVM). + */ +struct ice_vsi_vlan_ops *ice_get_compat_vsi_vlan_ops(struct ice_vsi *vsi) +{ + if (ice_is_dvm_ena(&vsi->back->hw)) + return &vsi->outer_vlan_ops; + else + return &vsi->inner_vlan_ops; } diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h index 76e55b259bc8..30d02d2b8e5f 100644 --- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h @@ -23,6 +23,12 @@ struct ice_vsi_vlan_ops { int (*set_port_vlan)(struct ice_vsi *vsi, struct ice_vlan *vlan); }; +static inline bool ice_is_dvm_ena(struct ice_hw __always_unused *hw) +{ + return false; +} + void ice_vsi_init_vlan_ops(struct ice_vsi *vsi); +struct ice_vsi_vlan_ops *ice_get_compat_vsi_vlan_ops(struct ice_vsi *vsi); #endif /* _ICE_VSI_VLAN_OPS_H_ */ -- 2.20.1
From anthony.l.nguyen at intel.com Tue Nov 30 21:21:50 2021
From: anthony.l.nguyen at intel.com (Tony Nguyen)
Date: Tue, 30 Nov 2021 13:21:50 -0800
Subject: [Intel-wired-lan] [PATCH net-next 09/14] ice: Add hot path support for 802.1Q and 802.1ad VLAN offloads
In-Reply-To: <20211130212155.27852-1-anthony.l.nguyen@intel.com>
References: <20211130212155.27852-1-anthony.l.nguyen@intel.com>
Message-ID: <20211130212155.27852-9-anthony.l.nguyen@intel.com>

From: Brett Creeley

Currently the driver only supports 802.1Q VLAN insertion and stripping. However, once Double VLAN Mode (DVM) is fully supported, both 802.1Q and 802.1ad VLAN insertion and stripping will be supported. Unfortunately the VSI context parameters only allow for one VLAN ethertype at a time for VLAN offloads, so only one or the other VLAN ethertype offload can be supported at once.

To support this, multiple changes are needed.

Rx path changes:

[1] In DVM, the Rx queue context l2tagsel field needs to be cleared so the outermost tag shows up in the l2tag2_2nd field of the Rx flex descriptor. In Single VLAN Mode (SVM), the l2tagsel field should remain 1 to support SVM configurations.

[2] Modify the ice_test_staterr() function to take a __le16 instead of the ice_32b_rx_flex_desc union pointer so this function can be used for both rx_desc->wb.status_error0 and rx_desc->wb.status_error1.

[3] Add the new inline function ice_get_vlan_tag_from_rx_desc() that checks if there is a VLAN tag in l2tag1 or l2tag2_2nd.

[4] In ice_receive_skb(), add a check to see if NETIF_F_HW_VLAN_STAG_RX is enabled in netdev->features. If it is, then this is the VLAN ethertype that needs to be added to the stripped VLAN tag. Since ice_fix_features() prevents CTAG_RX and STAG_RX from being enabled simultaneously, the VLAN ethertype will only ever be 802.1Q or 802.1ad.

Tx path changes:

[1] In DVM, the VLAN tag needs to be placed in the l2tag2 field of the Tx context descriptor. The new define ICE_TX_FLAGS_HW_OUTER_SINGLE_VLAN was added to the list of tx_flags to handle this case.

[2] When the stack requests the VLAN tag to be offloaded on Tx, the driver needs to set either ICE_TX_FLAGS_HW_OUTER_SINGLE_VLAN or ICE_TX_FLAGS_HW_VLAN, so the tag is inserted in l2tag2 or l2tag1 respectively. To determine which location to use, set a bit in the Tx ring flags field during ring allocation that can be used to determine which field to use in the Tx descriptor. In DVM, always use l2tag2, and in SVM, always use l2tag1.
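
For reviewers, Rx changes [3] and [4] combine roughly as in the simplified sketch below. This is an illustration only (the wrapper name ice_rx_vlan_sketch is made up for this mail); the authoritative code is in the ice_txrx_lib.c/.h hunks further down.

/* Illustration only: combined view of Rx changes [3] and [4]. The tag is
 * read from l2tag1 (SVM) or l2tag2_2nd (DVM) by the new helper, and the
 * acceleration ethertype follows whichever of CTAG_RX / STAG_RX is
 * currently enabled on the netdev.
 */
static void ice_rx_vlan_sketch(struct ice_rx_ring *rx_ring,
			       union ice_32b_rx_flex_desc *rx_desc,
			       struct sk_buff *skb)
{
	u16 vlan_tag = ice_get_vlan_tag_from_rx_desc(rx_desc);

	if (vlan_tag & VLAN_VID_MASK) {
		if (rx_ring->netdev->features & NETIF_F_HW_VLAN_CTAG_RX)
			__vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q),
					       vlan_tag);
		else if (rx_ring->netdev->features & NETIF_F_HW_VLAN_STAG_RX)
			__vlan_hwaccel_put_tag(skb, htons(ETH_P_8021AD),
					       vlan_tag);
	}
	napi_gro_receive(&rx_ring->q_vector->napi, skb);
}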
Signed-off-by: Brett Creeley Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/ice/ice_base.c | 18 +++++++++-- drivers/net/ethernet/intel/ice/ice_dcb_lib.c | 8 +++-- .../net/ethernet/intel/ice/ice_lan_tx_rx.h | 2 ++ drivers/net/ethernet/intel/ice/ice_lib.c | 5 ++++ drivers/net/ethernet/intel/ice/ice_txrx.c | 28 +++++++++++------ drivers/net/ethernet/intel/ice/ice_txrx.h | 3 ++ drivers/net/ethernet/intel/ice/ice_txrx_lib.c | 9 ++++-- drivers/net/ethernet/intel/ice/ice_txrx_lib.h | 30 +++++++++++++++++-- drivers/net/ethernet/intel/ice/ice_xsk.c | 6 ++-- 9 files changed, 87 insertions(+), 22 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c index 44bdd0ed1629..9ca0ae2bb1dc 100644 --- a/drivers/net/ethernet/intel/ice/ice_base.c +++ b/drivers/net/ethernet/intel/ice/ice_base.c @@ -406,8 +406,22 @@ static int ice_setup_rx_ctx(struct ice_rx_ring *ring) */ rlan_ctx.crcstrip = 1; - /* L2TSEL flag defines the reported L2 Tags in the receive descriptor */ - rlan_ctx.l2tsel = 1; + /* L2TSEL flag defines the reported L2 Tags in the receive descriptor + * and it needs to remain 1 for non-DVM capable configurations to not + * break backward compatibility for VF drivers. Setting this field to 0 + * will cause the single/outer VLAN tag to be stripped to the L2TAG2_2ND + * field in the Rx descriptor. Setting it to 1 allows the VLAN tag to + * be stripped in L2TAG1 of the Rx descriptor, which is where VFs will + * check for the tag + */ + if (ice_is_dvm_ena(hw)) + if (vsi->type == ICE_VSI_VF && + ice_vf_is_port_vlan_ena(&vsi->back->vf[vsi->vf_id])) + rlan_ctx.l2tsel = 1; + else + rlan_ctx.l2tsel = 0; + else + rlan_ctx.l2tsel = 1; rlan_ctx.dtype = ICE_RX_DTYPE_NO_SPLIT; rlan_ctx.hsplit_0 = ICE_RLAN_RX_HSPLIT_0_NO_SPLIT; diff --git a/drivers/net/ethernet/intel/ice/ice_dcb_lib.c b/drivers/net/ethernet/intel/ice/ice_dcb_lib.c index b94d8daeaa58..add90e75f05c 100644 --- a/drivers/net/ethernet/intel/ice/ice_dcb_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_dcb_lib.c @@ -916,7 +916,8 @@ ice_tx_prepare_vlan_flags_dcb(struct ice_tx_ring *tx_ring, return; /* Insert 802.1p priority into VLAN header */ - if ((first->tx_flags & ICE_TX_FLAGS_HW_VLAN) || + if ((first->tx_flags & ICE_TX_FLAGS_HW_VLAN || + first->tx_flags & ICE_TX_FLAGS_HW_OUTER_SINGLE_VLAN) || skb->priority != TC_PRIO_CONTROL) { first->tx_flags &= ~ICE_TX_FLAGS_VLAN_PR_M; /* Mask the lower 3 bits to set the 802.1p priority */ @@ -925,7 +926,10 @@ ice_tx_prepare_vlan_flags_dcb(struct ice_tx_ring *tx_ring, /* if this is not already set it means a VLAN 0 + priority needs * to be offloaded */ - first->tx_flags |= ICE_TX_FLAGS_HW_VLAN; + if (tx_ring->flags & ICE_TX_FLAGS_RING_VLAN_L2TAG2) + first->tx_flags |= ICE_TX_FLAGS_HW_OUTER_SINGLE_VLAN; + else + first->tx_flags |= ICE_TX_FLAGS_HW_VLAN; } } diff --git a/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h b/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h index d981dc6f2323..a1fc676a4665 100644 --- a/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h +++ b/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h @@ -424,6 +424,8 @@ enum ice_rx_flex_desc_status_error_0_bits { enum ice_rx_flex_desc_status_error_1_bits { /* Note: These are predefined bit offsets */ ICE_RX_FLEX_DESC_STATUS1_NAT_S = 4, + /* [10:5] reserved */ + ICE_RX_FLEX_DESC_STATUS1_L2TAG2P_S = 11, ICE_RX_FLEX_DESC_STATUS1_LAST /* this entry must be last!!! 
*/ }; diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index 6a7f107a43c5..36507f0dc04e 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -1370,6 +1370,7 @@ static void ice_vsi_clear_rings(struct ice_vsi *vsi) */ static int ice_vsi_alloc_rings(struct ice_vsi *vsi) { + bool dvm_ena = ice_is_dvm_ena(&vsi->back->hw); struct ice_pf *pf = vsi->back; struct device *dev; u16 i; @@ -1391,6 +1392,10 @@ static int ice_vsi_alloc_rings(struct ice_vsi *vsi) ring->tx_tstamps = &pf->ptp.port.tx; ring->dev = dev; ring->count = vsi->num_tx_desc; + if (dvm_ena) + ring->flags |= ICE_TX_FLAGS_RING_VLAN_L2TAG2; + else + ring->flags |= ICE_TX_FLAGS_RING_VLAN_L2TAG1; WRITE_ONCE(vsi->tx_rings[i], ring); } diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c index d21f1c946767..3461aa21641a 100644 --- a/drivers/net/ethernet/intel/ice/ice_txrx.c +++ b/drivers/net/ethernet/intel/ice/ice_txrx.c @@ -1073,7 +1073,7 @@ ice_is_non_eop(struct ice_rx_ring *rx_ring, union ice_32b_rx_flex_desc *rx_desc) { /* if we are the last buffer then there is nothing else to do */ #define ICE_RXD_EOF BIT(ICE_RX_FLEX_DESC_STATUS0_EOF_S) - if (likely(ice_test_staterr(rx_desc, ICE_RXD_EOF))) + if (likely(ice_test_staterr(rx_desc->wb.status_error0, ICE_RXD_EOF))) return false; rx_ring->rx_stats.non_eop_descs++; @@ -1135,7 +1135,7 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget) * hardware wrote DD then it will be non-zero */ stat_err_bits = BIT(ICE_RX_FLEX_DESC_STATUS0_DD_S); - if (!ice_test_staterr(rx_desc, stat_err_bits)) + if (!ice_test_staterr(rx_desc->wb.status_error0, stat_err_bits)) break; /* This memory barrier is needed to keep us from reading @@ -1221,14 +1221,13 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget) continue; stat_err_bits = BIT(ICE_RX_FLEX_DESC_STATUS0_RXE_S); - if (unlikely(ice_test_staterr(rx_desc, stat_err_bits))) { + if (unlikely(ice_test_staterr(rx_desc->wb.status_error0, + stat_err_bits))) { dev_kfree_skb_any(skb); continue; } - stat_err_bits = BIT(ICE_RX_FLEX_DESC_STATUS0_L2TAG1P_S); - if (ice_test_staterr(rx_desc, stat_err_bits)) - vlan_tag = le16_to_cpu(rx_desc->wb.l2tag1); + vlan_tag = ice_get_vlan_tag_from_rx_desc(rx_desc); /* pad the skb if needed, to make a valid ethernet frame */ if (eth_skb_pad(skb)) { @@ -1910,12 +1909,16 @@ ice_tx_prepare_vlan_flags(struct ice_tx_ring *tx_ring, struct ice_tx_buf *first) if (!skb_vlan_tag_present(skb) && eth_type_vlan(skb->protocol)) return; - /* currently, we always assume 802.1Q for VLAN insertion as VLAN - * insertion for 802.1AD is not supported + /* the VLAN ethertype/tpid is determined by VSI configuration and netdev + * feature flags, which the driver only allows either 802.1Q or 802.1ad + * VLAN offloads exclusively so we only care about the VLAN ID here */ if (skb_vlan_tag_present(skb)) { first->tx_flags |= skb_vlan_tag_get(skb) << ICE_TX_FLAGS_VLAN_S; - first->tx_flags |= ICE_TX_FLAGS_HW_VLAN; + if (tx_ring->flags & ICE_TX_FLAGS_RING_VLAN_L2TAG2) + first->tx_flags |= ICE_TX_FLAGS_HW_OUTER_SINGLE_VLAN; + else + first->tx_flags |= ICE_TX_FLAGS_HW_VLAN; } ice_tx_prepare_vlan_flags_dcb(tx_ring, first); @@ -2288,6 +2291,13 @@ ice_xmit_frame_ring(struct sk_buff *skb, struct ice_tx_ring *tx_ring) /* prepare the VLAN tagging flags for Tx */ ice_tx_prepare_vlan_flags(tx_ring, first); + if (first->tx_flags & ICE_TX_FLAGS_HW_OUTER_SINGLE_VLAN) { + offload.cd_qw1 |= (u64)(ICE_TX_DESC_DTYPE_CTX 
| + (ICE_TX_CTX_DESC_IL2TAG2 << + ICE_TXD_CTX_QW1_CMD_S)); + offload.cd_l2tag2 = (first->tx_flags & ICE_TX_FLAGS_VLAN_M) >> + ICE_TX_FLAGS_VLAN_S; + } /* set up TSO offload */ tso = ice_tso(first, &offload); diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.h b/drivers/net/ethernet/intel/ice/ice_txrx.h index c56dd1749903..03bbae035de8 100644 --- a/drivers/net/ethernet/intel/ice/ice_txrx.h +++ b/drivers/net/ethernet/intel/ice/ice_txrx.h @@ -123,6 +123,7 @@ static inline int ice_skb_pad(void) #define ICE_TX_FLAGS_IPV4 BIT(5) #define ICE_TX_FLAGS_IPV6 BIT(6) #define ICE_TX_FLAGS_TUNNEL BIT(7) +#define ICE_TX_FLAGS_HW_OUTER_SINGLE_VLAN BIT(8) #define ICE_TX_FLAGS_VLAN_M 0xffff0000 #define ICE_TX_FLAGS_VLAN_PR_M 0xe0000000 #define ICE_TX_FLAGS_VLAN_PR_S 29 @@ -334,6 +335,8 @@ struct ice_tx_ring { spinlock_t tx_lock; u32 txq_teid; /* Added Tx queue TEID */ #define ICE_TX_FLAGS_RING_XDP BIT(0) +#define ICE_TX_FLAGS_RING_VLAN_L2TAG1 BIT(1) +#define ICE_TX_FLAGS_RING_VLAN_L2TAG2 BIT(2) u8 flags; u8 dcb_tc; /* Traffic class of ring */ u8 ptp_tx; diff --git a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c index 84a6a3f9d624..9c37d827ed28 100644 --- a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c @@ -207,9 +207,14 @@ ice_process_skb_fields(struct ice_rx_ring *rx_ring, void ice_receive_skb(struct ice_rx_ring *rx_ring, struct sk_buff *skb, u16 vlan_tag) { - if ((rx_ring->netdev->features & NETIF_F_HW_VLAN_CTAG_RX) && - (vlan_tag & VLAN_VID_MASK)) + netdev_features_t features = rx_ring->netdev->features; + bool non_zero_vlan = !!(vlan_tag & VLAN_VID_MASK); + + if ((features & NETIF_F_HW_VLAN_CTAG_RX) && non_zero_vlan) __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), vlan_tag); + else if ((features & NETIF_F_HW_VLAN_STAG_RX) && non_zero_vlan) + __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021AD), vlan_tag); + napi_gro_receive(&rx_ring->q_vector->napi, skb); } diff --git a/drivers/net/ethernet/intel/ice/ice_txrx_lib.h b/drivers/net/ethernet/intel/ice/ice_txrx_lib.h index 11b6c1601986..46e723f196fd 100644 --- a/drivers/net/ethernet/intel/ice/ice_txrx_lib.h +++ b/drivers/net/ethernet/intel/ice/ice_txrx_lib.h @@ -7,7 +7,7 @@ /** * ice_test_staterr - tests bits in Rx descriptor status and error fields - * @rx_desc: pointer to receive descriptor (in le64 format) + * @status_err_n: Rx descriptor status_error0 or status_error1 bits * @stat_err_bits: value to mask * * This function does some fast chicanery in order to return the * at offset zero. */ static inline bool -ice_test_staterr(union ice_32b_rx_flex_desc *rx_desc, const u16 stat_err_bits) +ice_test_staterr(__le16 status_err_n, const u16 stat_err_bits) { - return !!(rx_desc->wb.status_error0 & cpu_to_le16(stat_err_bits)); + return !!(status_err_n & cpu_to_le16(stat_err_bits)); } static inline __le64 @@ -31,6 +31,30 @@ ice_build_ctob(u64 td_cmd, u64 td_offset, unsigned int size, u64 td_tag) (td_tag << ICE_TXD_QW1_L2TAG1_S)); } +/** + * ice_get_vlan_tag_from_rx_desc - get VLAN tag from Rx flex descriptor + * @rx_desc: Rx 32b flex descriptor with RXDID=2 + * + * The OS and current PF implementation only support stripping a single VLAN tag + * at a time, so there should only ever be 0 or 1 tags in the l2tag* fields. If + * one is found return the tag, else return 0 to mean no VLAN tag was found.
+ */ +static inline u16 +ice_get_vlan_tag_from_rx_desc(union ice_32b_rx_flex_desc *rx_desc) +{ + u16 stat_err_bits; + + stat_err_bits = BIT(ICE_RX_FLEX_DESC_STATUS0_L2TAG1P_S); + if (ice_test_staterr(rx_desc->wb.status_error0, stat_err_bits)) + return le16_to_cpu(rx_desc->wb.l2tag1); + + stat_err_bits = BIT(ICE_RX_FLEX_DESC_STATUS1_L2TAG2P_S); + if (ice_test_staterr(rx_desc->wb.status_error1, stat_err_bits)) + return le16_to_cpu(rx_desc->wb.l2tag2_2nd); + + return 0; +} + /** * ice_xdp_ring_update_tail - Updates the XDP Tx ring tail register * @xdp_ring: XDP Tx ring diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c index ff55cb415b11..5b5fa3df29d5 100644 --- a/drivers/net/ethernet/intel/ice/ice_xsk.c +++ b/drivers/net/ethernet/intel/ice/ice_xsk.c @@ -530,7 +530,7 @@ int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget) rx_desc = ICE_RX_DESC(rx_ring, rx_ring->next_to_clean); stat_err_bits = BIT(ICE_RX_FLEX_DESC_STATUS0_DD_S); - if (!ice_test_staterr(rx_desc, stat_err_bits)) + if (!ice_test_staterr(rx_desc->wb.status_error0, stat_err_bits)) break; /* This memory barrier is needed to keep us from reading @@ -582,9 +582,7 @@ int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget) total_rx_bytes += skb->len; total_rx_packets++; - stat_err_bits = BIT(ICE_RX_FLEX_DESC_STATUS0_L2TAG1P_S); - if (ice_test_staterr(rx_desc, stat_err_bits)) - vlan_tag = le16_to_cpu(rx_desc->wb.l2tag1); + vlan_tag = ice_get_vlan_tag_from_rx_desc(rx_desc); rx_ptype = le16_to_cpu(rx_desc->wb.ptype_flex_flags0) & ICE_RX_FLEX_DESC_PTYPE_M; -- 2.20.1
From anthony.l.nguyen at intel.com Tue Nov 30 21:21:52 2021
From: anthony.l.nguyen at intel.com (Tony Nguyen)
Date: Tue, 30 Nov 2021 13:21:52 -0800
Subject: [Intel-wired-lan] [PATCH net-next 11/14] ice: Support configuring the device to Double VLAN Mode
In-Reply-To: <20211130212155.27852-1-anthony.l.nguyen@intel.com>
References: <20211130212155.27852-1-anthony.l.nguyen@intel.com>
Message-ID: <20211130212155.27852-11-anthony.l.nguyen@intel.com>

In order to support configuring the device in Double VLAN Mode (DVM), the DDP and FW have to support DVM. If both support DVM, the PF that downloads the package needs to update the default recipes, set the VLAN mode, and update boost TCAM entries.

To support updating the default recipes in DVM, add support for updating an existing switch recipe's lkup_idx and mask. This is done by first calling the get recipe AQ (0x0292) with the desired recipe ID. Then, if that is successful, update one of the lookup indices (lkup_idx) and its associated mask if the mask is valid; otherwise the already existing mask will be used.

The VLAN mode of the device has to be configured while the global configuration lock is still held during the DDP download flow, specifically right after the DDP has been downloaded. If supported, the device will default to DVM.
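
The recipe update described above is a read-modify-write through the get/add recipe AQs, driven by the new ice_update_recipe_lkup_idx_params structure. The sketch below is an illustration only (the wrapper name ice_retarget_vlan_recipe_sketch is made up for this mail); the actual set of updates applied in DVM lives in the ice_dvm_dflt_recipes[] table in ice_vlan_mode.c further down.

/* Illustration only: retarget the ICE_SW_LKUP_VLAN recipe at the
 * outer/single VLAN ID field vector index while keeping the recipe's
 * pre-existing mask. ice_update_recipe_lkup_idx() performs the
 * get recipe AQ (0x0292) / modify / add recipe AQ sequence.
 */
static int ice_retarget_vlan_recipe_sketch(struct ice_hw *hw)
{
	struct ice_update_recipe_lkup_idx_params params = {
		.rid = ICE_SW_LKUP_VLAN,
		.fv_idx = ICE_EXTERNAL_VLAN_ID_FV_IDX,
		.lkup_idx = ICE_SW_LKUP_VLAN_LOC_LKUP_IDX,
		.ignore_valid = true,
		.mask = 0,
		.mask_valid = false,	/* keep the recipe's existing mask */
	};

	return ice_update_recipe_lkup_idx(hw, &params);
}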
Co-developed-by: Dan Nowlin Signed-off-by: Dan Nowlin Signed-off-by: Brett Creeley Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/ice/Makefile | 1 + .../net/ethernet/intel/ice/ice_adminq_cmd.h | 64 ++- drivers/net/ethernet/intel/ice/ice_common.c | 49 +- drivers/net/ethernet/intel/ice/ice_common.h | 3 + .../net/ethernet/intel/ice/ice_flex_pipe.c | 290 ++++++++++-- .../net/ethernet/intel/ice/ice_flex_pipe.h | 13 + .../net/ethernet/intel/ice/ice_flex_type.h | 40 ++ drivers/net/ethernet/intel/ice/ice_main.c | 12 + .../ethernet/intel/ice/ice_pf_vsi_vlan_ops.c | 1 + drivers/net/ethernet/intel/ice/ice_switch.c | 75 +++ drivers/net/ethernet/intel/ice/ice_switch.h | 13 + drivers/net/ethernet/intel/ice/ice_type.h | 5 + .../ethernet/intel/ice/ice_vf_vsi_vlan_ops.c | 1 + .../net/ethernet/intel/ice/ice_vlan_mode.c | 439 ++++++++++++++++++ .../net/ethernet/intel/ice/ice_vlan_mode.h | 13 + .../net/ethernet/intel/ice/ice_vsi_vlan_lib.c | 25 +- .../net/ethernet/intel/ice/ice_vsi_vlan_ops.h | 5 - 17 files changed, 990 insertions(+), 59 deletions(-) create mode 100644 drivers/net/ethernet/intel/ice/ice_vlan_mode.c create mode 100644 drivers/net/ethernet/intel/ice/ice_vlan_mode.h diff --git a/drivers/net/ethernet/intel/ice/Makefile b/drivers/net/ethernet/intel/ice/Makefile index 3ece1df919f8..606ff3522bd4 100644 --- a/drivers/net/ethernet/intel/ice/Makefile +++ b/drivers/net/ethernet/intel/ice/Makefile @@ -23,6 +23,7 @@ ice-y := ice_main.o \ ice_vsi_vlan_lib.o \ ice_fdir.o \ ice_ethtool_fdir.o \ + ice_vlan_mode.o \ ice_flex_pipe.o \ ice_flow.o \ ice_idc.o \ diff --git a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h index b638f9e9ecd9..a23a9ea10751 100644 --- a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h +++ b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h @@ -226,6 +226,15 @@ struct ice_aqc_get_sw_cfg_resp_elem { #define ICE_AQC_GET_SW_CONF_RESP_IS_VF BIT(15) }; +/* Set Port parameters, (direct, 0x0203) */ +struct ice_aqc_set_port_params { + __le16 cmd_flags; +#define ICE_AQC_SET_P_PARAMS_DOUBLE_VLAN_ENA BIT(2) + __le16 bad_frame_vsi; + __le16 swid; + u8 reserved[10]; +}; + /* These resource type defines are used for all switch resource * commands where a resource type is required, such as: * Get Resource Allocation command (indirect 0x0204) @@ -283,6 +292,40 @@ struct ice_aqc_alloc_free_res_elem { struct ice_aqc_res_elem elem[]; }; +/* Request buffer for Set VLAN Mode AQ command (indirect 0x020C) */ +struct ice_aqc_set_vlan_mode { + u8 reserved; + u8 l2tag_prio_tagging; +#define ICE_AQ_VLAN_PRIO_TAG_S 0 +#define ICE_AQ_VLAN_PRIO_TAG_M (0x7 << ICE_AQ_VLAN_PRIO_TAG_S) +#define ICE_AQ_VLAN_PRIO_TAG_NOT_SUPPORTED 0x0 +#define ICE_AQ_VLAN_PRIO_TAG_STAG 0x1 +#define ICE_AQ_VLAN_PRIO_TAG_OUTER_CTAG 0x2 +#define ICE_AQ_VLAN_PRIO_TAG_OUTER_VLAN 0x3 +#define ICE_AQ_VLAN_PRIO_TAG_INNER_CTAG 0x4 +#define ICE_AQ_VLAN_PRIO_TAG_MAX 0x4 +#define ICE_AQ_VLAN_PRIO_TAG_ERROR 0x7 + u8 l2tag_reserved[64]; + u8 rdma_packet; +#define ICE_AQ_VLAN_RDMA_TAG_S 0 +#define ICE_AQ_VLAN_RDMA_TAG_M (0x3F << ICE_AQ_VLAN_RDMA_TAG_S) +#define ICE_AQ_SVM_VLAN_RDMA_PKT_FLAG_SETTING 0x10 +#define ICE_AQ_DVM_VLAN_RDMA_PKT_FLAG_SETTING 0x1A + u8 rdma_reserved[2]; + u8 mng_vlan_prot_id; +#define ICE_AQ_VLAN_MNG_PROTOCOL_ID_OUTER 0x10 +#define ICE_AQ_VLAN_MNG_PROTOCOL_ID_INNER 0x11 + u8 prot_id_reserved[30]; +}; + +/* Response buffer for Get VLAN Mode AQ command (indirect 0x020D) */ +struct ice_aqc_get_vlan_mode { + u8 vlan_mode; +#define ICE_AQ_VLAN_MODE_DVM_ENA BIT(0) + u8 
l2tag_prio_tagging; + u8 reserved[98]; +}; + /* Add VSI (indirect 0x0210) * Update VSI (indirect 0x0211) * Get VSI (indirect 0x0212) @@ -494,9 +537,13 @@ struct ice_aqc_add_get_recipe { struct ice_aqc_recipe_content { u8 rid; +#define ICE_AQ_RECIPE_ID_S 0 +#define ICE_AQ_RECIPE_ID_M (0x3F << ICE_AQ_RECIPE_ID_S) #define ICE_AQ_RECIPE_ID_IS_ROOT BIT(7) #define ICE_AQ_SW_ID_LKUP_IDX 0 u8 lkup_indx[5]; +#define ICE_AQ_RECIPE_LKUP_DATA_S 0 +#define ICE_AQ_RECIPE_LKUP_DATA_M (0x3F << ICE_AQ_RECIPE_LKUP_DATA_S) #define ICE_AQ_RECIPE_LKUP_IGNORE BIT(7) #define ICE_AQ_SW_ID_LKUP_MASK 0x00FF __le16 mask[5]; @@ -507,15 +554,25 @@ struct ice_aqc_recipe_content { u8 rsvd0[3]; u8 act_ctrl_join_priority; u8 act_ctrl_fwd_priority; +#define ICE_AQ_RECIPE_FWD_PRIORITY_S 0 +#define ICE_AQ_RECIPE_FWD_PRIORITY_M (0xF << ICE_AQ_RECIPE_FWD_PRIORITY_S) u8 act_ctrl; +#define ICE_AQ_RECIPE_ACT_NEED_PASS_L2 BIT(0) +#define ICE_AQ_RECIPE_ACT_ALLOW_PASS_L2 BIT(1) #define ICE_AQ_RECIPE_ACT_INV_ACT BIT(2) +#define ICE_AQ_RECIPE_ACT_PRUNE_INDX_S 4 +#define ICE_AQ_RECIPE_ACT_PRUNE_INDX_M (0x3 << ICE_AQ_RECIPE_ACT_PRUNE_INDX_S) u8 rsvd1; __le32 dflt_act; +#define ICE_AQ_RECIPE_DFLT_ACT_S 0 +#define ICE_AQ_RECIPE_DFLT_ACT_M (0x7FFFF << ICE_AQ_RECIPE_DFLT_ACT_S) +#define ICE_AQ_RECIPE_DFLT_ACT_VALID BIT(31) }; struct ice_aqc_recipe_data_elem { u8 recipe_indx; u8 resp_bits; +#define ICE_AQ_RECIPE_WAS_UPDATED BIT(0) u8 rsvd0[2]; u8 recipe_bitmap[8]; u8 rsvd1[4]; @@ -1906,7 +1963,7 @@ struct ice_aqc_get_clear_fw_log { }; /* Download Package (indirect 0x0C40) */ -/* Also used for Update Package (indirect 0x0C42) */ +/* Also used for Update Package (indirect 0x0C41 and 0x0C42) */ struct ice_aqc_download_pkg { u8 flags; #define ICE_AQC_DOWNLOAD_PKG_LAST_BUF 0x01 @@ -2032,6 +2089,7 @@ struct ice_aq_desc { struct ice_aqc_sff_eeprom read_write_sff_param; struct ice_aqc_set_port_id_led set_port_id_led; struct ice_aqc_get_sw_cfg get_sw_conf; + struct ice_aqc_set_port_params set_port_params; struct ice_aqc_sw_rules sw_rules; struct ice_aqc_add_get_recipe add_get_recipe; struct ice_aqc_recipe_to_profile recipe_to_profile; @@ -2135,10 +2193,13 @@ enum ice_adminq_opc { /* internal switch commands */ ice_aqc_opc_get_sw_cfg = 0x0200, + ice_aqc_opc_set_port_params = 0x0203, /* Alloc/Free/Get Resources */ ice_aqc_opc_alloc_res = 0x0208, ice_aqc_opc_free_res = 0x0209, + ice_aqc_opc_set_vlan_mode_parameters = 0x020C, + ice_aqc_opc_get_vlan_mode_parameters = 0x020D, /* VSI commands */ ice_aqc_opc_add_vsi = 0x0210, @@ -2230,6 +2291,7 @@ enum ice_adminq_opc { /* package commands */ ice_aqc_opc_download_pkg = 0x0C40, + ice_aqc_opc_upload_section = 0x0C41, ice_aqc_opc_update_pkg = 0x0C42, ice_aqc_opc_get_pkg_info_list = 0x0C43, diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c index 44ed1c9161dc..ede131189a8f 100644 --- a/drivers/net/ethernet/intel/ice/ice_common.c +++ b/drivers/net/ethernet/intel/ice/ice_common.c @@ -1518,16 +1518,27 @@ ice_aq_send_cmd(struct ice_hw *hw, struct ice_aq_desc *desc, void *buf, /* When a package download is in process (i.e. when the firmware's * Global Configuration Lock resource is held), only the Download - * Package, Get Version, Get Package Info List and Release Resource - * (with resource ID set to Global Config Lock) AdminQ commands are - * allowed; all others must block until the package download completes - * and the Global Config Lock is released. See also - * ice_acquire_global_cfg_lock(). 
+ * Package, Get Version, Get Package Info List, Upload Section, + * Update Package, Set Port Parameters, Get/Set VLAN Mode Parameters, + * Add Recipe, Set Recipes to Profile Association, Get Recipe, and Get + * Recipes to Profile Association, and Release Resource (with resource + * ID set to Global Config Lock) AdminQ commands are allowed; all others + * must block until the package download completes and the Global Config + * Lock is released. See also ice_acquire_global_cfg_lock(). */ switch (le16_to_cpu(desc->opcode)) { case ice_aqc_opc_download_pkg: case ice_aqc_opc_get_pkg_info_list: case ice_aqc_opc_get_ver: + case ice_aqc_opc_upload_section: + case ice_aqc_opc_update_pkg: + case ice_aqc_opc_set_port_params: + case ice_aqc_opc_get_vlan_mode_parameters: + case ice_aqc_opc_set_vlan_mode_parameters: + case ice_aqc_opc_add_recipe: + case ice_aqc_opc_recipe_to_profile: + case ice_aqc_opc_get_recipe: + case ice_aqc_opc_get_recipe_to_profile: break; case ice_aqc_opc_release_res: if (le16_to_cpu(cmd->res_id) == ICE_AQC_RES_ID_GLBL_LOCK) @@ -2737,6 +2748,34 @@ void ice_clear_pxe_mode(struct ice_hw *hw) ice_aq_clear_pxe_mode(hw); } +/** + * ice_aq_set_port_params - set physical port parameters. + * @pi: pointer to the port info struct + * @double_vlan: if set double VLAN is enabled + * @cd: pointer to command details structure or NULL + * + * Set Physical port parameters (0x0203) + */ +int +ice_aq_set_port_params(struct ice_port_info *pi, bool double_vlan, + struct ice_sq_cd *cd) + +{ + struct ice_aqc_set_port_params *cmd; + struct ice_hw *hw = pi->hw; + struct ice_aq_desc desc; + u16 cmd_flags = 0; + + cmd = &desc.params.set_port_params; + + ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_port_params); + if (double_vlan) + cmd_flags |= ICE_AQC_SET_P_PARAMS_DOUBLE_VLAN_ENA; + cmd->cmd_flags = cpu_to_le16(cmd_flags); + + return ice_aq_send_cmd(hw, &desc, NULL, 0, cd); +} + /** * ice_get_link_speed_based_on_phy_type - returns link speed * @phy_type_low: lower part of phy_type diff --git a/drivers/net/ethernet/intel/ice/ice_common.h b/drivers/net/ethernet/intel/ice/ice_common.h index 209a3cc113d4..893333b8b738 100644 --- a/drivers/net/ethernet/intel/ice/ice_common.h +++ b/drivers/net/ethernet/intel/ice/ice_common.h @@ -85,6 +85,9 @@ int ice_aq_send_driver_ver(struct ice_hw *hw, struct ice_driver_ver *dv, struct ice_sq_cd *cd); int +ice_aq_set_port_params(struct ice_port_info *pi, bool double_vlan, + struct ice_sq_cd *cd); +int ice_aq_get_phy_caps(struct ice_port_info *pi, bool qual_mods, u8 report_mode, struct ice_aqc_get_phy_caps_data *caps, struct ice_sq_cd *cd); diff --git a/drivers/net/ethernet/intel/ice/ice_flex_pipe.c b/drivers/net/ethernet/intel/ice/ice_flex_pipe.c index b197d3a72014..434169351052 100644 --- a/drivers/net/ethernet/intel/ice/ice_flex_pipe.c +++ b/drivers/net/ethernet/intel/ice/ice_flex_pipe.c @@ -5,9 +5,17 @@ #include "ice_flex_pipe.h" #include "ice_flow.h" +/* For supporting double VLAN mode, it is necessary to enable or disable certain + * boost tcam entries. The metadata labels names that match the following + * prefixes will be saved to allow enabling double VLAN mode. + */ +#define ICE_DVM_PRE "BOOST_MAC_VLAN_DVM" /* enable these entries */ +#define ICE_SVM_PRE "BOOST_MAC_VLAN_SVM" /* disable these entries */ + /* To support tunneling entries by PF, the package will append the PF number to * the label; for example TNL_VXLAN_PF0, TNL_VXLAN_PF1, TNL_VXLAN_PF2, etc. 
*/ +#define ICE_TNL_PRE "TNL_" static const struct ice_tunnel_type_scan tnls[] = { { TNL_VXLAN, "TNL_VXLAN_PF" }, { TNL_GENEVE, "TNL_GENEVE_PF" }, @@ -525,6 +533,55 @@ ice_enum_labels(struct ice_seg *ice_seg, u32 type, struct ice_pkg_enum *state, return label->name; } +/** + * ice_add_tunnel_hint + * @hw: pointer to the HW structure + * @label_name: label text + * @val: value of the tunnel port boost entry + */ +static void ice_add_tunnel_hint(struct ice_hw *hw, char *label_name, u16 val) +{ + if (hw->tnl.count < ICE_TUNNEL_MAX_ENTRIES) { + u16 i; + + for (i = 0; tnls[i].type != TNL_LAST; i++) { + size_t len = strlen(tnls[i].label_prefix); + + /* Look for matching label start, before continuing */ + if (strncmp(label_name, tnls[i].label_prefix, len)) + continue; + + /* Make sure this label matches our PF. Note that the PF + * character ('0' - '7') will be located where our + * prefix string's null terminator is located. + */ + if ((label_name[len] - '0') == hw->pf_id) { + hw->tnl.tbl[hw->tnl.count].type = tnls[i].type; + hw->tnl.tbl[hw->tnl.count].valid = false; + hw->tnl.tbl[hw->tnl.count].boost_addr = val; + hw->tnl.tbl[hw->tnl.count].port = 0; + hw->tnl.count++; + break; + } + } + } +} + +/** + * ice_add_dvm_hint + * @hw: pointer to the HW structure + * @val: value of the boost entry + * @enable: true if entry needs to be enabled, or false if needs to be disabled + */ +static void ice_add_dvm_hint(struct ice_hw *hw, u16 val, bool enable) +{ + if (hw->dvm_upd.count < ICE_DVM_MAX_ENTRIES) { + hw->dvm_upd.tbl[hw->dvm_upd.count].boost_addr = val; + hw->dvm_upd.tbl[hw->dvm_upd.count].enable = enable; + hw->dvm_upd.count++; + } +} + /** * ice_init_pkg_hints * @hw: pointer to the HW structure @@ -551,32 +608,23 @@ static void ice_init_pkg_hints(struct ice_hw *hw, struct ice_seg *ice_seg) label_name = ice_enum_labels(ice_seg, ICE_SID_LBL_RXPARSER_TMEM, &state, &val); - while (label_name && hw->tnl.count < ICE_TUNNEL_MAX_ENTRIES) { - for (i = 0; tnls[i].type != TNL_LAST; i++) { - size_t len = strlen(tnls[i].label_prefix); + while (label_name) { + if (!strncmp(label_name, ICE_TNL_PRE, strlen(ICE_TNL_PRE))) + /* check for a tunnel entry */ + ice_add_tunnel_hint(hw, label_name, val); - /* Look for matching label start, before continuing */ - if (strncmp(label_name, tnls[i].label_prefix, len)) - continue; + /* check for a dvm mode entry */ + else if (!strncmp(label_name, ICE_DVM_PRE, strlen(ICE_DVM_PRE))) + ice_add_dvm_hint(hw, val, true); - /* Make sure this label matches our PF. Note that the PF - * character ('0' - '7') will be located where our - * prefix string's null terminator is located. 
- */ - if ((label_name[len] - '0') == hw->pf_id) { - hw->tnl.tbl[hw->tnl.count].type = tnls[i].type; - hw->tnl.tbl[hw->tnl.count].valid = false; - hw->tnl.tbl[hw->tnl.count].boost_addr = val; - hw->tnl.tbl[hw->tnl.count].port = 0; - hw->tnl.count++; - break; - } - } + /* check for a svm mode entry */ + else if (!strncmp(label_name, ICE_SVM_PRE, strlen(ICE_SVM_PRE))) + ice_add_dvm_hint(hw, val, false); label_name = ice_enum_labels(NULL, 0, &state, &val); } - /* Cache the appropriate boost TCAM entry pointers */ + /* Cache the appropriate boost TCAM entry pointers for tunnels */ for (i = 0; i < hw->tnl.count; i++) { ice_find_boost_entry(ice_seg, hw->tnl.tbl[i].boost_addr, &hw->tnl.tbl[i].boost_entry); @@ -586,6 +634,11 @@ static void ice_init_pkg_hints(struct ice_hw *hw, struct ice_seg *ice_seg) hw->tnl.valid_count[hw->tnl.tbl[i].type]++; } } + + /* Cache the appropriate boost TCAM entry pointers for DVM and SVM */ + for (i = 0; i < hw->dvm_upd.count; i++) + ice_find_boost_entry(ice_seg, hw->dvm_upd.tbl[i].boost_addr, + &hw->dvm_upd.tbl[i].boost_entry); } /* Key creation */ @@ -876,6 +929,27 @@ ice_aq_download_pkg(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf, return status; } +/** + * ice_aq_upload_section + * @hw: pointer to the hardware structure + * @pkg_buf: the package buffer which will receive the section + * @buf_size: the size of the package buffer + * @cd: pointer to command details structure or NULL + * + * Upload Section (0x0C41) + */ +int +ice_aq_upload_section(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf, + u16 buf_size, struct ice_sq_cd *cd) +{ + struct ice_aq_desc desc; + + ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_upload_section); + desc.flags |= cpu_to_le16(ICE_AQ_FLAG_RD); + + return ice_aq_send_cmd(hw, &desc, pkg_buf, buf_size, cd); +} + /** * ice_aq_update_pkg * @hw: pointer to the hardware structure @@ -960,26 +1034,21 @@ ice_find_seg_in_pkg(struct ice_hw *hw, u32 seg_type, } /** - * ice_update_pkg + * ice_update_pkg_no_lock * @hw: pointer to the hardware structure * @bufs: pointer to an array of buffers * @count: the number of buffers in the array - * - * Obtains change lock and updates package. */ static int -ice_update_pkg(struct ice_hw *hw, struct ice_buf *bufs, u32 count) +ice_update_pkg_no_lock(struct ice_hw *hw, struct ice_buf *bufs, u32 count) { - u32 offset, info, i; - int status; - - status = ice_acquire_change_lock(hw, ICE_RES_WRITE); - if (status) - return status; + int status = 0; + u32 i; for (i = 0; i < count; i++) { struct ice_buf_hdr *bh = (struct ice_buf_hdr *)(bufs + i); bool last = ((i + 1) == count); + u32 offset, info; status = ice_aq_update_pkg(hw, bh, le16_to_cpu(bh->data_end), last, &offset, &info, NULL); @@ -991,6 +1060,27 @@ ice_update_pkg(struct ice_hw *hw, struct ice_buf *bufs, u32 count) } } + return status; +} + +/** + * ice_update_pkg + * @hw: pointer to the hardware structure + * @bufs: pointer to an array of buffers + * @count: the number of buffers in the array + * + * Obtains change lock and updates package. 
+ */ +static int ice_update_pkg(struct ice_hw *hw, struct ice_buf *bufs, u32 count) +{ + int status; + + status = ice_acquire_change_lock(hw, ICE_RES_WRITE); + if (status) + return status; + + status = ice_update_pkg_no_lock(hw, bufs, count); + ice_release_change_lock(hw); return status; @@ -1085,6 +1175,13 @@ ice_dwnld_cfg_bufs(struct ice_hw *hw, struct ice_buf *bufs, u32 count) break; } + if (!status) { + status = ice_set_vlan_mode(hw); + if (status) + ice_debug(hw, ICE_DBG_PKG, "Failed to set VLAN mode: err %d\n", + status); + } + ice_release_global_cfg_lock(hw); return state; @@ -1122,6 +1219,7 @@ static enum ice_ddp_state ice_download_pkg(struct ice_hw *hw, struct ice_seg *ice_seg) { struct ice_buf_table *ice_buf_tbl; + int status; ice_debug(hw, ICE_DBG_PKG, "Segment format version: %d.%d.%d.%d\n", ice_seg->hdr.seg_format_ver.major, @@ -1138,8 +1236,12 @@ ice_download_pkg(struct ice_hw *hw, struct ice_seg *ice_seg) ice_debug(hw, ICE_DBG_PKG, "Seg buf count: %d\n", le32_to_cpu(ice_buf_tbl->buf_count)); - return ice_dwnld_cfg_bufs(hw, ice_buf_tbl->buf_array, - le32_to_cpu(ice_buf_tbl->buf_count)); + status = ice_dwnld_cfg_bufs(hw, ice_buf_tbl->buf_array, + le32_to_cpu(ice_buf_tbl->buf_count)); + + ice_post_pkg_dwnld_vlan_mode_cfg(hw); + + return status; } /** @@ -1902,7 +2004,7 @@ void ice_init_prof_result_bm(struct ice_hw *hw) * * Frees a package buffer */ -static void ice_pkg_buf_free(struct ice_hw *hw, struct ice_buf_build *bld) +void ice_pkg_buf_free(struct ice_hw *hw, struct ice_buf_build *bld) { devm_kfree(ice_hw_to_dev(hw), bld); } @@ -2001,6 +2103,43 @@ ice_pkg_buf_alloc_section(struct ice_buf_build *bld, u32 type, u16 size) return NULL; } +/** + * ice_pkg_buf_alloc_single_section + * @hw: pointer to the HW structure + * @type: the section type value + * @size: the size of the section to reserve (in bytes) + * @section: returns pointer to the section + * + * Allocates a package buffer with a single section. + * Note: all package contents must be in Little Endian form. 
+ */ +struct ice_buf_build * +ice_pkg_buf_alloc_single_section(struct ice_hw *hw, u32 type, u16 size, + void **section) +{ + struct ice_buf_build *buf; + + if (!section) + return NULL; + + buf = ice_pkg_buf_alloc(hw); + if (!buf) + return NULL; + + if (ice_pkg_buf_reserve_section(buf, 1)) + goto ice_pkg_buf_alloc_single_section_err; + + *section = ice_pkg_buf_alloc_section(buf, type, size); + if (!*section) + goto ice_pkg_buf_alloc_single_section_err; + + return buf; + +ice_pkg_buf_alloc_single_section_err: + ice_pkg_buf_free(hw, buf); + return NULL; +} + /** * ice_pkg_buf_get_active_sections * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc()) @@ -2028,7 +2167,7 @@ static u16 ice_pkg_buf_get_active_sections(struct ice_buf_build *bld) * * Return a pointer to the buffer's header */ -static struct ice_buf *ice_pkg_buf(struct ice_buf_build *bld) +struct ice_buf *ice_pkg_buf(struct ice_buf_build *bld) { if (!bld) return NULL; @@ -2064,6 +2203,89 @@ ice_get_open_tunnel_port(struct ice_hw *hw, u16 *port, return res; } +/** + * ice_upd_dvm_boost_entry + * @hw: pointer to the HW structure + * @entry: pointer to double vlan boost entry info + */ +static int +ice_upd_dvm_boost_entry(struct ice_hw *hw, struct ice_dvm_entry *entry) +{ + struct ice_boost_tcam_section *sect_rx, *sect_tx; + int status = -ENOSPC; + struct ice_buf_build *bld; + u8 val, dc, nm; + + bld = ice_pkg_buf_alloc(hw); + if (!bld) + return -ENOMEM; + + /* allocate 2 sections, one for Rx parser, one for Tx parser */ + if (ice_pkg_buf_reserve_section(bld, 2)) + goto ice_upd_dvm_boost_entry_err; + + sect_rx = ice_pkg_buf_alloc_section(bld, ICE_SID_RXPARSER_BOOST_TCAM, + struct_size(sect_rx, tcam, 1)); + if (!sect_rx) + goto ice_upd_dvm_boost_entry_err; + sect_rx->count = cpu_to_le16(1); + + sect_tx = ice_pkg_buf_alloc_section(bld, ICE_SID_TXPARSER_BOOST_TCAM, + struct_size(sect_tx, tcam, 1)); + if (!sect_tx) + goto ice_upd_dvm_boost_entry_err; + sect_tx->count = cpu_to_le16(1); + + /* copy original boost entry to update package buffer */ + memcpy(sect_rx->tcam, entry->boost_entry, sizeof(*sect_rx->tcam)); + + /* re-write the don't care and never match bits accordingly */ + if (entry->enable) { + /* all bits are don't care */ + val = 0x00; + dc = 0xFF; + nm = 0x00; + } else { + /* disable, one never match bit, the rest are don't care */ + val = 0x00; + dc = 0xF7; + nm = 0x08; + } + + ice_set_key((u8 *)§_rx->tcam[0].key, sizeof(sect_rx->tcam[0].key), + &val, NULL, &dc, &nm, 0, sizeof(u8)); + + /* exact copy of entry to Tx section entry */ + memcpy(sect_tx->tcam, sect_rx->tcam, sizeof(*sect_tx->tcam)); + + status = ice_update_pkg_no_lock(hw, ice_pkg_buf(bld), 1); + +ice_upd_dvm_boost_entry_err: + ice_pkg_buf_free(hw, bld); + + return status; +} + +/** + * ice_set_dvm_boost_entries + * @hw: pointer to the HW structure + * + * Enable double vlan by updating the appropriate boost tcam entries. 
+ */ +int ice_set_dvm_boost_entries(struct ice_hw *hw) +{ + int status; + u16 i; + + for (i = 0; i < hw->dvm_upd.count; i++) { + status = ice_upd_dvm_boost_entry(hw, &hw->dvm_upd.tbl[i]); + if (status) + return status; + } + + return 0; +} + /** * ice_tunnel_idx_to_entry - convert linear index to the sparse one * @hw: pointer to the HW structure diff --git a/drivers/net/ethernet/intel/ice/ice_flex_pipe.h b/drivers/net/ethernet/intel/ice/ice_flex_pipe.h index dd602285c78e..4f0b151e9e9c 100644 --- a/drivers/net/ethernet/intel/ice/ice_flex_pipe.h +++ b/drivers/net/ethernet/intel/ice/ice_flex_pipe.h @@ -89,6 +89,12 @@ ice_init_prof_result_bm(struct ice_hw *hw); int ice_get_sw_fv_list(struct ice_hw *hw, u8 *prot_ids, u16 ids_cnt, unsigned long *bm, struct list_head *fv_list); +int +ice_pkg_buf_unreserve_section(struct ice_buf_build *bld, u16 count); +u16 ice_pkg_buf_get_free_space(struct ice_buf_build *bld); +int +ice_aq_upload_section(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf, + u16 buf_size, struct ice_sq_cd *cd); bool ice_get_open_tunnel_port(struct ice_hw *hw, u16 *port, enum ice_tunnel_type type); @@ -96,6 +102,7 @@ int ice_udp_tunnel_set_port(struct net_device *netdev, unsigned int table, unsigned int idx, struct udp_tunnel_info *ti); int ice_udp_tunnel_unset_port(struct net_device *netdev, unsigned int table, unsigned int idx, struct udp_tunnel_info *ti); +int ice_set_dvm_boost_entries(struct ice_hw *hw); /* Rx parser PTYPE functions */ bool ice_hw_ptype_ena(struct ice_hw *hw, u16 ptype); @@ -120,4 +127,10 @@ void ice_clear_hw_tbls(struct ice_hw *hw); void ice_free_hw_tbls(struct ice_hw *hw); int ice_rem_prof(struct ice_hw *hw, enum ice_block blk, u64 id); +struct ice_buf_build * +ice_pkg_buf_alloc_single_section(struct ice_hw *hw, u32 type, u16 size, + void **section); +struct ice_buf *ice_pkg_buf(struct ice_buf_build *bld); +void ice_pkg_buf_free(struct ice_hw *hw, struct ice_buf_build *bld); + #endif /* _ICE_FLEX_PIPE_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_flex_type.h b/drivers/net/ethernet/intel/ice/ice_flex_type.h index fc087e0b5292..5735e9542a49 100644 --- a/drivers/net/ethernet/intel/ice/ice_flex_type.h +++ b/drivers/net/ethernet/intel/ice/ice_flex_type.h @@ -162,6 +162,7 @@ struct ice_meta_sect { #define ICE_SID_RXPARSER_MARKER_PTYPE 55 #define ICE_SID_RXPARSER_BOOST_TCAM 56 +#define ICE_SID_RXPARSER_METADATA_INIT 58 #define ICE_SID_TXPARSER_BOOST_TCAM 66 #define ICE_SID_XLT0_PE 80 @@ -442,6 +443,19 @@ struct ice_tunnel_table { u16 valid_count[__TNL_TYPE_CNT]; }; +struct ice_dvm_entry { + u16 boost_addr; + u16 enable; + struct ice_boost_tcam_entry *boost_entry; +}; + +#define ICE_DVM_MAX_ENTRIES 48 + +struct ice_dvm_table { + struct ice_dvm_entry tbl[ICE_DVM_MAX_ENTRIES]; + u16 count; +}; + struct ice_pkg_es { __le16 count; __le16 offset; @@ -662,4 +676,30 @@ enum ice_prof_type { ICE_PROF_TUN_ALL = 0x6, ICE_PROF_ALL = 0xFF, }; + +/* Number of bits/bytes contained in meta init entry. Note, this should be a + * multiple of 32 bits. 
+ */ +#define ICE_META_INIT_BITS 192 +#define ICE_META_INIT_DW_CNT (ICE_META_INIT_BITS / (sizeof(__le32) * \ + BITS_PER_BYTE)) + +/* The meta init Flag field starts at this bit */ +#define ICE_META_FLAGS_ST 123 + +/* The entry and bit to check for Double VLAN Mode (DVM) support */ +#define ICE_META_VLAN_MODE_ENTRY 0 +#define ICE_META_FLAG_VLAN_MODE 60 +#define ICE_META_VLAN_MODE_BIT (ICE_META_FLAGS_ST + \ + ICE_META_FLAG_VLAN_MODE) + +struct ice_meta_init_entry { + __le32 bm[ICE_META_INIT_DW_CNT]; +}; + +struct ice_meta_init_section { + __le16 count; + __le16 offset; + struct ice_meta_init_entry entry; +}; #endif /* _ICE_FLEX_TYPE_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c index ff2b721e0e45..563b597b0a85 100644 --- a/drivers/net/ethernet/intel/ice/ice_main.c +++ b/drivers/net/ethernet/intel/ice/ice_main.c @@ -3555,12 +3555,17 @@ static int ice_tc_indir_block_register(struct ice_vsi *vsi) static int ice_setup_pf_sw(struct ice_pf *pf) { struct device *dev = ice_pf_to_dev(pf); + bool dvm = ice_is_dvm_ena(&pf->hw); struct ice_vsi *vsi; int status; if (ice_is_reset_in_progress(pf->state)) return -EBUSY; + status = ice_aq_set_port_params(pf->hw.port_info, dvm, NULL); + if (status) + return -EIO; + vsi = ice_pf_vsi_setup(pf, pf->hw.port_info); if (!vsi) return -ENOMEM; @@ -6649,6 +6654,7 @@ static void ice_rebuild(struct ice_pf *pf, enum ice_reset_req reset_type) { struct device *dev = ice_pf_to_dev(pf); struct ice_hw *hw = &pf->hw; + bool dvm; int err; if (test_bit(ICE_DOWN, pf->state)) @@ -6712,6 +6718,12 @@ static void ice_rebuild(struct ice_pf *pf, enum ice_reset_req reset_type) goto err_init_ctrlq; } + dvm = ice_is_dvm_ena(hw); + + err = ice_aq_set_port_params(pf->hw.port_info, dvm, NULL); + if (err) + goto err_init_ctrlq; + err = ice_sched_init_port(hw->port_info); if (err) goto err_sched_init_port; diff --git a/drivers/net/ethernet/intel/ice/ice_pf_vsi_vlan_ops.c b/drivers/net/ethernet/intel/ice/ice_pf_vsi_vlan_ops.c index b00360ca6e92..976a03d3bdd5 100644 --- a/drivers/net/ethernet/intel/ice/ice_pf_vsi_vlan_ops.c +++ b/drivers/net/ethernet/intel/ice/ice_pf_vsi_vlan_ops.c @@ -3,6 +3,7 @@ #include "ice_vsi_vlan_ops.h" #include "ice_vsi_vlan_lib.h" +#include "ice_vlan_mode.h" #include "ice.h" #include "ice_pf_vsi_vlan_ops.h" diff --git a/drivers/net/ethernet/intel/ice/ice_switch.c b/drivers/net/ethernet/intel/ice/ice_switch.c index f851a81a7240..04308e5fa224 100644 --- a/drivers/net/ethernet/intel/ice/ice_switch.c +++ b/drivers/net/ethernet/intel/ice/ice_switch.c @@ -1096,6 +1096,64 @@ ice_aq_get_recipe(struct ice_hw *hw, return status; } +/** + * ice_update_recipe_lkup_idx - update a default recipe based on the lkup_idx + * @hw: pointer to the HW struct + * @params: parameters used to update the default recipe + * + * This function only supports updating default recipes and it only supports + * updating a single recipe based on the lkup_idx at a time. + * + * This is done as a read-modify-write operation. First, get the current recipe + * contents based on the recipe's ID. Then modify the field vector index and + * mask if it's valid at the lkup_idx. Finally, use the add recipe AQ to update + * the pre-existing recipe with the modifications. 
+ */ +int +ice_update_recipe_lkup_idx(struct ice_hw *hw, + struct ice_update_recipe_lkup_idx_params *params) +{ + struct ice_aqc_recipe_data_elem *rcp_list; + u16 num_recps = ICE_MAX_NUM_RECIPES; + int status; + + rcp_list = kcalloc(num_recps, sizeof(*rcp_list), GFP_KERNEL); + if (!rcp_list) + return -ENOMEM; + + /* read current recipe list from firmware */ + rcp_list->recipe_indx = params->rid; + status = ice_aq_get_recipe(hw, rcp_list, &num_recps, params->rid, NULL); + if (status) { + ice_debug(hw, ICE_DBG_SW, "Failed to get recipe %d, status %d\n", + params->rid, status); + goto error_out; + } + + /* only modify existing recipe's lkup_idx and mask if valid, while + * leaving all other fields the same, then update the recipe firmware + */ + rcp_list->content.lkup_indx[params->lkup_idx] = params->fv_idx; + if (params->mask_valid) + rcp_list->content.mask[params->lkup_idx] = + cpu_to_le16(params->mask); + + if (params->ignore_valid) + rcp_list->content.lkup_indx[params->lkup_idx] |= + ICE_AQ_RECIPE_LKUP_IGNORE; + + status = ice_aq_add_recipe(hw, &rcp_list[0], 1, NULL); + if (status) + ice_debug(hw, ICE_DBG_SW, "Failed to update recipe %d lkup_idx %d fv_idx %d mask %d mask_valid %s, status %d\n", + params->rid, params->lkup_idx, params->fv_idx, + params->mask, params->mask_valid ? "true" : "false", + status); + +error_out: + kfree(rcp_list); + return status; +} + /** * ice_aq_map_recipe_to_profile - Map recipe to packet profile * @hw: pointer to the HW struct @@ -3873,6 +3931,23 @@ ice_find_recp(struct ice_hw *hw, struct ice_prot_lkup_ext *lkup_exts, return ICE_MAX_NUM_RECIPES; } +/** + * ice_change_proto_id_to_dvm - change proto id in prot_id_tbl + * + * As the protocol id for the outer vlan is different in dvm and svm, if dvm is + * supported, the protocol array record for the outer vlan has to be modified to + * reflect the proper value for DVM.
+ */ +void ice_change_proto_id_to_dvm(void) +{ + u8 i; + + for (i = 0; i < ARRAY_SIZE(ice_prot_id_tbl); i++) + if (ice_prot_id_tbl[i].type == ICE_VLAN_OFOS && + ice_prot_id_tbl[i].protocol_id != ICE_VLAN_OF_HW) + ice_prot_id_tbl[i].protocol_id = ICE_VLAN_OF_HW; +} + /** * ice_prot_type_to_id - get protocol ID from protocol type * @type: protocol type diff --git a/drivers/net/ethernet/intel/ice/ice_switch.h b/drivers/net/ethernet/intel/ice/ice_switch.h index 5000cc8276cd..7b42c51a3eb0 100644 --- a/drivers/net/ethernet/intel/ice/ice_switch.h +++ b/drivers/net/ethernet/intel/ice/ice_switch.h @@ -118,6 +118,15 @@ struct ice_fltr_info { u8 lan_en; /* Indicate if packet can be forwarded to the uplink */ }; +struct ice_update_recipe_lkup_idx_params { + u16 rid; + u16 fv_idx; + bool ignore_valid; + u16 mask; + bool mask_valid; + u8 lkup_idx; +}; + struct ice_adv_lkup_elem { enum ice_protocol_type type; union ice_prot_hdr h_u; /* Header values */ @@ -360,4 +369,8 @@ void ice_rm_all_sw_replay_rule_info(struct ice_hw *hw); int ice_aq_sw_rules(struct ice_hw *hw, void *rule_list, u16 rule_list_sz, u8 num_rules, enum ice_adminq_opc opc, struct ice_sq_cd *cd); +int +ice_update_recipe_lkup_idx(struct ice_hw *hw, + struct ice_update_recipe_lkup_idx_params *params); +void ice_change_proto_id_to_dvm(void); #endif /* _ICE_SWITCH_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_type.h b/drivers/net/ethernet/intel/ice/ice_type.h index ef2ef064a74c..1800aee88b9b 100644 --- a/drivers/net/ethernet/intel/ice/ice_type.h +++ b/drivers/net/ethernet/intel/ice/ice_type.h @@ -14,6 +14,7 @@ #include "ice_flex_type.h" #include "ice_protocol_type.h" #include "ice_sbq_cmd.h" +#include "ice_vlan_mode.h" static inline bool ice_is_tc_ena(unsigned long bitmap, u8 tc) { @@ -919,6 +920,9 @@ struct ice_hw { struct udp_tunnel_nic_shared udp_tunnel_shared; struct udp_tunnel_nic_info udp_tunnel_nic; + /* dvm boost update information */ + struct ice_dvm_table dvm_upd; + /* HW block tables */ struct ice_blk_info blk[ICE_BLK_COUNT]; struct mutex fl_profs_locks[ICE_BLK_COUNT]; /* lock fltr profiles */ @@ -942,6 +946,7 @@ struct ice_hw { struct list_head rss_list_head; struct ice_mbx_snapshot mbx_snapshot; DECLARE_BITMAP(hw_ptype, ICE_FLOW_PTYPE_MAX); + u8 dvm_ena; u16 io_expander_handle; }; diff --git a/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c b/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c index d89577843d68..4be29f97365c 100644 --- a/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c +++ b/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c @@ -3,6 +3,7 @@ #include "ice_vsi_vlan_ops.h" #include "ice_vsi_vlan_lib.h" +#include "ice_vlan_mode.h" #include "ice.h" #include "ice_vf_vsi_vlan_ops.h" #include "ice_virtchnl_pf.h" diff --git a/drivers/net/ethernet/intel/ice/ice_vlan_mode.c b/drivers/net/ethernet/intel/ice/ice_vlan_mode.c new file mode 100644 index 000000000000..1b618de592b7 --- /dev/null +++ b/drivers/net/ethernet/intel/ice/ice_vlan_mode.c @@ -0,0 +1,439 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (C) 2019-2021, Intel Corporation. 
*/ + +#include "ice_common.h" + +/** + * ice_pkg_get_supported_vlan_mode - determine if DDP supports Double VLAN mode + * @hw: pointer to the HW struct + * @dvm: output variable to determine if DDP supports DVM(true) or SVM(false) + */ +static int +ice_pkg_get_supported_vlan_mode(struct ice_hw *hw, bool *dvm) +{ + u16 meta_init_size = sizeof(struct ice_meta_init_section); + struct ice_meta_init_section *sect; + struct ice_buf_build *bld; + int status; + + /* if anything fails, we assume there is no DVM support */ + *dvm = false; + + bld = ice_pkg_buf_alloc_single_section(hw, + ICE_SID_RXPARSER_METADATA_INIT, + meta_init_size, (void **)§); + if (!bld) + return -ENOMEM; + + /* only need to read a single section */ + sect->count = cpu_to_le16(1); + sect->offset = cpu_to_le16(ICE_META_VLAN_MODE_ENTRY); + + status = ice_aq_upload_section(hw, + (struct ice_buf_hdr *)ice_pkg_buf(bld), + ICE_PKG_BUF_SIZE, NULL); + if (!status) { + DECLARE_BITMAP(entry, ICE_META_INIT_BITS); + u32 arr[ICE_META_INIT_DW_CNT]; + u16 i; + + /* convert to host bitmap format */ + for (i = 0; i < ICE_META_INIT_DW_CNT; i++) + arr[i] = le32_to_cpu(sect->entry.bm[i]); + + bitmap_from_arr32(entry, arr, (u16)ICE_META_INIT_BITS); + + /* check if DVM is supported */ + *dvm = test_bit(ICE_META_VLAN_MODE_BIT, entry); + } + + ice_pkg_buf_free(hw, bld); + + return status; +} + +/** + * ice_aq_get_vlan_mode - get the VLAN mode of the device + * @hw: pointer to the HW structure + * @get_params: structure FW fills in based on the current VLAN mode config + * + * Get VLAN Mode Parameters (0x020D) + */ +static int +ice_aq_get_vlan_mode(struct ice_hw *hw, + struct ice_aqc_get_vlan_mode *get_params) +{ + struct ice_aq_desc desc; + + if (!get_params) + return -EINVAL; + + ice_fill_dflt_direct_cmd_desc(&desc, + ice_aqc_opc_get_vlan_mode_parameters); + + return ice_aq_send_cmd(hw, &desc, get_params, sizeof(*get_params), + NULL); +} + +/** + * ice_aq_is_dvm_ena - query FW to check if double VLAN mode is enabled + * @hw: pointer to the HW structure + * + * Returns true if the hardware/firmware is configured in double VLAN mode, + * else return false signaling that the hardware/firmware is configured in + * single VLAN mode. + * + * Also, return false if this call fails for any reason (i.e. firmware doesn't + * support this AQ call). + */ +static bool ice_aq_is_dvm_ena(struct ice_hw *hw) +{ + struct ice_aqc_get_vlan_mode get_params = { 0 }; + int status; + + status = ice_aq_get_vlan_mode(hw, &get_params); + if (status) { + ice_debug(hw, ICE_DBG_AQ, "Failed to get VLAN mode, status %d\n", + status); + return false; + } + + return (get_params.vlan_mode & ICE_AQ_VLAN_MODE_DVM_ENA); +} + +/** + * ice_is_dvm_ena - check if double VLAN mode is enabled + * @hw: pointer to the HW structure + * + * The device is configured in single or double VLAN mode on initialization and + * this cannot be dynamically changed during runtime. Based on this there is no + * need to make an AQ call every time the driver needs to know the VLAN mode. + * Instead, use the cached VLAN mode. + */ +bool ice_is_dvm_ena(struct ice_hw *hw) +{ + return hw->dvm_ena; +} + +/** + * ice_cache_vlan_mode - cache VLAN mode after DDP is downloaded + * @hw: pointer to the HW structure + * + * This is only called after downloading the DDP and after the global + * configuration lock has been released because all ports on a device need to + * cache the VLAN mode. + */ +static void ice_cache_vlan_mode(struct ice_hw *hw) +{ + hw->dvm_ena = ice_aq_is_dvm_ena(hw) ? 
true : false; +} + +/** + * ice_pkg_supports_dvm - find out if DDP supports DVM + * @hw: pointer to the HW structure + */ +static bool ice_pkg_supports_dvm(struct ice_hw *hw) +{ + bool pkg_supports_dvm; + int status; + + status = ice_pkg_get_supported_vlan_mode(hw, &pkg_supports_dvm); + if (status) { + ice_debug(hw, ICE_DBG_PKG, "Failed to get supported VLAN mode, status %d\n", + status); + return false; + } + + return pkg_supports_dvm; +} + +/** + * ice_fw_supports_dvm - find out if FW supports DVM + * @hw: pointer to the HW structure + */ +static bool ice_fw_supports_dvm(struct ice_hw *hw) +{ + struct ice_aqc_get_vlan_mode get_vlan_mode = { 0 }; + int status; + + /* If firmware returns success, then it supports DVM, else it only + * supports SVM + */ + status = ice_aq_get_vlan_mode(hw, &get_vlan_mode); + if (status) { + ice_debug(hw, ICE_DBG_NVM, "Failed to get VLAN mode, status %d\n", + status); + return false; + } + + return true; +} + +/** + * ice_is_dvm_supported - check if Double VLAN Mode is supported + * @hw: pointer to the hardware structure + * + * Returns true if Double VLAN Mode (DVM) is supported and false if only Single + * VLAN Mode (SVM) is supported. In order for DVM to be supported the DDP and + * firmware must support it, otherwise only SVM is supported. This function + * should only be called while the global config lock is held and after the + * package has been successfully downloaded. + */ +static bool ice_is_dvm_supported(struct ice_hw *hw) +{ + if (!ice_pkg_supports_dvm(hw)) { + ice_debug(hw, ICE_DBG_PKG, "DDP doesn't support DVM\n"); + return false; + } + + if (!ice_fw_supports_dvm(hw)) { + ice_debug(hw, ICE_DBG_PKG, "FW doesn't support DVM\n"); + return false; + } + + return true; +} + +#define ICE_EXTERNAL_VLAN_ID_FV_IDX 11 +#define ICE_SW_LKUP_VLAN_LOC_LKUP_IDX 1 +#define ICE_SW_LKUP_VLAN_PKT_FLAGS_LKUP_IDX 2 +#define ICE_SW_LKUP_PROMISC_VLAN_LOC_LKUP_IDX 2 +#define ICE_PKT_FLAGS_0_TO_15_FV_IDX 1 +#define ICE_PKT_FLAGS_0_TO_15_VLAN_FLAGS_MASK 0xD000 +static struct ice_update_recipe_lkup_idx_params ice_dvm_dflt_recipes[] = { + { + /* Update recipe ICE_SW_LKUP_VLAN to filter based on the + * outer/single VLAN in DVM + */ + .rid = ICE_SW_LKUP_VLAN, + .fv_idx = ICE_EXTERNAL_VLAN_ID_FV_IDX, + .ignore_valid = true, + .mask = 0, + .mask_valid = false, /* use pre-existing mask */ + .lkup_idx = ICE_SW_LKUP_VLAN_LOC_LKUP_IDX, + }, + { + /* Update recipe ICE_SW_LKUP_VLAN to filter based on the VLAN + * packet flags to support VLAN filtering on multiple VLAN + * ethertypes (i.e. 
0x8100 and 0x88a8) in DVM + */ + .rid = ICE_SW_LKUP_VLAN, + .fv_idx = ICE_PKT_FLAGS_0_TO_15_FV_IDX, + .ignore_valid = false, + .mask = ICE_PKT_FLAGS_0_TO_15_VLAN_FLAGS_MASK, + .mask_valid = true, + .lkup_idx = ICE_SW_LKUP_VLAN_PKT_FLAGS_LKUP_IDX, + }, + { + /* Update recipe ICE_SW_LKUP_PROMISC_VLAN to filter based on the + * outer/single VLAN in DVM + */ + .rid = ICE_SW_LKUP_PROMISC_VLAN, + .fv_idx = ICE_EXTERNAL_VLAN_ID_FV_IDX, + .ignore_valid = true, + .mask = 0, + .mask_valid = false, /* use pre-existing mask */ + .lkup_idx = ICE_SW_LKUP_PROMISC_VLAN_LOC_LKUP_IDX, + }, +}; + +/** + * ice_dvm_update_dflt_recipes - update default switch recipes in DVM + * @hw: hardware structure used to update the recipes + */ +static int ice_dvm_update_dflt_recipes(struct ice_hw *hw) +{ + unsigned long i; + + for (i = 0; i < ARRAY_SIZE(ice_dvm_dflt_recipes); i++) { + struct ice_update_recipe_lkup_idx_params *params; + int status; + + params = &ice_dvm_dflt_recipes[i]; + + status = ice_update_recipe_lkup_idx(hw, params); + if (status) { + ice_debug(hw, ICE_DBG_INIT, "Failed to update RID %d lkup_idx %d fv_idx %d mask_valid %s mask 0x%04x\n", + params->rid, params->lkup_idx, params->fv_idx, + params->mask_valid ? "true" : "false", + params->mask); + return status; + } + } + + return 0; +} + +/** + * ice_aq_set_vlan_mode - set the VLAN mode of the device + * @hw: pointer to the HW structure + * @set_params: requested VLAN mode configuration + * + * Set VLAN Mode Parameters (0x020C) + */ +static int +ice_aq_set_vlan_mode(struct ice_hw *hw, + struct ice_aqc_set_vlan_mode *set_params) +{ + u8 rdma_packet, mng_vlan_prot_id; + struct ice_aq_desc desc; + + if (!set_params) + return -EINVAL; + + if (set_params->l2tag_prio_tagging > ICE_AQ_VLAN_PRIO_TAG_MAX) + return -EINVAL; + + rdma_packet = set_params->rdma_packet; + if (rdma_packet != ICE_AQ_SVM_VLAN_RDMA_PKT_FLAG_SETTING && + rdma_packet != ICE_AQ_DVM_VLAN_RDMA_PKT_FLAG_SETTING) + return -EINVAL; + + mng_vlan_prot_id = set_params->mng_vlan_prot_id; + if (mng_vlan_prot_id != ICE_AQ_VLAN_MNG_PROTOCOL_ID_OUTER && + mng_vlan_prot_id != ICE_AQ_VLAN_MNG_PROTOCOL_ID_INNER) + return -EINVAL; + + ice_fill_dflt_direct_cmd_desc(&desc, + ice_aqc_opc_set_vlan_mode_parameters); + desc.flags |= cpu_to_le16(ICE_AQ_FLAG_RD); + + return ice_aq_send_cmd(hw, &desc, set_params, sizeof(*set_params), + NULL); +} + +/** + * ice_set_dvm - sets up software and hardware for double VLAN mode + * @hw: pointer to the hardware structure + */ +static int ice_set_dvm(struct ice_hw *hw) +{ + struct ice_aqc_set_vlan_mode params = { 0 }; + int status; + + params.l2tag_prio_tagging = ICE_AQ_VLAN_PRIO_TAG_OUTER_CTAG; + params.rdma_packet = ICE_AQ_DVM_VLAN_RDMA_PKT_FLAG_SETTING; + params.mng_vlan_prot_id = ICE_AQ_VLAN_MNG_PROTOCOL_ID_OUTER; + + status = ice_aq_set_vlan_mode(hw, &params); + if (status) { + ice_debug(hw, ICE_DBG_INIT, "Failed to set double VLAN mode parameters, status %d\n", + status); + return status; + } + + status = ice_dvm_update_dflt_recipes(hw); + if (status) { + ice_debug(hw, ICE_DBG_INIT, "Failed to update default recipes for double VLAN mode, status %d\n", + status); + return status; + } + + status = ice_aq_set_port_params(hw->port_info, true, NULL); + if (status) { + ice_debug(hw, ICE_DBG_INIT, "Failed to set port in double VLAN mode, status %d\n", + status); + return status; + } + + status = ice_set_dvm_boost_entries(hw); + if (status) { + ice_debug(hw, ICE_DBG_INIT, "Failed to set boost TCAM entries for double VLAN mode, status %d\n", + status); + return status; + } + +
return 0; +} + +/** + * ice_set_svm - set single VLAN mode + * @hw: pointer to the HW structure + */ +static int ice_set_svm(struct ice_hw *hw) +{ + struct ice_aqc_set_vlan_mode *set_params; + int status; + + status = ice_aq_set_port_params(hw->port_info, false, NULL); + if (status) { + ice_debug(hw, ICE_DBG_INIT, "Failed to set port parameters for single VLAN mode\n"); + return status; + } + + set_params = devm_kzalloc(ice_hw_to_dev(hw), sizeof(*set_params), + GFP_KERNEL); + if (!set_params) + return -ENOMEM; + + /* default configuration for SVM configurations */ + set_params->l2tag_prio_tagging = ICE_AQ_VLAN_PRIO_TAG_INNER_CTAG; + set_params->rdma_packet = ICE_AQ_SVM_VLAN_RDMA_PKT_FLAG_SETTING; + set_params->mng_vlan_prot_id = ICE_AQ_VLAN_MNG_PROTOCOL_ID_INNER; + + status = ice_aq_set_vlan_mode(hw, set_params); + if (status) + ice_debug(hw, ICE_DBG_INIT, "Failed to configure port in single VLAN mode\n"); + + devm_kfree(ice_hw_to_dev(hw), set_params); + return status; +} + +/** + * ice_set_vlan_mode + * @hw: pointer to the HW structure + */ +int ice_set_vlan_mode(struct ice_hw *hw) +{ + if (!ice_is_dvm_supported(hw)) + return 0; + + if (!ice_set_dvm(hw)) + return 0; + + return ice_set_svm(hw); +} + +/** + * ice_print_dvm_not_supported - print if DDP and/or FW doesn't support DVM + * @hw: pointer to the HW structure + * + * The purpose of this function is to print that QinQ is not supported due to + * incompatibilty from the DDP and/or FW. This will give a hint to the user to + * update one and/or both components if they expect QinQ functionality. + */ +static void ice_print_dvm_not_supported(struct ice_hw *hw) +{ + bool pkg_supports_dvm = ice_pkg_supports_dvm(hw); + bool fw_supports_dvm = ice_fw_supports_dvm(hw); + + if (!fw_supports_dvm && !pkg_supports_dvm) + dev_info(ice_hw_to_dev(hw), "QinQ functionality cannot be enabled on this device. Update your DDP package and NVM to versions that support QinQ.\n"); + else if (!pkg_supports_dvm) + dev_info(ice_hw_to_dev(hw), "QinQ functionality cannot be enabled on this device. Update your DDP package to a version that supports QinQ.\n"); + else if (!fw_supports_dvm) + dev_info(ice_hw_to_dev(hw), "QinQ functionality cannot be enabled on this device. Update your NVM to a version that supports QinQ.\n"); +} + +/** + * ice_post_pkg_dwnld_vlan_mode_cfg - configure VLAN mode after DDP download + * @hw: pointer to the HW structure + * + * This function is meant to configure any VLAN mode specific functionality + * after the global configuration lock has been released and the DDP has been + * downloaded. + * + * Since only one PF downloads the DDP and configures the VLAN mode there needs + * to be a way to configure the other PFs after the DDP has been downloaded and + * the global configuration lock has been released. All such code should go in + * this function. + */ +void ice_post_pkg_dwnld_vlan_mode_cfg(struct ice_hw *hw) +{ + ice_cache_vlan_mode(hw); + + if (ice_is_dvm_ena(hw)) + ice_change_proto_id_to_dvm(); + else + ice_print_dvm_not_supported(hw); +} diff --git a/drivers/net/ethernet/intel/ice/ice_vlan_mode.h b/drivers/net/ethernet/intel/ice/ice_vlan_mode.h new file mode 100644 index 000000000000..a0fb743d08e2 --- /dev/null +++ b/drivers/net/ethernet/intel/ice/ice_vlan_mode.h @@ -0,0 +1,13 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2019-2021, Intel Corporation. 
*/ + +#ifndef _ICE_VLAN_MODE_H_ +#define _ICE_VLAN_MODE_H_ + +struct ice_hw; + +bool ice_is_dvm_ena(struct ice_hw *hw); +int ice_set_vlan_mode(struct ice_hw *hw); +void ice_post_pkg_dwnld_vlan_mode_cfg(struct ice_hw *hw); + +#endif /* _ICE_VLAN_MODE_H */ diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c index 62a2630d6fab..5b4a0abb4607 100644 --- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c @@ -39,20 +39,20 @@ static bool validate_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) */ int ice_vsi_add_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) { - int err = 0; + int err; if (!validate_vlan(vsi, vlan)) return -EINVAL; - if (!ice_fltr_add_vlan(vsi, vlan)) { - vsi->num_vlan++; - } else { - err = -ENODEV; - dev_err(ice_pf_to_dev(vsi->back), "Failure Adding VLAN %d on VSI %i\n", - vlan->vid, vsi->vsi_num); + err = ice_fltr_add_vlan(vsi, vlan); + if (err && err != -EEXIST) { + dev_err(ice_pf_to_dev(vsi->back), "Failure Adding VLAN %d on VSI %i, status %d\n", + vlan->vid, vsi->vsi_num, err); + return err; } - return err; + vsi->num_vlan++; + return 0; } /** @@ -72,16 +72,13 @@ int ice_vsi_del_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) dev = ice_pf_to_dev(pf); err = ice_fltr_remove_vlan(vsi, vlan); - if (!err) { + if (!err) vsi->num_vlan--; - } else if (err == -ENOENT) { - dev_dbg(dev, "Failed to remove VLAN %d on VSI %i, it does not exist\n", - vlan->vid, vsi->vsi_num); + else if (err == -ENOENT || err == -EBUSY) err = 0; - } else { + else dev_err(dev, "Error removing VLAN %d on VSI %i error: %d\n", vlan->vid, vsi->vsi_num, err); - } return err; } diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h index 30d02d2b8e5f..5b47568f6256 100644 --- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h @@ -23,11 +23,6 @@ struct ice_vsi_vlan_ops { int (*set_port_vlan)(struct ice_vsi *vsi, struct ice_vlan *vlan); }; -static inline bool ice_is_dvm_ena(struct ice_hw __always_unused *hw) -{ - return false; -} - void ice_vsi_init_vlan_ops(struct ice_vsi *vsi); struct ice_vsi_vlan_ops *ice_get_compat_vsi_vlan_ops(struct ice_vsi *vsi); -- 2.20.1 From anthony.l.nguyen at intel.com Tue Nov 30 21:21:51 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Tue, 30 Nov 2021 13:21:51 -0800 Subject: [Intel-wired-lan] [PATCH net-next 10/14] ice: Add support for VIRTCHNL_VF_OFFLOAD_VLAN_V2 In-Reply-To: <20211130212155.27852-1-anthony.l.nguyen@intel.com> References: <20211130212155.27852-1-anthony.l.nguyen@intel.com> Message-ID: <20211130212155.27852-10-anthony.l.nguyen@intel.com> From: Brett Creeley Add support for the VF driver to be able to request VIRTCHNL_VF_OFFLOAD_VLAN_V2, negotiate its VLAN capabilities via VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS, add/delete VLAN filters, and enable/disable VLAN offloads. VFs supporting VIRTCHNL_OFFLOAD_VLAN_V2 will be able to use the following virtchnl opcodes: VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS VIRTCHNL_OP_ADD_VLAN_V2 VIRTCHNL_OP_DEL_VLAN_V2 VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2 VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2 VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2 Legacy VF drivers may expect the initial VLAN stripping settings to be configured by the PF, so the PF initializes VLAN stripping based on the VIRTCHNL_OP_GET_VF_RESOURCES opcode. 
However, with VLAN support via VIRTCHNL_VF_OFFLOAD_VLAN_V2, this function is only expected to be used for VFs that only support VIRTCHNL_VF_OFFLOAD_VLAN, which will only be supported when a port VLAN is configured. Update the function based on the new expectations. Also, change the message when the PF can't enable/disable VLAN stripping to a dev_dbg() as this isn't fatal. When a VF isn't in a port VLAN and it only supports VIRTCHNL_VF_OFFLOAD_VLAN when Double VLAN Mode (DVM) is enabled, then the PF needs to reject the VIRTCHNL_VF_OFFLOAD_VLAN capability and configure the VF in software only VLAN mode. To do this add the new function ice_vf_vsi_cfg_legacy_vlan_mode(), which updates the VF's inner and outer ice_vsi_vlan_ops functions and sets up software only VLAN mode. Signed-off-by: Brett Creeley Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/ice/ice_base.c | 1 + .../ethernet/intel/ice/ice_vf_vsi_vlan_ops.c | 115 ++ .../ethernet/intel/ice/ice_vf_vsi_vlan_ops.h | 3 + .../intel/ice/ice_virtchnl_allowlist.c | 10 + .../net/ethernet/intel/ice/ice_virtchnl_pf.c | 1132 ++++++++++++++++- .../net/ethernet/intel/ice/ice_virtchnl_pf.h | 8 + 6 files changed, 1226 insertions(+), 43 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c index 9ca0ae2bb1dc..0dec7c5463eb 100644 --- a/drivers/net/ethernet/intel/ice/ice_base.c +++ b/drivers/net/ethernet/intel/ice/ice_base.c @@ -5,6 +5,7 @@ #include "ice_base.h" #include "ice_lib.h" #include "ice_dcb_lib.h" +#include "ice_virtchnl_pf.h" /** * __ice_vsi_get_qs_contig - Assign a contiguous chunk of queues to VSI diff --git a/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c b/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c index 741b041606a2..d89577843d68 100644 --- a/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c +++ b/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c @@ -14,9 +14,20 @@ noop_vlan_arg(struct ice_vsi __always_unused *vsi, return 0; } +static int +noop_vlan(struct ice_vsi __always_unused *vsi) +{ + return 0; +} + /** * ice_vf_vsi_init_vlan_ops - Initialize default VSI VLAN ops for VF VSI * @vsi: VF's VSI being configured + * + * If Double VLAN Mode (DVM) is enabled, assume that the VF supports the new + * VIRTCHNL_VF_VLAN_OFFLOAD_V2 capability and set up the VLAN ops accordingly. + * If SVM is enabled maintain the same level of VLAN support previous to + * VIRTCHNL_VF_VLAN_OFFLOAD_V2. 
*/ void ice_vf_vsi_init_vlan_ops(struct ice_vsi *vsi) { @@ -44,6 +55,20 @@ void ice_vf_vsi_init_vlan_ops(struct ice_vsi *vsi) vlan_ops = &vsi->inner_vlan_ops; vlan_ops->add_vlan = noop_vlan_arg; vlan_ops->del_vlan = noop_vlan_arg; + vlan_ops->ena_stripping = ice_vsi_ena_inner_stripping; + vlan_ops->dis_stripping = ice_vsi_dis_inner_stripping; + vlan_ops->ena_insertion = ice_vsi_ena_inner_insertion; + vlan_ops->dis_insertion = ice_vsi_dis_inner_insertion; + } else { + vlan_ops->del_vlan = ice_vsi_del_vlan; + vlan_ops->ena_stripping = ice_vsi_ena_outer_stripping; + vlan_ops->dis_stripping = ice_vsi_dis_outer_stripping; + vlan_ops->ena_insertion = ice_vsi_ena_outer_insertion; + vlan_ops->dis_insertion = ice_vsi_dis_outer_insertion; + + /* setup inner VLAN ops */ + vlan_ops = &vsi->inner_vlan_ops; + vlan_ops->ena_stripping = ice_vsi_ena_inner_stripping; vlan_ops->dis_stripping = ice_vsi_dis_inner_stripping; vlan_ops->ena_insertion = ice_vsi_ena_inner_insertion; @@ -70,3 +95,93 @@ void ice_vf_vsi_init_vlan_ops(struct ice_vsi *vsi) } } } + +/** + * ice_vf_vsi_cfg_dvm_legacy_vlan_mode - Config VLAN mode for old VFs in DVM + * @vsi: VF's VSI being configured + * + * This should only be called when Double VLAN Mode (DVM) is enabled, there + * is not a port VLAN enabled on this VF, and the VF negotiates + * VIRTCHNL_VF_OFFLOAD_VLAN. + * + * This function sets up the VF VSI's inner and outer ice_vsi_vlan_ops and also + * initializes software only VLAN mode (i.e. allow all VLANs). Also, use no-op + * implementations for any functions that may be called during the lifetime of + * the VF so these methods do nothing and succeed. + */ +void ice_vf_vsi_cfg_dvm_legacy_vlan_mode(struct ice_vsi *vsi) +{ + struct ice_vf *vf = &vsi->back->vf[vsi->vf_id]; + struct device *dev = ice_pf_to_dev(vf->pf); + struct ice_vsi_vlan_ops *vlan_ops; + + if (!ice_is_dvm_ena(&vsi->back->hw) || ice_vf_is_port_vlan_ena(vf)) + return; + + vlan_ops = &vsi->outer_vlan_ops; + + /* Rx VLAN filtering always disabled to allow software offloaded VLANs + * for VFs that only support VIRTCHNL_VF_OFFLOAD_VLAN and don't have a + * port VLAN configured + */ + vlan_ops->dis_rx_filtering = ice_vsi_dis_rx_vlan_filtering; + /* Don't fail when attempting to enable Rx VLAN filtering */ + vlan_ops->ena_rx_filtering = noop_vlan; + + /* Tx VLAN filtering always disabled to allow software offloaded VLANs + * for VFs that only support VIRTCHNL_VF_OFFLOAD_VLAN and don't have a + * port VLAN configured + */ + vlan_ops->dis_tx_filtering = ice_vsi_dis_tx_vlan_filtering; + /* Don't fail when attempting to enable Tx VLAN filtering */ + vlan_ops->ena_tx_filtering = noop_vlan; + + if (vlan_ops->dis_rx_filtering(vsi)) + dev_dbg(dev, "Failed to disable Rx VLAN filtering for old VF without VIRTCHNL_VF_OFFLOAD_VLAN_V2 support\n"); + if (vlan_ops->dis_tx_filtering(vsi)) + dev_dbg(dev, "Failed to disable Tx VLAN filtering for old VF without VIRTHCNL_VF_OFFLOAD_VLAN_V2 support\n"); + + /* All outer VLAN offloads must be disabled */ + vlan_ops->dis_stripping = ice_vsi_dis_outer_stripping; + vlan_ops->dis_insertion = ice_vsi_dis_outer_insertion; + + if (vlan_ops->dis_stripping(vsi)) + dev_dbg(dev, "Failed to disable outer VLAN stripping for old VF without VIRTCHNL_VF_OFFLOAD_VLAN_V2 support\n"); + + if (vlan_ops->dis_insertion(vsi)) + dev_dbg(dev, "Failed to disable outer VLAN insertion for old VF without VIRTCHNL_VF_OFFLOAD_VLAN_V2 support\n"); + + /* All inner VLAN offloads must be disabled */ + vlan_ops = &vsi->inner_vlan_ops; + + vlan_ops->dis_stripping = 
ice_vsi_dis_outer_stripping; + vlan_ops->dis_insertion = ice_vsi_dis_outer_insertion; + + if (vlan_ops->dis_stripping(vsi)) + dev_dbg(dev, "Failed to disable inner VLAN stripping for old VF without VIRTCHNL_VF_OFFLOAD_VLAN_V2 support\n"); + + if (vlan_ops->dis_insertion(vsi)) + dev_dbg(dev, "Failed to disable inner VLAN insertion for old VF without VIRTCHNL_VF_OFFLOAD_VLAN_V2 support\n"); +} + +/** + * ice_vf_vsi_cfg_svm_legacy_vlan_mode - Config VLAN mode for old VFs in SVM + * @vsi: VF's VSI being configured + * + * This should only be called when Single VLAN Mode (SVM) is enabled, there is + * not a port VLAN enabled on this VF, and the VF negotiates + * VIRTCHNL_VF_OFFLOAD_VLAN. + * + * All of the normal SVM VLAN ops are identical for this case. However, by + * default Rx VLAN filtering should be turned off by default in this case. + */ +void ice_vf_vsi_cfg_svm_legacy_vlan_mode(struct ice_vsi *vsi) +{ + struct ice_vf *vf = &vsi->back->vf[vsi->vf_id]; + + if (ice_is_dvm_ena(&vsi->back->hw) || ice_vf_is_port_vlan_ena(vf)) + return; + + if (vsi->inner_vlan_ops.dis_rx_filtering(vsi)) + dev_dbg(ice_pf_to_dev(vf->pf), "Failed to disable Rx VLAN filtering for old VF with VIRTCHNL_VF_OFFLOAD_VLAN support\n"); +} diff --git a/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.h b/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.h index 8ea13628a5e1..875a4e615f39 100644 --- a/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.h +++ b/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.h @@ -8,6 +8,9 @@ struct ice_vsi; +void ice_vf_vsi_cfg_dvm_legacy_vlan_mode(struct ice_vsi *vsi); +void ice_vf_vsi_cfg_svm_legacy_vlan_mode(struct ice_vsi *vsi); + #ifdef CONFIG_PCI_IOV void ice_vf_vsi_init_vlan_ops(struct ice_vsi *vsi); #else diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_allowlist.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_allowlist.c index 9feebe5f556c..5a82216e7d03 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_allowlist.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_allowlist.c @@ -55,6 +55,15 @@ static const u32 vlan_allowlist_opcodes[] = { VIRTCHNL_OP_ENABLE_VLAN_STRIPPING, VIRTCHNL_OP_DISABLE_VLAN_STRIPPING, }; +/* VIRTCHNL_VF_OFFLOAD_VLAN_V2 */ +static const u32 vlan_v2_allowlist_opcodes[] = { + VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS, VIRTCHNL_OP_ADD_VLAN_V2, + VIRTCHNL_OP_DEL_VLAN_V2, VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2, + VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2, + VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2, + VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2, +}; + /* VIRTCHNL_VF_OFFLOAD_RSS_PF */ static const u32 rss_pf_allowlist_opcodes[] = { VIRTCHNL_OP_CONFIG_RSS_KEY, VIRTCHNL_OP_CONFIG_RSS_LUT, @@ -89,6 +98,7 @@ static const struct allowlist_opcode_info allowlist_opcodes[] = { ALLOW_ITEM(VIRTCHNL_VF_OFFLOAD_RSS_PF, rss_pf_allowlist_opcodes), ALLOW_ITEM(VIRTCHNL_VF_OFFLOAD_ADV_RSS_PF, adv_rss_pf_allowlist_opcodes), ALLOW_ITEM(VIRTCHNL_VF_OFFLOAD_FDIR_PF, fdir_pf_allowlist_opcodes), + ALLOW_ITEM(VIRTCHNL_VF_OFFLOAD_VLAN_V2, vlan_v2_allowlist_opcodes), }; /** diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c index 100c86c8ad9a..de74a2b4f846 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c @@ -11,6 +11,7 @@ #include "ice_virtchnl_allowlist.h" #include "ice_flex_pipe.h" #include "ice_vf_vsi_vlan_ops.h" +#include "ice_vlan.h" #define FIELD_SELECTOR(proto_hdr_field) \ BIT((proto_hdr_field) & PROTO_HDR_FIELD_MASK) @@ -1458,6 +1459,7 @@ static void 
ice_vf_set_initialized(struct ice_vf *vf) clear_bit(ICE_VF_STATE_UC_PROMISC, vf->vf_states); clear_bit(ICE_VF_STATE_DIS, vf->vf_states); set_bit(ICE_VF_STATE_INIT, vf->vf_states); + memset(&vf->vlan_v2_caps, 0, sizeof(vf->vlan_v2_caps)); } /** @@ -2347,8 +2349,33 @@ static int ice_vc_get_vf_res_msg(struct ice_vf *vf, u8 *msg) goto err; } - if (!ice_vf_is_port_vlan_ena(vf)) - vfres->vf_cap_flags |= VIRTCHNL_VF_OFFLOAD_VLAN; + if (vf->driver_caps & VIRTCHNL_VF_OFFLOAD_VLAN_V2) { + /* VLAN offloads based on current device configuration */ + vfres->vf_cap_flags |= VIRTCHNL_VF_OFFLOAD_VLAN_V2; + } else if (vf->driver_caps & VIRTCHNL_VF_OFFLOAD_VLAN) { + /* allow VF to negotiate VIRTCHNL_VF_OFFLOAD explicitly for + * these two conditions, which amounts to guest VLAN filtering + * and offloads being based on the inner VLAN or the + * inner/single VLAN respectively and don't allow VF to + * negotiate VIRTCHNL_VF_OFFLOAD in any other cases + */ + if (ice_is_dvm_ena(&pf->hw) && ice_vf_is_port_vlan_ena(vf)) { + vfres->vf_cap_flags |= VIRTCHNL_VF_OFFLOAD_VLAN; + } else if (!ice_is_dvm_ena(&pf->hw) && + !ice_vf_is_port_vlan_ena(vf)) { + vfres->vf_cap_flags |= VIRTCHNL_VF_OFFLOAD_VLAN; + /* configure backward compatible support for VFs that + * only support VIRTCHNL_VF_OFFLOAD_VLAN, the PF is + * configured in SVM, and no port VLAN is configured + */ + ice_vf_vsi_cfg_svm_legacy_vlan_mode(vsi); + } else if (ice_is_dvm_ena(&pf->hw)) { + /* configure software offloaded VLAN support when DVM + * is enabled, but no port VLAN is enabled + */ + ice_vf_vsi_cfg_dvm_legacy_vlan_mode(vsi); + } + } if (vf->driver_caps & VIRTCHNL_VF_OFFLOAD_RSS_PF) { vfres->vf_cap_flags |= VIRTCHNL_VF_OFFLOAD_RSS_PF; @@ -4175,6 +4202,62 @@ static bool ice_vf_vlan_offload_ena(u32 caps) return !!(caps & VIRTCHNL_VF_OFFLOAD_VLAN); } +/** + * ice_is_vlan_promisc_allowed - check if VLAN promiscuous config is allowed + * @vf: VF used to determine if VLAN promiscuous config is allowed + */ +static bool ice_is_vlan_promisc_allowed(struct ice_vf *vf) +{ + if ((test_bit(ICE_VF_STATE_UC_PROMISC, vf->vf_states) || + test_bit(ICE_VF_STATE_MC_PROMISC, vf->vf_states)) && + test_bit(ICE_FLAG_VF_TRUE_PROMISC_ENA, vf->pf->flags)) + return true; + + return false; +} + +/** + * ice_vf_ena_vlan_promisc - Enable Tx/Rx VLAN promiscuous for the VLAN + * @vsi: VF's VSI used to enable VLAN promiscuous mode + * @vlan: VLAN used to enable VLAN promiscuous + * + * This function should only be called if VLAN promiscuous mode is allowed, + * which can be determined via ice_is_vlan_promisc_allowed(). + */ +static int ice_vf_ena_vlan_promisc(struct ice_vsi *vsi, struct ice_vlan *vlan) +{ + u8 promisc_m = ICE_PROMISC_VLAN_TX | ICE_PROMISC_VLAN_RX; + int status; + + status = ice_fltr_set_vsi_promisc(&vsi->back->hw, vsi->idx, promisc_m, + vlan->vid); + if (status && status != -EEXIST) + return status; + + return 0; +} + +/** + * ice_vf_dis_vlan_promisc - Disable Tx/Rx VLAN promiscuous for the VLAN + * @vsi: VF's VSI used to disable VLAN promiscuous mode for + * @vlan: VLAN used to disable VLAN promiscuous + * + * This function should only be called if VLAN promiscuous mode is allowed, + * which can be determined via ice_is_vlan_promisc_allowed(). 
+ */ +static int ice_vf_dis_vlan_promisc(struct ice_vsi *vsi, struct ice_vlan *vlan) +{ + u8 promisc_m = ICE_PROMISC_VLAN_TX | ICE_PROMISC_VLAN_RX; + int status; + + status = ice_fltr_clear_vsi_promisc(&vsi->back->hw, vsi->idx, promisc_m, + vlan->vid); + if (status && status != -ENOENT) + return status; + + return 0; +} + /** * ice_vf_has_max_vlans - check if VF already has the max allowed VLAN filters * @vf: VF to check against @@ -4209,14 +4292,11 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS; struct virtchnl_vlan_filter_list *vfl = (struct virtchnl_vlan_filter_list *)msg; - struct ice_vsi_vlan_ops *vlan_ops; struct ice_pf *pf = vf->pf; bool vlan_promisc = false; struct ice_vsi *vsi; struct device *dev; - struct ice_hw *hw; int status = 0; - u8 promisc_m; int i; dev = ice_pf_to_dev(pf); @@ -4244,7 +4324,6 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) } } - hw = &pf->hw; vsi = ice_get_vf_vsi(vf); if (!vsi) { v_ret = VIRTCHNL_STATUS_ERR_PARAM; @@ -4260,17 +4339,22 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) goto error_param; } - if (ice_vf_is_port_vlan_ena(vf)) { + /* in DVM a VF can add/delete inner VLAN filters when + * VIRTCHNL_VF_OFFLOAD_VLAN is negotiated, so only reject in SVM + */ + if (ice_vf_is_port_vlan_ena(vf) && !ice_is_dvm_ena(&pf->hw)) { v_ret = VIRTCHNL_STATUS_ERR_PARAM; goto error_param; } - if ((test_bit(ICE_VF_STATE_UC_PROMISC, vf->vf_states) || - test_bit(ICE_VF_STATE_MC_PROMISC, vf->vf_states)) && - test_bit(ICE_FLAG_VF_TRUE_PROMISC_ENA, pf->flags)) - vlan_promisc = true; + /* in DVM VLAN promiscuous is based on the outer VLAN, which would be + * the port VLAN if VIRTCHNL_VF_OFFLOAD_VLAN was negotiated, so only + * allow vlan_promisc = true in SVM and if no port VLAN is configured + */ + vlan_promisc = ice_is_vlan_promisc_allowed(vf) && + !ice_is_dvm_ena(&pf->hw) && + !ice_vf_is_port_vlan_ena(vf); - vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); if (add_v) { for (i = 0; i < vfl->num_elements; i++) { u16 vid = vfl->vlan_id[i]; @@ -4300,23 +4384,16 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) goto error_param; } - /* Enable VLAN pruning when non-zero VLAN is added */ - if (!vlan_promisc && vid && - !ice_vsi_is_vlan_pruning_ena(vsi)) { - status = vlan_ops->ena_rx_filtering(vsi); - if (status) { + /* Enable VLAN filtering on first non-zero VLAN */ + if (!vlan_promisc && vid && !ice_is_dvm_ena(&pf->hw)) { + if (vsi->inner_vlan_ops.ena_rx_filtering(vsi)) { v_ret = VIRTCHNL_STATUS_ERR_PARAM; dev_err(dev, "Enable VLAN pruning on VLAN ID: %d failed error-%d\n", vid, status); goto error_param; } } else if (vlan_promisc) { - /* Enable Ucast/Mcast VLAN promiscuous mode */ - promisc_m = ICE_PROMISC_VLAN_TX | - ICE_PROMISC_VLAN_RX; - - status = ice_set_vsi_promisc(hw, vsi->idx, - promisc_m, vid); + status = ice_vf_ena_vlan_promisc(vsi, &vlan); if (status) { v_ret = VIRTCHNL_STATUS_ERR_PARAM; dev_err(dev, "Enable Unicast/multicast promiscuous mode on VLAN ID:%d failed error-%d\n", @@ -4353,19 +4430,12 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) goto error_param; } - /* Disable VLAN pruning when only VLAN 0 is left */ - if (!ice_vsi_has_non_zero_vlans(vsi) && - ice_vsi_is_vlan_pruning_ena(vsi)) - status = vlan_ops->dis_rx_filtering(vsi); - - /* Disable Unicast/Multicast VLAN promiscuous mode */ - if (vlan_promisc) { - promisc_m = ICE_PROMISC_VLAN_TX | - 
ICE_PROMISC_VLAN_RX; + /* Disable VLAN filtering when only VLAN 0 is left */ + if (!ice_vsi_has_non_zero_vlans(vsi)) + vsi->inner_vlan_ops.dis_rx_filtering(vsi); - ice_clear_vsi_promisc(hw, vsi->idx, - promisc_m, vid); - } + if (vlan_promisc) + ice_vf_dis_vlan_promisc(vsi, &vlan); } } @@ -4472,11 +4542,8 @@ static int ice_vc_dis_vlan_stripping(struct ice_vf *vf) * ice_vf_init_vlan_stripping - enable/disable VLAN stripping on initialization * @vf: VF to enable/disable VLAN stripping for on initialization * - * If the VIRTCHNL_VF_OFFLOAD_VLAN flag is set enable VLAN stripping, else if - * the flag is cleared then we want to disable stripping. For example, the flag - * will be cleared when port VLANs are configured by the administrator before - * passing the VF to the guest or if the AVF driver doesn't support VLAN - * offloads. + * Set the default for VLAN stripping based on whether a port VLAN is configured + * and the current VLAN mode of the device. */ static int ice_vf_init_vlan_stripping(struct ice_vf *vf) { @@ -4485,8 +4552,10 @@ static int ice_vf_init_vlan_stripping(struct ice_vf *vf) if (!vsi) return -EINVAL; - /* don't modify stripping if port VLAN is configured */ - if (ice_vf_is_port_vlan_ena(vf)) + /* don't modify stripping if port VLAN is configured in SVM since the + * port VLAN is based on the inner/single VLAN in SVM + */ + if (ice_vf_is_port_vlan_ena(vf) && !ice_is_dvm_ena(&vsi->back->hw)) return 0; if (ice_vf_vlan_offload_ena(vf->driver_caps)) @@ -4495,6 +4564,955 @@ static int ice_vf_init_vlan_stripping(struct ice_vf *vf) return vsi->inner_vlan_ops.dis_stripping(vsi); } +static u16 ice_vc_get_max_vlan_fltrs(struct ice_vf *vf) +{ + if (vf->trusted) + return VLAN_N_VID; + else + return ICE_MAX_VLAN_PER_VF; +} + +/** + * ice_vf_outer_vlan_not_allowed - check outer VLAN can be used when the device is in DVM + * @vf: VF that being checked for + */ +static bool ice_vf_outer_vlan_not_allowed(struct ice_vf *vf) +{ + if (ice_vf_is_port_vlan_ena(vf)) + return true; + + return false; +} + +/** + * ice_vc_set_dvm_caps - set VLAN capabilities when the device is in DVM + * @vf: VF that capabilities are being set for + * @caps: VLAN capabilities to populate + * + * Determine VLAN capabilities support based on whether a port VLAN is + * configured. If a port VLAN is configured then the VF should use the inner + * filtering/offload capabilities since the port VLAN is using the outer VLAN + * capabilies. 
+ */ +static void +ice_vc_set_dvm_caps(struct ice_vf *vf, struct virtchnl_vlan_caps *caps) +{ + struct virtchnl_vlan_supported_caps *supported_caps; + + if (ice_vf_outer_vlan_not_allowed(vf)) { + /* until support for inner VLAN filtering is added when a port + * VLAN is configured, only support software offloaded inner + * VLANs when a port VLAN is configured in DVM + */ + supported_caps = &caps->filtering.filtering_support; + supported_caps->inner = VIRTCHNL_VLAN_UNSUPPORTED; + + supported_caps = &caps->offloads.stripping_support; + supported_caps->inner = VIRTCHNL_VLAN_ETHERTYPE_8100 | + VIRTCHNL_VLAN_TOGGLE | + VIRTCHNL_VLAN_TAG_LOCATION_L2TAG1; + supported_caps->outer = VIRTCHNL_VLAN_UNSUPPORTED; + + supported_caps = &caps->offloads.insertion_support; + supported_caps->inner = VIRTCHNL_VLAN_ETHERTYPE_8100 | + VIRTCHNL_VLAN_TOGGLE | + VIRTCHNL_VLAN_TAG_LOCATION_L2TAG1; + supported_caps->outer = VIRTCHNL_VLAN_UNSUPPORTED; + + caps->offloads.ethertype_init = VIRTCHNL_VLAN_ETHERTYPE_8100; + caps->offloads.ethertype_match = + VIRTCHNL_ETHERTYPE_STRIPPING_MATCHES_INSERTION; + } else { + supported_caps = &caps->filtering.filtering_support; + supported_caps->inner = VIRTCHNL_VLAN_UNSUPPORTED; + supported_caps->outer = VIRTCHNL_VLAN_ETHERTYPE_8100 | + VIRTCHNL_VLAN_ETHERTYPE_88A8 | + VIRTCHNL_VLAN_ETHERTYPE_9100 | + VIRTCHNL_VLAN_ETHERTYPE_AND; + caps->filtering.ethertype_init = VIRTCHNL_VLAN_ETHERTYPE_8100 | + VIRTCHNL_VLAN_ETHERTYPE_88A8 | + VIRTCHNL_VLAN_ETHERTYPE_9100; + + supported_caps = &caps->offloads.stripping_support; + supported_caps->inner = VIRTCHNL_VLAN_TOGGLE | + VIRTCHNL_VLAN_ETHERTYPE_8100 | + VIRTCHNL_VLAN_TAG_LOCATION_L2TAG1; + supported_caps->outer = VIRTCHNL_VLAN_TOGGLE | + VIRTCHNL_VLAN_ETHERTYPE_8100 | + VIRTCHNL_VLAN_ETHERTYPE_88A8 | + VIRTCHNL_VLAN_ETHERTYPE_9100 | + VIRTCHNL_VLAN_ETHERTYPE_XOR | + VIRTCHNL_VLAN_TAG_LOCATION_L2TAG2_2; + + supported_caps = &caps->offloads.insertion_support; + supported_caps->inner = VIRTCHNL_VLAN_TOGGLE | + VIRTCHNL_VLAN_ETHERTYPE_8100 | + VIRTCHNL_VLAN_TAG_LOCATION_L2TAG1; + supported_caps->outer = VIRTCHNL_VLAN_TOGGLE | + VIRTCHNL_VLAN_ETHERTYPE_8100 | + VIRTCHNL_VLAN_ETHERTYPE_88A8 | + VIRTCHNL_VLAN_ETHERTYPE_9100 | + VIRTCHNL_VLAN_ETHERTYPE_XOR | + VIRTCHNL_VLAN_TAG_LOCATION_L2TAG2; + + caps->offloads.ethertype_init = VIRTCHNL_VLAN_ETHERTYPE_8100; + + caps->offloads.ethertype_match = + VIRTCHNL_ETHERTYPE_STRIPPING_MATCHES_INSERTION; + } + + caps->filtering.max_filters = ice_vc_get_max_vlan_fltrs(vf); +} + +/** + * ice_vc_set_svm_caps - set VLAN capabilities when the device is in SVM + * @vf: VF that capabilities are being set for + * @caps: VLAN capabilities to populate + * + * Determine VLAN capabilities support based on whether a port VLAN is + * configured. If a port VLAN is configured then the VF does not have any VLAN + * filtering or offload capabilities since the port VLAN is using the inner VLAN + * capabilities in single VLAN mode (SVM). Otherwise allow the VF to use inner + * VLAN filtering and offload capabilities.
+ */ +static void +ice_vc_set_svm_caps(struct ice_vf *vf, struct virtchnl_vlan_caps *caps) +{ + struct virtchnl_vlan_supported_caps *supported_caps; + + if (ice_vf_is_port_vlan_ena(vf)) { + supported_caps = &caps->filtering.filtering_support; + supported_caps->inner = VIRTCHNL_VLAN_UNSUPPORTED; + supported_caps->outer = VIRTCHNL_VLAN_UNSUPPORTED; + + supported_caps = &caps->offloads.stripping_support; + supported_caps->inner = VIRTCHNL_VLAN_UNSUPPORTED; + supported_caps->outer = VIRTCHNL_VLAN_UNSUPPORTED; + + supported_caps = &caps->offloads.insertion_support; + supported_caps->inner = VIRTCHNL_VLAN_UNSUPPORTED; + supported_caps->outer = VIRTCHNL_VLAN_UNSUPPORTED; + + caps->offloads.ethertype_init = VIRTCHNL_VLAN_UNSUPPORTED; + caps->offloads.ethertype_match = VIRTCHNL_VLAN_UNSUPPORTED; + caps->filtering.max_filters = 0; + } else { + supported_caps = &caps->filtering.filtering_support; + supported_caps->inner = VIRTCHNL_VLAN_ETHERTYPE_8100; + supported_caps->outer = VIRTCHNL_VLAN_UNSUPPORTED; + caps->filtering.ethertype_init = VIRTCHNL_VLAN_ETHERTYPE_8100; + + supported_caps = &caps->offloads.stripping_support; + supported_caps->inner = VIRTCHNL_VLAN_ETHERTYPE_8100 | + VIRTCHNL_VLAN_TOGGLE | + VIRTCHNL_VLAN_TAG_LOCATION_L2TAG1; + supported_caps->outer = VIRTCHNL_VLAN_UNSUPPORTED; + + supported_caps = &caps->offloads.insertion_support; + supported_caps->inner = VIRTCHNL_VLAN_ETHERTYPE_8100 | + VIRTCHNL_VLAN_TOGGLE | + VIRTCHNL_VLAN_TAG_LOCATION_L2TAG1; + supported_caps->outer = VIRTCHNL_VLAN_UNSUPPORTED; + + caps->offloads.ethertype_init = VIRTCHNL_VLAN_ETHERTYPE_8100; + caps->offloads.ethertype_match = + VIRTCHNL_ETHERTYPE_STRIPPING_MATCHES_INSERTION; + caps->filtering.max_filters = ice_vc_get_max_vlan_fltrs(vf); + } +} + +/** + * ice_vc_get_offload_vlan_v2_caps - determine VF's VLAN capabilities + * @vf: VF to determine VLAN capabilities for + * + * This will only be called if the VF and PF successfully negotiated + * VIRTCHNL_VF_OFFLOAD_VLAN_V2. + * + * Set VLAN capabilities based on the current VLAN mode and whether a port VLAN + * is configured or not. + */ +static int ice_vc_get_offload_vlan_v2_caps(struct ice_vf *vf) +{ + enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS; + struct virtchnl_vlan_caps *caps = NULL; + int err, len = 0; + + if (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + caps = kzalloc(sizeof(*caps), GFP_KERNEL); + if (!caps) { + v_ret = VIRTCHNL_STATUS_ERR_NO_MEMORY; + goto out; + } + len = sizeof(*caps); + + if (ice_is_dvm_ena(&vf->pf->hw)) + ice_vc_set_dvm_caps(vf, caps); + else + ice_vc_set_svm_caps(vf, caps); + + /* store negotiated caps to prevent invalid VF messages */ + memcpy(&vf->vlan_v2_caps, caps, sizeof(*caps)); + +out: + err = ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS, + v_ret, (u8 *)caps, len); + kfree(caps); + return err; +} + +/** + * ice_vc_validate_vlan_tpid - validate VLAN TPID + * @filtering_caps: negotiated/supported VLAN filtering capabilities + * @tpid: VLAN TPID used for validation + * + * Convert the VLAN TPID to a VIRTCHNL_VLAN_ETHERTYPE_* and then compare against + * the negotiated/supported filtering caps to see if the VLAN TPID is valid. 
+ */ +static bool ice_vc_validate_vlan_tpid(u16 filtering_caps, u16 tpid) +{ + enum virtchnl_vlan_support vlan_ethertype = VIRTCHNL_VLAN_UNSUPPORTED; + + switch (tpid) { + case ETH_P_8021Q: + vlan_ethertype = VIRTCHNL_VLAN_ETHERTYPE_8100; + break; + case ETH_P_8021AD: + vlan_ethertype = VIRTCHNL_VLAN_ETHERTYPE_88A8; + break; + case ETH_P_QINQ1: + vlan_ethertype = VIRTCHNL_VLAN_ETHERTYPE_9100; + break; + } + + if (!(filtering_caps & vlan_ethertype)) + return false; + + return true; +} + +/** + * ice_vc_is_valid_vlan - validate the virtchnl_vlan + * @vc_vlan: virtchnl_vlan to validate + * + * If the VLAN TCI and VLAN TPID are 0, then this filter is invalid, so return + * false. Otherwise return true. + */ +static bool ice_vc_is_valid_vlan(struct virtchnl_vlan *vc_vlan) +{ + if (!vc_vlan->tci || !vc_vlan->tpid) + return false; + + return true; +} + +/** + * ice_vc_validate_vlan_filter_list - validate the filter list from the VF + * @vfc: negotiated/supported VLAN filtering capabilities + * @vfl: VLAN filter list from VF to validate + * + * Validate all of the filters in the VLAN filter list from the VF. If any of + * the checks fail then return false. Otherwise return true. + */ +static bool +ice_vc_validate_vlan_filter_list(struct virtchnl_vlan_filtering_caps *vfc, + struct virtchnl_vlan_filter_list_v2 *vfl) +{ + u16 i; + + if (!vfl->num_elements) + return false; + + for (i = 0; i < vfl->num_elements; i++) { + struct virtchnl_vlan_supported_caps *filtering_support = + &vfc->filtering_support; + struct virtchnl_vlan_filter *vlan_fltr = &vfl->filters[i]; + struct virtchnl_vlan *outer = &vlan_fltr->outer; + struct virtchnl_vlan *inner = &vlan_fltr->inner; + + if ((ice_vc_is_valid_vlan(outer) && + filtering_support->outer == VIRTCHNL_VLAN_UNSUPPORTED) || + (ice_vc_is_valid_vlan(inner) && + filtering_support->inner == VIRTCHNL_VLAN_UNSUPPORTED)) + return false; + + if ((outer->tci_mask && + !(filtering_support->outer & VIRTCHNL_VLAN_FILTER_MASK)) || + (inner->tci_mask && + !(filtering_support->inner & VIRTCHNL_VLAN_FILTER_MASK))) + return false; + + if (((outer->tci & VLAN_PRIO_MASK) && + !(filtering_support->outer & VIRTCHNL_VLAN_PRIO)) || + ((inner->tci & VLAN_PRIO_MASK) && + !(filtering_support->inner & VIRTCHNL_VLAN_PRIO))) + return false; + + if ((ice_vc_is_valid_vlan(outer) && + !ice_vc_validate_vlan_tpid(filtering_support->outer, outer->tpid)) || + (ice_vc_is_valid_vlan(inner) && + !ice_vc_validate_vlan_tpid(filtering_support->inner, inner->tpid))) + return false; + } + + return true; +} + +/** + * ice_vc_to_vlan - transform from struct virtchnl_vlan to struct ice_vlan + * @vc_vlan: struct virtchnl_vlan to transform + */ +static struct ice_vlan ice_vc_to_vlan(struct virtchnl_vlan *vc_vlan) +{ + struct ice_vlan vlan = { 0 }; + + vlan.prio = (vc_vlan->tci & VLAN_PRIO_MASK) >> VLAN_PRIO_SHIFT; + vlan.vid = vc_vlan->tci & VLAN_VID_MASK; + vlan.tpid = vc_vlan->tpid; + + return vlan; +} + +/** + * ice_vc_vlan_action - action to perform on the virthcnl_vlan + * @vsi: VF's VSI used to perform the action + * @vlan_action: function to perform the action with (i.e. 
add/del) + * @vlan: VLAN filter to perform the action with + */ +static int +ice_vc_vlan_action(struct ice_vsi *vsi, + int (*vlan_action)(struct ice_vsi *, struct ice_vlan *), + struct ice_vlan *vlan) +{ + int err; + + err = vlan_action(vsi, vlan); + if (err) + return err; + + return 0; +} + +/** + * ice_vc_del_vlans - delete VLAN(s) from the virtchnl filter list + * @vf: VF used to delete the VLAN(s) + * @vsi: VF's VSI used to delete the VLAN(s) + * @vfl: virthchnl filter list used to delete the filters + */ +static int +ice_vc_del_vlans(struct ice_vf *vf, struct ice_vsi *vsi, + struct virtchnl_vlan_filter_list_v2 *vfl) +{ + bool vlan_promisc = ice_is_vlan_promisc_allowed(vf); + int err; + u16 i; + + for (i = 0; i < vfl->num_elements; i++) { + struct virtchnl_vlan_filter *vlan_fltr = &vfl->filters[i]; + struct virtchnl_vlan *vc_vlan; + + vc_vlan = &vlan_fltr->outer; + if (ice_vc_is_valid_vlan(vc_vlan)) { + struct ice_vlan vlan = ice_vc_to_vlan(vc_vlan); + + err = ice_vc_vlan_action(vsi, + vsi->outer_vlan_ops.del_vlan, + &vlan); + if (err) + return err; + + if (vlan_promisc) + ice_vf_dis_vlan_promisc(vsi, &vlan); + } + + vc_vlan = &vlan_fltr->inner; + if (ice_vc_is_valid_vlan(vc_vlan)) { + struct ice_vlan vlan = ice_vc_to_vlan(vc_vlan); + + err = ice_vc_vlan_action(vsi, + vsi->inner_vlan_ops.del_vlan, + &vlan); + if (err) + return err; + + /* no support for VLAN promiscuous on inner VLAN unless + * we are in Single VLAN Mode (SVM) + */ + if (!ice_is_dvm_ena(&vsi->back->hw) && vlan_promisc) + ice_vf_dis_vlan_promisc(vsi, &vlan); + } + } + + return 0; +} + +/** + * ice_vc_remove_vlan_v2_msg - virtchnl handler for VIRTCHNL_OP_DEL_VLAN_V2 + * @vf: VF the message was received from + * @msg: message received from the VF + */ +static int ice_vc_remove_vlan_v2_msg(struct ice_vf *vf, u8 *msg) +{ + struct virtchnl_vlan_filter_list_v2 *vfl = + (struct virtchnl_vlan_filter_list_v2 *)msg; + enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS; + struct ice_vsi *vsi; + + if (!ice_vc_validate_vlan_filter_list(&vf->vlan_v2_caps.filtering, + vfl)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + if (!ice_vc_isvalid_vsi_id(vf, vfl->vport_id)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + vsi = ice_get_vf_vsi(vf); + if (!vsi) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + if (ice_vc_del_vlans(vf, vsi, vfl)) + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + +out: + return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_DEL_VLAN_V2, v_ret, NULL, + 0); +} + +/** + * ice_vc_add_vlans - add VLAN(s) from the virtchnl filter list + * @vf: VF used to add the VLAN(s) + * @vsi: VF's VSI used to add the VLAN(s) + * @vfl: virthchnl filter list used to add the filters + */ +static int +ice_vc_add_vlans(struct ice_vf *vf, struct ice_vsi *vsi, + struct virtchnl_vlan_filter_list_v2 *vfl) +{ + bool vlan_promisc = ice_is_vlan_promisc_allowed(vf); + int err; + u16 i; + + for (i = 0; i < vfl->num_elements; i++) { + struct virtchnl_vlan_filter *vlan_fltr = &vfl->filters[i]; + struct virtchnl_vlan *vc_vlan; + + vc_vlan = &vlan_fltr->outer; + if (ice_vc_is_valid_vlan(vc_vlan)) { + struct ice_vlan vlan = ice_vc_to_vlan(vc_vlan); + + err = ice_vc_vlan_action(vsi, + vsi->outer_vlan_ops.add_vlan, + &vlan); + if (err) + return err; + + if (vlan_promisc) { + err = ice_vf_ena_vlan_promisc(vsi, &vlan); + if (err) + return err; + } + } + + vc_vlan = &vlan_fltr->inner; + if (ice_vc_is_valid_vlan(vc_vlan)) { + struct ice_vlan vlan = ice_vc_to_vlan(vc_vlan); + + err = ice_vc_vlan_action(vsi, + vsi->inner_vlan_ops.add_vlan, 
+ &vlan); + if (err) + return err; + + /* no support for VLAN promiscuous on inner VLAN unless + * we are in Single VLAN Mode (SVM) + */ + if (!ice_is_dvm_ena(&vsi->back->hw) && vlan_promisc) { + err = ice_vf_ena_vlan_promisc(vsi, &vlan); + if (err) + return err; + } + } + } + + return 0; +} + +/** + * ice_vc_validate_add_vlan_filter_list - validate add filter list from the VF + * @vsi: VF VSI used to get number of existing VLAN filters + * @vfc: negotiated/supported VLAN filtering capabilities + * @vfl: VLAN filter list from VF to validate + * + * Validate all of the filters in the VLAN filter list from the VF during the + * VIRTCHNL_OP_ADD_VLAN_V2 opcode. If any of the checks fail then return false. + * Otherwise return true. + */ +static bool +ice_vc_validate_add_vlan_filter_list(struct ice_vsi *vsi, + struct virtchnl_vlan_filtering_caps *vfc, + struct virtchnl_vlan_filter_list_v2 *vfl) +{ + u16 num_requested_filters = vsi->num_vlan + vfl->num_elements; + + if (num_requested_filters > vfc->max_filters) + return false; + + return ice_vc_validate_vlan_filter_list(vfc, vfl); +} + +/** + * ice_vc_add_vlan_v2_msg - virtchnl handler for VIRTCHNL_OP_ADD_VLAN_V2 + * @vf: VF the message was received from + * @msg: message received from the VF + */ +static int ice_vc_add_vlan_v2_msg(struct ice_vf *vf, u8 *msg) +{ + enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS; + struct virtchnl_vlan_filter_list_v2 *vfl = + (struct virtchnl_vlan_filter_list_v2 *)msg; + struct ice_vsi *vsi; + + if (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + if (!ice_vc_isvalid_vsi_id(vf, vfl->vport_id)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + vsi = ice_get_vf_vsi(vf); + if (!vsi) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + if (!ice_vc_validate_add_vlan_filter_list(vsi, + &vf->vlan_v2_caps.filtering, + vfl)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + if (ice_vc_add_vlans(vf, vsi, vfl)) + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + +out: + return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_ADD_VLAN_V2, v_ret, NULL, + 0); +} + +/** + * ice_vc_valid_vlan_setting - validate VLAN setting + * @negotiated_settings: negotiated VLAN settings during VF init + * @ethertype_setting: ethertype(s) requested for the VLAN setting + */ +static bool +ice_vc_valid_vlan_setting(u32 negotiated_settings, u32 ethertype_setting) +{ + if (ethertype_setting && !(negotiated_settings & ethertype_setting)) + return false; + + /* only allow a single VIRTCHNL_VLAN_ETHERTYPE if + * VIRTHCNL_VLAN_ETHERTYPE_AND is not negotiated/supported + */ + if (!(negotiated_settings & VIRTCHNL_VLAN_ETHERTYPE_AND) && + hweight32(ethertype_setting) > 1) + return false; + + /* ability to modify the VLAN setting was not negotiated */ + if (!(negotiated_settings & VIRTCHNL_VLAN_TOGGLE)) + return false; + + return true; +} + +/** + * ice_vc_valid_vlan_setting_msg - validate the VLAN setting message + * @caps: negotiated VLAN settings during VF init + * @msg: message to validate + * + * Used to validate any VLAN virtchnl message sent as a + * virtchnl_vlan_setting structure. Validates the message against the + * negotiated/supported caps during VF driver init. 
+ */ +static bool +ice_vc_valid_vlan_setting_msg(struct virtchnl_vlan_supported_caps *caps, + struct virtchnl_vlan_setting *msg) +{ + if ((!msg->outer_ethertype_setting && + !msg->inner_ethertype_setting) || + (!caps->outer && !caps->inner)) + return false; + + if (msg->outer_ethertype_setting && + !ice_vc_valid_vlan_setting(caps->outer, + msg->outer_ethertype_setting)) + return false; + + if (msg->inner_ethertype_setting && + !ice_vc_valid_vlan_setting(caps->inner, + msg->inner_ethertype_setting)) + return false; + + return true; +} + +/** + * ice_vc_get_tpid - transform from VIRTCHNL_VLAN_ETHERTYPE_* to VLAN TPID + * @ethertype_setting: VIRTCHNL_VLAN_ETHERTYPE_* used to get VLAN TPID + * @tpid: VLAN TPID to populate + */ +static int ice_vc_get_tpid(u32 ethertype_setting, u16 *tpid) +{ + switch (ethertype_setting) { + case VIRTCHNL_VLAN_ETHERTYPE_8100: + *tpid = ETH_P_8021Q; + break; + case VIRTCHNL_VLAN_ETHERTYPE_88A8: + *tpid = ETH_P_8021AD; + break; + case VIRTCHNL_VLAN_ETHERTYPE_9100: + *tpid = ETH_P_QINQ1; + break; + default: + *tpid = 0; + return -EINVAL; + } + + return 0; +} + +/** + * ice_vc_ena_vlan_offload - enable VLAN offload based on the ethertype_setting + * @vsi: VF's VSI used to enable the VLAN offload + * @ena_offload: function used to enable the VLAN offload + * @ethertype_setting: VIRTCHNL_VLAN_ETHERTYPE_* to enable offloads for + */ +static int +ice_vc_ena_vlan_offload(struct ice_vsi *vsi, + int (*ena_offload)(struct ice_vsi *vsi, u16 tpid), + u32 ethertype_setting) +{ + u16 tpid; + int err; + + err = ice_vc_get_tpid(ethertype_setting, &tpid); + if (err) + return err; + + err = ena_offload(vsi, tpid); + if (err) + return err; + + return 0; +} + +#define ICE_L2TSEL_QRX_CONTEXT_REG_IDX 3 +#define ICE_L2TSEL_BIT_OFFSET 23 +enum ice_l2tsel { + ICE_L2TSEL_EXTRACT_FIRST_TAG_L2TAG2_2ND, + ICE_L2TSEL_EXTRACT_FIRST_TAG_L2TAG1, +}; + +/** + * ice_vsi_update_l2tsel - update l2tsel field for all Rx rings on this VSI + * @vsi: VSI used to update l2tsel on + * @l2tsel: l2tsel setting requested + * + * Use the l2tsel setting to update all of the Rx queue context bits for l2tsel. + * This will modify which descriptor field the first offloaded VLAN will be + * stripped into. 
+ */ +static void ice_vsi_update_l2tsel(struct ice_vsi *vsi, enum ice_l2tsel l2tsel) +{ + struct ice_hw *hw = &vsi->back->hw; + u32 l2tsel_bit; + int i; + + if (l2tsel == ICE_L2TSEL_EXTRACT_FIRST_TAG_L2TAG2_2ND) + l2tsel_bit = 0; + else + l2tsel_bit = BIT(ICE_L2TSEL_BIT_OFFSET); + + for (i = 0; i < vsi->alloc_rxq; i++) { + u16 pfq = vsi->rxq_map[i]; + u32 qrx_context_offset; + u32 regval; + + qrx_context_offset = + QRX_CONTEXT(ICE_L2TSEL_QRX_CONTEXT_REG_IDX, pfq); + + regval = rd32(hw, qrx_context_offset); + regval &= ~BIT(ICE_L2TSEL_BIT_OFFSET); + regval |= l2tsel_bit; + wr32(hw, qrx_context_offset, regval); + } +} + +/** + * ice_vc_ena_vlan_stripping_v2_msg + * @vf: VF the message was received from + * @msg: message received from the VF + * + * virthcnl handler for VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 + */ +static int ice_vc_ena_vlan_stripping_v2_msg(struct ice_vf *vf, u8 *msg) +{ + enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS; + struct virtchnl_vlan_supported_caps *stripping_support; + struct virtchnl_vlan_setting *strip_msg = + (struct virtchnl_vlan_setting *)msg; + u32 ethertype_setting; + struct ice_vsi *vsi; + + if (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + if (!ice_vc_isvalid_vsi_id(vf, strip_msg->vport_id)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + vsi = ice_get_vf_vsi(vf); + if (!vsi) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + stripping_support = &vf->vlan_v2_caps.offloads.stripping_support; + if (!ice_vc_valid_vlan_setting_msg(stripping_support, strip_msg)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + ethertype_setting = strip_msg->outer_ethertype_setting; + if (ethertype_setting) { + if (ice_vc_ena_vlan_offload(vsi, + vsi->outer_vlan_ops.ena_stripping, + ethertype_setting)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } else { + enum ice_l2tsel l2tsel = + ICE_L2TSEL_EXTRACT_FIRST_TAG_L2TAG2_2ND; + + /* PF tells the VF that the outer VLAN tag is always + * extracted to VIRTCHNL_VLAN_TAG_LOCATION_L2TAG2_2 and + * inner is always extracted to + * VIRTCHNL_VLAN_TAG_LOCATION_L2TAG1. This is needed to + * support outer stripping so the first tag always ends + * up in L2TAG2_2ND and the second/inner tag, if + * enabled, is extracted in L2TAG1. 
+ */ + ice_vsi_update_l2tsel(vsi, l2tsel); + } + } + + ethertype_setting = strip_msg->inner_ethertype_setting; + if (ethertype_setting && + ice_vc_ena_vlan_offload(vsi, vsi->inner_vlan_ops.ena_stripping, + ethertype_setting)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + +out: + return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2, v_ret, NULL, 0); +} + +/** + * ice_vc_dis_vlan_stripping_v2_msg + * @vf: VF the message was received from + * @msg: message received from the VF + * + * virthcnl handler for VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2 + */ +static int ice_vc_dis_vlan_stripping_v2_msg(struct ice_vf *vf, u8 *msg) +{ + enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS; + struct virtchnl_vlan_supported_caps *stripping_support; + struct virtchnl_vlan_setting *strip_msg = + (struct virtchnl_vlan_setting *)msg; + u32 ethertype_setting; + struct ice_vsi *vsi; + + if (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + if (!ice_vc_isvalid_vsi_id(vf, strip_msg->vport_id)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + vsi = ice_get_vf_vsi(vf); + if (!vsi) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + stripping_support = &vf->vlan_v2_caps.offloads.stripping_support; + if (!ice_vc_valid_vlan_setting_msg(stripping_support, strip_msg)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + ethertype_setting = strip_msg->outer_ethertype_setting; + if (ethertype_setting) { + if (vsi->outer_vlan_ops.dis_stripping(vsi)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } else { + enum ice_l2tsel l2tsel = + ICE_L2TSEL_EXTRACT_FIRST_TAG_L2TAG1; + + /* PF tells the VF that the outer VLAN tag is always + * extracted to VIRTCHNL_VLAN_TAG_LOCATION_L2TAG2_2 and + * inner is always extracted to + * VIRTCHNL_VLAN_TAG_LOCATION_L2TAG1. This is needed to + * support inner stripping while outer stripping is + * disabled so that the first and only tag is extracted + * in L2TAG1. 
+ */ + ice_vsi_update_l2tsel(vsi, l2tsel); + } + } + + ethertype_setting = strip_msg->inner_ethertype_setting; + if (ethertype_setting && vsi->inner_vlan_ops.dis_stripping(vsi)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + +out: + return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2, v_ret, NULL, 0); +} + +/** + * ice_vc_ena_vlan_insertion_v2_msg + * @vf: VF the message was received from + * @msg: message received from the VF + * + * virthcnl handler for VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2 + */ +static int ice_vc_ena_vlan_insertion_v2_msg(struct ice_vf *vf, u8 *msg) +{ + enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS; + struct virtchnl_vlan_supported_caps *insertion_support; + struct virtchnl_vlan_setting *insertion_msg = + (struct virtchnl_vlan_setting *)msg; + u32 ethertype_setting; + struct ice_vsi *vsi; + + if (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + if (!ice_vc_isvalid_vsi_id(vf, insertion_msg->vport_id)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + vsi = ice_get_vf_vsi(vf); + if (!vsi) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + insertion_support = &vf->vlan_v2_caps.offloads.insertion_support; + if (!ice_vc_valid_vlan_setting_msg(insertion_support, insertion_msg)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + ethertype_setting = insertion_msg->outer_ethertype_setting; + if (ethertype_setting && + ice_vc_ena_vlan_offload(vsi, vsi->outer_vlan_ops.ena_insertion, + ethertype_setting)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + ethertype_setting = insertion_msg->inner_ethertype_setting; + if (ethertype_setting && + ice_vc_ena_vlan_offload(vsi, vsi->inner_vlan_ops.ena_insertion, + ethertype_setting)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + +out: + return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2, v_ret, NULL, 0); +} + +/** + * ice_vc_dis_vlan_insertion_v2_msg + * @vf: VF the message was received from + * @msg: message received from the VF + * + * virthcnl handler for VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2 + */ +static int ice_vc_dis_vlan_insertion_v2_msg(struct ice_vf *vf, u8 *msg) +{ + enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS; + struct virtchnl_vlan_supported_caps *insertion_support; + struct virtchnl_vlan_setting *insertion_msg = + (struct virtchnl_vlan_setting *)msg; + u32 ethertype_setting; + struct ice_vsi *vsi; + + if (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + if (!ice_vc_isvalid_vsi_id(vf, insertion_msg->vport_id)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + vsi = ice_get_vf_vsi(vf); + if (!vsi) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + insertion_support = &vf->vlan_v2_caps.offloads.insertion_support; + if (!ice_vc_valid_vlan_setting_msg(insertion_support, insertion_msg)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + ethertype_setting = insertion_msg->outer_ethertype_setting; + if (ethertype_setting && vsi->outer_vlan_ops.dis_insertion(vsi)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + ethertype_setting = insertion_msg->inner_ethertype_setting; + if (ethertype_setting && vsi->inner_vlan_ops.dis_insertion(vsi)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + +out: + return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2, v_ret, NULL, 0); +} + static struct ice_vc_vf_ops ice_vc_vf_dflt_ops = { .get_ver_msg = 
ice_vc_get_ver_msg, .get_vf_res_msg = ice_vc_get_vf_res_msg, @@ -4517,6 +5535,13 @@ static struct ice_vc_vf_ops ice_vc_vf_dflt_ops = { .handle_rss_cfg_msg = ice_vc_handle_rss_cfg, .add_fdir_fltr_msg = ice_vc_add_fdir_fltr, .del_fdir_fltr_msg = ice_vc_del_fdir_fltr, + .get_offload_vlan_v2_caps = ice_vc_get_offload_vlan_v2_caps, + .add_vlan_v2_msg = ice_vc_add_vlan_v2_msg, + .remove_vlan_v2_msg = ice_vc_remove_vlan_v2_msg, + .ena_vlan_stripping_v2_msg = ice_vc_ena_vlan_stripping_v2_msg, + .dis_vlan_stripping_v2_msg = ice_vc_dis_vlan_stripping_v2_msg, + .ena_vlan_insertion_v2_msg = ice_vc_ena_vlan_insertion_v2_msg, + .dis_vlan_insertion_v2_msg = ice_vc_dis_vlan_insertion_v2_msg, }; void ice_vc_set_dflt_vf_ops(struct ice_vc_vf_ops *ops) @@ -4745,7 +5770,7 @@ void ice_vc_process_vf_msg(struct ice_pf *pf, struct ice_rq_event_info *event) case VIRTCHNL_OP_GET_VF_RESOURCES: err = ops->get_vf_res_msg(vf, msg); if (ice_vf_init_vlan_stripping(vf)) - dev_err(dev, "Failed to initialize VLAN stripping for VF %d\n", + dev_dbg(dev, "Failed to initialize VLAN stripping for VF %d\n", vf->vf_id); ice_vc_notify_vf_link_state(vf); break; @@ -4810,6 +5835,27 @@ void ice_vc_process_vf_msg(struct ice_pf *pf, struct ice_rq_event_info *event) case VIRTCHNL_OP_DEL_RSS_CFG: err = ops->handle_rss_cfg_msg(vf, msg, false); break; + case VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS: + err = ops->get_offload_vlan_v2_caps(vf); + break; + case VIRTCHNL_OP_ADD_VLAN_V2: + err = ops->add_vlan_v2_msg(vf, msg); + break; + case VIRTCHNL_OP_DEL_VLAN_V2: + err = ops->remove_vlan_v2_msg(vf, msg); + break; + case VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2: + err = ops->ena_vlan_stripping_v2_msg(vf, msg); + break; + case VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2: + err = ops->dis_vlan_stripping_v2_msg(vf, msg); + break; + case VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2: + err = ops->ena_vlan_insertion_v2_msg(vf, msg); + break; + case VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2: + err = ops->dis_vlan_insertion_v2_msg(vf, msg); + break; case VIRTCHNL_OP_UNKNOWN: default: dev_err(dev, "Unsupported opcode %d from VF %d\n", v_opcode, diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h index 4110847e0699..4f4961043638 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h @@ -95,6 +95,13 @@ struct ice_vc_vf_ops { int (*handle_rss_cfg_msg)(struct ice_vf *vf, u8 *msg, bool add); int (*add_fdir_fltr_msg)(struct ice_vf *vf, u8 *msg); int (*del_fdir_fltr_msg)(struct ice_vf *vf, u8 *msg); + int (*get_offload_vlan_v2_caps)(struct ice_vf *vf); + int (*add_vlan_v2_msg)(struct ice_vf *vf, u8 *msg); + int (*remove_vlan_v2_msg)(struct ice_vf *vf, u8 *msg); + int (*ena_vlan_stripping_v2_msg)(struct ice_vf *vf, u8 *msg); + int (*dis_vlan_stripping_v2_msg)(struct ice_vf *vf, u8 *msg); + int (*ena_vlan_insertion_v2_msg)(struct ice_vf *vf, u8 *msg); + int (*dis_vlan_insertion_v2_msg)(struct ice_vf *vf, u8 *msg); }; /* VF information structure */ @@ -121,6 +128,7 @@ struct ice_vf { DECLARE_BITMAP(txq_ena, ICE_MAX_RSS_QS_PER_VF); DECLARE_BITMAP(rxq_ena, ICE_MAX_RSS_QS_PER_VF); struct ice_vlan port_vlan_info; /* Port VLAN ID, QoS, and TPID */ + struct virtchnl_vlan_caps vlan_v2_caps; u8 pf_set_mac:1; /* VF MAC address set by VMM admin */ u8 trusted:1; u8 spoofchk:1; -- 2.20.1 From anthony.l.nguyen at intel.com Tue Nov 30 21:21:53 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Tue, 30 Nov 2021 13:21:53 -0800 Subject: [Intel-wired-lan] [PATCH net-next 
12/14] ice: Advertise 802.1ad VLAN filtering and offloads for PF netdev In-Reply-To: <20211130212155.27852-1-anthony.l.nguyen@intel.com> References: <20211130212155.27852-1-anthony.l.nguyen@intel.com> Message-ID: <20211130212155.27852-12-anthony.l.nguyen@intel.com> From: Brett Creeley In order for the driver to support 802.1ad VLAN filtering and offloads, it needs to advertise those VLAN features and also support modifying those VLAN features, so make the necessary changes to ice_set_netdev_features(). By default, enable CTAG insertion/stripping and CTAG filtering for both Single and Double VLAN Modes (SVM/DVM). Also, in DVM, enable STAG filtering by default. This is done by setting the feature bits in netdev->features. Also, in DVM, support toggling of STAG insertion/stripping, but don't enable them by default. This is done by setting the feature bits in netdev->hw_features. Since 802.1ad VLAN filtering and offloads are only supported in DVM, make sure they are not enabled by default and that they cannot be enabled during runtime, when the device is in SVM. Add an implementation for the ndo_fix_features() callback. This is needed since the hardware cannot support multiple VLAN ethertypes for VLAN insertion/stripping simultaneously and all supported VLAN filtering must either be enabled or disabled together. Disable inner VLAN stripping by default when DVM is enabled. If a VSI supports stripping the inner VLAN in DVM, then it will have to configure that during runtime. For example if a VF is configured in a port VLAN while DVM is enabled it will be allowed to offload inner VLANs. Signed-off-by: Brett Creeley Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/ice/ice_lib.c | 27 ++- drivers/net/ethernet/intel/ice/ice_main.c | 260 ++++++++++++++++++---- 2 files changed, 238 insertions(+), 49 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index 36507f0dc04e..de37928c2870 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -796,11 +796,12 @@ static void ice_vsi_set_rss_params(struct ice_vsi *vsi) /** * ice_set_dflt_vsi_ctx - Set default VSI context before adding a VSI + * @hw: HW structure used to determine the VLAN mode of the device * @ctxt: the VSI context being set * * This initializes a default VSI context for all sections except the Queues. */ -static void ice_set_dflt_vsi_ctx(struct ice_vsi_ctx *ctxt) +static void ice_set_dflt_vsi_ctx(struct ice_hw *hw, struct ice_vsi_ctx *ctxt) { u32 table = 0; @@ -811,13 +812,27 @@ static void ice_set_dflt_vsi_ctx(struct ice_vsi_ctx *ctxt) ctxt->info.sw_flags = ICE_AQ_VSI_SW_FLAG_SRC_PRUNE; /* Traffic from VSI can be sent to LAN */ ctxt->info.sw_flags2 = ICE_AQ_VSI_SW_FLAG_LAN_ENA; - /* By default bits 3 and 4 in inner_vlan_flags are 0's which results in legacy - * behavior (show VLAN, DEI, and UP) in descriptor. Also, allow all - * packets untagged/tagged. - */ + /* allow all untagged/tagged packets by default on Tx */ ctxt->info.inner_vlan_flags = ((ICE_AQ_VSI_INNER_VLAN_TX_MODE_ALL & ICE_AQ_VSI_INNER_VLAN_TX_MODE_M) >> ICE_AQ_VSI_INNER_VLAN_TX_MODE_S); + /* SVM - by default bits 3 and 4 in inner_vlan_flags are 0's which + * results in legacy behavior (show VLAN, DEI, and UP) in descriptor. 
+ * + * DVM - leave inner VLAN in packet by default + */ + if (ice_is_dvm_ena(hw)) { + ctxt->info.inner_vlan_flags |= + ICE_AQ_VSI_INNER_VLAN_EMODE_NOTHING; + ctxt->info.outer_vlan_flags = + (ICE_AQ_VSI_OUTER_VLAN_TX_MODE_ALL << + ICE_AQ_VSI_OUTER_VLAN_TX_MODE_S) & + ICE_AQ_VSI_OUTER_VLAN_TX_MODE_M; + ctxt->info.outer_vlan_flags |= + (ICE_AQ_VSI_OUTER_TAG_VLAN_8100 << + ICE_AQ_VSI_OUTER_TAG_TYPE_S) & + ICE_AQ_VSI_OUTER_TAG_TYPE_M; + } /* Have 1:1 UP mapping for both ingress/egress tables */ table |= ICE_UP_TABLE_TRANSLATE(0, 0); table |= ICE_UP_TABLE_TRANSLATE(1, 1); @@ -1094,7 +1109,7 @@ static int ice_vsi_init(struct ice_vsi *vsi, bool init_vsi) ~ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA; } - ice_set_dflt_vsi_ctx(ctxt); + ice_set_dflt_vsi_ctx(hw, ctxt); if (test_bit(ICE_FLAG_FD_ENA, pf->flags)) ice_set_fd_vsi_ctx(ctxt, vsi); /* if the switch is in VEB mode, allow VSI loopback */ diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c index 563b597b0a85..851dbd70d809 100644 --- a/drivers/net/ethernet/intel/ice/ice_main.c +++ b/drivers/net/ethernet/intel/ice/ice_main.c @@ -416,7 +416,8 @@ static int ice_vsi_sync_fltr(struct ice_vsi *vsi) IFF_PROMISC; goto out_promisc; } - if (vsi->num_vlan > 1) + if (vsi->current_netdev_flags & + NETIF_F_HW_VLAN_CTAG_FILTER) vlan_ops->ena_rx_filtering(vsi); } } @@ -3240,6 +3241,7 @@ static void ice_set_ops(struct net_device *netdev) static void ice_set_netdev_features(struct net_device *netdev) { struct ice_pf *pf = ice_netdev_to_pf(netdev); + bool is_dvm_ena = ice_is_dvm_ena(&pf->hw); netdev_features_t csumo_features; netdev_features_t vlano_features; netdev_features_t dflt_features; @@ -3266,6 +3268,10 @@ static void ice_set_netdev_features(struct net_device *netdev) NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_CTAG_RX; + /* Enable CTAG/STAG filtering by default in Double VLAN Mode (DVM) */ + if (is_dvm_ena) + vlano_features |= NETIF_F_HW_VLAN_STAG_FILTER; + tso_features = NETIF_F_TSO | NETIF_F_TSO_ECN | NETIF_F_TSO6 | @@ -3297,6 +3303,15 @@ static void ice_set_netdev_features(struct net_device *netdev) tso_features; netdev->vlan_features |= dflt_features | csumo_features | tso_features; + + /* advertise support but don't enable by default since only one type of + * VLAN offload can be enabled at a time (i.e. CTAG or STAG). When one + * type turns on the other has to be turned off. This is enforced by the + * ice_fix_features() ndo callback. 
+ */ + if (is_dvm_ena) + netdev->hw_features |= NETIF_F_HW_VLAN_STAG_RX | + NETIF_F_HW_VLAN_STAG_TX; } /** @@ -3432,13 +3447,6 @@ ice_vlan_rx_add_vid(struct net_device *netdev, __be16 proto, u16 vid) vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); - /* Enable VLAN pruning when a VLAN other than 0 is added */ - if (!ice_vsi_is_vlan_pruning_ena(vsi)) { - ret = vlan_ops->ena_rx_filtering(vsi); - if (ret) - return ret; - } - /* Add a switch rule for this VLAN ID so its corresponding VLAN tagged * packets aren't pruned by the device's internal switch on Rx */ @@ -3481,12 +3489,8 @@ ice_vlan_rx_kill_vid(struct net_device *netdev, __be16 proto, u16 vid) if (ret) return ret; - /* Disable pruning when VLAN 0 is the only VLAN rule */ - if (vsi->num_vlan == 1 && ice_vsi_is_vlan_pruning_ena(vsi)) - vlan_ops->dis_rx_filtering(vsi); - set_bit(ICE_VSI_VLAN_FLTR_CHANGED, vsi->state); - return ret; + return 0; } /** @@ -5596,6 +5600,194 @@ ice_fdb_del(struct ndmsg *ndm, __always_unused struct nlattr *tb[], return err; } +#define NETIF_VLAN_OFFLOAD_FEATURES (NETIF_F_HW_VLAN_CTAG_RX | \ + NETIF_F_HW_VLAN_CTAG_TX | \ + NETIF_F_HW_VLAN_STAG_RX | \ + NETIF_F_HW_VLAN_STAG_TX) + +#define NETIF_VLAN_FILTERING_FEATURES (NETIF_F_HW_VLAN_CTAG_FILTER | \ + NETIF_F_HW_VLAN_STAG_FILTER) + +/** + * ice_fix_features - fix the netdev features flags based on device limitations + * @netdev: ptr to the netdev that flags are being fixed on + * @features: features that need to be checked and possibly fixed + * + * Make sure any fixups are made to features in this callback. This enables the + * driver to not have to check unsupported configurations throughout the driver + * because that's the responsiblity of this callback. + * + * Single VLAN Mode (SVM) Supported Features: + * NETIF_F_HW_VLAN_CTAG_FILTER + * NETIF_F_HW_VLAN_CTAG_RX + * NETIF_F_HW_VLAN_CTAG_TX + * + * Double VLAN Mode (DVM) Supported Features: + * NETIF_F_HW_VLAN_CTAG_FILTER + * NETIF_F_HW_VLAN_CTAG_RX + * NETIF_F_HW_VLAN_CTAG_TX + * + * NETIF_F_HW_VLAN_STAG_FILTER + * NETIF_HW_VLAN_STAG_RX + * NETIF_HW_VLAN_STAG_TX + * + * Features that need fixing: + * Cannot simultaneously enable CTAG and STAG stripping and/or insertion. + * These are mutually exlusive as the VSI context cannot support multiple + * VLAN ethertypes simultaneously for stripping and/or insertion. If this + * is not done, then default to clearing the requested STAG offload + * settings. + * + * All supported filtering has to be enabled or disabled together. For + * example, in DVM, CTAG and STAG filtering have to be enabled and disabled + * together. If this is not done, then default to VLAN filtering disabled. + * These are mutually exclusive as there is currently no way to + * enable/disable VLAN filtering based on VLAN ethertype when using VLAN + * prune rules. 
+ */ +static netdev_features_t +ice_fix_features(struct net_device *netdev, netdev_features_t features) +{ + struct ice_netdev_priv *np = netdev_priv(netdev); + netdev_features_t supported_vlan_filtering; + netdev_features_t requested_vlan_filtering; + struct ice_vsi *vsi = np->vsi; + + requested_vlan_filtering = features & NETIF_VLAN_FILTERING_FEATURES; + + /* make sure supported_vlan_filtering works for both SVM and DVM */ + supported_vlan_filtering = NETIF_F_HW_VLAN_CTAG_FILTER; + if (ice_is_dvm_ena(&vsi->back->hw)) + supported_vlan_filtering |= NETIF_F_HW_VLAN_STAG_FILTER; + + if (requested_vlan_filtering && + requested_vlan_filtering != supported_vlan_filtering) { + if (requested_vlan_filtering & NETIF_F_HW_VLAN_CTAG_FILTER) { + netdev_warn(netdev, "cannot support requested VLAN filtering settings, enabling all supported VLAN filtering settings\n"); + features |= supported_vlan_filtering; + } else { + netdev_warn(netdev, "cannot support requested VLAN filtering settings, clearing all supported VLAN filtering settings\n"); + features &= ~supported_vlan_filtering; + } + } + + if ((features & (NETIF_F_HW_VLAN_CTAG_RX | NETIF_F_HW_VLAN_CTAG_TX)) && + (features & (NETIF_F_HW_VLAN_STAG_RX | NETIF_F_HW_VLAN_STAG_TX))) { + netdev_warn(netdev, "cannot support CTAG and STAG VLAN stripping and/or insertion simultaneously since CTAG and STAG offloads are mutually exclusive, clearing STAG offload settings\n"); + features &= ~(NETIF_F_HW_VLAN_STAG_RX | + NETIF_F_HW_VLAN_STAG_TX); + } + + return features; +} + +/** + * ice_set_vlan_offload_features - set VLAN offload features for the PF VSI + * @vsi: PF's VSI + * @features: features used to determine VLAN offload settings + * + * First, determine the vlan_ethertype based on the VLAN offload bits in + * features. Then determine if stripping and insertion should be enabled or + * disabled. Finally enable or disable VLAN stripping and insertion. + */ +static int +ice_set_vlan_offload_features(struct ice_vsi *vsi, netdev_features_t features) +{ + bool enable_stripping = true, enable_insertion = true; + struct ice_vsi_vlan_ops *vlan_ops; + int strip_err = 0, insert_err = 0; + u16 vlan_ethertype = 0; + + vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); + + if (features & (NETIF_F_HW_VLAN_STAG_RX | NETIF_F_HW_VLAN_STAG_TX)) + vlan_ethertype = ETH_P_8021AD; + else if (features & (NETIF_F_HW_VLAN_CTAG_RX | NETIF_F_HW_VLAN_CTAG_TX)) + vlan_ethertype = ETH_P_8021Q; + + if (!(features & (NETIF_F_HW_VLAN_STAG_RX | NETIF_F_HW_VLAN_CTAG_RX))) + enable_stripping = false; + if (!(features & (NETIF_F_HW_VLAN_STAG_TX | NETIF_F_HW_VLAN_CTAG_TX))) + enable_insertion = false; + + if (enable_stripping) + strip_err = vlan_ops->ena_stripping(vsi, vlan_ethertype); + else + strip_err = vlan_ops->dis_stripping(vsi); + + if (enable_insertion) + insert_err = vlan_ops->ena_insertion(vsi, vlan_ethertype); + else + insert_err = vlan_ops->dis_insertion(vsi); + + if (strip_err || insert_err) + return -EIO; + + return 0; +} + +/** + * ice_set_vlan_filtering_features - set VLAN filtering features for the PF VSI + * @vsi: PF's VSI + * @features: features used to determine VLAN filtering settings + * + * Enable or disable Rx VLAN filtering based on the VLAN filtering bits in the + * features. 
+ */ +static int +ice_set_vlan_filtering_features(struct ice_vsi *vsi, netdev_features_t features) +{ + struct ice_vsi_vlan_ops *vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); + int err = 0; + + /* support Single VLAN Mode (SVM) and Double VLAN Mode (DVM) by checking + * if either bit is set + */ + if (features & + (NETIF_F_HW_VLAN_CTAG_FILTER | NETIF_F_HW_VLAN_STAG_FILTER)) + err = vlan_ops->ena_rx_filtering(vsi); + else + err = vlan_ops->dis_rx_filtering(vsi); + + return err; +} + +/** + * ice_set_vlan_features - set VLAN settings based on suggested feature set + * @netdev: ptr to the netdev being adjusted + * @features: the feature set that the stack is suggesting + * + * Only update VLAN settings if the requested_vlan_features are different than + * the current_vlan_features. + */ +static int +ice_set_vlan_features(struct net_device *netdev, netdev_features_t features) +{ + netdev_features_t current_vlan_features, requested_vlan_features; + struct ice_netdev_priv *np = netdev_priv(netdev); + struct ice_vsi *vsi = np->vsi; + int err; + + current_vlan_features = netdev->features & NETIF_VLAN_OFFLOAD_FEATURES; + requested_vlan_features = features & NETIF_VLAN_OFFLOAD_FEATURES; + if (current_vlan_features ^ requested_vlan_features) { + err = ice_set_vlan_offload_features(vsi, features); + if (err) + return err; + } + + current_vlan_features = netdev->features & + NETIF_VLAN_FILTERING_FEATURES; + requested_vlan_features = features & NETIF_VLAN_FILTERING_FEATURES; + if (current_vlan_features ^ requested_vlan_features) { + err = ice_set_vlan_filtering_features(vsi, features); + if (err) + return err; + } + + return 0; +} + /** * ice_set_features - set the netdev feature flags * @netdev: ptr to the netdev being adjusted @@ -5605,7 +5797,6 @@ static int ice_set_features(struct net_device *netdev, netdev_features_t features) { struct ice_netdev_priv *np = netdev_priv(netdev); - struct ice_vsi_vlan_ops *vlan_ops; struct ice_vsi *vsi = np->vsi; struct ice_pf *pf = vsi->back; int ret = 0; @@ -5622,8 +5813,6 @@ ice_set_features(struct net_device *netdev, netdev_features_t features) return -EBUSY; } - vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); - /* Multiple features can be changed in one call so keep features in * separate if/else statements to guarantee each feature is checked */ @@ -5633,26 +5822,9 @@ ice_set_features(struct net_device *netdev, netdev_features_t features) netdev->features & NETIF_F_RXHASH) ice_vsi_manage_rss_lut(vsi, false); - if ((features & NETIF_F_HW_VLAN_CTAG_RX) && - !(netdev->features & NETIF_F_HW_VLAN_CTAG_RX)) - ret = vlan_ops->ena_stripping(vsi, ETH_P_8021Q); - else if (!(features & NETIF_F_HW_VLAN_CTAG_RX) && - (netdev->features & NETIF_F_HW_VLAN_CTAG_RX)) - ret = vlan_ops->dis_stripping(vsi); - - if ((features & NETIF_F_HW_VLAN_CTAG_TX) && - !(netdev->features & NETIF_F_HW_VLAN_CTAG_TX)) - ret = vlan_ops->ena_insertion(vsi, ETH_P_8021Q); - else if (!(features & NETIF_F_HW_VLAN_CTAG_TX) && - (netdev->features & NETIF_F_HW_VLAN_CTAG_TX)) - ret = vlan_ops->dis_insertion(vsi); - - if ((features & NETIF_F_HW_VLAN_CTAG_FILTER) && - !(netdev->features & NETIF_F_HW_VLAN_CTAG_FILTER)) - ret = vlan_ops->ena_rx_filtering(vsi); - else if (!(features & NETIF_F_HW_VLAN_CTAG_FILTER) && - (netdev->features & NETIF_F_HW_VLAN_CTAG_FILTER)) - ret = vlan_ops->dis_rx_filtering(vsi); + ret = ice_set_vlan_features(netdev, features); + if (ret) + return ret; if ((features & NETIF_F_NTUPLE) && !(netdev->features & NETIF_F_NTUPLE)) { @@ -5676,7 +5848,7 @@ ice_set_features(struct net_device 
*netdev, netdev_features_t features)
 		else
 			clear_bit(ICE_FLAG_CLS_FLOWER, pf->flags);
 
-	return ret;
+	return 0;
 }
 
 /**
@@ -5685,14 +5857,15 @@ ice_set_features(struct net_device *netdev, netdev_features_t features)
  */
 static int ice_vsi_vlan_setup(struct ice_vsi *vsi)
 {
-	struct ice_vsi_vlan_ops *vlan_ops;
+	int err;
 
-	vlan_ops = ice_get_compat_vsi_vlan_ops(vsi);
+	err = ice_set_vlan_offload_features(vsi, vsi->netdev->features);
+	if (err)
+		return err;
 
-	if (vsi->netdev->features & NETIF_F_HW_VLAN_CTAG_RX)
-		vlan_ops->ena_stripping(vsi, ETH_P_8021Q);
-	if (vsi->netdev->features & NETIF_F_HW_VLAN_CTAG_TX)
-		vlan_ops->ena_insertion(vsi, ETH_P_8021Q);
+	err = ice_set_vlan_filtering_features(vsi, vsi->netdev->features);
+	if (err)
+		return err;
 
 	return ice_vsi_add_vlan_zero(vsi);
 }
 
@@ -8549,6 +8722,7 @@ static const struct net_device_ops ice_netdev_ops = {
 	.ndo_start_xmit = ice_start_xmit,
 	.ndo_select_queue = ice_select_queue,
 	.ndo_features_check = ice_features_check,
+	.ndo_fix_features = ice_fix_features,
 	.ndo_set_rx_mode = ice_set_rx_mode,
 	.ndo_set_mac_address = ice_set_mac_address,
 	.ndo_validate_addr = eth_validate_addr,
-- 
2.20.1

From anthony.l.nguyen at intel.com  Tue Nov 30 21:21:48 2021
From: anthony.l.nguyen at intel.com (Tony Nguyen)
Date: Tue, 30 Nov 2021 13:21:48 -0800
Subject: [Intel-wired-lan] [PATCH net-next 07/14] ice: Adjust naming for inner VLAN operations
In-Reply-To: <20211130212155.27852-1-anthony.l.nguyen@intel.com>
References: <20211130212155.27852-1-anthony.l.nguyen@intel.com>
Message-ID: <20211130212155.27852-7-anthony.l.nguyen@intel.com>

From: Brett Creeley

Current operations act on inner VLAN fields. To support double VLAN, outer
VLAN operations and functions will be implemented. Add the "inner" naming to
existing VLAN operations to distinguish them from the upcoming outer values
and functions. Some spacing adjustments are made to align values.

Note that "inner" does not refer to a tunneled VLAN, but to the second VLAN
in the packet. In SVM the driver uses the inner (i.e. single) VLAN filtering
and offloads. In Double VLAN Mode the driver uses the inner filtering and
offloads for SR-IOV VFs in port VLANs, so the guest VLAN can still be
offloaded while a port VLAN is configured.
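As a rough illustration of the terminology (a sketch for this discussion, not
code taken from the ice driver; struct and field names below are made up), the
frame layout the naming refers to is the plain double-tagged case: the "outer"
tag is simply the first tag after the MAC addresses and the "inner" tag is the
second one.

/*
 * Minimal sketch of a double-tagged (DVM) Ethernet header.
 * Names are illustrative only.
 */
#include <stdint.h>

#define TPID_8021Q	0x8100	/* C-tag, the "inner"/single VLAN */
#define TPID_8021AD	0x88A8	/* S-tag, the "outer" VLAN in DVM */

struct example_vlan_tag {
	uint16_t tpid;	/* 0x88A8 (outer) or 0x8100 (inner) */
	uint16_t tci;	/* PCP:3 | DEI:1 | VLAN ID:12 */
};

struct example_dvm_hdr {
	uint8_t dst[6];
	uint8_t src[6];
	struct example_vlan_tag outer;	/* e.g. a port VLAN applied by the PF */
	struct example_vlan_tag inner;	/* e.g. the guest VLAN offloaded for a VF */
	uint16_t ethertype;		/* EtherType of the encapsulated payload */
} __attribute__((packed));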
Signed-off-by: Brett Creeley Signed-off-by: Tony Nguyen --- .../net/ethernet/intel/ice/ice_adminq_cmd.h | 191 +++++++++--------- drivers/net/ethernet/intel/ice/ice_lib.c | 8 +- drivers/net/ethernet/intel/ice/ice_main.c | 6 +- .../net/ethernet/intel/ice/ice_vsi_vlan_lib.c | 57 +++--- .../net/ethernet/intel/ice/ice_vsi_vlan_lib.h | 10 +- .../net/ethernet/intel/ice/ice_vsi_vlan_ops.c | 10 +- 6 files changed, 140 insertions(+), 142 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h index f3afbba4a66d..b638f9e9ecd9 100644 --- a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h +++ b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h @@ -343,108 +343,113 @@ struct ice_aqc_vsi_props { #define ICE_AQ_VSI_SW_FLAG_SRC_PRUNE BIT(7) u8 sw_flags2; #define ICE_AQ_VSI_SW_FLAG_RX_PRUNE_EN_S 0 -#define ICE_AQ_VSI_SW_FLAG_RX_PRUNE_EN_M \ - (0xF << ICE_AQ_VSI_SW_FLAG_RX_PRUNE_EN_S) +#define ICE_AQ_VSI_SW_FLAG_RX_PRUNE_EN_M (0xF << ICE_AQ_VSI_SW_FLAG_RX_PRUNE_EN_S) #define ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA BIT(0) #define ICE_AQ_VSI_SW_FLAG_LAN_ENA BIT(4) u8 veb_stat_id; #define ICE_AQ_VSI_SW_VEB_STAT_ID_S 0 -#define ICE_AQ_VSI_SW_VEB_STAT_ID_M (0x1F << ICE_AQ_VSI_SW_VEB_STAT_ID_S) +#define ICE_AQ_VSI_SW_VEB_STAT_ID_M (0x1F << ICE_AQ_VSI_SW_VEB_STAT_ID_S) #define ICE_AQ_VSI_SW_VEB_STAT_ID_VALID BIT(5) /* security section */ u8 sec_flags; #define ICE_AQ_VSI_SEC_FLAG_ALLOW_DEST_OVRD BIT(0) #define ICE_AQ_VSI_SEC_FLAG_ENA_MAC_ANTI_SPOOF BIT(2) -#define ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S 4 -#define ICE_AQ_VSI_SEC_TX_PRUNE_ENA_M (0xF << ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S) +#define ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S 4 +#define ICE_AQ_VSI_SEC_TX_PRUNE_ENA_M (0xF << ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S) #define ICE_AQ_VSI_SEC_TX_VLAN_PRUNE_ENA BIT(0) u8 sec_reserved; /* VLAN section */ - __le16 pvid; /* VLANS include priority bits */ - u8 pvlan_reserved[2]; - u8 vlan_flags; -#define ICE_AQ_VSI_VLAN_MODE_S 0 -#define ICE_AQ_VSI_VLAN_MODE_M (0x3 << ICE_AQ_VSI_VLAN_MODE_S) -#define ICE_AQ_VSI_VLAN_MODE_UNTAGGED 0x1 -#define ICE_AQ_VSI_VLAN_MODE_TAGGED 0x2 -#define ICE_AQ_VSI_VLAN_MODE_ALL 0x3 -#define ICE_AQ_VSI_PVLAN_INSERT_PVID BIT(2) -#define ICE_AQ_VSI_VLAN_EMOD_S 3 -#define ICE_AQ_VSI_VLAN_EMOD_M (0x3 << ICE_AQ_VSI_VLAN_EMOD_S) -#define ICE_AQ_VSI_VLAN_EMOD_STR_BOTH (0x0 << ICE_AQ_VSI_VLAN_EMOD_S) -#define ICE_AQ_VSI_VLAN_EMOD_STR_UP (0x1 << ICE_AQ_VSI_VLAN_EMOD_S) -#define ICE_AQ_VSI_VLAN_EMOD_STR (0x2 << ICE_AQ_VSI_VLAN_EMOD_S) -#define ICE_AQ_VSI_VLAN_EMOD_NOTHING (0x3 << ICE_AQ_VSI_VLAN_EMOD_S) - u8 pvlan_reserved2[3]; + __le16 port_based_inner_vlan; /* VLANS include priority bits */ + u8 inner_vlan_reserved[2]; + u8 inner_vlan_flags; +#define ICE_AQ_VSI_INNER_VLAN_TX_MODE_S 0 +#define ICE_AQ_VSI_INNER_VLAN_TX_MODE_M (0x3 << ICE_AQ_VSI_INNER_VLAN_TX_MODE_S) +#define ICE_AQ_VSI_INNER_VLAN_TX_MODE_ACCEPTUNTAGGED 0x1 +#define ICE_AQ_VSI_INNER_VLAN_TX_MODE_ACCEPTTAGGED 0x2 +#define ICE_AQ_VSI_INNER_VLAN_TX_MODE_ALL 0x3 +#define ICE_AQ_VSI_INNER_VLAN_INSERT_PVID BIT(2) +#define ICE_AQ_VSI_INNER_VLAN_EMODE_S 3 +#define ICE_AQ_VSI_INNER_VLAN_EMODE_M (0x3 << ICE_AQ_VSI_INNER_VLAN_EMODE_S) +#define ICE_AQ_VSI_INNER_VLAN_EMODE_STR_BOTH (0x0 << ICE_AQ_VSI_INNER_VLAN_EMODE_S) +#define ICE_AQ_VSI_INNER_VLAN_EMODE_STR_UP (0x1 << ICE_AQ_VSI_INNER_VLAN_EMODE_S) +#define ICE_AQ_VSI_INNER_VLAN_EMODE_STR (0x2 << ICE_AQ_VSI_INNER_VLAN_EMODE_S) +#define ICE_AQ_VSI_INNER_VLAN_EMODE_NOTHING (0x3 << ICE_AQ_VSI_INNER_VLAN_EMODE_S) + u8 inner_vlan_reserved2[3]; /* ingress egress up 
sections */ __le32 ingress_table; /* bitmap, 3 bits per up */ -#define ICE_AQ_VSI_UP_TABLE_UP0_S 0 -#define ICE_AQ_VSI_UP_TABLE_UP0_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP0_S) -#define ICE_AQ_VSI_UP_TABLE_UP1_S 3 -#define ICE_AQ_VSI_UP_TABLE_UP1_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP1_S) -#define ICE_AQ_VSI_UP_TABLE_UP2_S 6 -#define ICE_AQ_VSI_UP_TABLE_UP2_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP2_S) -#define ICE_AQ_VSI_UP_TABLE_UP3_S 9 -#define ICE_AQ_VSI_UP_TABLE_UP3_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP3_S) -#define ICE_AQ_VSI_UP_TABLE_UP4_S 12 -#define ICE_AQ_VSI_UP_TABLE_UP4_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP4_S) -#define ICE_AQ_VSI_UP_TABLE_UP5_S 15 -#define ICE_AQ_VSI_UP_TABLE_UP5_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP5_S) -#define ICE_AQ_VSI_UP_TABLE_UP6_S 18 -#define ICE_AQ_VSI_UP_TABLE_UP6_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP6_S) -#define ICE_AQ_VSI_UP_TABLE_UP7_S 21 -#define ICE_AQ_VSI_UP_TABLE_UP7_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP7_S) +#define ICE_AQ_VSI_UP_TABLE_UP0_S 0 +#define ICE_AQ_VSI_UP_TABLE_UP0_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP0_S) +#define ICE_AQ_VSI_UP_TABLE_UP1_S 3 +#define ICE_AQ_VSI_UP_TABLE_UP1_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP1_S) +#define ICE_AQ_VSI_UP_TABLE_UP2_S 6 +#define ICE_AQ_VSI_UP_TABLE_UP2_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP2_S) +#define ICE_AQ_VSI_UP_TABLE_UP3_S 9 +#define ICE_AQ_VSI_UP_TABLE_UP3_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP3_S) +#define ICE_AQ_VSI_UP_TABLE_UP4_S 12 +#define ICE_AQ_VSI_UP_TABLE_UP4_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP4_S) +#define ICE_AQ_VSI_UP_TABLE_UP5_S 15 +#define ICE_AQ_VSI_UP_TABLE_UP5_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP5_S) +#define ICE_AQ_VSI_UP_TABLE_UP6_S 18 +#define ICE_AQ_VSI_UP_TABLE_UP6_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP6_S) +#define ICE_AQ_VSI_UP_TABLE_UP7_S 21 +#define ICE_AQ_VSI_UP_TABLE_UP7_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP7_S) __le32 egress_table; /* same defines as for ingress table */ /* outer tags section */ - __le16 outer_tag; - u8 outer_tag_flags; -#define ICE_AQ_VSI_OUTER_TAG_MODE_S 0 -#define ICE_AQ_VSI_OUTER_TAG_MODE_M (0x3 << ICE_AQ_VSI_OUTER_TAG_MODE_S) -#define ICE_AQ_VSI_OUTER_TAG_NOTHING 0x0 -#define ICE_AQ_VSI_OUTER_TAG_REMOVE 0x1 -#define ICE_AQ_VSI_OUTER_TAG_COPY 0x2 -#define ICE_AQ_VSI_OUTER_TAG_TYPE_S 2 -#define ICE_AQ_VSI_OUTER_TAG_TYPE_M (0x3 << ICE_AQ_VSI_OUTER_TAG_TYPE_S) -#define ICE_AQ_VSI_OUTER_TAG_NONE 0x0 -#define ICE_AQ_VSI_OUTER_TAG_STAG 0x1 -#define ICE_AQ_VSI_OUTER_TAG_VLAN_8100 0x2 -#define ICE_AQ_VSI_OUTER_TAG_VLAN_9100 0x3 -#define ICE_AQ_VSI_OUTER_TAG_INSERT BIT(4) -#define ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST BIT(6) - u8 outer_tag_reserved; + __le16 port_based_outer_vlan; + u8 outer_vlan_flags; +#define ICE_AQ_VSI_OUTER_VLAN_EMODE_S 0 +#define ICE_AQ_VSI_OUTER_VLAN_EMODE_M (0x3 << ICE_AQ_VSI_OUTER_VLAN_EMODE_S) +#define ICE_AQ_VSI_OUTER_VLAN_EMODE_SHOW_BOTH 0x0 +#define ICE_AQ_VSI_OUTER_VLAN_EMODE_SHOW_UP 0x1 +#define ICE_AQ_VSI_OUTER_VLAN_EMODE_SHOW 0x2 +#define ICE_AQ_VSI_OUTER_VLAN_EMODE_NOTHING 0x3 +#define ICE_AQ_VSI_OUTER_TAG_TYPE_S 2 +#define ICE_AQ_VSI_OUTER_TAG_TYPE_M (0x3 << ICE_AQ_VSI_OUTER_TAG_TYPE_S) +#define ICE_AQ_VSI_OUTER_TAG_NONE 0x0 +#define ICE_AQ_VSI_OUTER_TAG_STAG 0x1 +#define ICE_AQ_VSI_OUTER_TAG_VLAN_8100 0x2 +#define ICE_AQ_VSI_OUTER_TAG_VLAN_9100 0x3 +#define ICE_AQ_VSI_OUTER_VLAN_PORT_BASED_INSERT BIT(4) +#define ICE_AQ_VSI_OUTER_VLAN_TX_MODE_S 5 +#define ICE_AQ_VSI_OUTER_VLAN_TX_MODE_M (0x3 << ICE_AQ_VSI_OUTER_VLAN_TX_MODE_S) +#define ICE_AQ_VSI_OUTER_VLAN_TX_MODE_ACCEPTUNTAGGED 0x1 +#define ICE_AQ_VSI_OUTER_VLAN_TX_MODE_ACCEPTTAGGED 0x2 +#define ICE_AQ_VSI_OUTER_VLAN_TX_MODE_ALL 0x3 +#define 
ICE_AQ_VSI_OUTER_VLAN_BLOCK_TX_DESC BIT(7) + u8 outer_vlan_reserved; /* queue mapping section */ __le16 mapping_flags; -#define ICE_AQ_VSI_Q_MAP_CONTIG 0x0 -#define ICE_AQ_VSI_Q_MAP_NONCONTIG BIT(0) +#define ICE_AQ_VSI_Q_MAP_CONTIG 0x0 +#define ICE_AQ_VSI_Q_MAP_NONCONTIG BIT(0) __le16 q_mapping[16]; -#define ICE_AQ_VSI_Q_S 0 -#define ICE_AQ_VSI_Q_M (0x7FF << ICE_AQ_VSI_Q_S) +#define ICE_AQ_VSI_Q_S 0 +#define ICE_AQ_VSI_Q_M (0x7FF << ICE_AQ_VSI_Q_S) __le16 tc_mapping[8]; -#define ICE_AQ_VSI_TC_Q_OFFSET_S 0 -#define ICE_AQ_VSI_TC_Q_OFFSET_M (0x7FF << ICE_AQ_VSI_TC_Q_OFFSET_S) -#define ICE_AQ_VSI_TC_Q_NUM_S 11 -#define ICE_AQ_VSI_TC_Q_NUM_M (0xF << ICE_AQ_VSI_TC_Q_NUM_S) +#define ICE_AQ_VSI_TC_Q_OFFSET_S 0 +#define ICE_AQ_VSI_TC_Q_OFFSET_M (0x7FF << ICE_AQ_VSI_TC_Q_OFFSET_S) +#define ICE_AQ_VSI_TC_Q_NUM_S 11 +#define ICE_AQ_VSI_TC_Q_NUM_M (0xF << ICE_AQ_VSI_TC_Q_NUM_S) /* queueing option section */ u8 q_opt_rss; -#define ICE_AQ_VSI_Q_OPT_RSS_LUT_S 0 -#define ICE_AQ_VSI_Q_OPT_RSS_LUT_M (0x3 << ICE_AQ_VSI_Q_OPT_RSS_LUT_S) -#define ICE_AQ_VSI_Q_OPT_RSS_LUT_VSI 0x0 -#define ICE_AQ_VSI_Q_OPT_RSS_LUT_PF 0x2 -#define ICE_AQ_VSI_Q_OPT_RSS_LUT_GBL 0x3 -#define ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_S 2 -#define ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_M (0xF << ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_S) -#define ICE_AQ_VSI_Q_OPT_RSS_HASH_S 6 -#define ICE_AQ_VSI_Q_OPT_RSS_HASH_M (0x3 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S) -#define ICE_AQ_VSI_Q_OPT_RSS_TPLZ (0x0 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S) -#define ICE_AQ_VSI_Q_OPT_RSS_SYM_TPLZ (0x1 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S) -#define ICE_AQ_VSI_Q_OPT_RSS_XOR (0x2 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S) -#define ICE_AQ_VSI_Q_OPT_RSS_JHASH (0x3 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S) +#define ICE_AQ_VSI_Q_OPT_RSS_LUT_S 0 +#define ICE_AQ_VSI_Q_OPT_RSS_LUT_M (0x3 << ICE_AQ_VSI_Q_OPT_RSS_LUT_S) +#define ICE_AQ_VSI_Q_OPT_RSS_LUT_VSI 0x0 +#define ICE_AQ_VSI_Q_OPT_RSS_LUT_PF 0x2 +#define ICE_AQ_VSI_Q_OPT_RSS_LUT_GBL 0x3 +#define ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_S 2 +#define ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_M (0xF << ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_S) +#define ICE_AQ_VSI_Q_OPT_RSS_HASH_S 6 +#define ICE_AQ_VSI_Q_OPT_RSS_HASH_M (0x3 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S) +#define ICE_AQ_VSI_Q_OPT_RSS_TPLZ (0x0 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S) +#define ICE_AQ_VSI_Q_OPT_RSS_SYM_TPLZ (0x1 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S) +#define ICE_AQ_VSI_Q_OPT_RSS_XOR (0x2 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S) +#define ICE_AQ_VSI_Q_OPT_RSS_JHASH (0x3 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S) u8 q_opt_tc; -#define ICE_AQ_VSI_Q_OPT_TC_OVR_S 0 -#define ICE_AQ_VSI_Q_OPT_TC_OVR_M (0x1F << ICE_AQ_VSI_Q_OPT_TC_OVR_S) -#define ICE_AQ_VSI_Q_OPT_PROF_TC_OVR BIT(7) +#define ICE_AQ_VSI_Q_OPT_TC_OVR_S 0 +#define ICE_AQ_VSI_Q_OPT_TC_OVR_M (0x1F << ICE_AQ_VSI_Q_OPT_TC_OVR_S) +#define ICE_AQ_VSI_Q_OPT_PROF_TC_OVR BIT(7) u8 q_opt_flags; -#define ICE_AQ_VSI_Q_OPT_PE_FLTR_EN BIT(0) +#define ICE_AQ_VSI_Q_OPT_PE_FLTR_EN BIT(0) u8 q_opt_reserved[3]; /* outer up section */ __le32 outer_up_table; /* same structure and defines as ingress tbl */ @@ -452,27 +457,27 @@ struct ice_aqc_vsi_props { __le16 sect_10_reserved; /* flow director section */ __le16 fd_options; -#define ICE_AQ_VSI_FD_ENABLE BIT(0) -#define ICE_AQ_VSI_FD_TX_AUTO_ENABLE BIT(1) -#define ICE_AQ_VSI_FD_PROG_ENABLE BIT(3) +#define ICE_AQ_VSI_FD_ENABLE BIT(0) +#define ICE_AQ_VSI_FD_TX_AUTO_ENABLE BIT(1) +#define ICE_AQ_VSI_FD_PROG_ENABLE BIT(3) __le16 max_fd_fltr_dedicated; __le16 max_fd_fltr_shared; __le16 fd_def_q; -#define ICE_AQ_VSI_FD_DEF_Q_S 0 -#define ICE_AQ_VSI_FD_DEF_Q_M (0x7FF << ICE_AQ_VSI_FD_DEF_Q_S) -#define 
ICE_AQ_VSI_FD_DEF_GRP_S 12 -#define ICE_AQ_VSI_FD_DEF_GRP_M (0x7 << ICE_AQ_VSI_FD_DEF_GRP_S) +#define ICE_AQ_VSI_FD_DEF_Q_S 0 +#define ICE_AQ_VSI_FD_DEF_Q_M (0x7FF << ICE_AQ_VSI_FD_DEF_Q_S) +#define ICE_AQ_VSI_FD_DEF_GRP_S 12 +#define ICE_AQ_VSI_FD_DEF_GRP_M (0x7 << ICE_AQ_VSI_FD_DEF_GRP_S) __le16 fd_report_opt; -#define ICE_AQ_VSI_FD_REPORT_Q_S 0 -#define ICE_AQ_VSI_FD_REPORT_Q_M (0x7FF << ICE_AQ_VSI_FD_REPORT_Q_S) -#define ICE_AQ_VSI_FD_DEF_PRIORITY_S 12 -#define ICE_AQ_VSI_FD_DEF_PRIORITY_M (0x7 << ICE_AQ_VSI_FD_DEF_PRIORITY_S) -#define ICE_AQ_VSI_FD_DEF_DROP BIT(15) +#define ICE_AQ_VSI_FD_REPORT_Q_S 0 +#define ICE_AQ_VSI_FD_REPORT_Q_M (0x7FF << ICE_AQ_VSI_FD_REPORT_Q_S) +#define ICE_AQ_VSI_FD_DEF_PRIORITY_S 12 +#define ICE_AQ_VSI_FD_DEF_PRIORITY_M (0x7 << ICE_AQ_VSI_FD_DEF_PRIORITY_S) +#define ICE_AQ_VSI_FD_DEF_DROP BIT(15) /* PASID section */ __le32 pasid_id; -#define ICE_AQ_VSI_PASID_ID_S 0 -#define ICE_AQ_VSI_PASID_ID_M (0xFFFFF << ICE_AQ_VSI_PASID_ID_S) -#define ICE_AQ_VSI_PASID_ID_VALID BIT(31) +#define ICE_AQ_VSI_PASID_ID_S 0 +#define ICE_AQ_VSI_PASID_ID_M (0xFFFFF << ICE_AQ_VSI_PASID_ID_S) +#define ICE_AQ_VSI_PASID_ID_VALID BIT(31) u8 reserved[24]; }; diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index 0fff5ec897c9..c8991711b754 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -810,13 +810,13 @@ static void ice_set_dflt_vsi_ctx(struct ice_vsi_ctx *ctxt) ctxt->info.sw_flags = ICE_AQ_VSI_SW_FLAG_SRC_PRUNE; /* Traffic from VSI can be sent to LAN */ ctxt->info.sw_flags2 = ICE_AQ_VSI_SW_FLAG_LAN_ENA; - /* By default bits 3 and 4 in vlan_flags are 0's which results in legacy + /* By default bits 3 and 4 in inner_vlan_flags are 0's which results in legacy * behavior (show VLAN, DEI, and UP) in descriptor. Also, allow all * packets untagged/tagged. 
*/ - ctxt->info.vlan_flags = ((ICE_AQ_VSI_VLAN_MODE_ALL & - ICE_AQ_VSI_VLAN_MODE_M) >> - ICE_AQ_VSI_VLAN_MODE_S); + ctxt->info.inner_vlan_flags = ((ICE_AQ_VSI_INNER_VLAN_TX_MODE_ALL & + ICE_AQ_VSI_INNER_VLAN_TX_MODE_M) >> + ICE_AQ_VSI_INNER_VLAN_TX_MODE_S); /* Have 1:1 UP mapping for both ingress/egress tables */ table |= ICE_UP_TABLE_TRANSLATE(0, 0); table |= ICE_UP_TABLE_TRANSLATE(1, 1); diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c index 8a0684c0ebd0..6843b8e87441 100644 --- a/drivers/net/ethernet/intel/ice/ice_main.c +++ b/drivers/net/ethernet/intel/ice/ice_main.c @@ -4071,8 +4071,8 @@ static void ice_set_safe_mode_vlan_cfg(struct ice_pf *pf) ctxt->info.sw_flags2 &= ~ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA; /* allow all VLANs on Tx and don't strip on Rx */ - ctxt->info.vlan_flags = ICE_AQ_VSI_VLAN_MODE_ALL | - ICE_AQ_VSI_VLAN_EMOD_NOTHING; + ctxt->info.inner_vlan_flags = ICE_AQ_VSI_INNER_VLAN_TX_MODE_ALL | + ICE_AQ_VSI_INNER_VLAN_EMODE_NOTHING; status = ice_update_vsi(hw, vsi->idx, ctxt, NULL); if (status) { @@ -4081,7 +4081,7 @@ static void ice_set_safe_mode_vlan_cfg(struct ice_pf *pf) } else { vsi->info.sec_flags = ctxt->info.sec_flags; vsi->info.sw_flags2 = ctxt->info.sw_flags2; - vsi->info.vlan_flags = ctxt->info.vlan_flags; + vsi->info.inner_vlan_flags = ctxt->info.inner_vlan_flags; } kfree(ctxt); diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c index 6b7feab0b2a1..0b130505b68a 100644 --- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c @@ -100,14 +100,14 @@ static int ice_vsi_manage_vlan_insertion(struct ice_vsi *vsi) return -ENOMEM; /* Here we are configuring the VSI to let the driver add VLAN tags by - * setting vlan_flags to ICE_AQ_VSI_VLAN_MODE_ALL. The actual VLAN tag + * setting inner_vlan_flags to ICE_AQ_VSI_INNER_VLAN_TX_MODE_ALL. The actual VLAN tag * insertion happens in the Tx hot path, in ice_tx_map. */ - ctxt->info.vlan_flags = ICE_AQ_VSI_VLAN_MODE_ALL; + ctxt->info.inner_vlan_flags = ICE_AQ_VSI_INNER_VLAN_TX_MODE_ALL; /* Preserve existing VLAN strip setting */ - ctxt->info.vlan_flags |= (vsi->info.vlan_flags & - ICE_AQ_VSI_VLAN_EMOD_M); + ctxt->info.inner_vlan_flags |= (vsi->info.inner_vlan_flags & + ICE_AQ_VSI_INNER_VLAN_EMODE_M); ctxt->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_VLAN_VALID); @@ -118,7 +118,7 @@ static int ice_vsi_manage_vlan_insertion(struct ice_vsi *vsi) goto out; } - vsi->info.vlan_flags = ctxt->info.vlan_flags; + vsi->info.inner_vlan_flags = ctxt->info.inner_vlan_flags; out: kfree(ctxt); return err; @@ -138,7 +138,7 @@ static int ice_vsi_manage_vlan_stripping(struct ice_vsi *vsi, bool ena) /* do not allow modifying VLAN stripping when a port VLAN is configured * on this VSI */ - if (vsi->info.pvid) + if (vsi->info.port_based_inner_vlan) return 0; ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL); @@ -151,13 +151,13 @@ static int ice_vsi_manage_vlan_stripping(struct ice_vsi *vsi, bool ena) */ if (ena) /* Strip VLAN tag from Rx packet and put it in the desc */ - ctxt->info.vlan_flags = ICE_AQ_VSI_VLAN_EMOD_STR_BOTH; + ctxt->info.inner_vlan_flags = ICE_AQ_VSI_INNER_VLAN_EMODE_STR_BOTH; else /* Disable stripping. 
Leave tag in packet */ - ctxt->info.vlan_flags = ICE_AQ_VSI_VLAN_EMOD_NOTHING; + ctxt->info.inner_vlan_flags = ICE_AQ_VSI_INNER_VLAN_EMODE_NOTHING; /* Allow all packets untagged/tagged */ - ctxt->info.vlan_flags |= ICE_AQ_VSI_VLAN_MODE_ALL; + ctxt->info.inner_vlan_flags |= ICE_AQ_VSI_INNER_VLAN_TX_MODE_ALL; ctxt->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_VLAN_VALID); @@ -168,13 +168,13 @@ static int ice_vsi_manage_vlan_stripping(struct ice_vsi *vsi, bool ena) goto out; } - vsi->info.vlan_flags = ctxt->info.vlan_flags; + vsi->info.inner_vlan_flags = ctxt->info.inner_vlan_flags; out: kfree(ctxt); return err; } -int ice_vsi_ena_stripping(struct ice_vsi *vsi, const u16 tpid) +int ice_vsi_ena_inner_stripping(struct ice_vsi *vsi, const u16 tpid) { if (tpid != ETH_P_8021Q) { print_invalid_tpid(vsi, tpid); @@ -184,12 +184,12 @@ int ice_vsi_ena_stripping(struct ice_vsi *vsi, const u16 tpid) return ice_vsi_manage_vlan_stripping(vsi, true); } -int ice_vsi_dis_stripping(struct ice_vsi *vsi) +int ice_vsi_dis_inner_stripping(struct ice_vsi *vsi) { return ice_vsi_manage_vlan_stripping(vsi, false); } -int ice_vsi_ena_insertion(struct ice_vsi *vsi, const u16 tpid) +int ice_vsi_ena_inner_insertion(struct ice_vsi *vsi, const u16 tpid) { if (tpid != ETH_P_8021Q) { print_invalid_tpid(vsi, tpid); @@ -199,18 +199,17 @@ int ice_vsi_ena_insertion(struct ice_vsi *vsi, const u16 tpid) return ice_vsi_manage_vlan_insertion(vsi); } -int ice_vsi_dis_insertion(struct ice_vsi *vsi) +int ice_vsi_dis_inner_insertion(struct ice_vsi *vsi) { return ice_vsi_manage_vlan_insertion(vsi); } /** - * ice_vsi_manage_pvid - Enable or disable port VLAN for VSI + * __ice_vsi_set_inner_port_vlan - set port VLAN VSI context settings to enable a port VLAN * @vsi: the VSI to update * @pvid_info: VLAN ID and QoS used to set the PVID VSI context field - * @enable: true for enable PVID false for disable */ -static int ice_vsi_manage_pvid(struct ice_vsi *vsi, u16 pvid_info, bool enable) +static int __ice_vsi_set_inner_port_vlan(struct ice_vsi *vsi, u16 pvid_info) { struct ice_hw *hw = &vsi->back->hw; struct ice_aqc_vsi_props *info; @@ -223,18 +222,12 @@ static int ice_vsi_manage_pvid(struct ice_vsi *vsi, u16 pvid_info, bool enable) ctxt->info = vsi->info; info = &ctxt->info; - if (enable) { - info->vlan_flags = ICE_AQ_VSI_VLAN_MODE_UNTAGGED | - ICE_AQ_VSI_PVLAN_INSERT_PVID | - ICE_AQ_VSI_VLAN_EMOD_STR; - info->sw_flags2 |= ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA; - } else { - info->vlan_flags = ICE_AQ_VSI_VLAN_EMOD_NOTHING | - ICE_AQ_VSI_VLAN_MODE_ALL; - info->sw_flags2 &= ~ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA; - } + info->inner_vlan_flags = ICE_AQ_VSI_INNER_VLAN_TX_MODE_ACCEPTUNTAGGED | + ICE_AQ_VSI_INNER_VLAN_INSERT_PVID | + ICE_AQ_VSI_INNER_VLAN_EMODE_STR; + info->sw_flags2 |= ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA; - info->pvid = cpu_to_le16(pvid_info); + info->port_based_inner_vlan = cpu_to_le16(pvid_info); info->valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_VLAN_VALID | ICE_AQ_VSI_PROP_SW_VALID); @@ -245,15 +238,15 @@ static int ice_vsi_manage_pvid(struct ice_vsi *vsi, u16 pvid_info, bool enable) goto out; } - vsi->info.vlan_flags = info->vlan_flags; + vsi->info.inner_vlan_flags = info->inner_vlan_flags; vsi->info.sw_flags2 = info->sw_flags2; - vsi->info.pvid = info->pvid; + vsi->info.port_based_inner_vlan = info->port_based_inner_vlan; out: kfree(ctxt); return ret; } -int ice_vsi_set_port_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) +int ice_vsi_set_inner_port_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) { u16 port_vlan_info; @@ 
-265,7 +258,7 @@ int ice_vsi_set_port_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) port_vlan_info = vlan->vid | (vlan->prio << VLAN_PRIO_SHIFT); - return ice_vsi_manage_pvid(vsi, port_vlan_info, true); + return __ice_vsi_set_inner_port_vlan(vsi, port_vlan_info); } /** diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h index 1bdbf585db7d..a10671133e36 100644 --- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h @@ -12,11 +12,11 @@ struct ice_vsi; int ice_vsi_add_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan); int ice_vsi_del_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan); -int ice_vsi_ena_stripping(struct ice_vsi *vsi, u16 tpid); -int ice_vsi_dis_stripping(struct ice_vsi *vsi); -int ice_vsi_ena_insertion(struct ice_vsi *vsi, u16 tpid); -int ice_vsi_dis_insertion(struct ice_vsi *vsi); -int ice_vsi_set_port_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan); +int ice_vsi_ena_inner_stripping(struct ice_vsi *vsi, u16 tpid); +int ice_vsi_dis_inner_stripping(struct ice_vsi *vsi); +int ice_vsi_ena_inner_insertion(struct ice_vsi *vsi, u16 tpid); +int ice_vsi_dis_inner_insertion(struct ice_vsi *vsi); +int ice_vsi_set_inner_port_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan); int ice_vsi_ena_rx_vlan_filtering(struct ice_vsi *vsi); int ice_vsi_dis_rx_vlan_filtering(struct ice_vsi *vsi); diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.c b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.c index 3bab6c025856..6a6b49581c70 100644 --- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.c +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.c @@ -8,13 +8,13 @@ void ice_vsi_init_vlan_ops(struct ice_vsi *vsi) { vsi->vlan_ops.add_vlan = ice_vsi_add_vlan; vsi->vlan_ops.del_vlan = ice_vsi_del_vlan; - vsi->vlan_ops.ena_stripping = ice_vsi_ena_stripping; - vsi->vlan_ops.dis_stripping = ice_vsi_dis_stripping; - vsi->vlan_ops.ena_insertion = ice_vsi_ena_insertion; - vsi->vlan_ops.dis_insertion = ice_vsi_dis_insertion; + vsi->vlan_ops.ena_stripping = ice_vsi_ena_inner_stripping; + vsi->vlan_ops.dis_stripping = ice_vsi_dis_inner_stripping; + vsi->vlan_ops.ena_insertion = ice_vsi_ena_inner_insertion; + vsi->vlan_ops.dis_insertion = ice_vsi_dis_inner_insertion; vsi->vlan_ops.ena_rx_filtering = ice_vsi_ena_rx_vlan_filtering; vsi->vlan_ops.dis_rx_filtering = ice_vsi_dis_rx_vlan_filtering; vsi->vlan_ops.ena_tx_filtering = ice_vsi_ena_tx_vlan_filtering; vsi->vlan_ops.dis_tx_filtering = ice_vsi_dis_tx_vlan_filtering; - vsi->vlan_ops.set_port_vlan = ice_vsi_set_port_vlan; + vsi->vlan_ops.set_port_vlan = ice_vsi_set_inner_port_vlan; } -- 2.20.1 From hkallweit1 at gmail.com Tue Nov 30 21:35:27 2021 From: hkallweit1 at gmail.com (Heiner Kallweit) Date: Tue, 30 Nov 2021 22:35:27 +0100 Subject: [Intel-wired-lan] [PATCH net] igb: fix deadlock caused by taking RTNL in RPM resume path In-Reply-To: <20211130091206.488a541f@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com> References: <6bb28d2f-4884-7696-0582-c26c35534bae@gmail.com> <20211129171712.500e37cb@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com> <6edc23a1-5907-3a41-7b46-8d53c5664a56@gmail.com> <20211130091206.488a541f@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com> Message-ID: On 30.11.2021 18:12, Jakub Kicinski wrote: > On Tue, 30 Nov 2021 07:46:22 +0100 Heiner Kallweit wrote: >> On 30.11.2021 02:17, Jakub Kicinski wrote: >>> On Mon, 29 Nov 2021 22:14:06 +0100 Heiner Kallweit wrote: >>>> - rtnl_lock(); >>>> + if (!rpm) >>>> + rtnl_lock(); 
>>> >>> Is there an ASSERT_RTNL() hidden in any of the below? Can we add one? >>> Unless we're 100% confident nobody will RPM resume without rtnl held.. >>> >> >> Not sure whether igb uses RPM the same way as r8169. There the device >> is runtime-suspended (D3hot) w/o link. Once cable is plugged in the PHY >> triggers a PME, and PCI core runtime-resumes the device (MAC). >> In this case RTNL isn't held by the caller. Therefore I don't think >> it's safe to assume that all callers hold RTNL. > > No, no - I meant to leave the locking in but add ASSERT_RTNL() to catch > if rpm == true && rtnl_held() == false. > This is a valid case. Maybe it's not my day today, I still don't get how we would benefit from adding an ASSERT_RTNL(). Based on the following I think that RPM resume and device open() can't collide, because RPM resume is finished before open() starts its actual work. static int __igb_open(struct net_device *netdev, bool resuming) { ... if (!resuming) pm_runtime_get_sync(&pdev->dev); From anthony.l.nguyen at intel.com Tue Nov 30 21:42:33 2021 From: anthony.l.nguyen at intel.com (Nguyen, Anthony L) Date: Tue, 30 Nov 2021 21:42:33 +0000 Subject: [Intel-wired-lan] [PATCH net v1] i40e: Fix for failed to init adminq while VF reset In-Reply-To: <20211130073211.1114232-1-karen.sornek@intel.com> References: <20211130073211.1114232-1-karen.sornek@intel.com> Message-ID: <8f454effc188a4e01b758caa9579f933621c381e.camel@intel.com> On Tue, 2021-11-30 at 08:32 +0100, Sornek, Karen wrote: > From: Karen Sornek > > Fix for failed to init adminq: -53 while VF is resetting via MAC > address changing procedure. > Added sync module to avoid reading deadbeef value in reinit adminq > during software reset. > Without this patch it is possible to trigger VF reset procedure > during reinit adminq. This resulted in an incorrect reading of > value from the AQP registers and generated the -53 error. > If this is for net, it needs a Fixes: > Signed-off-by: Grzegorz Szczurek > Signed-off-by: Karen Sornek From anthony.l.nguyen at intel.com Tue Nov 30 23:51:34 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Tue, 30 Nov 2021 15:51:34 -0800 Subject: [Intel-wired-lan] [PATCH net-next v2 02/14] ice: Add helper function for adding VLAN 0 In-Reply-To: <20211130235146.28731-1-anthony.l.nguyen@intel.com> References: <20211130235146.28731-1-anthony.l.nguyen@intel.com> Message-ID: <20211130235146.28731-2-anthony.l.nguyen@intel.com> From: Brett Creeley There are multiple places where VLAN 0 is being added. Create a function to be called in order to minimize changes as the implementation is expanded to support double VLAN and avoid duplicated code. 
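A hedged sketch of the idea (toy code, not the driver's implementation): with
a single helper, call sites never need to know how many VLAN 0 filters a given
mode requires, so a later double VLAN mode only has to touch the helper itself.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for the real filter-programming routine. */
static int toy_add_vlan_filter(unsigned int tpid, unsigned int vid)
{
	printf("add VLAN filter: tpid 0x%04x vid %u\n", tpid, vid);
	return 0;
}

/*
 * All callers use this one entry point for the implicit VLAN 0 filter(s).
 * If a double VLAN mode later needs a second (0x88a8) VLAN 0 filter as
 * well, only this helper grows; the call sites stay unchanged.
 */
static int toy_add_vlan_zero(bool dvm_enabled)
{
	int err = toy_add_vlan_filter(0x8100, 0);

	if (!err && dvm_enabled)
		err = toy_add_vlan_filter(0x88a8, 0);
	return err;
}

int main(void)
{
	return toy_add_vlan_zero(true);
}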
Signed-off-by: Brett Creeley Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/ice/ice_eswitch.c | 4 ++-- drivers/net/ethernet/intel/ice/ice_lib.c | 11 ++++++++++- drivers/net/ethernet/intel/ice/ice_lib.h | 2 +- drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c | 2 +- 4 files changed, 14 insertions(+), 5 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch.c b/drivers/net/ethernet/intel/ice/ice_eswitch.c index a737c54c4895..291748553800 100644 --- a/drivers/net/ethernet/intel/ice/ice_eswitch.c +++ b/drivers/net/ethernet/intel/ice/ice_eswitch.c @@ -127,7 +127,7 @@ static int ice_eswitch_setup_env(struct ice_pf *pf) __dev_mc_unsync(uplink_netdev, NULL); netif_addr_unlock_bh(uplink_netdev); - if (ice_vsi_add_vlan(uplink_vsi, 0, ICE_FWD_TO_VSI)) + if (ice_vsi_add_vlan_zero(uplink_vsi)) goto err_def_rx; if (!ice_is_dflt_vsi_in_use(uplink_vsi->vsw)) { @@ -231,7 +231,7 @@ static int ice_eswitch_setup_reprs(struct ice_pf *pf) goto err; } - if (ice_vsi_add_vlan(vsi, 0, ICE_FWD_TO_VSI)) { + if (ice_vsi_add_vlan_zero(vsi)) { ice_fltr_add_mac_and_broadcast(vsi, vf->hw_lan_addr.addr, ICE_FWD_TO_VSI); diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index 2db3cd6d8907..cc135792834e 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -2621,7 +2621,7 @@ ice_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi, * so this handles those cases (i.e. adding the PF to a bridge * without the 8021q module loaded). */ - ret = ice_vsi_add_vlan(vsi, 0, ICE_FWD_TO_VSI); + ret = ice_vsi_add_vlan_zero(vsi); if (ret) goto unroll_clear_rings; @@ -4069,6 +4069,15 @@ int ice_set_link(struct ice_vsi *vsi, bool ena) return 0; } +/** + * ice_vsi_add_vlan_zero - add VLAN 0 filter(s) for this VSI + * @vsi: VSI used to add VLAN filters + */ +int ice_vsi_add_vlan_zero(struct ice_vsi *vsi) +{ + return ice_vsi_add_vlan(vsi, 0, ICE_FWD_TO_VSI); +} + /** * ice_is_feature_supported * @pf: pointer to the struct ice_pf instance diff --git a/drivers/net/ethernet/intel/ice/ice_lib.h b/drivers/net/ethernet/intel/ice/ice_lib.h index 9fdd95dd5a14..28e0f1147c82 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.h +++ b/drivers/net/ethernet/intel/ice/ice_lib.h @@ -133,7 +133,7 @@ void ice_vsi_ctx_clear_antispoof(struct ice_vsi_ctx *ctx); void ice_vsi_ctx_set_allow_override(struct ice_vsi_ctx *ctx); void ice_vsi_ctx_clear_allow_override(struct ice_vsi_ctx *ctx); - +int ice_vsi_add_vlan_zero(struct ice_vsi *vsi); bool ice_is_feature_supported(struct ice_pf *pf, enum ice_feature f); void ice_clear_feature_support(struct ice_pf *pf, enum ice_feature f); void ice_init_feature_support(struct ice_pf *pf); diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c index f947d936def3..ab03010c822d 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c @@ -1855,7 +1855,7 @@ static int ice_init_vf_vsi_res(struct ice_vf *vf) if (!vsi) return -ENOMEM; - err = ice_vsi_add_vlan(vsi, 0, ICE_FWD_TO_VSI); + err = ice_vsi_add_vlan_zero(vsi); if (err) { dev_warn(dev, "Failed to add VLAN 0 filter for VF %d\n", vf->vf_id); -- 2.20.1 From anthony.l.nguyen at intel.com Tue Nov 30 23:51:37 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Tue, 30 Nov 2021 15:51:37 -0800 Subject: [Intel-wired-lan] [PATCH net-next v2 05/14] ice: Refactor vf->port_vlan_info to use ice_vlan In-Reply-To: 
<20211130235146.28731-1-anthony.l.nguyen@intel.com> References: <20211130235146.28731-1-anthony.l.nguyen@intel.com> Message-ID: <20211130235146.28731-5-anthony.l.nguyen@intel.com> From: Brett Creeley The current vf->port_vlan_info variable is a packed u16 that contains the port VLAN ID and QoS/prio value. This is fine, but changes are incoming that allow for an 802.1ad port VLAN. Add flexibility by changing the vf->port_vlan_info member to be an ice_vlan structure. Signed-off-by: Brett Creeley Signed-off-by: Tony Nguyen --- .../net/ethernet/intel/ice/ice_virtchnl_pf.c | 76 ++++++++++--------- .../net/ethernet/intel/ice/ice_virtchnl_pf.h | 3 +- 2 files changed, 44 insertions(+), 35 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c index d580120dbb93..4971e547432c 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c @@ -751,6 +751,21 @@ static int ice_vf_rebuild_host_tx_rate_cfg(struct ice_vf *vf) return 0; } +static u16 ice_vf_get_port_vlan_id(struct ice_vf *vf) +{ + return vf->port_vlan_info.vid; +} + +static u8 ice_vf_get_port_vlan_prio(struct ice_vf *vf) +{ + return vf->port_vlan_info.prio; +} + +static bool ice_vf_is_port_vlan_ena(struct ice_vf *vf) +{ + return (ice_vf_get_port_vlan_id(vf) || ice_vf_get_port_vlan_prio(vf)); +} + /** * ice_vf_rebuild_host_vlan_cfg - add VLAN 0 filter or rebuild the Port VLAN * @vf: VF to add MAC filters for @@ -760,16 +775,12 @@ static int ice_vf_rebuild_host_tx_rate_cfg(struct ice_vf *vf) */ static int ice_vf_rebuild_host_vlan_cfg(struct ice_vf *vf) { - u8 vlan_prio = (vf->port_vlan_info & VLAN_PRIO_MASK) >> VLAN_PRIO_SHIFT; - u16 vlan_id = vf->port_vlan_info & VLAN_VID_MASK; struct device *dev = ice_pf_to_dev(vf->pf); struct ice_vsi *vsi = ice_get_vf_vsi(vf); - struct ice_vlan vlan; int err; - vlan = ICE_VLAN(vlan_id, vlan_prio); - if (vf->port_vlan_info) { - err = vsi->vlan_ops.set_port_vlan(vsi, &vlan); + if (ice_vf_is_port_vlan_ena(vf)) { + err = vsi->vlan_ops.set_port_vlan(vsi, &vf->port_vlan_info); if (err) { dev_err(dev, "failed to configure port VLAN via VSI parameters for VF %u, error %d\n", vf->vf_id, err); @@ -777,12 +788,11 @@ static int ice_vf_rebuild_host_vlan_cfg(struct ice_vf *vf) } } - /* vlan_id will either be 0 or the port VLAN number */ - err = vsi->vlan_ops.add_vlan(vsi, &vlan); + err = vsi->vlan_ops.add_vlan(vsi, &vf->port_vlan_info); if (err) { - dev_err(dev, "failed to add %s VLAN %u filter for VF %u, error %d\n", - vf->port_vlan_info ? "port" : "", vlan_id, vf->vf_id, - err); + dev_err(dev, "failed to add VLAN %u filter for VF %u during VF rebuild, error %d\n", + ice_vf_is_port_vlan_ena(vf) ? 
+ ice_vf_get_port_vlan_id(vf) : 0, vf->vf_id, err); return err; } @@ -1255,9 +1265,9 @@ static int ice_vf_set_vsi_promisc(struct ice_vf *vf, struct ice_vsi *vsi, u8 pro struct ice_hw *hw = &vsi->back->hw; int status; - if (vf->port_vlan_info) + if (ice_vf_is_port_vlan_ena(vf)) status = ice_fltr_set_vsi_promisc(hw, vsi->idx, promisc_m, - vf->port_vlan_info & VLAN_VID_MASK); + ice_vf_get_port_vlan_id(vf)); else if (vsi->num_vlan > 1) status = ice_fltr_set_vlan_vsi_promisc(hw, vsi, promisc_m); else @@ -1277,9 +1287,9 @@ static int ice_vf_clear_vsi_promisc(struct ice_vf *vf, struct ice_vsi *vsi, u8 p struct ice_hw *hw = &vsi->back->hw; int status; - if (vf->port_vlan_info) + if (ice_vf_is_port_vlan_ena(vf)) status = ice_fltr_clear_vsi_promisc(hw, vsi->idx, promisc_m, - vf->port_vlan_info & VLAN_VID_MASK); + ice_vf_get_port_vlan_id(vf)); else if (vsi->num_vlan > 1) status = ice_fltr_clear_vlan_vsi_promisc(hw, vsi, promisc_m); else @@ -1654,7 +1664,7 @@ bool ice_reset_vf(struct ice_vf *vf, bool is_vflr) */ if (test_bit(ICE_VF_STATE_UC_PROMISC, vf->vf_states) || test_bit(ICE_VF_STATE_MC_PROMISC, vf->vf_states)) { - if (vf->port_vlan_info || vsi->num_vlan) + if (ice_vf_is_port_vlan_ena(vf) || vsi->num_vlan) promisc_m = ICE_UCAST_VLAN_PROMISC_BITS; else promisc_m = ICE_UCAST_PROMISC_BITS; @@ -2277,7 +2287,7 @@ static u16 ice_vc_get_max_frame_size(struct ice_vf *vf) max_frame_size = pi->phy.link_info.max_frame_size; - if (vf->port_vlan_info) + if (ice_vf_is_port_vlan_ena(vf)) max_frame_size -= VLAN_HLEN; return max_frame_size; @@ -2326,7 +2336,7 @@ static int ice_vc_get_vf_res_msg(struct ice_vf *vf, u8 *msg) goto err; } - if (!vsi->info.pvid) + if (!ice_vf_is_port_vlan_ena(vf)) vfres->vf_cap_flags |= VIRTCHNL_VF_OFFLOAD_VLAN; if (vf->driver_caps & VIRTCHNL_VF_OFFLOAD_RSS_PF) { @@ -3050,7 +3060,7 @@ static int ice_vc_cfg_promiscuous_mode_msg(struct ice_vf *vf, u8 *msg) rm_promisc = !allmulti && !alluni; - if (vsi->num_vlan || vf->port_vlan_info) { + if (vsi->num_vlan || ice_vf_is_port_vlan_ena(vf)) { if (rm_promisc) ret = vsi->vlan_ops.ena_rx_filtering(vsi); else @@ -3086,7 +3096,7 @@ static int ice_vc_cfg_promiscuous_mode_msg(struct ice_vf *vf, u8 *msg) } else { u8 mcast_m, ucast_m; - if (vf->port_vlan_info || vsi->num_vlan > 1) { + if (ice_vf_is_port_vlan_ena(vf) || vsi->num_vlan > 1) { mcast_m = ICE_MCAST_VLAN_PROMISC_BITS; ucast_m = ICE_UCAST_VLAN_PROMISC_BITS; } else { @@ -3669,7 +3679,7 @@ static int ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg) /* add space for the port VLAN since the VF driver is not * expected to account for it in the MTU calculation */ - if (vf->port_vlan_info) + if (ice_vf_is_port_vlan_ena(vf)) vsi->max_frame += VLAN_HLEN; if (ice_vsi_cfg_single_rxq(vsi, q_idx)) { @@ -4097,7 +4107,6 @@ ice_set_vf_port_vlan(struct net_device *netdev, int vf_id, u16 vlan_id, u8 qos, struct ice_pf *pf = ice_netdev_to_pf(netdev); struct device *dev; struct ice_vf *vf; - u16 vlanprio; int ret; dev = ice_pf_to_dev(pf); @@ -4120,20 +4129,19 @@ ice_set_vf_port_vlan(struct net_device *netdev, int vf_id, u16 vlan_id, u8 qos, if (ret) return ret; - vlanprio = vlan_id | (qos << VLAN_PRIO_SHIFT); - - if (vf->port_vlan_info == vlanprio) { + if (ice_vf_get_port_vlan_prio(vf) == qos && + ice_vf_get_port_vlan_id(vf) == vlan_id) { /* duplicate request, so just return success */ - dev_dbg(dev, "Duplicate pvid %d request\n", vlanprio); + dev_dbg(dev, "Duplicate port VLAN %u, QoS %u request\n", + vlan_id, qos); return 0; } mutex_lock(&vf->cfg_lock); - vf->port_vlan_info = vlanprio; - - if (vf->port_vlan_info) 
- dev_info(dev, "Setting VLAN %d, QoS 0x%x on VF %d\n", + vf->port_vlan_info = ICE_VLAN(vlan_id, qos); + if (ice_vf_is_port_vlan_ena(vf)) + dev_info(dev, "Setting VLAN %u, QoS %u on VF %d\n", vlan_id, qos, vf_id); else dev_info(dev, "Clearing port VLAN on VF %d\n", vf_id); @@ -4219,7 +4227,7 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) goto error_param; } - if (vsi->info.pvid) { + if (ice_vf_is_port_vlan_ena(vf)) { v_ret = VIRTCHNL_STATUS_ERR_PARAM; goto error_param; } @@ -4445,7 +4453,7 @@ static int ice_vf_init_vlan_stripping(struct ice_vf *vf) return -EINVAL; /* don't modify stripping if port VLAN is configured */ - if (vsi->info.pvid) + if (ice_vf_is_port_vlan_ena(vf)) return 0; if (ice_vf_vlan_offload_ena(vf->driver_caps)) @@ -4815,8 +4823,8 @@ ice_get_vf_cfg(struct net_device *netdev, int vf_id, struct ifla_vf_info *ivi) ether_addr_copy(ivi->mac, vf->hw_lan_addr.addr); /* VF configuration for VLAN and applicable QoS */ - ivi->vlan = vf->port_vlan_info & VLAN_VID_MASK; - ivi->qos = (vf->port_vlan_info & VLAN_PRIO_MASK) >> VLAN_PRIO_SHIFT; + ivi->vlan = ice_vf_get_port_vlan_id(vf); + ivi->qos = ice_vf_get_port_vlan_prio(vf); ivi->trusted = vf->trusted; ivi->spoofchk = vf->spoofchk; diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h index 752487a1bdd6..5079a3b72698 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h @@ -5,6 +5,7 @@ #define _ICE_VIRTCHNL_PF_H_ #include "ice.h" #include "ice_virtchnl_fdir.h" +#include "ice_vsi_vlan_ops.h" /* Restrict number of MAC Addr and VLAN that non-trusted VF can programmed */ #define ICE_MAX_VLAN_PER_VF 8 @@ -119,7 +120,7 @@ struct ice_vf { struct ice_time_mac legacy_last_added_umac; DECLARE_BITMAP(txq_ena, ICE_MAX_RSS_QS_PER_VF); DECLARE_BITMAP(rxq_ena, ICE_MAX_RSS_QS_PER_VF); - u16 port_vlan_info; /* Port VLAN ID and QoS */ + struct ice_vlan port_vlan_info; /* Port VLAN ID and QoS */ u8 pf_set_mac:1; /* VF MAC address set by VMM admin */ u8 trusted:1; u8 spoofchk:1; -- 2.20.1 From anthony.l.nguyen at intel.com Tue Nov 30 23:51:38 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Tue, 30 Nov 2021 15:51:38 -0800 Subject: [Intel-wired-lan] [PATCH net-next v2 06/14] ice: Use the proto argument for VLAN ops In-Reply-To: <20211130235146.28731-1-anthony.l.nguyen@intel.com> References: <20211130235146.28731-1-anthony.l.nguyen@intel.com> Message-ID: <20211130235146.28731-6-anthony.l.nguyen@intel.com> From: Brett Creeley Currently the proto argument is unused. This is because the driver only supports 802.1Q VLAN filtering. This policy is enforced via netdev features that the driver sets up when configuring the netdev, so the proto argument won't ever be anything other than 802.1Q. However, this will allow for future iterations of the driver to seemlessly support 802.1ad filtering. Begin using the proto argument and extend the related structures to support its use. 
Signed-off-by: Brett Creeley Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/ice/ice_fltr.c | 2 + drivers/net/ethernet/intel/ice/ice_lib.c | 2 +- drivers/net/ethernet/intel/ice/ice_main.c | 22 ++++----- drivers/net/ethernet/intel/ice/ice_switch.c | 5 ++ drivers/net/ethernet/intel/ice/ice_switch.h | 2 + .../net/ethernet/intel/ice/ice_virtchnl_pf.c | 10 ++-- .../net/ethernet/intel/ice/ice_virtchnl_pf.h | 2 +- drivers/net/ethernet/intel/ice/ice_vlan.h | 3 +- .../net/ethernet/intel/ice/ice_vsi_vlan_lib.c | 48 ++++++++++++++++++- .../net/ethernet/intel/ice/ice_vsi_vlan_lib.h | 4 +- .../net/ethernet/intel/ice/ice_vsi_vlan_ops.h | 4 +- 11 files changed, 78 insertions(+), 26 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_fltr.c b/drivers/net/ethernet/intel/ice/ice_fltr.c index 8f543851e39f..67044556b5bd 100644 --- a/drivers/net/ethernet/intel/ice/ice_fltr.c +++ b/drivers/net/ethernet/intel/ice/ice_fltr.c @@ -220,6 +220,8 @@ ice_fltr_add_vlan_to_list(struct ice_vsi *vsi, struct list_head *list, info.fltr_act = ICE_FWD_TO_VSI; info.vsi_handle = vsi->idx; info.l_data.vlan.vlan_id = vlan->vid; + info.l_data.vlan.tpid = vlan->tpid; + info.l_data.vlan.tpid_valid = true; return ice_fltr_add_entry_to_list(ice_pf_to_dev(vsi->back), &info, list); diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index 55a2aef54922..0fff5ec897c9 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -3880,7 +3880,7 @@ int ice_vsi_add_vlan_zero(struct ice_vsi *vsi) { struct ice_vlan vlan; - vlan = ICE_VLAN(0, 0); + vlan = ICE_VLAN(0, 0, 0); return vsi->vlan_ops.add_vlan(vsi, &vlan); } diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c index 8669858d104c..8a0684c0ebd0 100644 --- a/drivers/net/ethernet/intel/ice/ice_main.c +++ b/drivers/net/ethernet/intel/ice/ice_main.c @@ -3410,14 +3410,13 @@ ice_lb_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi) /** * ice_vlan_rx_add_vid - Add a VLAN ID filter to HW offload * @netdev: network interface to be adjusted - * @proto: unused protocol + * @proto: VLAN TPID * @vid: VLAN ID to be added * * net_device_ops implementation for adding VLAN IDs */ static int -ice_vlan_rx_add_vid(struct net_device *netdev, __always_unused __be16 proto, - u16 vid) +ice_vlan_rx_add_vid(struct net_device *netdev, __be16 proto, u16 vid) { struct ice_netdev_priv *np = netdev_priv(netdev); struct ice_vsi *vsi = np->vsi; @@ -3438,7 +3437,7 @@ ice_vlan_rx_add_vid(struct net_device *netdev, __always_unused __be16 proto, /* Add a switch rule for this VLAN ID so its corresponding VLAN tagged * packets aren't pruned by the device's internal switch on Rx */ - vlan = ICE_VLAN(vid, 0); + vlan = ICE_VLAN(be16_to_cpu(proto), vid, 0); ret = vsi->vlan_ops.add_vlan(vsi, &vlan); if (!ret) set_bit(ICE_VSI_VLAN_FLTR_CHANGED, vsi->state); @@ -3449,14 +3448,13 @@ ice_vlan_rx_add_vid(struct net_device *netdev, __always_unused __be16 proto, /** * ice_vlan_rx_kill_vid - Remove a VLAN ID filter from HW offload * @netdev: network interface to be adjusted - * @proto: unused protocol + * @proto: VLAN TPID * @vid: VLAN ID to be removed * * net_device_ops implementation for removing VLAN IDs */ static int -ice_vlan_rx_kill_vid(struct net_device *netdev, __always_unused __be16 proto, - u16 vid) +ice_vlan_rx_kill_vid(struct net_device *netdev, __be16 proto, u16 vid) { struct ice_netdev_priv *np = netdev_priv(netdev); struct ice_vsi *vsi = np->vsi; @@ -3470,7 +3468,7 @@ 
ice_vlan_rx_kill_vid(struct net_device *netdev, __always_unused __be16 proto, /* Make sure VLAN delete is successful before updating VLAN * information */ - vlan = ICE_VLAN(vid, 0); + vlan = ICE_VLAN(be16_to_cpu(proto), vid, 0); ret = vsi->vlan_ops.del_vlan(vsi, &vlan); if (ret) return ret; @@ -5621,14 +5619,14 @@ ice_set_features(struct net_device *netdev, netdev_features_t features) if ((features & NETIF_F_HW_VLAN_CTAG_RX) && !(netdev->features & NETIF_F_HW_VLAN_CTAG_RX)) - ret = vsi->vlan_ops.ena_stripping(vsi); + ret = vsi->vlan_ops.ena_stripping(vsi, ETH_P_8021Q); else if (!(features & NETIF_F_HW_VLAN_CTAG_RX) && (netdev->features & NETIF_F_HW_VLAN_CTAG_RX)) ret = vsi->vlan_ops.dis_stripping(vsi); if ((features & NETIF_F_HW_VLAN_CTAG_TX) && !(netdev->features & NETIF_F_HW_VLAN_CTAG_TX)) - ret = vsi->vlan_ops.ena_insertion(vsi); + ret = vsi->vlan_ops.ena_insertion(vsi, ETH_P_8021Q); else if (!(features & NETIF_F_HW_VLAN_CTAG_TX) && (netdev->features & NETIF_F_HW_VLAN_CTAG_TX)) ret = vsi->vlan_ops.dis_insertion(vsi); @@ -5674,9 +5672,9 @@ static int ice_vsi_vlan_setup(struct ice_vsi *vsi) int ret = 0; if (vsi->netdev->features & NETIF_F_HW_VLAN_CTAG_RX) - ret = vsi->vlan_ops.ena_stripping(vsi); + ret = vsi->vlan_ops.ena_stripping(vsi, ETH_P_8021Q); if (vsi->netdev->features & NETIF_F_HW_VLAN_CTAG_TX) - ret = vsi->vlan_ops.ena_insertion(vsi); + ret = vsi->vlan_ops.ena_insertion(vsi, ETH_P_8021Q); return ret; } diff --git a/drivers/net/ethernet/intel/ice/ice_switch.c b/drivers/net/ethernet/intel/ice/ice_switch.c index f998fcddc789..f851a81a7240 100644 --- a/drivers/net/ethernet/intel/ice/ice_switch.c +++ b/drivers/net/ethernet/intel/ice/ice_switch.c @@ -1539,6 +1539,7 @@ ice_fill_sw_rule(struct ice_hw *hw, struct ice_fltr_info *f_info, struct ice_aqc_sw_rules_elem *s_rule, enum ice_adminq_opc opc) { u16 vlan_id = ICE_MAX_VLAN_ID + 1; + u16 vlan_tpid = ETH_P_8021Q; void *daddr = NULL; u16 eth_hdr_sz; u8 *eth_hdr; @@ -1611,6 +1612,8 @@ ice_fill_sw_rule(struct ice_hw *hw, struct ice_fltr_info *f_info, break; case ICE_SW_LKUP_VLAN: vlan_id = f_info->l_data.vlan.vlan_id; + if (f_info->l_data.vlan.tpid_valid) + vlan_tpid = f_info->l_data.vlan.tpid; if (f_info->fltr_act == ICE_FWD_TO_VSI || f_info->fltr_act == ICE_FWD_TO_VSI_LIST) { act |= ICE_SINGLE_ACT_PRUNE; @@ -1653,6 +1656,8 @@ ice_fill_sw_rule(struct ice_hw *hw, struct ice_fltr_info *f_info, if (!(vlan_id > ICE_MAX_VLAN_ID)) { off = (__force __be16 *)(eth_hdr + ICE_ETH_VLAN_TCI_OFFSET); *off = cpu_to_be16(vlan_id); + off = (__force __be16 *)(eth_hdr + ICE_ETH_ETHTYPE_OFFSET); + *off = cpu_to_be16(vlan_tpid); } /* Create the switch rule with the final dummy Ethernet header */ diff --git a/drivers/net/ethernet/intel/ice/ice_switch.h b/drivers/net/ethernet/intel/ice/ice_switch.h index 4fb1a7ae5dbb..5000cc8276cd 100644 --- a/drivers/net/ethernet/intel/ice/ice_switch.h +++ b/drivers/net/ethernet/intel/ice/ice_switch.h @@ -77,6 +77,8 @@ struct ice_fltr_info { } mac_vlan; struct { u16 vlan_id; + u16 tpid; + u8 tpid_valid; } vlan; /* Set lkup_type as ICE_SW_LKUP_ETHERTYPE * if just using ethertype as filter. 
Set lkup_type as diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c index 4971e547432c..e576cd201a48 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c @@ -4139,7 +4139,7 @@ ice_set_vf_port_vlan(struct net_device *netdev, int vf_id, u16 vlan_id, u8 qos, mutex_lock(&vf->cfg_lock); - vf->port_vlan_info = ICE_VLAN(vlan_id, qos); + vf->port_vlan_info = ICE_VLAN(ETH_P_8021Q, vlan_id, qos); if (ice_vf_is_port_vlan_ena(vf)) dev_info(dev, "Setting VLAN %u, QoS %u on VF %d\n", vlan_id, qos, vf_id); @@ -4260,7 +4260,7 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) if (!vid) continue; - vlan = ICE_VLAN(vid, 0); + vlan = ICE_VLAN(ETH_P_8021Q, vid, 0); status = vsi->vlan_ops.add_vlan(vsi, &vlan); if (status) { v_ret = VIRTCHNL_STATUS_ERR_PARAM; @@ -4313,7 +4313,7 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) if (!vid) continue; - vlan = ICE_VLAN(vid, 0); + vlan = ICE_VLAN(ETH_P_8021Q, vid, 0); status = vsi->vlan_ops.del_vlan(vsi, &vlan); if (status) { v_ret = VIRTCHNL_STATUS_ERR_PARAM; @@ -4392,7 +4392,7 @@ static int ice_vc_ena_vlan_stripping(struct ice_vf *vf) } vsi = ice_get_vf_vsi(vf); - if (vsi->vlan_ops.ena_stripping(vsi)) + if (vsi->vlan_ops.ena_stripping(vsi, ETH_P_8021Q)) v_ret = VIRTCHNL_STATUS_ERR_PARAM; error_param: @@ -4457,7 +4457,7 @@ static int ice_vf_init_vlan_stripping(struct ice_vf *vf) return 0; if (ice_vf_vlan_offload_ena(vf->driver_caps)) - return vsi->vlan_ops.ena_stripping(vsi); + return vsi->vlan_ops.ena_stripping(vsi, ETH_P_8021Q); else return vsi->vlan_ops.dis_stripping(vsi); } diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h index 5079a3b72698..b06ca1f97833 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h @@ -120,7 +120,7 @@ struct ice_vf { struct ice_time_mac legacy_last_added_umac; DECLARE_BITMAP(txq_ena, ICE_MAX_RSS_QS_PER_VF); DECLARE_BITMAP(rxq_ena, ICE_MAX_RSS_QS_PER_VF); - struct ice_vlan port_vlan_info; /* Port VLAN ID and QoS */ + struct ice_vlan port_vlan_info; /* Port VLAN ID, QoS, and TPID */ u8 pf_set_mac:1; /* VF MAC address set by VMM admin */ u8 trusted:1; u8 spoofchk:1; diff --git a/drivers/net/ethernet/intel/ice/ice_vlan.h b/drivers/net/ethernet/intel/ice/ice_vlan.h index 3fad0cba2da6..bc4550a03173 100644 --- a/drivers/net/ethernet/intel/ice/ice_vlan.h +++ b/drivers/net/ethernet/intel/ice/ice_vlan.h @@ -8,10 +8,11 @@ #include "ice_type.h" struct ice_vlan { + u16 tpid; u16 vid; u8 prio; }; -#define ICE_VLAN(vid, prio) ((struct ice_vlan){ vid, prio }) +#define ICE_VLAN(tpid, vid, prio) ((struct ice_vlan){ tpid, vid, prio }) #endif /* _ICE_VLAN_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c index 74b6dec0744b..6b7feab0b2a1 100644 --- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c @@ -6,6 +6,31 @@ #include "ice_fltr.h" #include "ice.h" +static void print_invalid_tpid(struct ice_vsi *vsi, u16 tpid) +{ + dev_err(ice_pf_to_dev(vsi->back), "%s %d specified invalid VLAN tpid 0x%04x\n", + ice_vsi_type_str(vsi->type), vsi->idx, tpid); +} + +/** + * validate_vlan - check if the ice_vlan passed in is valid + * @vsi: VSI used for printing error message + * @vlan: ice_vlan structure to validate + * + * Return true if the 
VLAN TPID is valid or if the VLAN TPID is 0 and the VLAN + * VID is 0, which allows for non-zero VLAN filters with the specified VLAN TPID + * and untagged VLAN 0 filters to be added to the prune list respectively. + */ +static bool validate_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) +{ + if (vlan->tpid != ETH_P_8021Q && (vlan->tpid || vlan->vid)) { + print_invalid_tpid(vsi, vlan->tpid); + return false; + } + + return true; +} + /** * ice_vsi_add_vlan - default add VLAN implementation for all VSI types * @vsi: VSI being configured @@ -15,6 +40,9 @@ int ice_vsi_add_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) { int err = 0; + if (!validate_vlan(vsi, vlan)) + return -EINVAL; + if (!ice_fltr_add_vlan(vsi, vlan)) { vsi->num_vlan++; } else { @@ -37,6 +65,9 @@ int ice_vsi_del_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) struct device *dev; int err; + if (!validate_vlan(vsi, vlan)) + return -EINVAL; + dev = ice_pf_to_dev(pf); err = ice_fltr_remove_vlan(vsi, vlan); @@ -143,8 +174,13 @@ static int ice_vsi_manage_vlan_stripping(struct ice_vsi *vsi, bool ena) return err; } -int ice_vsi_ena_stripping(struct ice_vsi *vsi) +int ice_vsi_ena_stripping(struct ice_vsi *vsi, const u16 tpid) { + if (tpid != ETH_P_8021Q) { + print_invalid_tpid(vsi, tpid); + return -EINVAL; + } + return ice_vsi_manage_vlan_stripping(vsi, true); } @@ -153,8 +189,13 @@ int ice_vsi_dis_stripping(struct ice_vsi *vsi) return ice_vsi_manage_vlan_stripping(vsi, false); } -int ice_vsi_ena_insertion(struct ice_vsi *vsi) +int ice_vsi_ena_insertion(struct ice_vsi *vsi, const u16 tpid) { + if (tpid != ETH_P_8021Q) { + print_invalid_tpid(vsi, tpid); + return -EINVAL; + } + return ice_vsi_manage_vlan_insertion(vsi); } @@ -216,6 +257,9 @@ int ice_vsi_set_port_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) { u16 port_vlan_info; + if (vlan->tpid != ETH_P_8021Q) + return -EINVAL; + if (vlan->prio > 7) return -EINVAL; diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h index a0305007896c..1bdbf585db7d 100644 --- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h @@ -12,9 +12,9 @@ struct ice_vsi; int ice_vsi_add_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan); int ice_vsi_del_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan); -int ice_vsi_ena_stripping(struct ice_vsi *vsi); +int ice_vsi_ena_stripping(struct ice_vsi *vsi, u16 tpid); int ice_vsi_dis_stripping(struct ice_vsi *vsi); -int ice_vsi_ena_insertion(struct ice_vsi *vsi); +int ice_vsi_ena_insertion(struct ice_vsi *vsi, u16 tpid); int ice_vsi_dis_insertion(struct ice_vsi *vsi); int ice_vsi_set_port_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan); diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h index c944f04acd3c..76e55b259bc8 100644 --- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h @@ -12,9 +12,9 @@ struct ice_vsi; struct ice_vsi_vlan_ops { int (*add_vlan)(struct ice_vsi *vsi, struct ice_vlan *vlan); int (*del_vlan)(struct ice_vsi *vsi, struct ice_vlan *vlan); - int (*ena_stripping)(struct ice_vsi *vsi); + int (*ena_stripping)(struct ice_vsi *vsi, const u16 tpid); int (*dis_stripping)(struct ice_vsi *vsi); - int (*ena_insertion)(struct ice_vsi *vsi); + int (*ena_insertion)(struct ice_vsi *vsi, const u16 tpid); int (*dis_insertion)(struct ice_vsi *vsi); int (*ena_rx_filtering)(struct ice_vsi *vsi); int (*dis_rx_filtering)(struct 
ice_vsi *vsi); -- 2.20.1 From anthony.l.nguyen at intel.com Tue Nov 30 23:51:33 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Tue, 30 Nov 2021 15:51:33 -0800 Subject: [Intel-wired-lan] [PATCH net-next v2 01/14] ice: Refactor spoofcheck configuration functions Message-ID: <20211130235146.28731-1-anthony.l.nguyen@intel.com> From: Brett Creeley Add functions to configure Tx VLAN antispoof based on iproute configuration and/or VLAN mode and VF driver support. This is needed later so the driver can control when it can be configured. Also, add functions that can be used to enable and disable MAC and VLAN spoofcheck. Move spoofchk configuration during VSI setup into the SR-IOV initialization path and into the post VSI rebuild flow for VF VSIs. Signed-off-by: Brett Creeley Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/ice/ice_lib.c | 19 --- .../net/ethernet/intel/ice/ice_virtchnl_pf.c | 159 ++++++++++++++---- 2 files changed, 128 insertions(+), 50 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index 5ef959769104..2db3cd6d8907 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -1125,25 +1125,6 @@ static int ice_vsi_init(struct ice_vsi *vsi, bool init_vsi) cpu_to_le16(ICE_AQ_VSI_PROP_RXQ_MAP_VALID); } - /* enable/disable MAC and VLAN anti-spoof when spoofchk is on/off - * respectively - */ - if (vsi->type == ICE_VSI_VF) { - ctxt->info.valid_sections |= - cpu_to_le16(ICE_AQ_VSI_PROP_SECURITY_VALID); - if (pf->vf[vsi->vf_id].spoofchk) { - ctxt->info.sec_flags |= - ICE_AQ_VSI_SEC_FLAG_ENA_MAC_ANTI_SPOOF | - (ICE_AQ_VSI_SEC_TX_VLAN_PRUNE_ENA << - ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S); - } else { - ctxt->info.sec_flags &= - ~(ICE_AQ_VSI_SEC_FLAG_ENA_MAC_ANTI_SPOOF | - (ICE_AQ_VSI_SEC_TX_VLAN_PRUNE_ENA << - ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S)); - } - } - /* Allow control frames out of main VSI */ if (vsi->type == ICE_VSI_PF) { ctxt->info.sec_flags |= ICE_AQ_VSI_SEC_FLAG_ALLOW_DEST_OVRD; diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c index 8f2045b7c29f..f947d936def3 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c @@ -837,6 +837,114 @@ static int ice_vf_rebuild_host_vlan_cfg(struct ice_vf *vf) return 0; } +static int ice_cfg_vlan_antispoof(struct ice_vsi *vsi, bool enable) +{ + struct ice_vsi_ctx *ctx; + int err; + + ctx = kzalloc(sizeof(*ctx), GFP_KERNEL); + if (!ctx) + return -ENOMEM; + + ctx->info.sec_flags = vsi->info.sec_flags; + ctx->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_SECURITY_VALID); + + if (enable) + ctx->info.sec_flags |= ICE_AQ_VSI_SEC_TX_VLAN_PRUNE_ENA << + ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S; + else + ctx->info.sec_flags &= ~(ICE_AQ_VSI_SEC_TX_VLAN_PRUNE_ENA << + ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S); + + err = ice_update_vsi(&vsi->back->hw, vsi->idx, ctx, NULL); + if (err) + dev_err(ice_pf_to_dev(vsi->back), "Failed to configure Tx VLAN anti-spoof %s for VSI %d, error %d\n", + enable ? 
"ON" : "OFF", vsi->vsi_num, err); + else + vsi->info.sec_flags = ctx->info.sec_flags; + + kfree(ctx); + + return err; +} + +static int ice_cfg_mac_antispoof(struct ice_vsi *vsi, bool enable) +{ + struct ice_vsi_ctx *ctx; + int err; + + ctx = kzalloc(sizeof(*ctx), GFP_KERNEL); + if (!ctx) + return -ENOMEM; + + ctx->info.sec_flags = vsi->info.sec_flags; + ctx->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_SECURITY_VALID); + + if (enable) + ctx->info.sec_flags |= ICE_AQ_VSI_SEC_FLAG_ENA_MAC_ANTI_SPOOF; + else + ctx->info.sec_flags &= ~ICE_AQ_VSI_SEC_FLAG_ENA_MAC_ANTI_SPOOF; + + err = ice_update_vsi(&vsi->back->hw, vsi->idx, ctx, NULL); + if (err) + dev_err(ice_pf_to_dev(vsi->back), "Failed to configure Tx MAC anti-spoof %s for VSI %d, error %d\n", + enable ? "ON" : "OFF", vsi->vsi_num, err); + else + vsi->info.sec_flags = ctx->info.sec_flags; + + kfree(ctx); + + return err; +} + +/** + * ice_vsi_ena_spoofchk - enable Tx spoof checking for this VSI + * @vsi: VSI to enable Tx spoof checking for + */ +static int ice_vsi_ena_spoofchk(struct ice_vsi *vsi) +{ + int err; + + err = ice_cfg_vlan_antispoof(vsi, true); + if (err) + return err; + + return ice_cfg_mac_antispoof(vsi, true); +} + +/** + * ice_vsi_dis_spoofchk - disable Tx spoof checking for this VSI + * @vsi: VSI to disable Tx spoof checking for + */ +static int ice_vsi_dis_spoofchk(struct ice_vsi *vsi) +{ + int err; + + err = ice_cfg_vlan_antispoof(vsi, false); + if (err) + return err; + + return ice_cfg_mac_antispoof(vsi, false); +} + +/** + * ice_vf_set_spoofchk_cfg - apply Tx spoof checking setting + * @vf: VF set spoofchk for + * @vsi: VSI associated to the VF + */ +static int +ice_vf_set_spoofchk_cfg(struct ice_vf *vf, struct ice_vsi *vsi) +{ + int err; + + if (vf->spoofchk) + err = ice_vsi_ena_spoofchk(vsi); + else + err = ice_vsi_dis_spoofchk(vsi); + + return err; +} + /** * ice_vf_rebuild_host_mac_cfg - add broadcast and the VF's perm_addr/LAA * @vf: VF to add MAC filters for @@ -1344,6 +1452,10 @@ static void ice_vf_rebuild_host_cfg(struct ice_vf *vf) dev_err(dev, "failed to rebuild Tx rate limiting configuration for VF %u\n", vf->vf_id); + if (ice_vf_set_spoofchk_cfg(vf, vsi)) + dev_err(dev, "failed to rebuild spoofchk configuration for VF %d\n", + vf->vf_id); + /* rebuild aggregator node config for main VF VSI */ ice_vf_rebuild_aggregator_node_cfg(vsi); } @@ -1758,6 +1870,13 @@ static int ice_init_vf_vsi_res(struct ice_vf *vf) goto release_vsi; } + err = ice_vf_set_spoofchk_cfg(vf, vsi); + if (err) { + dev_warn(dev, "Failed to initialize spoofchk setting for VF %d\n", + vf->vf_id); + goto release_vsi; + } + vf->num_mac = 1; return 0; @@ -2891,7 +3010,6 @@ int ice_set_vf_spoofchk(struct net_device *netdev, int vf_id, bool ena) { struct ice_netdev_priv *np = netdev_priv(netdev); struct ice_pf *pf = np->vsi->back; - struct ice_vsi_ctx *ctx; struct ice_vsi *vf_vsi; struct device *dev; struct ice_vf *vf; @@ -2924,37 +3042,16 @@ int ice_set_vf_spoofchk(struct net_device *netdev, int vf_id, bool ena) return 0; } - ctx = kzalloc(sizeof(*ctx), GFP_KERNEL); - if (!ctx) - return -ENOMEM; - - ctx->info.sec_flags = vf_vsi->info.sec_flags; - ctx->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_SECURITY_VALID); - if (ena) { - ctx->info.sec_flags |= - ICE_AQ_VSI_SEC_FLAG_ENA_MAC_ANTI_SPOOF | - (ICE_AQ_VSI_SEC_TX_VLAN_PRUNE_ENA << - ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S); - } else { - ctx->info.sec_flags &= - ~(ICE_AQ_VSI_SEC_FLAG_ENA_MAC_ANTI_SPOOF | - (ICE_AQ_VSI_SEC_TX_VLAN_PRUNE_ENA << - ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S)); - } - - ret = 
ice_update_vsi(&pf->hw, vf_vsi->idx, ctx, NULL); - if (ret) { - dev_err(dev, "Failed to %sable spoofchk on VF %d VSI %d\n error %d\n", - ena ? "en" : "dis", vf->vf_id, vf_vsi->vsi_num, ret); - goto out; - } - - /* only update spoofchk state and VSI context on success */ - vf_vsi->info.sec_flags = ctx->info.sec_flags; - vf->spoofchk = ena; + if (ena) + ret = ice_vsi_ena_spoofchk(vf_vsi); + else + ret = ice_vsi_dis_spoofchk(vf_vsi); + if (ret) + dev_err(dev, "Failed to set spoofchk %s for VF %d VSI %d\n error %d\n", + ena ? "ON" : "OFF", vf->vf_id, vf_vsi->vsi_num, ret); + else + vf->spoofchk = ena; -out: - kfree(ctx); return ret; } -- 2.20.1 From anthony.l.nguyen at intel.com Tue Nov 30 23:51:36 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Tue, 30 Nov 2021 15:51:36 -0800 Subject: [Intel-wired-lan] [PATCH net-next v2 04/14] ice: Introduce ice_vlan struct In-Reply-To: <20211130235146.28731-1-anthony.l.nguyen@intel.com> References: <20211130235146.28731-1-anthony.l.nguyen@intel.com> Message-ID: <20211130235146.28731-4-anthony.l.nguyen@intel.com> From: Brett Creeley Add a new struct for VLAN related information. Currently this holds VLAN ID and priority values, but will be expanded to hold TPID value. This reduces the changes necessary if any other values are added in future. Remove the action argument from these calls as it's always ICE_FWD_VSI. Signed-off-by: Brett Creeley Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/ice/ice_fltr.c | 35 +++++++------------ drivers/net/ethernet/intel/ice/ice_fltr.h | 10 +++--- drivers/net/ethernet/intel/ice/ice_lib.c | 5 ++- drivers/net/ethernet/intel/ice/ice_lib.h | 1 + drivers/net/ethernet/intel/ice/ice_main.c | 8 +++-- .../net/ethernet/intel/ice/ice_virtchnl_pf.c | 19 ++++++---- drivers/net/ethernet/intel/ice/ice_vlan.h | 17 +++++++++ .../net/ethernet/intel/ice/ice_vsi_vlan_lib.c | 31 +++++++++------- .../net/ethernet/intel/ice/ice_vsi_vlan_lib.h | 9 +++-- .../net/ethernet/intel/ice/ice_vsi_vlan_ops.h | 6 ++-- 10 files changed, 82 insertions(+), 59 deletions(-) create mode 100644 drivers/net/ethernet/intel/ice/ice_vlan.h diff --git a/drivers/net/ethernet/intel/ice/ice_fltr.c b/drivers/net/ethernet/intel/ice/ice_fltr.c index cf07eef39e9d..8f543851e39f 100644 --- a/drivers/net/ethernet/intel/ice/ice_fltr.c +++ b/drivers/net/ethernet/intel/ice/ice_fltr.c @@ -206,21 +206,20 @@ ice_fltr_add_mac_to_list(struct ice_vsi *vsi, struct list_head *list, * ice_fltr_add_vlan_to_list - add VLAN filter info to exsisting list * @vsi: pointer to VSI struct * @list: list to add filter info to - * @vlan_id: VLAN ID to add - * @action: filter action + * @vlan: VLAN filter details */ static int ice_fltr_add_vlan_to_list(struct ice_vsi *vsi, struct list_head *list, - u16 vlan_id, enum ice_sw_fwd_act_type action) + struct ice_vlan *vlan) { struct ice_fltr_info info = { 0 }; info.flag = ICE_FLTR_TX; info.src_id = ICE_SRC_ID_VSI; info.lkup_type = ICE_SW_LKUP_VLAN; - info.fltr_act = action; + info.fltr_act = ICE_FWD_TO_VSI; info.vsi_handle = vsi->idx; - info.l_data.vlan.vlan_id = vlan_id; + info.l_data.vlan.vlan_id = vlan->vid; return ice_fltr_add_entry_to_list(ice_pf_to_dev(vsi->back), &info, list); @@ -313,19 +312,17 @@ ice_fltr_prepare_mac_and_broadcast(struct ice_vsi *vsi, const u8 *mac, /** * ice_fltr_prepare_vlan - add or remove VLAN filter * @vsi: pointer to VSI struct - * @vlan_id: VLAN ID to add - * @action: action to be performed on filter match + * @vlan: VLAN filter details * @vlan_action: pointer to add or remove VLAN function */ static int 
-ice_fltr_prepare_vlan(struct ice_vsi *vsi, u16 vlan_id, - enum ice_sw_fwd_act_type action, +ice_fltr_prepare_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan, int (*vlan_action)(struct ice_vsi *, struct list_head *)) { LIST_HEAD(tmp_list); int result; - if (ice_fltr_add_vlan_to_list(vsi, &tmp_list, vlan_id, action)) + if (ice_fltr_add_vlan_to_list(vsi, &tmp_list, vlan)) return -ENOMEM; result = vlan_action(vsi, &tmp_list); @@ -398,27 +395,21 @@ int ice_fltr_remove_mac(struct ice_vsi *vsi, const u8 *mac, /** * ice_fltr_add_vlan - add single VLAN filter * @vsi: pointer to VSI struct - * @vlan_id: VLAN ID to add - * @action: action to be performed on filter match + * @vlan: VLAN filter details */ -int ice_fltr_add_vlan(struct ice_vsi *vsi, u16 vlan_id, - enum ice_sw_fwd_act_type action) +int ice_fltr_add_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) { - return ice_fltr_prepare_vlan(vsi, vlan_id, action, - ice_fltr_add_vlan_list); + return ice_fltr_prepare_vlan(vsi, vlan, ice_fltr_add_vlan_list); } /** * ice_fltr_remove_vlan - remove VLAN filter * @vsi: pointer to VSI struct - * @vlan_id: filter VLAN to remove - * @action: action to remove + * @vlan: VLAN filter details */ -int ice_fltr_remove_vlan(struct ice_vsi *vsi, u16 vlan_id, - enum ice_sw_fwd_act_type action) +int ice_fltr_remove_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) { - return ice_fltr_prepare_vlan(vsi, vlan_id, action, - ice_fltr_remove_vlan_list); + return ice_fltr_prepare_vlan(vsi, vlan, ice_fltr_remove_vlan_list); } /** diff --git a/drivers/net/ethernet/intel/ice/ice_fltr.h b/drivers/net/ethernet/intel/ice/ice_fltr.h index d271f61e0d34..4f7fe09d10e9 100644 --- a/drivers/net/ethernet/intel/ice/ice_fltr.h +++ b/drivers/net/ethernet/intel/ice/ice_fltr.h @@ -4,6 +4,8 @@ #ifndef _ICE_FLTR_H_ #define _ICE_FLTR_H_ +#include "ice_vlan.h" + void ice_fltr_free_list(struct device *dev, struct list_head *h); int ice_fltr_set_vlan_vsi_promisc(struct ice_hw *hw, struct ice_vsi *vsi, u8 promisc_mask); @@ -32,12 +34,8 @@ ice_fltr_remove_mac(struct ice_vsi *vsi, const u8 *mac, int ice_fltr_remove_mac_list(struct ice_vsi *vsi, struct list_head *list); -int -ice_fltr_add_vlan(struct ice_vsi *vsi, u16 vid, - enum ice_sw_fwd_act_type action); -int -ice_fltr_remove_vlan(struct ice_vsi *vsi, u16 vid, - enum ice_sw_fwd_act_type action); +int ice_fltr_add_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan); +int ice_fltr_remove_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan); int ice_fltr_add_eth(struct ice_vsi *vsi, u16 ethertype, u16 flag, diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index b50509584b31..55a2aef54922 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -3878,7 +3878,10 @@ int ice_set_link(struct ice_vsi *vsi, bool ena) */ int ice_vsi_add_vlan_zero(struct ice_vsi *vsi) { - return vsi->vlan_ops.add_vlan(vsi, 0, ICE_FWD_TO_VSI); + struct ice_vlan vlan; + + vlan = ICE_VLAN(0, 0); + return vsi->vlan_ops.add_vlan(vsi, &vlan); } /** diff --git a/drivers/net/ethernet/intel/ice/ice_lib.h b/drivers/net/ethernet/intel/ice/ice_lib.h index 427e5e4e9f17..8f42a3f3a949 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.h +++ b/drivers/net/ethernet/intel/ice/ice_lib.h @@ -5,6 +5,7 @@ #define _ICE_LIB_H_ #include "ice.h" +#include "ice_vlan.h" const char *ice_vsi_type_str(enum ice_vsi_type vsi_type); diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c index 904571527e27..8669858d104c 100644 --- 
a/drivers/net/ethernet/intel/ice/ice_main.c +++ b/drivers/net/ethernet/intel/ice/ice_main.c @@ -3421,6 +3421,7 @@ ice_vlan_rx_add_vid(struct net_device *netdev, __always_unused __be16 proto, { struct ice_netdev_priv *np = netdev_priv(netdev); struct ice_vsi *vsi = np->vsi; + struct ice_vlan vlan; int ret; /* VLAN 0 is added by default during load/reset */ @@ -3437,7 +3438,8 @@ ice_vlan_rx_add_vid(struct net_device *netdev, __always_unused __be16 proto, /* Add a switch rule for this VLAN ID so its corresponding VLAN tagged * packets aren't pruned by the device's internal switch on Rx */ - ret = vsi->vlan_ops.add_vlan(vsi, vid, ICE_FWD_TO_VSI); + vlan = ICE_VLAN(vid, 0); + ret = vsi->vlan_ops.add_vlan(vsi, &vlan); if (!ret) set_bit(ICE_VSI_VLAN_FLTR_CHANGED, vsi->state); @@ -3458,6 +3460,7 @@ ice_vlan_rx_kill_vid(struct net_device *netdev, __always_unused __be16 proto, { struct ice_netdev_priv *np = netdev_priv(netdev); struct ice_vsi *vsi = np->vsi; + struct ice_vlan vlan; int ret; /* don't allow removal of VLAN 0 */ @@ -3467,7 +3470,8 @@ ice_vlan_rx_kill_vid(struct net_device *netdev, __always_unused __be16 proto, /* Make sure VLAN delete is successful before updating VLAN * information */ - ret = vsi->vlan_ops.del_vlan(vsi, vid); + vlan = ICE_VLAN(vid, 0); + ret = vsi->vlan_ops.del_vlan(vsi, &vlan); if (ret) return ret; diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c index 6fa0968f0912..d580120dbb93 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c @@ -760,24 +760,25 @@ static int ice_vf_rebuild_host_tx_rate_cfg(struct ice_vf *vf) */ static int ice_vf_rebuild_host_vlan_cfg(struct ice_vf *vf) { + u8 vlan_prio = (vf->port_vlan_info & VLAN_PRIO_MASK) >> VLAN_PRIO_SHIFT; + u16 vlan_id = vf->port_vlan_info & VLAN_VID_MASK; struct device *dev = ice_pf_to_dev(vf->pf); struct ice_vsi *vsi = ice_get_vf_vsi(vf); - u16 vlan_id = 0; + struct ice_vlan vlan; int err; + vlan = ICE_VLAN(vlan_id, vlan_prio); if (vf->port_vlan_info) { - err = vsi->vlan_ops.set_port_vlan(vsi, vf->port_vlan_info); + err = vsi->vlan_ops.set_port_vlan(vsi, &vlan); if (err) { dev_err(dev, "failed to configure port VLAN via VSI parameters for VF %u, error %d\n", vf->vf_id, err); return err; } - - vlan_id = vf->port_vlan_info & VLAN_VID_MASK; } /* vlan_id will either be 0 or the port VLAN number */ - err = vsi->vlan_ops.add_vlan(vsi, vlan_id, ICE_FWD_TO_VSI); + err = vsi->vlan_ops.add_vlan(vsi, &vlan); if (err) { dev_err(dev, "failed to add %s VLAN %u filter for VF %u, error %d\n", vf->port_vlan_info ? 
"port" : "", vlan_id, vf->vf_id, @@ -4231,6 +4232,7 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) if (add_v) { for (i = 0; i < vfl->num_elements; i++) { u16 vid = vfl->vlan_id[i]; + struct ice_vlan vlan; if (!ice_is_vf_trusted(vf) && vsi->num_vlan >= ICE_MAX_VLAN_PER_VF) { @@ -4250,7 +4252,8 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) if (!vid) continue; - status = vsi->vlan_ops.add_vlan(vsi, vid, ICE_FWD_TO_VSI); + vlan = ICE_VLAN(vid, 0); + status = vsi->vlan_ops.add_vlan(vsi, &vlan); if (status) { v_ret = VIRTCHNL_STATUS_ERR_PARAM; goto error_param; @@ -4293,6 +4296,7 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) num_vf_vlan = vsi->num_vlan; for (i = 0; i < vfl->num_elements && i < num_vf_vlan; i++) { u16 vid = vfl->vlan_id[i]; + struct ice_vlan vlan; /* we add VLAN 0 by default for each VF so we can enable * Tx VLAN anti-spoof without triggering MDD events so @@ -4301,7 +4305,8 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) if (!vid) continue; - status = vsi->vlan_ops.del_vlan(vsi, vid); + vlan = ICE_VLAN(vid, 0); + status = vsi->vlan_ops.del_vlan(vsi, &vlan); if (status) { v_ret = VIRTCHNL_STATUS_ERR_PARAM; goto error_param; diff --git a/drivers/net/ethernet/intel/ice/ice_vlan.h b/drivers/net/ethernet/intel/ice/ice_vlan.h new file mode 100644 index 000000000000..3fad0cba2da6 --- /dev/null +++ b/drivers/net/ethernet/intel/ice/ice_vlan.h @@ -0,0 +1,17 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2019-2021, Intel Corporation. */ + +#ifndef _ICE_VLAN_H_ +#define _ICE_VLAN_H_ + +#include +#include "ice_type.h" + +struct ice_vlan { + u16 vid; + u8 prio; +}; + +#define ICE_VLAN(vid, prio) ((struct ice_vlan){ vid, prio }) + +#endif /* _ICE_VLAN_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c index 6b0a4bf28305..74b6dec0744b 100644 --- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c @@ -9,20 +9,18 @@ /** * ice_vsi_add_vlan - default add VLAN implementation for all VSI types * @vsi: VSI being configured - * @vid: VLAN ID to be added - * @action: filter action to be performed on match + * @vlan: VLAN filter to add */ -int -ice_vsi_add_vlan(struct ice_vsi *vsi, u16 vid, enum ice_sw_fwd_act_type action) +int ice_vsi_add_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) { int err = 0; - if (!ice_fltr_add_vlan(vsi, vid, action)) { + if (!ice_fltr_add_vlan(vsi, vlan)) { vsi->num_vlan++; } else { err = -ENODEV; dev_err(ice_pf_to_dev(vsi->back), "Failure Adding VLAN %d on VSI %i\n", - vid, vsi->vsi_num); + vlan->vid, vsi->vsi_num); } return err; @@ -31,9 +29,9 @@ ice_vsi_add_vlan(struct ice_vsi *vsi, u16 vid, enum ice_sw_fwd_act_type action) /** * ice_vsi_del_vlan - default del VLAN implementation for all VSI types * @vsi: VSI being configured - * @vid: VLAN ID to be removed + * @vlan: VLAN filter to delete */ -int ice_vsi_del_vlan(struct ice_vsi *vsi, u16 vid) +int ice_vsi_del_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) { struct ice_pf *pf = vsi->back; struct device *dev; @@ -41,16 +39,16 @@ int ice_vsi_del_vlan(struct ice_vsi *vsi, u16 vid) dev = ice_pf_to_dev(pf); - err = ice_fltr_remove_vlan(vsi, vid, ICE_FWD_TO_VSI); + err = ice_fltr_remove_vlan(vsi, vlan); if (!err) { vsi->num_vlan--; } else if (err == -ENOENT) { dev_dbg(dev, "Failed to remove VLAN %d on VSI %i, it does not exist\n", - vid, vsi->vsi_num); + vlan->vid, 
vsi->vsi_num); err = 0; } else { dev_err(dev, "Error removing VLAN %d on VSI %i error: %d\n", - vid, vsi->vsi_num, err); + vlan->vid, vsi->vsi_num, err); } return err; @@ -214,9 +212,16 @@ static int ice_vsi_manage_pvid(struct ice_vsi *vsi, u16 pvid_info, bool enable) return ret; } -int ice_vsi_set_port_vlan(struct ice_vsi *vsi, u16 pvid_info) +int ice_vsi_set_port_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) { - return ice_vsi_manage_pvid(vsi, pvid_info, true); + u16 port_vlan_info; + + if (vlan->prio > 7) + return -EINVAL; + + port_vlan_info = vlan->vid | (vlan->prio << VLAN_PRIO_SHIFT); + + return ice_vsi_manage_pvid(vsi, port_vlan_info, true); } /** diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h index f9fe33026306..a0305007896c 100644 --- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h @@ -5,19 +5,18 @@ #define _ICE_VSI_VLAN_LIB_H_ #include -#include "ice_type.h" +#include "ice_vlan.h" struct ice_vsi; -int -ice_vsi_add_vlan(struct ice_vsi *vsi, u16 vid, enum ice_sw_fwd_act_type action); -int ice_vsi_del_vlan(struct ice_vsi *vsi, u16 vid); +int ice_vsi_add_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan); +int ice_vsi_del_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan); int ice_vsi_ena_stripping(struct ice_vsi *vsi); int ice_vsi_dis_stripping(struct ice_vsi *vsi); int ice_vsi_ena_insertion(struct ice_vsi *vsi); int ice_vsi_dis_insertion(struct ice_vsi *vsi); -int ice_vsi_set_port_vlan(struct ice_vsi *vsi, u16 pvid_info); +int ice_vsi_set_port_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan); int ice_vsi_ena_rx_vlan_filtering(struct ice_vsi *vsi); int ice_vsi_dis_rx_vlan_filtering(struct ice_vsi *vsi); diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h index 522169742661..c944f04acd3c 100644 --- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h @@ -10,8 +10,8 @@ struct ice_vsi; struct ice_vsi_vlan_ops { - int (*add_vlan)(struct ice_vsi *vsi, u16 vid, enum ice_sw_fwd_act_type action); - int (*del_vlan)(struct ice_vsi *vsi, u16 vid); + int (*add_vlan)(struct ice_vsi *vsi, struct ice_vlan *vlan); + int (*del_vlan)(struct ice_vsi *vsi, struct ice_vlan *vlan); int (*ena_stripping)(struct ice_vsi *vsi); int (*dis_stripping)(struct ice_vsi *vsi); int (*ena_insertion)(struct ice_vsi *vsi); @@ -20,7 +20,7 @@ struct ice_vsi_vlan_ops { int (*dis_rx_filtering)(struct ice_vsi *vsi); int (*ena_tx_filtering)(struct ice_vsi *vsi); int (*dis_tx_filtering)(struct ice_vsi *vsi); - int (*set_port_vlan)(struct ice_vsi *vsi, u16 pvid_info); + int (*set_port_vlan)(struct ice_vsi *vsi, struct ice_vlan *vlan); }; void ice_vsi_init_vlan_ops(struct ice_vsi *vsi); -- 2.20.1 From anthony.l.nguyen at intel.com Tue Nov 30 23:51:35 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Tue, 30 Nov 2021 15:51:35 -0800 Subject: [Intel-wired-lan] [PATCH net-next v2 03/14] ice: Add new VSI VLAN ops In-Reply-To: <20211130235146.28731-1-anthony.l.nguyen@intel.com> References: <20211130235146.28731-1-anthony.l.nguyen@intel.com> Message-ID: <20211130235146.28731-3-anthony.l.nguyen@intel.com> From: Brett Creeley Incoming changes to support 802.1Q and/or 802.1ad VLAN filtering and offloads require more flexibility when configuring VLANs. The VSI VLAN interface will allow flexibility for configuring VLANs for all VSI types. 
Add new files to separate the VSI VLAN ops and move functions to make the code more organized. Signed-off-by: Brett Creeley Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/ice/Makefile | 2 + drivers/net/ethernet/intel/ice/ice.h | 2 + drivers/net/ethernet/intel/ice/ice_eswitch.c | 2 +- drivers/net/ethernet/intel/ice/ice_lib.c | 207 +---------- drivers/net/ethernet/intel/ice/ice_lib.h | 11 - drivers/net/ethernet/intel/ice/ice_main.c | 30 +- drivers/net/ethernet/intel/ice/ice_osdep.h | 1 + drivers/net/ethernet/intel/ice/ice_switch.h | 9 - drivers/net/ethernet/intel/ice/ice_type.h | 9 + .../net/ethernet/intel/ice/ice_virtchnl_pf.c | 111 +----- .../net/ethernet/intel/ice/ice_vsi_vlan_lib.c | 326 ++++++++++++++++++ .../net/ethernet/intel/ice/ice_vsi_vlan_lib.h | 27 ++ .../net/ethernet/intel/ice/ice_vsi_vlan_ops.c | 20 ++ .../net/ethernet/intel/ice/ice_vsi_vlan_ops.h | 28 ++ 14 files changed, 450 insertions(+), 335 deletions(-) create mode 100644 drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c create mode 100644 drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h create mode 100644 drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.c create mode 100644 drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h diff --git a/drivers/net/ethernet/intel/ice/Makefile b/drivers/net/ethernet/intel/ice/Makefile index c22434a3ec4d..c40b3aa1d195 100644 --- a/drivers/net/ethernet/intel/ice/Makefile +++ b/drivers/net/ethernet/intel/ice/Makefile @@ -18,6 +18,8 @@ ice-y := ice_main.o \ ice_txrx_lib.o \ ice_txrx.o \ ice_fltr.o \ + ice_vsi_vlan_ops.o \ + ice_vsi_vlan_lib.o \ ice_fdir.o \ ice_ethtool_fdir.o \ ice_flex_pipe.o \ diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h index 6fa06b00c268..efcc713ba287 100644 --- a/drivers/net/ethernet/intel/ice/ice.h +++ b/drivers/net/ethernet/intel/ice/ice.h @@ -73,6 +73,7 @@ #include "ice_eswitch.h" #include "ice_lag.h" #include "ice_gnss.h" +#include "ice_vsi_vlan_ops.h" #define ICE_BAR0 0 #define ICE_REQ_DESC_MULTIPLE 32 @@ -370,6 +371,7 @@ struct ice_vsi { u8 irqs_ready:1; u8 current_isup:1; /* Sync 'link up' logging */ u8 stat_offsets_loaded:1; + struct ice_vsi_vlan_ops vlan_ops; u16 num_vlan; /* queue information */ diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch.c b/drivers/net/ethernet/intel/ice/ice_eswitch.c index 291748553800..0ff1a375f2aa 100644 --- a/drivers/net/ethernet/intel/ice/ice_eswitch.c +++ b/drivers/net/ethernet/intel/ice/ice_eswitch.c @@ -118,7 +118,7 @@ static int ice_eswitch_setup_env(struct ice_pf *pf) struct ice_vsi *ctrl_vsi = pf->switchdev.control_vsi; bool rule_added = false; - ice_vsi_manage_vlan_stripping(ctrl_vsi, false); + ctrl_vsi->vlan_ops.dis_stripping(ctrl_vsi); ice_remove_vsi_fltr(&pf->hw, uplink_vsi->idx); diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index cc135792834e..b50509584b31 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -1694,62 +1694,6 @@ void ice_update_eth_stats(struct ice_vsi *vsi) vsi->stat_offsets_loaded = true; } -/** - * ice_vsi_add_vlan - Add VSI membership for given VLAN - * @vsi: the VSI being configured - * @vid: VLAN ID to be added - * @action: filter action to be performed on match - */ -int -ice_vsi_add_vlan(struct ice_vsi *vsi, u16 vid, enum ice_sw_fwd_act_type action) -{ - struct ice_pf *pf = vsi->back; - struct device *dev; - int err = 0; - - dev = ice_pf_to_dev(pf); - - if (!ice_fltr_add_vlan(vsi, vid, action)) { - vsi->num_vlan++; - } else { - err = 
-ENODEV; - dev_err(dev, "Failure Adding VLAN %d on VSI %i\n", vid, - vsi->vsi_num); - } - - return err; -} - -/** - * ice_vsi_kill_vlan - Remove VSI membership for a given VLAN - * @vsi: the VSI being configured - * @vid: VLAN ID to be removed - * - * Returns 0 on success and negative on failure - */ -int ice_vsi_kill_vlan(struct ice_vsi *vsi, u16 vid) -{ - struct ice_pf *pf = vsi->back; - struct device *dev; - int err; - - dev = ice_pf_to_dev(pf); - - err = ice_fltr_remove_vlan(vsi, vid, ICE_FWD_TO_VSI); - if (!err) { - vsi->num_vlan--; - } else if (err == -ENOENT) { - dev_dbg(dev, "Failed to remove VLAN %d on VSI %i, it does not exist, error: %d\n", - vid, vsi->vsi_num, err); - err = 0; - } else { - dev_err(dev, "Error removing VLAN %d on vsi %i error: %d\n", - vid, vsi->vsi_num, err); - } - - return err; -} - /** * ice_vsi_cfg_frame_size - setup max frame size and Rx buffer length * @vsi: VSI @@ -2077,96 +2021,6 @@ void ice_vsi_cfg_msix(struct ice_vsi *vsi) } } -/** - * ice_vsi_manage_vlan_insertion - Manage VLAN insertion for the VSI for Tx - * @vsi: the VSI being changed - */ -int ice_vsi_manage_vlan_insertion(struct ice_vsi *vsi) -{ - struct ice_hw *hw = &vsi->back->hw; - struct ice_vsi_ctx *ctxt; - int ret; - - ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL); - if (!ctxt) - return -ENOMEM; - - /* Here we are configuring the VSI to let the driver add VLAN tags by - * setting vlan_flags to ICE_AQ_VSI_VLAN_MODE_ALL. The actual VLAN tag - * insertion happens in the Tx hot path, in ice_tx_map. - */ - ctxt->info.vlan_flags = ICE_AQ_VSI_VLAN_MODE_ALL; - - /* Preserve existing VLAN strip setting */ - ctxt->info.vlan_flags |= (vsi->info.vlan_flags & - ICE_AQ_VSI_VLAN_EMOD_M); - - ctxt->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_VLAN_VALID); - - ret = ice_update_vsi(hw, vsi->idx, ctxt, NULL); - if (ret) { - dev_err(ice_pf_to_dev(vsi->back), "update VSI for VLAN insert failed, err %d aq_err %s\n", - ret, ice_aq_str(hw->adminq.sq_last_status)); - goto out; - } - - vsi->info.vlan_flags = ctxt->info.vlan_flags; -out: - kfree(ctxt); - return ret; -} - -/** - * ice_vsi_manage_vlan_stripping - Manage VLAN stripping for the VSI for Rx - * @vsi: the VSI being changed - * @ena: boolean value indicating if this is a enable or disable request - */ -int ice_vsi_manage_vlan_stripping(struct ice_vsi *vsi, bool ena) -{ - struct ice_hw *hw = &vsi->back->hw; - struct ice_vsi_ctx *ctxt; - int ret; - - /* do not allow modifying VLAN stripping when a port VLAN is configured - * on this VSI - */ - if (vsi->info.pvid) - return 0; - - ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL); - if (!ctxt) - return -ENOMEM; - - /* Here we are configuring what the VSI should do with the VLAN tag in - * the Rx packet. We can either leave the tag in the packet or put it in - * the Rx descriptor. - */ - if (ena) - /* Strip VLAN tag from Rx packet and put it in the desc */ - ctxt->info.vlan_flags = ICE_AQ_VSI_VLAN_EMOD_STR_BOTH; - else - /* Disable stripping. 
Leave tag in packet */ - ctxt->info.vlan_flags = ICE_AQ_VSI_VLAN_EMOD_NOTHING; - - /* Allow all packets untagged/tagged */ - ctxt->info.vlan_flags |= ICE_AQ_VSI_VLAN_MODE_ALL; - - ctxt->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_VLAN_VALID); - - ret = ice_update_vsi(hw, vsi->idx, ctxt, NULL); - if (ret) { - dev_err(ice_pf_to_dev(vsi->back), "update VSI for VLAN strip failed, ena = %d err %d aq_err %s\n", - ena, ret, ice_aq_str(hw->adminq.sq_last_status)); - ret = -EIO; - goto out; - } - - vsi->info.vlan_flags = ctxt->info.vlan_flags; -out: - kfree(ctxt); - return ret; -} - /** * ice_vsi_start_all_rx_rings - start/enable all of a VSI's Rx rings * @vsi: the VSI whose rings are to be enabled @@ -2260,61 +2114,6 @@ bool ice_vsi_is_vlan_pruning_ena(struct ice_vsi *vsi) return (vsi->info.sw_flags2 & ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA); } -/** - * ice_cfg_vlan_pruning - enable or disable VLAN pruning on the VSI - * @vsi: VSI to enable or disable VLAN pruning on - * @ena: set to true to enable VLAN pruning and false to disable it - * - * returns 0 if VSI is updated, negative otherwise - */ -int ice_cfg_vlan_pruning(struct ice_vsi *vsi, bool ena) -{ - struct ice_vsi_ctx *ctxt; - struct ice_pf *pf; - int status; - - if (!vsi) - return -EINVAL; - - /* Don't enable VLAN pruning if the netdev is currently in promiscuous - * mode. VLAN pruning will be enabled when the interface exits - * promiscuous mode if any VLAN filters are active. - */ - if (vsi->netdev && vsi->netdev->flags & IFF_PROMISC && ena) - return 0; - - pf = vsi->back; - ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL); - if (!ctxt) - return -ENOMEM; - - ctxt->info = vsi->info; - - if (ena) - ctxt->info.sw_flags2 |= ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA; - else - ctxt->info.sw_flags2 &= ~ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA; - - ctxt->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_SW_VALID); - - status = ice_update_vsi(&pf->hw, vsi->idx, ctxt, NULL); - if (status) { - netdev_err(vsi->netdev, "%sabling VLAN pruning on VSI handle: %d, VSI HW ID: %d failed, err = %d, aq_err = %s\n", - ena ? 
"En" : "Dis", vsi->idx, vsi->vsi_num, - status, ice_aq_str(pf->hw.adminq.sq_last_status)); - goto err_out; - } - - vsi->info.sw_flags2 = ctxt->info.sw_flags2; - - kfree(ctxt); - return 0; - -err_out: - kfree(ctxt); - return -EIO; -} - static void ice_vsi_set_tc_cfg(struct ice_vsi *vsi) { if (!test_bit(ICE_FLAG_DCB_ENA, vsi->back->flags)) { @@ -2594,6 +2393,8 @@ ice_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi, if (ret) goto unroll_get_qs; + ice_vsi_init_vlan_ops(vsi); + switch (vsi->type) { case ICE_VSI_CTRL: case ICE_VSI_SWITCHDEV_CTRL: @@ -3257,6 +3058,8 @@ int ice_vsi_rebuild(struct ice_vsi *vsi, bool init_vsi) if (vtype == ICE_VSI_VF) vf = &pf->vf[vsi->vf_id]; + ice_vsi_init_vlan_ops(vsi); + coalesce = kcalloc(vsi->num_q_vectors, sizeof(struct ice_coalesce_stored), GFP_KERNEL); if (!coalesce) @@ -4075,7 +3878,7 @@ int ice_set_link(struct ice_vsi *vsi, bool ena) */ int ice_vsi_add_vlan_zero(struct ice_vsi *vsi) { - return ice_vsi_add_vlan(vsi, 0, ICE_FWD_TO_VSI); + return vsi->vlan_ops.add_vlan(vsi, 0, ICE_FWD_TO_VSI); } /** diff --git a/drivers/net/ethernet/intel/ice/ice_lib.h b/drivers/net/ethernet/intel/ice/ice_lib.h index 28e0f1147c82..427e5e4e9f17 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.h +++ b/drivers/net/ethernet/intel/ice/ice_lib.h @@ -22,15 +22,6 @@ int ice_vsi_cfg_lan_txqs(struct ice_vsi *vsi); void ice_vsi_cfg_msix(struct ice_vsi *vsi); -int -ice_vsi_add_vlan(struct ice_vsi *vsi, u16 vid, enum ice_sw_fwd_act_type action); - -int ice_vsi_kill_vlan(struct ice_vsi *vsi, u16 vid); - -int ice_vsi_manage_vlan_insertion(struct ice_vsi *vsi); - -int ice_vsi_manage_vlan_stripping(struct ice_vsi *vsi, bool ena); - int ice_vsi_start_all_rx_rings(struct ice_vsi *vsi); int ice_vsi_stop_all_rx_rings(struct ice_vsi *vsi); @@ -45,8 +36,6 @@ int ice_vsi_stop_xdp_tx_rings(struct ice_vsi *vsi); bool ice_vsi_is_vlan_pruning_ena(struct ice_vsi *vsi); -int ice_cfg_vlan_pruning(struct ice_vsi *vsi, bool ena); - void ice_cfg_sw_lldp(struct ice_vsi *vsi, bool tx, bool create); int ice_set_link(struct ice_vsi *vsi, bool ena); diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c index 18ecb1eb85a6..904571527e27 100644 --- a/drivers/net/ethernet/intel/ice/ice_main.c +++ b/drivers/net/ethernet/intel/ice/ice_main.c @@ -401,7 +401,7 @@ static int ice_vsi_sync_fltr(struct ice_vsi *vsi) ~IFF_PROMISC; goto out_promisc; } - ice_cfg_vlan_pruning(vsi, false); + vsi->vlan_ops.dis_rx_filtering(vsi); } } else { /* Clear Rx filter to remove traffic from wire */ @@ -415,7 +415,7 @@ static int ice_vsi_sync_fltr(struct ice_vsi *vsi) goto out_promisc; } if (vsi->num_vlan > 1) - ice_cfg_vlan_pruning(vsi, true); + vsi->vlan_ops.ena_rx_filtering(vsi); } } } @@ -3429,7 +3429,7 @@ ice_vlan_rx_add_vid(struct net_device *netdev, __always_unused __be16 proto, /* Enable VLAN pruning when a VLAN other than 0 is added */ if (!ice_vsi_is_vlan_pruning_ena(vsi)) { - ret = ice_cfg_vlan_pruning(vsi, true); + ret = vsi->vlan_ops.ena_rx_filtering(vsi); if (ret) return ret; } @@ -3437,7 +3437,7 @@ ice_vlan_rx_add_vid(struct net_device *netdev, __always_unused __be16 proto, /* Add a switch rule for this VLAN ID so its corresponding VLAN tagged * packets aren't pruned by the device's internal switch on Rx */ - ret = ice_vsi_add_vlan(vsi, vid, ICE_FWD_TO_VSI); + ret = vsi->vlan_ops.add_vlan(vsi, vid, ICE_FWD_TO_VSI); if (!ret) set_bit(ICE_VSI_VLAN_FLTR_CHANGED, vsi->state); @@ -3464,16 +3464,16 @@ ice_vlan_rx_kill_vid(struct net_device *netdev, __always_unused __be16 proto, 
if (!vid) return 0; - /* Make sure ice_vsi_kill_vlan is successful before updating VLAN + /* Make sure VLAN delete is successful before updating VLAN * information */ - ret = ice_vsi_kill_vlan(vsi, vid); + ret = vsi->vlan_ops.del_vlan(vsi, vid); if (ret) return ret; /* Disable pruning when VLAN 0 is the only VLAN rule */ if (vsi->num_vlan == 1 && ice_vsi_is_vlan_pruning_ena(vsi)) - ret = ice_cfg_vlan_pruning(vsi, false); + vsi->vlan_ops.dis_rx_filtering(vsi); set_bit(ICE_VSI_VLAN_FLTR_CHANGED, vsi->state); return ret; @@ -5617,24 +5617,24 @@ ice_set_features(struct net_device *netdev, netdev_features_t features) if ((features & NETIF_F_HW_VLAN_CTAG_RX) && !(netdev->features & NETIF_F_HW_VLAN_CTAG_RX)) - ret = ice_vsi_manage_vlan_stripping(vsi, true); + ret = vsi->vlan_ops.ena_stripping(vsi); else if (!(features & NETIF_F_HW_VLAN_CTAG_RX) && (netdev->features & NETIF_F_HW_VLAN_CTAG_RX)) - ret = ice_vsi_manage_vlan_stripping(vsi, false); + ret = vsi->vlan_ops.dis_stripping(vsi); if ((features & NETIF_F_HW_VLAN_CTAG_TX) && !(netdev->features & NETIF_F_HW_VLAN_CTAG_TX)) - ret = ice_vsi_manage_vlan_insertion(vsi); + ret = vsi->vlan_ops.ena_insertion(vsi); else if (!(features & NETIF_F_HW_VLAN_CTAG_TX) && (netdev->features & NETIF_F_HW_VLAN_CTAG_TX)) - ret = ice_vsi_manage_vlan_insertion(vsi); + ret = vsi->vlan_ops.dis_insertion(vsi); if ((features & NETIF_F_HW_VLAN_CTAG_FILTER) && !(netdev->features & NETIF_F_HW_VLAN_CTAG_FILTER)) - ret = ice_cfg_vlan_pruning(vsi, true); + ret = vsi->vlan_ops.ena_rx_filtering(vsi); else if (!(features & NETIF_F_HW_VLAN_CTAG_FILTER) && (netdev->features & NETIF_F_HW_VLAN_CTAG_FILTER)) - ret = ice_cfg_vlan_pruning(vsi, false); + ret = vsi->vlan_ops.dis_rx_filtering(vsi); if ((features & NETIF_F_NTUPLE) && !(netdev->features & NETIF_F_NTUPLE)) { @@ -5670,9 +5670,9 @@ static int ice_vsi_vlan_setup(struct ice_vsi *vsi) int ret = 0; if (vsi->netdev->features & NETIF_F_HW_VLAN_CTAG_RX) - ret = ice_vsi_manage_vlan_stripping(vsi, true); + ret = vsi->vlan_ops.ena_stripping(vsi); if (vsi->netdev->features & NETIF_F_HW_VLAN_CTAG_TX) - ret = ice_vsi_manage_vlan_insertion(vsi); + ret = vsi->vlan_ops.ena_insertion(vsi); return ret; } diff --git a/drivers/net/ethernet/intel/ice/ice_osdep.h b/drivers/net/ethernet/intel/ice/ice_osdep.h index f57c414bc0a9..380e8ae94fc9 100644 --- a/drivers/net/ethernet/intel/ice/ice_osdep.h +++ b/drivers/net/ethernet/intel/ice/ice_osdep.h @@ -9,6 +9,7 @@ #ifndef CONFIG_64BIT #include #endif +#include #define wr32(a, reg, value) writel((value), ((a)->hw_addr + (reg))) #define rd32(a, reg) readl((a)->hw_addr + (reg)) diff --git a/drivers/net/ethernet/intel/ice/ice_switch.h b/drivers/net/ethernet/intel/ice/ice_switch.h index d8334beaaa8a..4fb1a7ae5dbb 100644 --- a/drivers/net/ethernet/intel/ice/ice_switch.h +++ b/drivers/net/ethernet/intel/ice/ice_switch.h @@ -33,15 +33,6 @@ struct ice_vsi_ctx { struct ice_q_ctx *rdma_q_ctx[ICE_MAX_TRAFFIC_CLASS]; }; -enum ice_sw_fwd_act_type { - ICE_FWD_TO_VSI = 0, - ICE_FWD_TO_VSI_LIST, /* Do not use this when adding filter */ - ICE_FWD_TO_Q, - ICE_FWD_TO_QGRP, - ICE_DROP_PACKET, - ICE_INVAL_ACT -}; - /* Switch recipe ID enum values are specific to hardware */ enum ice_sw_lkup_type { ICE_SW_LKUP_ETHERTYPE = 0, diff --git a/drivers/net/ethernet/intel/ice/ice_type.h b/drivers/net/ethernet/intel/ice/ice_type.h index caf0a02b25f5..ef2ef064a74c 100644 --- a/drivers/net/ethernet/intel/ice/ice_type.h +++ b/drivers/net/ethernet/intel/ice/ice_type.h @@ -1007,6 +1007,15 @@ struct ice_hw_port_stats { u64 fd_sb_match; }; 
+enum ice_sw_fwd_act_type { + ICE_FWD_TO_VSI = 0, + ICE_FWD_TO_VSI_LIST, /* Do not use this when adding filter */ + ICE_FWD_TO_Q, + ICE_FWD_TO_QGRP, + ICE_DROP_PACKET, + ICE_INVAL_ACT +}; + struct ice_aq_get_set_rss_lut_params { u16 vsi_handle; /* software VSI handle */ u16 lut_size; /* size of the LUT buffer */ diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c index ab03010c822d..6fa0968f0912 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c @@ -642,55 +642,6 @@ static void ice_trigger_vf_reset(struct ice_vf *vf, bool is_vflr, bool is_pfr) } } -/** - * ice_vsi_manage_pvid - Enable or disable port VLAN for VSI - * @vsi: the VSI to update - * @pvid_info: VLAN ID and QoS used to set the PVID VSI context field - * @enable: true for enable PVID false for disable - */ -static int ice_vsi_manage_pvid(struct ice_vsi *vsi, u16 pvid_info, bool enable) -{ - struct ice_hw *hw = &vsi->back->hw; - struct ice_aqc_vsi_props *info; - struct ice_vsi_ctx *ctxt; - int ret; - - ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL); - if (!ctxt) - return -ENOMEM; - - ctxt->info = vsi->info; - info = &ctxt->info; - if (enable) { - info->vlan_flags = ICE_AQ_VSI_VLAN_MODE_UNTAGGED | - ICE_AQ_VSI_PVLAN_INSERT_PVID | - ICE_AQ_VSI_VLAN_EMOD_STR; - info->sw_flags2 |= ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA; - } else { - info->vlan_flags = ICE_AQ_VSI_VLAN_EMOD_NOTHING | - ICE_AQ_VSI_VLAN_MODE_ALL; - info->sw_flags2 &= ~ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA; - } - - info->pvid = cpu_to_le16(pvid_info); - info->valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_VLAN_VALID | - ICE_AQ_VSI_PROP_SW_VALID); - - ret = ice_update_vsi(hw, vsi->idx, ctxt, NULL); - if (ret) { - dev_info(ice_hw_to_dev(hw), "update VSI for port VLAN failed, err %d aq_err %s\n", - ret, ice_aq_str(hw->adminq.sq_last_status)); - goto out; - } - - vsi->info.vlan_flags = info->vlan_flags; - vsi->info.sw_flags2 = info->sw_flags2; - vsi->info.pvid = info->pvid; -out: - kfree(ctxt); - return ret; -} - /** * ice_vf_get_port_info - Get the VF's port info structure * @vf: VF used to get the port info structure for @@ -815,7 +766,7 @@ static int ice_vf_rebuild_host_vlan_cfg(struct ice_vf *vf) int err; if (vf->port_vlan_info) { - err = ice_vsi_manage_pvid(vsi, vf->port_vlan_info, true); + err = vsi->vlan_ops.set_port_vlan(vsi, vf->port_vlan_info); if (err) { dev_err(dev, "failed to configure port VLAN via VSI parameters for VF %u, error %d\n", vf->vf_id, err); @@ -826,7 +777,7 @@ static int ice_vf_rebuild_host_vlan_cfg(struct ice_vf *vf) } /* vlan_id will either be 0 or the port VLAN number */ - err = ice_vsi_add_vlan(vsi, vlan_id, ICE_FWD_TO_VSI); + err = vsi->vlan_ops.add_vlan(vsi, vlan_id, ICE_FWD_TO_VSI); if (err) { dev_err(dev, "failed to add %s VLAN %u filter for VF %u, error %d\n", vf->port_vlan_info ? 
"port" : "", vlan_id, vf->vf_id, @@ -837,37 +788,6 @@ static int ice_vf_rebuild_host_vlan_cfg(struct ice_vf *vf) return 0; } -static int ice_cfg_vlan_antispoof(struct ice_vsi *vsi, bool enable) -{ - struct ice_vsi_ctx *ctx; - int err; - - ctx = kzalloc(sizeof(*ctx), GFP_KERNEL); - if (!ctx) - return -ENOMEM; - - ctx->info.sec_flags = vsi->info.sec_flags; - ctx->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_SECURITY_VALID); - - if (enable) - ctx->info.sec_flags |= ICE_AQ_VSI_SEC_TX_VLAN_PRUNE_ENA << - ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S; - else - ctx->info.sec_flags &= ~(ICE_AQ_VSI_SEC_TX_VLAN_PRUNE_ENA << - ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S); - - err = ice_update_vsi(&vsi->back->hw, vsi->idx, ctx, NULL); - if (err) - dev_err(ice_pf_to_dev(vsi->back), "Failed to configure Tx VLAN anti-spoof %s for VSI %d, error %d\n", - enable ? "ON" : "OFF", vsi->vsi_num, err); - else - vsi->info.sec_flags = ctx->info.sec_flags; - - kfree(ctx); - - return err; -} - static int ice_cfg_mac_antispoof(struct ice_vsi *vsi, bool enable) { struct ice_vsi_ctx *ctx; @@ -905,7 +825,7 @@ static int ice_vsi_ena_spoofchk(struct ice_vsi *vsi) { int err; - err = ice_cfg_vlan_antispoof(vsi, true); + err = vsi->vlan_ops.ena_tx_filtering(vsi); if (err) return err; @@ -920,7 +840,7 @@ static int ice_vsi_dis_spoofchk(struct ice_vsi *vsi) { int err; - err = ice_cfg_vlan_antispoof(vsi, false); + err = vsi->vlan_ops.dis_tx_filtering(vsi); if (err) return err; @@ -3131,9 +3051,9 @@ static int ice_vc_cfg_promiscuous_mode_msg(struct ice_vf *vf, u8 *msg) if (vsi->num_vlan || vf->port_vlan_info) { if (rm_promisc) - ret = ice_cfg_vlan_pruning(vsi, true); + ret = vsi->vlan_ops.ena_rx_filtering(vsi); else - ret = ice_cfg_vlan_pruning(vsi, false); + ret = vsi->vlan_ops.dis_rx_filtering(vsi); if (ret) { dev_err(dev, "Failed to configure VLAN pruning in promiscuous mode\n"); v_ret = VIRTCHNL_STATUS_ERR_PARAM; @@ -4330,7 +4250,7 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) if (!vid) continue; - status = ice_vsi_add_vlan(vsi, vid, ICE_FWD_TO_VSI); + status = vsi->vlan_ops.add_vlan(vsi, vid, ICE_FWD_TO_VSI); if (status) { v_ret = VIRTCHNL_STATUS_ERR_PARAM; goto error_param; @@ -4339,7 +4259,7 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) /* Enable VLAN pruning when non-zero VLAN is added */ if (!vlan_promisc && vid && !ice_vsi_is_vlan_pruning_ena(vsi)) { - status = ice_cfg_vlan_pruning(vsi, true); + status = vsi->vlan_ops.ena_rx_filtering(vsi); if (status) { v_ret = VIRTCHNL_STATUS_ERR_PARAM; dev_err(dev, "Enable VLAN pruning on VLAN ID: %d failed error-%d\n", @@ -4381,10 +4301,7 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) if (!vid) continue; - /* Make sure ice_vsi_kill_vlan is successful before - * updating VLAN information - */ - status = ice_vsi_kill_vlan(vsi, vid); + status = vsi->vlan_ops.del_vlan(vsi, vid); if (status) { v_ret = VIRTCHNL_STATUS_ERR_PARAM; goto error_param; @@ -4393,7 +4310,7 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) /* Disable VLAN pruning when only VLAN 0 is left */ if (vsi->num_vlan == 1 && ice_vsi_is_vlan_pruning_ena(vsi)) - ice_cfg_vlan_pruning(vsi, false); + status = vsi->vlan_ops.dis_rx_filtering(vsi); /* Disable Unicast/Multicast VLAN promiscuous mode */ if (vlan_promisc) { @@ -4462,7 +4379,7 @@ static int ice_vc_ena_vlan_stripping(struct ice_vf *vf) } vsi = ice_get_vf_vsi(vf); - if (ice_vsi_manage_vlan_stripping(vsi, true)) + if (vsi->vlan_ops.ena_stripping(vsi)) v_ret = 
VIRTCHNL_STATUS_ERR_PARAM; error_param: @@ -4497,7 +4414,7 @@ static int ice_vc_dis_vlan_stripping(struct ice_vf *vf) goto error_param; } - if (ice_vsi_manage_vlan_stripping(vsi, false)) + if (vsi->vlan_ops.dis_stripping(vsi)) v_ret = VIRTCHNL_STATUS_ERR_PARAM; error_param: @@ -4527,9 +4444,9 @@ static int ice_vf_init_vlan_stripping(struct ice_vf *vf) return 0; if (ice_vf_vlan_offload_ena(vf->driver_caps)) - return ice_vsi_manage_vlan_stripping(vsi, true); + return vsi->vlan_ops.ena_stripping(vsi); else - return ice_vsi_manage_vlan_stripping(vsi, false); + return vsi->vlan_ops.dis_stripping(vsi); } static struct ice_vc_vf_ops ice_vc_vf_dflt_ops = { diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c new file mode 100644 index 000000000000..6b0a4bf28305 --- /dev/null +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c @@ -0,0 +1,326 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (C) 2019-2021, Intel Corporation. */ + +#include "ice_vsi_vlan_lib.h" +#include "ice_lib.h" +#include "ice_fltr.h" +#include "ice.h" + +/** + * ice_vsi_add_vlan - default add VLAN implementation for all VSI types + * @vsi: VSI being configured + * @vid: VLAN ID to be added + * @action: filter action to be performed on match + */ +int +ice_vsi_add_vlan(struct ice_vsi *vsi, u16 vid, enum ice_sw_fwd_act_type action) +{ + int err = 0; + + if (!ice_fltr_add_vlan(vsi, vid, action)) { + vsi->num_vlan++; + } else { + err = -ENODEV; + dev_err(ice_pf_to_dev(vsi->back), "Failure Adding VLAN %d on VSI %i\n", + vid, vsi->vsi_num); + } + + return err; +} + +/** + * ice_vsi_del_vlan - default del VLAN implementation for all VSI types + * @vsi: VSI being configured + * @vid: VLAN ID to be removed + */ +int ice_vsi_del_vlan(struct ice_vsi *vsi, u16 vid) +{ + struct ice_pf *pf = vsi->back; + struct device *dev; + int err; + + dev = ice_pf_to_dev(pf); + + err = ice_fltr_remove_vlan(vsi, vid, ICE_FWD_TO_VSI); + if (!err) { + vsi->num_vlan--; + } else if (err == -ENOENT) { + dev_dbg(dev, "Failed to remove VLAN %d on VSI %i, it does not exist\n", + vid, vsi->vsi_num); + err = 0; + } else { + dev_err(dev, "Error removing VLAN %d on VSI %i error: %d\n", + vid, vsi->vsi_num, err); + } + + return err; +} + +/** + * ice_vsi_manage_vlan_insertion - Manage VLAN insertion for the VSI for Tx + * @vsi: the VSI being changed + */ +static int ice_vsi_manage_vlan_insertion(struct ice_vsi *vsi) +{ + struct ice_hw *hw = &vsi->back->hw; + struct ice_vsi_ctx *ctxt; + int err; + + ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL); + if (!ctxt) + return -ENOMEM; + + /* Here we are configuring the VSI to let the driver add VLAN tags by + * setting vlan_flags to ICE_AQ_VSI_VLAN_MODE_ALL. The actual VLAN tag + * insertion happens in the Tx hot path, in ice_tx_map. 
+ */ + ctxt->info.vlan_flags = ICE_AQ_VSI_VLAN_MODE_ALL; + + /* Preserve existing VLAN strip setting */ + ctxt->info.vlan_flags |= (vsi->info.vlan_flags & + ICE_AQ_VSI_VLAN_EMOD_M); + + ctxt->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_VLAN_VALID); + + err = ice_update_vsi(hw, vsi->idx, ctxt, NULL); + if (err) { + dev_err(ice_pf_to_dev(vsi->back), "update VSI for VLAN insert failed, err %d aq_err %s\n", + err, ice_aq_str(hw->adminq.sq_last_status)); + goto out; + } + + vsi->info.vlan_flags = ctxt->info.vlan_flags; +out: + kfree(ctxt); + return err; +} + +/** + * ice_vsi_manage_vlan_stripping - Manage VLAN stripping for the VSI for Rx + * @vsi: the VSI being changed + * @ena: boolean value indicating if this is a enable or disable request + */ +static int ice_vsi_manage_vlan_stripping(struct ice_vsi *vsi, bool ena) +{ + struct ice_hw *hw = &vsi->back->hw; + struct ice_vsi_ctx *ctxt; + int err; + + /* do not allow modifying VLAN stripping when a port VLAN is configured + * on this VSI + */ + if (vsi->info.pvid) + return 0; + + ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL); + if (!ctxt) + return -ENOMEM; + + /* Here we are configuring what the VSI should do with the VLAN tag in + * the Rx packet. We can either leave the tag in the packet or put it in + * the Rx descriptor. + */ + if (ena) + /* Strip VLAN tag from Rx packet and put it in the desc */ + ctxt->info.vlan_flags = ICE_AQ_VSI_VLAN_EMOD_STR_BOTH; + else + /* Disable stripping. Leave tag in packet */ + ctxt->info.vlan_flags = ICE_AQ_VSI_VLAN_EMOD_NOTHING; + + /* Allow all packets untagged/tagged */ + ctxt->info.vlan_flags |= ICE_AQ_VSI_VLAN_MODE_ALL; + + ctxt->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_VLAN_VALID); + + err = ice_update_vsi(hw, vsi->idx, ctxt, NULL); + if (err) { + dev_err(ice_pf_to_dev(vsi->back), "update VSI for VLAN strip failed, ena = %d err %d aq_err %s\n", + ena, err, ice_aq_str(hw->adminq.sq_last_status)); + goto out; + } + + vsi->info.vlan_flags = ctxt->info.vlan_flags; +out: + kfree(ctxt); + return err; +} + +int ice_vsi_ena_stripping(struct ice_vsi *vsi) +{ + return ice_vsi_manage_vlan_stripping(vsi, true); +} + +int ice_vsi_dis_stripping(struct ice_vsi *vsi) +{ + return ice_vsi_manage_vlan_stripping(vsi, false); +} + +int ice_vsi_ena_insertion(struct ice_vsi *vsi) +{ + return ice_vsi_manage_vlan_insertion(vsi); +} + +int ice_vsi_dis_insertion(struct ice_vsi *vsi) +{ + return ice_vsi_manage_vlan_insertion(vsi); +} + +/** + * ice_vsi_manage_pvid - Enable or disable port VLAN for VSI + * @vsi: the VSI to update + * @pvid_info: VLAN ID and QoS used to set the PVID VSI context field + * @enable: true for enable PVID false for disable + */ +static int ice_vsi_manage_pvid(struct ice_vsi *vsi, u16 pvid_info, bool enable) +{ + struct ice_hw *hw = &vsi->back->hw; + struct ice_aqc_vsi_props *info; + struct ice_vsi_ctx *ctxt; + int ret; + + ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL); + if (!ctxt) + return -ENOMEM; + + ctxt->info = vsi->info; + info = &ctxt->info; + if (enable) { + info->vlan_flags = ICE_AQ_VSI_VLAN_MODE_UNTAGGED | + ICE_AQ_VSI_PVLAN_INSERT_PVID | + ICE_AQ_VSI_VLAN_EMOD_STR; + info->sw_flags2 |= ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA; + } else { + info->vlan_flags = ICE_AQ_VSI_VLAN_EMOD_NOTHING | + ICE_AQ_VSI_VLAN_MODE_ALL; + info->sw_flags2 &= ~ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA; + } + + info->pvid = cpu_to_le16(pvid_info); + info->valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_VLAN_VALID | + ICE_AQ_VSI_PROP_SW_VALID); + + ret = ice_update_vsi(hw, vsi->idx, ctxt, NULL); + if (ret) { + 
dev_info(ice_hw_to_dev(hw), "update VSI for port VLAN failed, err %d aq_err %s\n", + ret, ice_aq_str(hw->adminq.sq_last_status)); + goto out; + } + + vsi->info.vlan_flags = info->vlan_flags; + vsi->info.sw_flags2 = info->sw_flags2; + vsi->info.pvid = info->pvid; +out: + kfree(ctxt); + return ret; +} + +int ice_vsi_set_port_vlan(struct ice_vsi *vsi, u16 pvid_info) +{ + return ice_vsi_manage_pvid(vsi, pvid_info, true); +} + +/** + * ice_cfg_vlan_pruning - enable or disable VLAN pruning on the VSI + * @vsi: VSI to enable or disable VLAN pruning on + * @ena: set to true to enable VLAN pruning and false to disable it + * + * returns 0 if VSI is updated, negative otherwise + */ +static int ice_cfg_vlan_pruning(struct ice_vsi *vsi, bool ena) +{ + struct ice_vsi_ctx *ctxt; + struct ice_pf *pf; + int status; + + if (!vsi) + return -EINVAL; + + /* Don't enable VLAN pruning if the netdev is currently in promiscuous + * mode. VLAN pruning will be enabled when the interface exits + * promiscuous mode if any VLAN filters are active. + */ + if (vsi->netdev && vsi->netdev->flags & IFF_PROMISC && ena) + return 0; + + pf = vsi->back; + ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL); + if (!ctxt) + return -ENOMEM; + + ctxt->info = vsi->info; + + if (ena) + ctxt->info.sw_flags2 |= ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA; + else + ctxt->info.sw_flags2 &= ~ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA; + + ctxt->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_SW_VALID); + + status = ice_update_vsi(&pf->hw, vsi->idx, ctxt, NULL); + if (status) { + netdev_err(vsi->netdev, "%sabling VLAN pruning on VSI handle: %d, VSI HW ID: %d failed, err = %d, aq_err = %s\n", + ena ? "En" : "Dis", vsi->idx, vsi->vsi_num, status, + ice_aq_str(pf->hw.adminq.sq_last_status)); + goto err_out; + } + + vsi->info.sw_flags2 = ctxt->info.sw_flags2; + + kfree(ctxt); + return 0; + +err_out: + kfree(ctxt); + return status; +} + +int ice_vsi_ena_rx_vlan_filtering(struct ice_vsi *vsi) +{ + return ice_cfg_vlan_pruning(vsi, true); +} + +int ice_vsi_dis_rx_vlan_filtering(struct ice_vsi *vsi) +{ + return ice_cfg_vlan_pruning(vsi, false); +} + +static int ice_cfg_vlan_antispoof(struct ice_vsi *vsi, bool enable) +{ + struct ice_vsi_ctx *ctx; + int err; + + ctx = kzalloc(sizeof(*ctx), GFP_KERNEL); + if (!ctx) + return -ENOMEM; + + ctx->info.sec_flags = vsi->info.sec_flags; + ctx->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_SECURITY_VALID); + + if (enable) + ctx->info.sec_flags |= ICE_AQ_VSI_SEC_TX_VLAN_PRUNE_ENA << + ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S; + else + ctx->info.sec_flags &= ~(ICE_AQ_VSI_SEC_TX_VLAN_PRUNE_ENA << + ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S); + + err = ice_update_vsi(&vsi->back->hw, vsi->idx, ctx, NULL); + if (err) + dev_err(ice_pf_to_dev(vsi->back), "Failed to configure Tx VLAN anti-spoof %s for VSI %d, error %d\n", + enable ? "ON" : "OFF", vsi->vsi_num, err); + else + vsi->info.sec_flags = ctx->info.sec_flags; + + kfree(ctx); + + return err; +} + +int ice_vsi_ena_tx_vlan_filtering(struct ice_vsi *vsi) +{ + return ice_cfg_vlan_antispoof(vsi, true); +} + +int ice_vsi_dis_tx_vlan_filtering(struct ice_vsi *vsi) +{ + return ice_cfg_vlan_antispoof(vsi, false); +} diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h new file mode 100644 index 000000000000..f9fe33026306 --- /dev/null +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h @@ -0,0 +1,27 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2019-2021, Intel Corporation. 
*/ + +#ifndef _ICE_VSI_VLAN_LIB_H_ +#define _ICE_VSI_VLAN_LIB_H_ + +#include +#include "ice_type.h" + +struct ice_vsi; + +int +ice_vsi_add_vlan(struct ice_vsi *vsi, u16 vid, enum ice_sw_fwd_act_type action); +int ice_vsi_del_vlan(struct ice_vsi *vsi, u16 vid); + +int ice_vsi_ena_stripping(struct ice_vsi *vsi); +int ice_vsi_dis_stripping(struct ice_vsi *vsi); +int ice_vsi_ena_insertion(struct ice_vsi *vsi); +int ice_vsi_dis_insertion(struct ice_vsi *vsi); +int ice_vsi_set_port_vlan(struct ice_vsi *vsi, u16 pvid_info); + +int ice_vsi_ena_rx_vlan_filtering(struct ice_vsi *vsi); +int ice_vsi_dis_rx_vlan_filtering(struct ice_vsi *vsi); +int ice_vsi_ena_tx_vlan_filtering(struct ice_vsi *vsi); +int ice_vsi_dis_tx_vlan_filtering(struct ice_vsi *vsi); + +#endif /* _ICE_VSI_VLAN_LIB_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.c b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.c new file mode 100644 index 000000000000..3bab6c025856 --- /dev/null +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.c @@ -0,0 +1,20 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (C) 2019-2021, Intel Corporation. */ + +#include "ice_vsi_vlan_ops.h" +#include "ice.h" + +void ice_vsi_init_vlan_ops(struct ice_vsi *vsi) +{ + vsi->vlan_ops.add_vlan = ice_vsi_add_vlan; + vsi->vlan_ops.del_vlan = ice_vsi_del_vlan; + vsi->vlan_ops.ena_stripping = ice_vsi_ena_stripping; + vsi->vlan_ops.dis_stripping = ice_vsi_dis_stripping; + vsi->vlan_ops.ena_insertion = ice_vsi_ena_insertion; + vsi->vlan_ops.dis_insertion = ice_vsi_dis_insertion; + vsi->vlan_ops.ena_rx_filtering = ice_vsi_ena_rx_vlan_filtering; + vsi->vlan_ops.dis_rx_filtering = ice_vsi_dis_rx_vlan_filtering; + vsi->vlan_ops.ena_tx_filtering = ice_vsi_ena_tx_vlan_filtering; + vsi->vlan_ops.dis_tx_filtering = ice_vsi_dis_tx_vlan_filtering; + vsi->vlan_ops.set_port_vlan = ice_vsi_set_port_vlan; +} diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h new file mode 100644 index 000000000000..522169742661 --- /dev/null +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h @@ -0,0 +1,28 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2019-2021, Intel Corporation. 
*/ + +#ifndef _ICE_VSI_VLAN_OPS_H_ +#define _ICE_VSI_VLAN_OPS_H_ + +#include "ice_type.h" +#include "ice_vsi_vlan_lib.h" + +struct ice_vsi; + +struct ice_vsi_vlan_ops { + int (*add_vlan)(struct ice_vsi *vsi, u16 vid, enum ice_sw_fwd_act_type action); + int (*del_vlan)(struct ice_vsi *vsi, u16 vid); + int (*ena_stripping)(struct ice_vsi *vsi); + int (*dis_stripping)(struct ice_vsi *vsi); + int (*ena_insertion)(struct ice_vsi *vsi); + int (*dis_insertion)(struct ice_vsi *vsi); + int (*ena_rx_filtering)(struct ice_vsi *vsi); + int (*dis_rx_filtering)(struct ice_vsi *vsi); + int (*ena_tx_filtering)(struct ice_vsi *vsi); + int (*dis_tx_filtering)(struct ice_vsi *vsi); + int (*set_port_vlan)(struct ice_vsi *vsi, u16 pvid_info); +}; + +void ice_vsi_init_vlan_ops(struct ice_vsi *vsi); + +#endif /* _ICE_VSI_VLAN_OPS_H_ */ -- 2.20.1 From anthony.l.nguyen at intel.com Tue Nov 30 23:51:45 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Tue, 30 Nov 2021 15:51:45 -0800 Subject: [Intel-wired-lan] [PATCH net-next v2 13/14] ice: Add support for 802.1ad port VLANs VF In-Reply-To: <20211130235146.28731-1-anthony.l.nguyen@intel.com> References: <20211130235146.28731-1-anthony.l.nguyen@intel.com> Message-ID: <20211130235146.28731-13-anthony.l.nguyen@intel.com> From: Brett Creeley Currently there is only support for 802.1Q port VLANs on SR-IOV VFs. Add support to also allow 802.1ad port VLANs when double VLAN mode is enabled. Signed-off-by: Brett Creeley Signed-off-by: Tony Nguyen --- .../net/ethernet/intel/ice/ice_virtchnl_pf.c | 51 ++++++++++++++++--- 1 file changed, 44 insertions(+), 7 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c index de74a2b4f846..f1802de98b82 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c @@ -768,6 +768,11 @@ bool ice_vf_is_port_vlan_ena(struct ice_vf *vf) return (ice_vf_get_port_vlan_id(vf) || ice_vf_get_port_vlan_prio(vf)); } +static u16 ice_vf_get_port_vlan_tpid(struct ice_vf *vf) +{ + return vf->port_vlan_info.tpid; +} + /** * ice_vf_rebuild_host_vlan_cfg - add VLAN 0 filter or rebuild the Port VLAN * @vf: VF to add MAC filters for @@ -4129,6 +4134,33 @@ static int ice_vc_request_qs_msg(struct ice_vf *vf, u8 *msg) v_ret, (u8 *)vfres, sizeof(*vfres)); } +/** + * ice_is_supported_port_vlan_proto - make sure the vlan_proto is supported + * @hw: hardware structure used to check the VLAN mode + * @vlan_proto: VLAN TPID being checked + * + * If the device is configured in Double VLAN Mode (DVM), then both ETH_P_8021Q + * and ETH_P_8021AD are supported. If the device is configured in Single VLAN + * Mode (SVM), then only ETH_P_8021Q is supported. 
+ */ +static bool +ice_is_supported_port_vlan_proto(struct ice_hw *hw, u16 vlan_proto) +{ + bool is_supported = false; + + switch (vlan_proto) { + case ETH_P_8021Q: + is_supported = true; + break; + case ETH_P_8021AD: + if (ice_is_dvm_ena(hw)) + is_supported = true; + break; + } + + return is_supported; +} + /** * ice_set_vf_port_vlan * @netdev: network interface device structure @@ -4144,6 +4176,7 @@ ice_set_vf_port_vlan(struct net_device *netdev, int vf_id, u16 vlan_id, u8 qos, __be16 vlan_proto) { struct ice_pf *pf = ice_netdev_to_pf(netdev); + u16 local_vlan_proto = ntohs(vlan_proto); struct device *dev; struct ice_vf *vf; int ret; @@ -4158,8 +4191,9 @@ ice_set_vf_port_vlan(struct net_device *netdev, int vf_id, u16 vlan_id, u8 qos, return -EINVAL; } - if (vlan_proto != htons(ETH_P_8021Q)) { - dev_err(dev, "VF VLAN protocol is not supported\n"); + if (!ice_is_supported_port_vlan_proto(&pf->hw, local_vlan_proto)) { + dev_err(dev, "VF VLAN protocol 0x%04x is not supported\n", + local_vlan_proto); return -EPROTONOSUPPORT; } @@ -4169,19 +4203,20 @@ ice_set_vf_port_vlan(struct net_device *netdev, int vf_id, u16 vlan_id, u8 qos, return ret; if (ice_vf_get_port_vlan_prio(vf) == qos && + ice_vf_get_port_vlan_tpid(vf) == local_vlan_proto && ice_vf_get_port_vlan_id(vf) == vlan_id) { /* duplicate request, so just return success */ - dev_dbg(dev, "Duplicate port VLAN %u, QoS %u request\n", - vlan_id, qos); + dev_dbg(dev, "Duplicate port VLAN %u, QoS %u, TPID 0x%04x request\n", + vlan_id, qos, local_vlan_proto); return 0; } mutex_lock(&vf->cfg_lock); - vf->port_vlan_info = ICE_VLAN(ETH_P_8021Q, vlan_id, qos); + vf->port_vlan_info = ICE_VLAN(local_vlan_proto, vlan_id, qos); if (ice_vf_is_port_vlan_ena(vf)) - dev_info(dev, "Setting VLAN %u, QoS %u on VF %d\n", - vlan_id, qos, vf_id); + dev_info(dev, "Setting VLAN %u, QoS %u, TPID 0x%04x on VF %d\n", + vlan_id, qos, local_vlan_proto, vf_id); else dev_info(dev, "Clearing port VLAN on VF %d\n", vf_id); @@ -5904,6 +5939,8 @@ ice_get_vf_cfg(struct net_device *netdev, int vf_id, struct ifla_vf_info *ivi) /* VF configuration for VLAN and applicable QoS */ ivi->vlan = ice_vf_get_port_vlan_id(vf); ivi->qos = ice_vf_get_port_vlan_prio(vf); + if (ice_vf_is_port_vlan_ena(vf)) + ivi->vlan_proto = cpu_to_be16(ice_vf_get_port_vlan_tpid(vf)); ivi->trusted = vf->trusted; ivi->spoofchk = vf->spoofchk; -- 2.20.1 From anthony.l.nguyen at intel.com Tue Nov 30 23:51:41 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Tue, 30 Nov 2021 15:51:41 -0800 Subject: [Intel-wired-lan] [PATCH net-next v2 09/14] ice: Add hot path support for 802.1Q and 802.1ad VLAN offloads In-Reply-To: <20211130235146.28731-1-anthony.l.nguyen@intel.com> References: <20211130235146.28731-1-anthony.l.nguyen@intel.com> Message-ID: <20211130235146.28731-9-anthony.l.nguyen@intel.com> From: Brett Creeley Currently the driver only supports 802.1Q VLAN insertion and stripping. However, once Double VLAN Mode (DVM) is fully supported, then both 802.1Q and 802.1ad VLAN insertion and stripping will be supported. Unfortunately the VSI context parameters only allow for one VLAN ethertype at a time for VLAN offloads so only one or the other VLAN ethertype offload can be supported at once. To support this, multiple changes are needed. Rx path changes: [1] In DVM, the Rx queue context l2tagsel field needs to be cleared so the outermost tag shows up in the l2tag2_2nd field of the Rx flex descriptor. In Single VLAN Mode (SVM), the l2tagsel field should remain 1 to support SVM configurations. 
[2] Modify the ice_test_staterr() function to take a __le16 instead of the ice_32b_rx_flex_desc union pointer so this function can be used for both rx_desc->wb.status_error0 and rx_desc->wb.status_error1. [3] Add the new inline function ice_get_vlan_tag_from_rx_desc() that checks if there is a VLAN tag in l2tag1 or l2tag2_2nd. [4] In ice_receive_skb(), add a check to see if NETIF_F_HW_VLAN_STAG_RX is enabled in netdev->features. If it is, then this is the VLAN ethertype that needs to be added to the stripping VLAN tag. Since ice_fix_features() prevents CTAG_RX and STAG_RX from being enabled simultaneously, the VLAN ethertype will only ever be 802.1Q or 802.1ad. Tx path changes: [1] In DVM, the VLAN tag needs to be placed in the l2tag2 field of the Tx context descriptor. The new define ICE_TX_FLAGS_HW_OUTER_SINGLE_VLAN was added to the list of tx_flags to handle this case. [2] When the stack requests the VLAN tag to be offloaded on Tx, the driver needs to set either ICE_TX_FLAGS_HW_OUTER_SINGLE_VLAN or ICE_TX_FLAGS_HW_VLAN, so the tag is inserted in l2tag2 or l2tag1 respectively. To determine which location to use, set a bit in the Tx ring flags field during ring allocation that can be used to determine which field to use in the Tx descriptor. In DVM, always use l2tag2, and in SVM, always use l2tag1. Signed-off-by: Brett Creeley Signed-off-by: Tony Nguyen --- v2: Fix kdoc issue drivers/net/ethernet/intel/ice/ice_base.c | 18 +++++++++-- drivers/net/ethernet/intel/ice/ice_dcb_lib.c | 8 +++-- .../net/ethernet/intel/ice/ice_lan_tx_rx.h | 2 ++ drivers/net/ethernet/intel/ice/ice_lib.c | 5 ++++ drivers/net/ethernet/intel/ice/ice_txrx.c | 28 +++++++++++------ drivers/net/ethernet/intel/ice/ice_txrx.h | 3 ++ drivers/net/ethernet/intel/ice/ice_txrx_lib.c | 9 ++++-- drivers/net/ethernet/intel/ice/ice_txrx_lib.h | 30 +++++++++++++++++-- drivers/net/ethernet/intel/ice/ice_xsk.c | 6 ++-- 9 files changed, 87 insertions(+), 22 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c index 44bdd0ed1629..9ca0ae2bb1dc 100644 --- a/drivers/net/ethernet/intel/ice/ice_base.c +++ b/drivers/net/ethernet/intel/ice/ice_base.c @@ -406,8 +406,22 @@ static int ice_setup_rx_ctx(struct ice_rx_ring *ring) */ rlan_ctx.crcstrip = 1; - /* L2TSEL flag defines the reported L2 Tags in the receive descriptor */ - rlan_ctx.l2tsel = 1; + /* L2TSEL flag defines the reported L2 Tags in the receive descriptor + * and it needs to remain 1 for non-DVM capable configurations to not + * break backward compatibility for VF drivers. Setting this field to 0 + * will cause the single/outer VLAN tag to be stripped to the L2TAG2_2ND + * field in the Rx descriptor. 
Setting it to 1 allows the VLAN tag to + * be stripped in L2TAG1 of the Rx descriptor, which is where VFs will + * check for the tag + */ + if (ice_is_dvm_ena(hw)) + if (vsi->type == ICE_VSI_VF && + ice_vf_is_port_vlan_ena(&vsi->back->vf[vsi->vf_id])) + rlan_ctx.l2tsel = 1; + else + rlan_ctx.l2tsel = 0; + else + rlan_ctx.l2tsel = 1; rlan_ctx.dtype = ICE_RX_DTYPE_NO_SPLIT; rlan_ctx.hsplit_0 = ICE_RLAN_RX_HSPLIT_0_NO_SPLIT; diff --git a/drivers/net/ethernet/intel/ice/ice_dcb_lib.c b/drivers/net/ethernet/intel/ice/ice_dcb_lib.c index b94d8daeaa58..add90e75f05c 100644 --- a/drivers/net/ethernet/intel/ice/ice_dcb_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_dcb_lib.c @@ -916,7 +916,8 @@ ice_tx_prepare_vlan_flags_dcb(struct ice_tx_ring *tx_ring, return; /* Insert 802.1p priority into VLAN header */ - if ((first->tx_flags & ICE_TX_FLAGS_HW_VLAN) || + if ((first->tx_flags & ICE_TX_FLAGS_HW_VLAN || + first->tx_flags & ICE_TX_FLAGS_HW_OUTER_SINGLE_VLAN) || skb->priority != TC_PRIO_CONTROL) { first->tx_flags &= ~ICE_TX_FLAGS_VLAN_PR_M; /* Mask the lower 3 bits to set the 802.1p priority */ @@ -925,7 +926,10 @@ ice_tx_prepare_vlan_flags_dcb(struct ice_tx_ring *tx_ring, /* if this is not already set it means a VLAN 0 + priority needs * to be offloaded */ - first->tx_flags |= ICE_TX_FLAGS_HW_VLAN; + if (tx_ring->flags & ICE_TX_FLAGS_RING_VLAN_L2TAG2) + first->tx_flags |= ICE_TX_FLAGS_HW_OUTER_SINGLE_VLAN; + else + first->tx_flags |= ICE_TX_FLAGS_HW_VLAN; } } diff --git a/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h b/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h index d981dc6f2323..a1fc676a4665 100644 --- a/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h +++ b/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h @@ -424,6 +424,8 @@ enum ice_rx_flex_desc_status_error_0_bits { enum ice_rx_flex_desc_status_error_1_bits { /* Note: These are predefined bit offsets */ ICE_RX_FLEX_DESC_STATUS1_NAT_S = 4, + /* [10:5] reserved */ + ICE_RX_FLEX_DESC_STATUS1_L2TAG2P_S = 11, ICE_RX_FLEX_DESC_STATUS1_LAST /* this entry must be last!!! 
*/ }; diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index 6a7f107a43c5..36507f0dc04e 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -1370,6 +1370,7 @@ static void ice_vsi_clear_rings(struct ice_vsi *vsi) */ static int ice_vsi_alloc_rings(struct ice_vsi *vsi) { + bool dvm_ena = ice_is_dvm_ena(&vsi->back->hw); struct ice_pf *pf = vsi->back; struct device *dev; u16 i; @@ -1391,6 +1392,10 @@ static int ice_vsi_alloc_rings(struct ice_vsi *vsi) ring->tx_tstamps = &pf->ptp.port.tx; ring->dev = dev; ring->count = vsi->num_tx_desc; + if (dvm_ena) + ring->flags |= ICE_TX_FLAGS_RING_VLAN_L2TAG2; + else + ring->flags |= ICE_TX_FLAGS_RING_VLAN_L2TAG1; WRITE_ONCE(vsi->tx_rings[i], ring); } diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c index d21f1c946767..3461aa21641a 100644 --- a/drivers/net/ethernet/intel/ice/ice_txrx.c +++ b/drivers/net/ethernet/intel/ice/ice_txrx.c @@ -1073,7 +1073,7 @@ ice_is_non_eop(struct ice_rx_ring *rx_ring, union ice_32b_rx_flex_desc *rx_desc) { /* if we are the last buffer then there is nothing else to do */ #define ICE_RXD_EOF BIT(ICE_RX_FLEX_DESC_STATUS0_EOF_S) - if (likely(ice_test_staterr(rx_desc, ICE_RXD_EOF))) + if (likely(ice_test_staterr(rx_desc->wb.status_error0, ICE_RXD_EOF))) return false; rx_ring->rx_stats.non_eop_descs++; @@ -1135,7 +1135,7 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget) * hardware wrote DD then it will be non-zero */ stat_err_bits = BIT(ICE_RX_FLEX_DESC_STATUS0_DD_S); - if (!ice_test_staterr(rx_desc, stat_err_bits)) + if (!ice_test_staterr(rx_desc->wb.status_error0, stat_err_bits)) break; /* This memory barrier is needed to keep us from reading @@ -1221,14 +1221,13 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget) continue; stat_err_bits = BIT(ICE_RX_FLEX_DESC_STATUS0_RXE_S); - if (unlikely(ice_test_staterr(rx_desc, stat_err_bits))) { + if (unlikely(ice_test_staterr(rx_desc->wb.status_error0, + stat_err_bits))) { dev_kfree_skb_any(skb); continue; } - stat_err_bits = BIT(ICE_RX_FLEX_DESC_STATUS0_L2TAG1P_S); - if (ice_test_staterr(rx_desc, stat_err_bits)) - vlan_tag = le16_to_cpu(rx_desc->wb.l2tag1); + vlan_tag = ice_get_vlan_tag_from_rx_desc(rx_desc); /* pad the skb if needed, to make a valid ethernet frame */ if (eth_skb_pad(skb)) { @@ -1910,12 +1909,16 @@ ice_tx_prepare_vlan_flags(struct ice_tx_ring *tx_ring, struct ice_tx_buf *first) if (!skb_vlan_tag_present(skb) && eth_type_vlan(skb->protocol)) return; - /* currently, we always assume 802.1Q for VLAN insertion as VLAN - * insertion for 802.1AD is not supported + /* the VLAN ethertype/tpid is determined by VSI configuration and netdev + * feature flags, which the driver only allows either 802.1Q or 802.1ad + * VLAN offloads exclusively so we only care about the VLAN ID here */ if (skb_vlan_tag_present(skb)) { first->tx_flags |= skb_vlan_tag_get(skb) << ICE_TX_FLAGS_VLAN_S; - first->tx_flags |= ICE_TX_FLAGS_HW_VLAN; + if (tx_ring->flags & ICE_TX_FLAGS_RING_VLAN_L2TAG2) + first->tx_flags |= ICE_TX_FLAGS_HW_OUTER_SINGLE_VLAN; + else + first->tx_flags |= ICE_TX_FLAGS_HW_VLAN; } ice_tx_prepare_vlan_flags_dcb(tx_ring, first); @@ -2288,6 +2291,13 @@ ice_xmit_frame_ring(struct sk_buff *skb, struct ice_tx_ring *tx_ring) /* prepare the VLAN tagging flags for Tx */ ice_tx_prepare_vlan_flags(tx_ring, first); + if (first->tx_flags & ICE_TX_FLAGS_HW_OUTER_SINGLE_VLAN) { + offload.cd_qw1 |= (u64)(ICE_TX_DESC_DTYPE_CTX 
| + (ICE_TX_CTX_DESC_IL2TAG2 << + ICE_TXD_CTX_QW1_CMD_S)); + offload.cd_l2tag2 = (first->tx_flags & ICE_TX_FLAGS_VLAN_M) >> + ICE_TX_FLAGS_VLAN_S; + } /* set up TSO offload */ tso = ice_tso(first, &offload); diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.h b/drivers/net/ethernet/intel/ice/ice_txrx.h index c56dd1749903..03bbae035de8 100644 --- a/drivers/net/ethernet/intel/ice/ice_txrx.h +++ b/drivers/net/ethernet/intel/ice/ice_txrx.h @@ -123,6 +123,7 @@ static inline int ice_skb_pad(void) #define ICE_TX_FLAGS_IPV4 BIT(5) #define ICE_TX_FLAGS_IPV6 BIT(6) #define ICE_TX_FLAGS_TUNNEL BIT(7) +#define ICE_TX_FLAGS_HW_OUTER_SINGLE_VLAN BIT(8) #define ICE_TX_FLAGS_VLAN_M 0xffff0000 #define ICE_TX_FLAGS_VLAN_PR_M 0xe0000000 #define ICE_TX_FLAGS_VLAN_PR_S 29 @@ -334,6 +335,8 @@ struct ice_tx_ring { spinlock_t tx_lock; u32 txq_teid; /* Added Tx queue TEID */ #define ICE_TX_FLAGS_RING_XDP BIT(0) +#define ICE_TX_FLAGS_RING_VLAN_L2TAG1 BIT(1) +#define ICE_TX_FLAGS_RING_VLAN_L2TAG2 BIT(2) u8 flags; u8 dcb_tc; /* Traffic class of ring */ u8 ptp_tx; diff --git a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c index 84a6a3f9d624..9c37d827ed28 100644 --- a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c @@ -207,9 +207,14 @@ ice_process_skb_fields(struct ice_rx_ring *rx_ring, void ice_receive_skb(struct ice_rx_ring *rx_ring, struct sk_buff *skb, u16 vlan_tag) { - if ((rx_ring->netdev->features & NETIF_F_HW_VLAN_CTAG_RX) && - (vlan_tag & VLAN_VID_MASK)) + netdev_features_t features = rx_ring->netdev->features; + bool non_zero_vlan = !!(vlan_tag & VLAN_VID_MASK); + + if ((features & NETIF_F_HW_VLAN_CTAG_RX) && non_zero_vlan) __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), vlan_tag); + else if ((features & NETIF_F_HW_VLAN_STAG_RX) && non_zero_vlan) + __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021AD), vlan_tag); + napi_gro_receive(&rx_ring->q_vector->napi, skb); } diff --git a/drivers/net/ethernet/intel/ice/ice_txrx_lib.h b/drivers/net/ethernet/intel/ice/ice_txrx_lib.h index 11b6c1601986..c7d2954dc9ea 100644 --- a/drivers/net/ethernet/intel/ice/ice_txrx_lib.h +++ b/drivers/net/ethernet/intel/ice/ice_txrx_lib.h @@ -7,7 +7,7 @@ /** * ice_test_staterr - tests bits in Rx descriptor status and error fields - * @rx_desc: pointer to receive descriptor (in le64 format) + * @status_err_n: Rx descriptor status_error0 or status_error1 bits * @stat_err_bits: value to mask * * This function does some fast chicanery in order to return the @@ -16,9 +16,9 @@ * at offset zero. */ static inline bool -ice_test_staterr(union ice_32b_rx_flex_desc *rx_desc, const u16 stat_err_bits) +ice_test_staterr(__le16 status_err_n, const u16 stat_err_bits) { - return !!(rx_desc->wb.status_error0 & cpu_to_le16(stat_err_bits)); + return !!(status_err_n & cpu_to_le16(stat_err_bits)); } static inline __le64 @@ -31,6 +31,30 @@ ice_build_ctob(u64 td_cmd, u64 td_offset, unsigned int size, u64 td_tag) (td_tag << ICE_TXD_QW1_L2TAG1_S)); } +/** + * ice_get_vlan_tag_from_rx_desc - get VLAN from Rx flex descriptor + * @rx_desc: Rx 32b flex descriptor with RXDID=2 + * + * The OS and current PF implementation only support stripping a single VLAN tag + * at a time, so there should only ever be 0 or 1 tags in the l2tag* fields. If + * one is found return the tag, else return 0 to mean no VLAN tag was found. 
+ */ +static inline u16 +ice_get_vlan_tag_from_rx_desc(union ice_32b_rx_flex_desc *rx_desc) +{ + u16 stat_err_bits; + + stat_err_bits = BIT(ICE_RX_FLEX_DESC_STATUS0_L2TAG1P_S); + if (ice_test_staterr(rx_desc->wb.status_error0, stat_err_bits)) + return le16_to_cpu(rx_desc->wb.l2tag1); + + stat_err_bits = BIT(ICE_RX_FLEX_DESC_STATUS1_L2TAG2P_S); + if (ice_test_staterr(rx_desc->wb.status_error1, stat_err_bits)) + return le16_to_cpu(rx_desc->wb.l2tag2_2nd); + + return 0; +} + /** * ice_xdp_ring_update_tail - Updates the XDP Tx ring tail register * @xdp_ring: XDP Tx ring diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c index ff55cb415b11..5b5fa3df29d5 100644 --- a/drivers/net/ethernet/intel/ice/ice_xsk.c +++ b/drivers/net/ethernet/intel/ice/ice_xsk.c @@ -530,7 +530,7 @@ int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget) rx_desc = ICE_RX_DESC(rx_ring, rx_ring->next_to_clean); stat_err_bits = BIT(ICE_RX_FLEX_DESC_STATUS0_DD_S); - if (!ice_test_staterr(rx_desc, stat_err_bits)) + if (!ice_test_staterr(rx_desc->wb.status_error0, stat_err_bits)) break; /* This memory barrier is needed to keep us from reading @@ -582,9 +582,7 @@ int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget) total_rx_bytes += skb->len; total_rx_packets++; - stat_err_bits = BIT(ICE_RX_FLEX_DESC_STATUS0_L2TAG1P_S); - if (ice_test_staterr(rx_desc, stat_err_bits)) - vlan_tag = le16_to_cpu(rx_desc->wb.l2tag1); + vlan_tag = ice_get_vlan_tag_from_rx_desc(rx_desc); rx_ptype = le16_to_cpu(rx_desc->wb.ptype_flex_flags0) & ICE_RX_FLEX_DESC_PTYPE_M; -- 2.20.1 From anthony.l.nguyen at intel.com Tue Nov 30 23:51:46 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Tue, 30 Nov 2021 15:51:46 -0800 Subject: [Intel-wired-lan] [PATCH net-next v2 14/14] ice: Add ability for PF admin to enable VF VLAN pruning In-Reply-To: <20211130235146.28731-1-anthony.l.nguyen@intel.com> References: <20211130235146.28731-1-anthony.l.nguyen@intel.com> Message-ID: <20211130235146.28731-14-anthony.l.nguyen@intel.com> From: Brett Creeley VFs by default are able to see all tagged traffic regardless of trust and VLAN filters. Based on legacy devices (i.e. ixgbe, i40e), customers expect VFs to receive all VLAN tagged traffic with a matching destination MAC. Add an ethtool private flag 'vf-vlan-pruning' and set the default to off so VFs will receive all VLAN traffic directed towards them. When the flag is turned on, VF will only be able to receive untagged traffic or traffic with VLAN tags it has created interfaces for. Also, the flag cannot be changed while any VFs are allocated. This was done to simplify the implementation. So, if this flag is needed, then the PF admin must enable it. If the user tries to enable the flag while VFs are active, then print an unsupported message with the vf-vlan-pruning flag included. In case multiple flags were specified, this makes it clear to the user which flag failed. 
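A minimal usage sketch (the netdev name is only an example; per the note above, the flag has to be flipped while no VFs are allocated, i.e. before SR-IOV is enabled on the PF):

    ethtool --set-priv-flags eth0 vf-vlan-pruning on

Leaving the flag at its default (off) keeps the legacy behaviour described above.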
Signed-off-by: Brett Creeley Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/ice/ice.h | 1 + drivers/net/ethernet/intel/ice/ice_ethtool.c | 9 +++++++++ .../ethernet/intel/ice/ice_vf_vsi_vlan_ops.c | 18 ++++++++++++++++-- .../net/ethernet/intel/ice/ice_virtchnl_pf.c | 14 ++++++++++++++ 4 files changed, 40 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h index 14aaca8dbbb7..dc86f2562e0f 100644 --- a/drivers/net/ethernet/intel/ice/ice.h +++ b/drivers/net/ethernet/intel/ice/ice.h @@ -486,6 +486,7 @@ enum ice_pf_flags { ICE_FLAG_LEGACY_RX, ICE_FLAG_VF_TRUE_PROMISC_ENA, ICE_FLAG_MDD_AUTO_RESET_VF, + ICE_FLAG_VF_VLAN_PRUNING, ICE_FLAG_LINK_LENIENT_MODE_ENA, ICE_FLAG_GNSS, /* GNSS successfully initialized */ ICE_PF_FLAGS_NBITS /* must be last */ diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c index e2e3ef7fba7f..28ead0b4712f 100644 --- a/drivers/net/ethernet/intel/ice/ice_ethtool.c +++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c @@ -164,6 +164,7 @@ static const struct ice_priv_flag ice_gstrings_priv_flags[] = { ICE_PRIV_FLAG("vf-true-promisc-support", ICE_FLAG_VF_TRUE_PROMISC_ENA), ICE_PRIV_FLAG("mdd-auto-reset-vf", ICE_FLAG_MDD_AUTO_RESET_VF), + ICE_PRIV_FLAG("vf-vlan-pruning", ICE_FLAG_VF_VLAN_PRUNING), ICE_PRIV_FLAG("legacy-rx", ICE_FLAG_LEGACY_RX), }; @@ -1295,6 +1296,14 @@ static int ice_set_priv_flags(struct net_device *netdev, u32 flags) change_bit(ICE_FLAG_VF_TRUE_PROMISC_ENA, pf->flags); ret = -EAGAIN; } + + if (test_bit(ICE_FLAG_VF_VLAN_PRUNING, change_flags) && + pf->num_alloc_vfs) { + dev_err(dev, "vf-vlan-pruning: VLAN pruning cannot be changed while VFs are active.\n"); + /* toggle bit back to previous state */ + change_bit(ICE_FLAG_VF_VLAN_PRUNING, pf->flags); + ret = -EOPNOTSUPP; + } ethtool_exit: clear_bit(ICE_FLAG_ETHTOOL_CTXT, pf->flags); return ret; diff --git a/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c b/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c index 4be29f97365c..39f2d36cabba 100644 --- a/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c +++ b/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c @@ -43,7 +43,6 @@ void ice_vf_vsi_init_vlan_ops(struct ice_vsi *vsi) /* outer VLAN ops regardless of port VLAN config */ vlan_ops->add_vlan = ice_vsi_add_vlan; - vlan_ops->ena_rx_filtering = ice_vsi_ena_rx_vlan_filtering; vlan_ops->dis_rx_filtering = ice_vsi_dis_rx_vlan_filtering; vlan_ops->ena_tx_filtering = ice_vsi_ena_tx_vlan_filtering; vlan_ops->dis_tx_filtering = ice_vsi_dis_tx_vlan_filtering; @@ -51,6 +50,8 @@ void ice_vf_vsi_init_vlan_ops(struct ice_vsi *vsi) if (ice_vf_is_port_vlan_ena(vf)) { /* setup outer VLAN ops */ vlan_ops->set_port_vlan = ice_vsi_set_outer_port_vlan; + vlan_ops->ena_rx_filtering = + ice_vsi_ena_rx_vlan_filtering; /* setup inner VLAN ops */ vlan_ops = &vsi->inner_vlan_ops; @@ -61,6 +62,12 @@ void ice_vf_vsi_init_vlan_ops(struct ice_vsi *vsi) vlan_ops->ena_insertion = ice_vsi_ena_inner_insertion; vlan_ops->dis_insertion = ice_vsi_dis_inner_insertion; } else { + if (!test_bit(ICE_FLAG_VF_VLAN_PRUNING, pf->flags)) + vlan_ops->ena_rx_filtering = noop_vlan; + else + vlan_ops->ena_rx_filtering = + ice_vsi_ena_rx_vlan_filtering; + vlan_ops->del_vlan = ice_vsi_del_vlan; vlan_ops->ena_stripping = ice_vsi_ena_outer_stripping; vlan_ops->dis_stripping = ice_vsi_dis_outer_stripping; @@ -80,14 +87,21 @@ void ice_vf_vsi_init_vlan_ops(struct ice_vsi *vsi) /* inner VLAN ops regardless of port VLAN config */ 
vlan_ops->add_vlan = ice_vsi_add_vlan; - vlan_ops->ena_rx_filtering = ice_vsi_ena_rx_vlan_filtering; vlan_ops->dis_rx_filtering = ice_vsi_dis_rx_vlan_filtering; vlan_ops->ena_tx_filtering = ice_vsi_ena_tx_vlan_filtering; vlan_ops->dis_tx_filtering = ice_vsi_dis_tx_vlan_filtering; if (ice_vf_is_port_vlan_ena(vf)) { vlan_ops->set_port_vlan = ice_vsi_set_inner_port_vlan; + vlan_ops->ena_rx_filtering = + ice_vsi_ena_rx_vlan_filtering; } else { + if (!test_bit(ICE_FLAG_VF_VLAN_PRUNING, pf->flags)) + vlan_ops->ena_rx_filtering = noop_vlan; + else + vlan_ops->ena_rx_filtering = + ice_vsi_ena_rx_vlan_filtering; + vlan_ops->del_vlan = ice_vsi_del_vlan; vlan_ops->ena_stripping = ice_vsi_ena_inner_stripping; vlan_ops->dis_stripping = ice_vsi_dis_inner_stripping; diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c index f1802de98b82..674d27c1a81d 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c @@ -807,6 +807,11 @@ static int ice_vf_rebuild_host_vlan_cfg(struct ice_vf *vf, struct ice_vsi *vsi) return err; } + err = vlan_ops->ena_rx_filtering(vsi); + if (err) + dev_warn(dev, "failed to enable Rx VLAN filtering for VF %d VSI %d during VF rebuild, error %d\n", + vf->vf_id, vsi->idx, err); + return 0; } @@ -1791,6 +1796,7 @@ static void ice_vc_notify_vf_reset(struct ice_vf *vf) */ static int ice_init_vf_vsi_res(struct ice_vf *vf) { + struct ice_vsi_vlan_ops *vlan_ops; struct ice_pf *pf = vf->pf; u8 broadcast[ETH_ALEN]; struct ice_vsi *vsi; @@ -1811,6 +1817,14 @@ static int ice_init_vf_vsi_res(struct ice_vf *vf) goto release_vsi; } + vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); + err = vlan_ops->ena_rx_filtering(vsi); + if (err) { + dev_warn(dev, "Failed to enable Rx VLAN filtering for VF %d\n", + vf->vf_id); + goto release_vsi; + } + eth_broadcast_addr(broadcast); err = ice_fltr_add_mac(vsi, broadcast, ICE_FWD_TO_VSI); if (err) { -- 2.20.1 From anthony.l.nguyen at intel.com Tue Nov 30 23:51:40 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Tue, 30 Nov 2021 15:51:40 -0800 Subject: [Intel-wired-lan] [PATCH net-next v2 08/14] ice: Add outer_vlan_ops and VSI specific VLAN ops implementations In-Reply-To: <20211130235146.28731-1-anthony.l.nguyen@intel.com> References: <20211130235146.28731-1-anthony.l.nguyen@intel.com> Message-ID: <20211130235146.28731-8-anthony.l.nguyen@intel.com> From: Brett Creeley Add a new outer_vlan_ops member to the ice_vsi structure as outer VLAN ops are only available when the device is in Double VLAN Mode (DVM). Depending on the VSI type, the requirements for what operations to use/allow differ. By default all VSI's have unsupported inner and outer VSI VLAN ops. This implementation was chosen to prevent unexpected crashes due to null pointer dereferences. Instead, if a VSI calls an unsupported op, it will just return -EOPNOTSUPP. Add implementations to support modifying outer VLAN fields for VSI context. This includes the ability to modify VLAN stripping, insertion, and the port VLAN based on the outer VLAN handling fields of the VSI context. These functions should only ever be used if DVM is enabled because that means the firmware supports the outer VLAN fields in the VSI context. If the device is in DVM, then always use the outer_vlan_ops, else use the vlan_ops since the device is in Single VLAN Mode (SVM). 
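For readers of the diff below, the compat lookup described above boils down to something like the following sketch. It is illustrative only: the real helper is ice_get_compat_vsi_vlan_ops(), added in ice_vsi_vlan_ops.c by this patch, and its exact body may differ; ice_is_dvm_ena() and the inner/outer ops members do come from this series, but the function body here is an assumption.

    /* Sketch: pick the VLAN ops that match the current VLAN mode.
     * In DVM the outer/single VLAN is handled via the outer VSI context
     * fields, so hand back outer_vlan_ops; in SVM only the inner/single
     * fields exist, so hand back inner_vlan_ops.
     */
    static struct ice_vsi_vlan_ops *get_compat_vsi_vlan_ops(struct ice_vsi *vsi)
    {
            if (ice_is_dvm_ena(&vsi->back->hw))
                    return &vsi->outer_vlan_ops;

            return &vsi->inner_vlan_ops;
    }

Callers then go through the returned ops (add_vlan, ena_stripping, ena_rx_filtering, ...) instead of dereferencing vsi->vlan_ops directly, which is what the conversions in the diff below do.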
Also, move adding the untagged VLAN 0 filter from ice_vsi_setup() to ice_vsi_vlan_setup() as the latter function is specific to the PF and all other VSI types that need an untagged VLAN 0 filter already do this in their specific flows. Without this change, Flow Director is failing to initialize because it does not implement any VSI VLAN ops. Signed-off-by: Brett Creeley Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/ice/Makefile | 3 +- drivers/net/ethernet/intel/ice/ice.h | 3 +- drivers/net/ethernet/intel/ice/ice_eswitch.c | 5 +- drivers/net/ethernet/intel/ice/ice_lib.c | 111 +++++- drivers/net/ethernet/intel/ice/ice_lib.h | 3 + drivers/net/ethernet/intel/ice/ice_main.c | 60 +-- .../ethernet/intel/ice/ice_pf_vsi_vlan_ops.c | 37 ++ .../ethernet/intel/ice/ice_pf_vsi_vlan_ops.h | 13 + .../ethernet/intel/ice/ice_vf_vsi_vlan_ops.c | 72 ++++ .../ethernet/intel/ice/ice_vf_vsi_vlan_ops.h | 16 + .../net/ethernet/intel/ice/ice_virtchnl_pf.c | 101 +++-- .../net/ethernet/intel/ice/ice_virtchnl_pf.h | 6 + .../net/ethernet/intel/ice/ice_vsi_vlan_lib.c | 344 +++++++++++++++++- .../net/ethernet/intel/ice/ice_vsi_vlan_lib.h | 6 + .../net/ethernet/intel/ice/ice_vsi_vlan_ops.c | 107 +++++- .../net/ethernet/intel/ice/ice_vsi_vlan_ops.h | 6 + 16 files changed, 808 insertions(+), 85 deletions(-) create mode 100644 drivers/net/ethernet/intel/ice/ice_pf_vsi_vlan_ops.c create mode 100644 drivers/net/ethernet/intel/ice/ice_pf_vsi_vlan_ops.h create mode 100644 drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c create mode 100644 drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.h diff --git a/drivers/net/ethernet/intel/ice/Makefile b/drivers/net/ethernet/intel/ice/Makefile index c40b3aa1d195..3ece1df919f8 100644 --- a/drivers/net/ethernet/intel/ice/Makefile +++ b/drivers/net/ethernet/intel/ice/Makefile @@ -18,6 +18,7 @@ ice-y := ice_main.o \ ice_txrx_lib.o \ ice_txrx.o \ ice_fltr.o \ + ice_pf_vsi_vlan_ops.o \ ice_vsi_vlan_ops.o \ ice_vsi_vlan_lib.o \ ice_fdir.o \ @@ -32,7 +33,7 @@ ice-y := ice_main.o \ ice_repr.o \ ice_tc_lib.o ice-$(CONFIG_PCI_IOV) += ice_virtchnl_allowlist.o -ice-$(CONFIG_PCI_IOV) += ice_virtchnl_pf.o ice_sriov.o ice_virtchnl_fdir.o +ice-$(CONFIG_PCI_IOV) += ice_virtchnl_pf.o ice_sriov.o ice_virtchnl_fdir.o ice_vf_vsi_vlan_ops.o ice-$(CONFIG_PTP_1588_CLOCK) += ice_ptp.o ice_ptp_hw.o ice_gnss.o ice-$(CONFIG_DCB) += ice_dcb.o ice_dcb_nl.o ice_dcb_lib.o ice-$(CONFIG_RFS_ACCEL) += ice_arfs.o diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h index efcc713ba287..14aaca8dbbb7 100644 --- a/drivers/net/ethernet/intel/ice/ice.h +++ b/drivers/net/ethernet/intel/ice/ice.h @@ -371,7 +371,8 @@ struct ice_vsi { u8 irqs_ready:1; u8 current_isup:1; /* Sync 'link up' logging */ u8 stat_offsets_loaded:1; - struct ice_vsi_vlan_ops vlan_ops; + struct ice_vsi_vlan_ops inner_vlan_ops; + struct ice_vsi_vlan_ops outer_vlan_ops; u16 num_vlan; /* queue information */ diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch.c b/drivers/net/ethernet/intel/ice/ice_eswitch.c index 0ff1a375f2aa..30a00fe59c52 100644 --- a/drivers/net/ethernet/intel/ice/ice_eswitch.c +++ b/drivers/net/ethernet/intel/ice/ice_eswitch.c @@ -116,9 +116,12 @@ static int ice_eswitch_setup_env(struct ice_pf *pf) struct ice_vsi *uplink_vsi = pf->switchdev.uplink_vsi; struct net_device *uplink_netdev = uplink_vsi->netdev; struct ice_vsi *ctrl_vsi = pf->switchdev.control_vsi; + struct ice_vsi_vlan_ops *vlan_ops; bool rule_added = false; - ctrl_vsi->vlan_ops.dis_stripping(ctrl_vsi); + vlan_ops = 
ice_get_compat_vsi_vlan_ops(ctrl_vsi); + if (vlan_ops->dis_stripping(ctrl_vsi)) + return -ENODEV; ice_remove_vsi_fltr(&pf->hw, uplink_vsi->idx); diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index c8991711b754..6a7f107a43c5 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -8,6 +8,7 @@ #include "ice_fltr.h" #include "ice_dcb_lib.h" #include "ice_devlink.h" +#include "ice_vsi_vlan_ops.h" /** * ice_vsi_type_str - maps VSI type enum to string equivalents @@ -2415,17 +2416,6 @@ ice_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi, if (ret) goto unroll_vector_base; - /* Always add VLAN ID 0 switch rule by default. This is needed - * in order to allow all untagged and 0 tagged priority traffic - * if Rx VLAN pruning is enabled. Also there are cases where we - * don't get the call to add VLAN 0 via ice_vlan_rx_add_vid() - * so this handles those cases (i.e. adding the PF to a bridge - * without the 8021q module loaded). - */ - ret = ice_vsi_add_vlan_zero(vsi); - if (ret) - goto unroll_clear_rings; - ice_vsi_map_rings_to_vectors(vsi); /* ICE_VSI_CTRL does not need RSS so skip RSS processing */ @@ -3875,13 +3865,110 @@ int ice_set_link(struct ice_vsi *vsi, bool ena) /** * ice_vsi_add_vlan_zero - add VLAN 0 filter(s) for this VSI * @vsi: VSI used to add VLAN filters + * + * In Single VLAN Mode (SVM), single VLAN filters via ICE_SW_LKUP_VLAN are based + * on the inner VLAN ID, so the VLAN TPID (i.e. 0x8100 or 0x888a8) doesn't + * matter. In Double VLAN Mode (DVM), outer/single VLAN filters via + * ICE_SW_LKUP_VLAN are based on the outer/single VLAN ID + VLAN TPID. + * + * For both modes add a VLAN 0 + no VLAN TPID filter to handle untagged traffic + * when VLAN pruning is enabled. Also, this handles VLAN 0 priority tagged + * traffic in SVM, since the VLAN TPID isn't part of filtering. + * + * If DVM is enabled then an explicit VLAN 0 + VLAN TPID filter needs to be + * added to allow VLAN 0 priority tagged traffic in DVM, since the VLAN TPID is + * part of filtering. */ int ice_vsi_add_vlan_zero(struct ice_vsi *vsi) { + struct ice_vsi_vlan_ops *vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); + struct ice_vlan vlan; + int err; + + vlan = ICE_VLAN(0, 0, 0); + err = vlan_ops->add_vlan(vsi, &vlan); + if (err && err != -EEXIST) + return err; + + /* in SVM both VLAN 0 filters are identical */ + if (!ice_is_dvm_ena(&vsi->back->hw)) + return 0; + + vlan = ICE_VLAN(ETH_P_8021Q, 0, 0); + err = vlan_ops->add_vlan(vsi, &vlan); + if (err && err != -EEXIST) + return err; + + return 0; +} + +/** + * ice_vsi_del_vlan_zero - delete VLAN 0 filter(s) for this VSI + * @vsi: VSI used to add VLAN filters + * + * Delete the VLAN 0 filters in the same manner that they were added in + * ice_vsi_add_vlan_zero. 
+ */ +int ice_vsi_del_vlan_zero(struct ice_vsi *vsi) +{ + struct ice_vsi_vlan_ops *vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); struct ice_vlan vlan; + int err; vlan = ICE_VLAN(0, 0, 0); - return vsi->vlan_ops.add_vlan(vsi, &vlan); + err = vlan_ops->del_vlan(vsi, &vlan); + if (err && err != -EEXIST) + return err; + + /* in SVM both VLAN 0 filters are identical */ + if (!ice_is_dvm_ena(&vsi->back->hw)) + return 0; + + vlan = ICE_VLAN(ETH_P_8021Q, 0, 0); + err = vlan_ops->del_vlan(vsi, &vlan); + if (err && err != -EEXIST) + return err; + + return 0; +} + +/** + * ice_vsi_num_zero_vlans - get number of VLAN 0 filters based on VLAN mode + * @vsi: VSI used to get the VLAN mode + * + * If DVM is enabled then 2 VLAN 0 filters are added, else if SVM is enabled + * then 1 VLAN 0 filter is added. See ice_vsi_add_vlan_zero for more details. + */ +static u16 ice_vsi_num_zero_vlans(struct ice_vsi *vsi) +{ +#define ICE_DVM_NUM_ZERO_VLAN_FLTRS 2 +#define ICE_SVM_NUM_ZERO_VLAN_FLTRS 1 + /* no VLAN 0 filter is created when a port VLAN is active */ + if (vsi->type == ICE_VSI_VF && + ice_vf_is_port_vlan_ena(&vsi->back->vf[vsi->vf_id])) + return 0; + if (ice_is_dvm_ena(&vsi->back->hw)) + return ICE_DVM_NUM_ZERO_VLAN_FLTRS; + else + return ICE_SVM_NUM_ZERO_VLAN_FLTRS; +} + +/** + * ice_vsi_has_non_zero_vlans - check if VSI has any non-zero VLANs + * @vsi: VSI used to determine if any non-zero VLANs have been added + */ +bool ice_vsi_has_non_zero_vlans(struct ice_vsi *vsi) +{ + return (vsi->num_vlan > ice_vsi_num_zero_vlans(vsi)); +} + +/** + * ice_vsi_num_non_zero_vlans - get the number of non-zero VLANs for this VSI + * @vsi: VSI used to get the number of non-zero VLANs added + */ +u16 ice_vsi_num_non_zero_vlans(struct ice_vsi *vsi) +{ + return (vsi->num_vlan - ice_vsi_num_zero_vlans(vsi)); } /** diff --git a/drivers/net/ethernet/intel/ice/ice_lib.h b/drivers/net/ethernet/intel/ice/ice_lib.h index 8f42a3f3a949..0d61f1772ae3 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.h +++ b/drivers/net/ethernet/intel/ice/ice_lib.h @@ -124,6 +124,9 @@ void ice_vsi_ctx_set_allow_override(struct ice_vsi_ctx *ctx); void ice_vsi_ctx_clear_allow_override(struct ice_vsi_ctx *ctx); int ice_vsi_add_vlan_zero(struct ice_vsi *vsi); +int ice_vsi_del_vlan_zero(struct ice_vsi *vsi); +bool ice_vsi_has_non_zero_vlans(struct ice_vsi *vsi); +u16 ice_vsi_num_non_zero_vlans(struct ice_vsi *vsi); bool ice_is_feature_supported(struct ice_pf *pf, enum ice_feature f); void ice_clear_feature_support(struct ice_pf *pf, enum ice_feature f); void ice_init_feature_support(struct ice_pf *pf); diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c index 6843b8e87441..ff2b721e0e45 100644 --- a/drivers/net/ethernet/intel/ice/ice_main.c +++ b/drivers/net/ethernet/intel/ice/ice_main.c @@ -21,6 +21,7 @@ #include "ice_trace.h" #include "ice_eswitch.h" #include "ice_tc_lib.h" +#include "ice_vsi_vlan_ops.h" #define DRV_SUMMARY "Intel(R) Ethernet Connection E800 Series Linux Driver" static const char ice_driver_string[] = DRV_SUMMARY; @@ -249,7 +250,7 @@ static int ice_set_promisc(struct ice_vsi *vsi, u8 promisc_m) if (vsi->type != ICE_VSI_PF) return 0; - if (vsi->num_vlan > 1) + if (ice_vsi_has_non_zero_vlans(vsi)) status = ice_fltr_set_vlan_vsi_promisc(&vsi->back->hw, vsi, promisc_m); else status = ice_fltr_set_vsi_promisc(&vsi->back->hw, vsi->idx, promisc_m, 0); @@ -270,7 +271,7 @@ static int ice_clear_promisc(struct ice_vsi *vsi, u8 promisc_m) if (vsi->type != ICE_VSI_PF) return 0; - if (vsi->num_vlan > 1) + 
if (ice_vsi_has_non_zero_vlans(vsi)) status = ice_fltr_clear_vlan_vsi_promisc(&vsi->back->hw, vsi, promisc_m); else status = ice_fltr_clear_vsi_promisc(&vsi->back->hw, vsi->idx, promisc_m, 0); @@ -286,6 +287,7 @@ static int ice_clear_promisc(struct ice_vsi *vsi, u8 promisc_m) */ static int ice_vsi_sync_fltr(struct ice_vsi *vsi) { + struct ice_vsi_vlan_ops *vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); struct device *dev = ice_pf_to_dev(vsi->back); struct net_device *netdev = vsi->netdev; bool promisc_forced_on = false; @@ -358,7 +360,7 @@ static int ice_vsi_sync_fltr(struct ice_vsi *vsi) /* check for changes in promiscuous modes */ if (changed_flags & IFF_ALLMULTI) { if (vsi->current_netdev_flags & IFF_ALLMULTI) { - if (vsi->num_vlan > 1) + if (ice_vsi_has_non_zero_vlans(vsi)) promisc_m = ICE_MCAST_VLAN_PROMISC_BITS; else promisc_m = ICE_MCAST_PROMISC_BITS; @@ -372,7 +374,7 @@ static int ice_vsi_sync_fltr(struct ice_vsi *vsi) } } else { /* !(vsi->current_netdev_flags & IFF_ALLMULTI) */ - if (vsi->num_vlan > 1) + if (ice_vsi_has_non_zero_vlans(vsi)) promisc_m = ICE_MCAST_VLAN_PROMISC_BITS; else promisc_m = ICE_MCAST_PROMISC_BITS; @@ -401,7 +403,7 @@ static int ice_vsi_sync_fltr(struct ice_vsi *vsi) ~IFF_PROMISC; goto out_promisc; } - vsi->vlan_ops.dis_rx_filtering(vsi); + vlan_ops->dis_rx_filtering(vsi); } } else { /* Clear Rx filter to remove traffic from wire */ @@ -415,7 +417,7 @@ static int ice_vsi_sync_fltr(struct ice_vsi *vsi) goto out_promisc; } if (vsi->num_vlan > 1) - vsi->vlan_ops.ena_rx_filtering(vsi); + vlan_ops->ena_rx_filtering(vsi); } } } @@ -3419,6 +3421,7 @@ static int ice_vlan_rx_add_vid(struct net_device *netdev, __be16 proto, u16 vid) { struct ice_netdev_priv *np = netdev_priv(netdev); + struct ice_vsi_vlan_ops *vlan_ops; struct ice_vsi *vsi = np->vsi; struct ice_vlan vlan; int ret; @@ -3427,9 +3430,11 @@ ice_vlan_rx_add_vid(struct net_device *netdev, __be16 proto, u16 vid) if (!vid) return 0; + vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); + /* Enable VLAN pruning when a VLAN other than 0 is added */ if (!ice_vsi_is_vlan_pruning_ena(vsi)) { - ret = vsi->vlan_ops.ena_rx_filtering(vsi); + ret = vlan_ops->ena_rx_filtering(vsi); if (ret) return ret; } @@ -3438,7 +3443,7 @@ ice_vlan_rx_add_vid(struct net_device *netdev, __be16 proto, u16 vid) * packets aren't pruned by the device's internal switch on Rx */ vlan = ICE_VLAN(be16_to_cpu(proto), vid, 0); - ret = vsi->vlan_ops.add_vlan(vsi, &vlan); + ret = vlan_ops->add_vlan(vsi, &vlan); if (!ret) set_bit(ICE_VSI_VLAN_FLTR_CHANGED, vsi->state); @@ -3457,6 +3462,7 @@ static int ice_vlan_rx_kill_vid(struct net_device *netdev, __be16 proto, u16 vid) { struct ice_netdev_priv *np = netdev_priv(netdev); + struct ice_vsi_vlan_ops *vlan_ops; struct ice_vsi *vsi = np->vsi; struct ice_vlan vlan; int ret; @@ -3465,17 +3471,19 @@ ice_vlan_rx_kill_vid(struct net_device *netdev, __be16 proto, u16 vid) if (!vid) return 0; + vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); + /* Make sure VLAN delete is successful before updating VLAN * information */ vlan = ICE_VLAN(be16_to_cpu(proto), vid, 0); - ret = vsi->vlan_ops.del_vlan(vsi, &vlan); + ret = vlan_ops->del_vlan(vsi, &vlan); if (ret) return ret; /* Disable pruning when VLAN 0 is the only VLAN rule */ if (vsi->num_vlan == 1 && ice_vsi_is_vlan_pruning_ena(vsi)) - vsi->vlan_ops.dis_rx_filtering(vsi); + vlan_ops->dis_rx_filtering(vsi); set_bit(ICE_VSI_VLAN_FLTR_CHANGED, vsi->state); return ret; @@ -5592,6 +5600,7 @@ static int ice_set_features(struct net_device *netdev, netdev_features_t features) { 
struct ice_netdev_priv *np = netdev_priv(netdev); + struct ice_vsi_vlan_ops *vlan_ops; struct ice_vsi *vsi = np->vsi; struct ice_pf *pf = vsi->back; int ret = 0; @@ -5608,6 +5617,8 @@ ice_set_features(struct net_device *netdev, netdev_features_t features) return -EBUSY; } + vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); + /* Multiple features can be changed in one call so keep features in * separate if/else statements to guarantee each feature is checked */ @@ -5619,24 +5630,24 @@ ice_set_features(struct net_device *netdev, netdev_features_t features) if ((features & NETIF_F_HW_VLAN_CTAG_RX) && !(netdev->features & NETIF_F_HW_VLAN_CTAG_RX)) - ret = vsi->vlan_ops.ena_stripping(vsi, ETH_P_8021Q); + ret = vlan_ops->ena_stripping(vsi, ETH_P_8021Q); else if (!(features & NETIF_F_HW_VLAN_CTAG_RX) && (netdev->features & NETIF_F_HW_VLAN_CTAG_RX)) - ret = vsi->vlan_ops.dis_stripping(vsi); + ret = vlan_ops->dis_stripping(vsi); if ((features & NETIF_F_HW_VLAN_CTAG_TX) && !(netdev->features & NETIF_F_HW_VLAN_CTAG_TX)) - ret = vsi->vlan_ops.ena_insertion(vsi, ETH_P_8021Q); + ret = vlan_ops->ena_insertion(vsi, ETH_P_8021Q); else if (!(features & NETIF_F_HW_VLAN_CTAG_TX) && (netdev->features & NETIF_F_HW_VLAN_CTAG_TX)) - ret = vsi->vlan_ops.dis_insertion(vsi); + ret = vlan_ops->dis_insertion(vsi); if ((features & NETIF_F_HW_VLAN_CTAG_FILTER) && !(netdev->features & NETIF_F_HW_VLAN_CTAG_FILTER)) - ret = vsi->vlan_ops.ena_rx_filtering(vsi); + ret = vlan_ops->ena_rx_filtering(vsi); else if (!(features & NETIF_F_HW_VLAN_CTAG_FILTER) && (netdev->features & NETIF_F_HW_VLAN_CTAG_FILTER)) - ret = vsi->vlan_ops.dis_rx_filtering(vsi); + ret = vlan_ops->dis_rx_filtering(vsi); if ((features & NETIF_F_NTUPLE) && !(netdev->features & NETIF_F_NTUPLE)) { @@ -5664,19 +5675,21 @@ ice_set_features(struct net_device *netdev, netdev_features_t features) } /** - * ice_vsi_vlan_setup - Setup VLAN offload properties on a VSI + * ice_vsi_vlan_setup - Setup VLAN offload properties on a PF VSI * @vsi: VSI to setup VLAN properties for */ static int ice_vsi_vlan_setup(struct ice_vsi *vsi) { - int ret = 0; + struct ice_vsi_vlan_ops *vlan_ops; + + vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); if (vsi->netdev->features & NETIF_F_HW_VLAN_CTAG_RX) - ret = vsi->vlan_ops.ena_stripping(vsi, ETH_P_8021Q); + vlan_ops->ena_stripping(vsi, ETH_P_8021Q); if (vsi->netdev->features & NETIF_F_HW_VLAN_CTAG_TX) - ret = vsi->vlan_ops.ena_insertion(vsi, ETH_P_8021Q); + vlan_ops->ena_insertion(vsi, ETH_P_8021Q); - return ret; + return ice_vsi_add_vlan_zero(vsi); } /** @@ -6279,11 +6292,12 @@ static void ice_napi_disable_all(struct ice_vsi *vsi) */ int ice_down(struct ice_vsi *vsi) { - int i, tx_err, rx_err, link_err = 0; + int i, tx_err, rx_err, link_err = 0, vlan_err = 0; WARN_ON(!test_bit(ICE_VSI_DOWN, vsi->state)); if (vsi->netdev && vsi->type == ICE_VSI_PF) { + vlan_err = ice_vsi_del_vlan_zero(vsi); if (!ice_is_e810(&vsi->back->hw)) ice_ptp_link_change(vsi->back, vsi->back->hw.pf_id, false); netif_carrier_off(vsi->netdev); @@ -6325,7 +6339,7 @@ int ice_down(struct ice_vsi *vsi) ice_for_each_rxq(vsi, i) ice_clean_rx_ring(vsi->rx_rings[i]); - if (tx_err || rx_err || link_err) { + if (tx_err || rx_err || link_err || vlan_err) { netdev_err(vsi->netdev, "Failed to close VSI 0x%04X on switch 0x%04X\n", vsi->vsi_num, vsi->vsw->sw_id); return -EIO; diff --git a/drivers/net/ethernet/intel/ice/ice_pf_vsi_vlan_ops.c b/drivers/net/ethernet/intel/ice/ice_pf_vsi_vlan_ops.c new file mode 100644 index 000000000000..b00360ca6e92 --- /dev/null +++ 
b/drivers/net/ethernet/intel/ice/ice_pf_vsi_vlan_ops.c @@ -0,0 +1,37 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (C) 2019-2021, Intel Corporation. */ + +#include "ice_vsi_vlan_ops.h" +#include "ice_vsi_vlan_lib.h" +#include "ice.h" +#include "ice_pf_vsi_vlan_ops.h" + +void ice_pf_vsi_init_vlan_ops(struct ice_vsi *vsi) +{ + struct ice_vsi_vlan_ops *vlan_ops; + + if (ice_is_dvm_ena(&vsi->back->hw)) { + vlan_ops = &vsi->outer_vlan_ops; + + vlan_ops->add_vlan = ice_vsi_add_vlan; + vlan_ops->del_vlan = ice_vsi_del_vlan; + vlan_ops->ena_stripping = ice_vsi_ena_outer_stripping; + vlan_ops->dis_stripping = ice_vsi_dis_outer_stripping; + vlan_ops->ena_insertion = ice_vsi_ena_outer_insertion; + vlan_ops->dis_insertion = ice_vsi_dis_outer_insertion; + vlan_ops->ena_rx_filtering = ice_vsi_ena_rx_vlan_filtering; + vlan_ops->dis_rx_filtering = ice_vsi_dis_rx_vlan_filtering; + } else { + vlan_ops = &vsi->inner_vlan_ops; + + vlan_ops->add_vlan = ice_vsi_add_vlan; + vlan_ops->del_vlan = ice_vsi_del_vlan; + vlan_ops->ena_stripping = ice_vsi_ena_inner_stripping; + vlan_ops->dis_stripping = ice_vsi_dis_inner_stripping; + vlan_ops->ena_insertion = ice_vsi_ena_inner_insertion; + vlan_ops->dis_insertion = ice_vsi_dis_inner_insertion; + vlan_ops->ena_rx_filtering = ice_vsi_ena_rx_vlan_filtering; + vlan_ops->dis_rx_filtering = ice_vsi_dis_rx_vlan_filtering; + } +} + diff --git a/drivers/net/ethernet/intel/ice/ice_pf_vsi_vlan_ops.h b/drivers/net/ethernet/intel/ice/ice_pf_vsi_vlan_ops.h new file mode 100644 index 000000000000..6741ec8c5f6b --- /dev/null +++ b/drivers/net/ethernet/intel/ice/ice_pf_vsi_vlan_ops.h @@ -0,0 +1,13 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2019-2021, Intel Corporation. */ + +#ifndef _ICE_PF_VSI_VLAN_OPS_H_ +#define _ICE_PF_VSI_VLAN_OPS_H_ + +#include "ice_vsi_vlan_ops.h" + +struct ice_vsi; + +void ice_pf_vsi_init_vlan_ops(struct ice_vsi *vsi); + +#endif /* _ICE_PF_VSI_VLAN_OPS_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c b/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c new file mode 100644 index 000000000000..741b041606a2 --- /dev/null +++ b/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c @@ -0,0 +1,72 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (C) 2019-2021, Intel Corporation. 
*/ + +#include "ice_vsi_vlan_ops.h" +#include "ice_vsi_vlan_lib.h" +#include "ice.h" +#include "ice_vf_vsi_vlan_ops.h" +#include "ice_virtchnl_pf.h" + +static int +noop_vlan_arg(struct ice_vsi __always_unused *vsi, + struct ice_vlan __always_unused *vlan) +{ + return 0; +} + +/** + * ice_vf_vsi_init_vlan_ops - Initialize default VSI VLAN ops for VF VSI + * @vsi: VF's VSI being configured + */ +void ice_vf_vsi_init_vlan_ops(struct ice_vsi *vsi) +{ + struct ice_vsi_vlan_ops *vlan_ops; + struct ice_pf *pf = vsi->back; + struct ice_vf *vf; + + vf = &pf->vf[vsi->vf_id]; + + if (ice_is_dvm_ena(&pf->hw)) { + vlan_ops = &vsi->outer_vlan_ops; + + /* outer VLAN ops regardless of port VLAN config */ + vlan_ops->add_vlan = ice_vsi_add_vlan; + vlan_ops->ena_rx_filtering = ice_vsi_ena_rx_vlan_filtering; + vlan_ops->dis_rx_filtering = ice_vsi_dis_rx_vlan_filtering; + vlan_ops->ena_tx_filtering = ice_vsi_ena_tx_vlan_filtering; + vlan_ops->dis_tx_filtering = ice_vsi_dis_tx_vlan_filtering; + + if (ice_vf_is_port_vlan_ena(vf)) { + /* setup outer VLAN ops */ + vlan_ops->set_port_vlan = ice_vsi_set_outer_port_vlan; + + /* setup inner VLAN ops */ + vlan_ops = &vsi->inner_vlan_ops; + vlan_ops->add_vlan = noop_vlan_arg; + vlan_ops->del_vlan = noop_vlan_arg; + vlan_ops->ena_stripping = ice_vsi_ena_inner_stripping; + vlan_ops->dis_stripping = ice_vsi_dis_inner_stripping; + vlan_ops->ena_insertion = ice_vsi_ena_inner_insertion; + vlan_ops->dis_insertion = ice_vsi_dis_inner_insertion; + } + } else { + vlan_ops = &vsi->inner_vlan_ops; + + /* inner VLAN ops regardless of port VLAN config */ + vlan_ops->add_vlan = ice_vsi_add_vlan; + vlan_ops->ena_rx_filtering = ice_vsi_ena_rx_vlan_filtering; + vlan_ops->dis_rx_filtering = ice_vsi_dis_rx_vlan_filtering; + vlan_ops->ena_tx_filtering = ice_vsi_ena_tx_vlan_filtering; + vlan_ops->dis_tx_filtering = ice_vsi_dis_tx_vlan_filtering; + + if (ice_vf_is_port_vlan_ena(vf)) { + vlan_ops->set_port_vlan = ice_vsi_set_inner_port_vlan; + } else { + vlan_ops->del_vlan = ice_vsi_del_vlan; + vlan_ops->ena_stripping = ice_vsi_ena_inner_stripping; + vlan_ops->dis_stripping = ice_vsi_dis_inner_stripping; + vlan_ops->ena_insertion = ice_vsi_ena_inner_insertion; + vlan_ops->dis_insertion = ice_vsi_dis_inner_insertion; + } + } +} diff --git a/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.h b/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.h new file mode 100644 index 000000000000..8ea13628a5e1 --- /dev/null +++ b/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.h @@ -0,0 +1,16 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2019-2021, Intel Corporation. 
*/ + +#ifndef _ICE_VF_VSI_VLAN_OPS_H_ +#define _ICE_VF_VSI_VLAN_OPS_H_ + +#include "ice_vsi_vlan_ops.h" + +struct ice_vsi; + +#ifdef CONFIG_PCI_IOV +void ice_vf_vsi_init_vlan_ops(struct ice_vsi *vsi); +#else +static inline void ice_vf_vsi_init_vlan_ops(struct ice_vsi *vsi) { } +#endif /* CONFIG_PCI_IOV */ +#endif /* _ICE_PF_VSI_VLAN_OPS_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c index e576cd201a48..100c86c8ad9a 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c @@ -10,6 +10,7 @@ #include "ice_eswitch.h" #include "ice_virtchnl_allowlist.h" #include "ice_flex_pipe.h" +#include "ice_vf_vsi_vlan_ops.h" #define FIELD_SELECTOR(proto_hdr_field) \ BIT((proto_hdr_field) & PROTO_HDR_FIELD_MASK) @@ -761,7 +762,7 @@ static u8 ice_vf_get_port_vlan_prio(struct ice_vf *vf) return vf->port_vlan_info.prio; } -static bool ice_vf_is_port_vlan_ena(struct ice_vf *vf) +bool ice_vf_is_port_vlan_ena(struct ice_vf *vf) { return (ice_vf_get_port_vlan_id(vf) || ice_vf_get_port_vlan_prio(vf)); } @@ -769,26 +770,30 @@ static bool ice_vf_is_port_vlan_ena(struct ice_vf *vf) /** * ice_vf_rebuild_host_vlan_cfg - add VLAN 0 filter or rebuild the Port VLAN * @vf: VF to add MAC filters for + * @vsi: Pointer to VSI * * Called after a VF VSI has been re-added/rebuilt during reset. The PF driver * always re-adds either a VLAN 0 or port VLAN based filter after reset. */ -static int ice_vf_rebuild_host_vlan_cfg(struct ice_vf *vf) +static int ice_vf_rebuild_host_vlan_cfg(struct ice_vf *vf, struct ice_vsi *vsi) { + struct ice_vsi_vlan_ops *vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); struct device *dev = ice_pf_to_dev(vf->pf); - struct ice_vsi *vsi = ice_get_vf_vsi(vf); int err; if (ice_vf_is_port_vlan_ena(vf)) { - err = vsi->vlan_ops.set_port_vlan(vsi, &vf->port_vlan_info); + err = vlan_ops->set_port_vlan(vsi, &vf->port_vlan_info); if (err) { dev_err(dev, "failed to configure port VLAN via VSI parameters for VF %u, error %d\n", vf->vf_id, err); return err; } + + err = vlan_ops->add_vlan(vsi, &vf->port_vlan_info); + } else { + err = ice_vsi_add_vlan_zero(vsi); } - err = vsi->vlan_ops.add_vlan(vsi, &vf->port_vlan_info); if (err) { dev_err(dev, "failed to add VLAN %u filter for VF %u during VF rebuild, error %d\n", ice_vf_is_port_vlan_ena(vf) ? 
@@ -834,9 +839,12 @@ static int ice_cfg_mac_antispoof(struct ice_vsi *vsi, bool enable) */ static int ice_vsi_ena_spoofchk(struct ice_vsi *vsi) { + struct ice_vsi_vlan_ops *vlan_ops; int err; - err = vsi->vlan_ops.ena_tx_filtering(vsi); + vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); + + err = vlan_ops->ena_tx_filtering(vsi); if (err) return err; @@ -849,9 +857,12 @@ static int ice_vsi_ena_spoofchk(struct ice_vsi *vsi) */ static int ice_vsi_dis_spoofchk(struct ice_vsi *vsi) { + struct ice_vsi_vlan_ops *vlan_ops; int err; - err = vsi->vlan_ops.dis_tx_filtering(vsi); + vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); + + err = vlan_ops->dis_tx_filtering(vsi); if (err) return err; @@ -1268,7 +1279,7 @@ static int ice_vf_set_vsi_promisc(struct ice_vf *vf, struct ice_vsi *vsi, u8 pro if (ice_vf_is_port_vlan_ena(vf)) status = ice_fltr_set_vsi_promisc(hw, vsi->idx, promisc_m, ice_vf_get_port_vlan_id(vf)); - else if (vsi->num_vlan > 1) + else if (ice_vsi_has_non_zero_vlans(vsi)) status = ice_fltr_set_vlan_vsi_promisc(hw, vsi, promisc_m); else status = ice_fltr_set_vsi_promisc(hw, vsi->idx, promisc_m, 0); @@ -1290,7 +1301,7 @@ static int ice_vf_clear_vsi_promisc(struct ice_vf *vf, struct ice_vsi *vsi, u8 p if (ice_vf_is_port_vlan_ena(vf)) status = ice_fltr_clear_vsi_promisc(hw, vsi->idx, promisc_m, ice_vf_get_port_vlan_id(vf)); - else if (vsi->num_vlan > 1) + else if (ice_vsi_has_non_zero_vlans(vsi)) status = ice_fltr_clear_vlan_vsi_promisc(hw, vsi, promisc_m); else status = ice_fltr_clear_vsi_promisc(hw, vsi->idx, promisc_m, 0); @@ -1375,7 +1386,7 @@ static void ice_vf_rebuild_host_cfg(struct ice_vf *vf) dev_err(dev, "failed to rebuild default MAC configuration for VF %d\n", vf->vf_id); - if (ice_vf_rebuild_host_vlan_cfg(vf)) + if (ice_vf_rebuild_host_vlan_cfg(vf, vsi)) dev_err(dev, "failed to rebuild VLAN configuration for VF %u\n", vf->vf_id); @@ -3022,6 +3033,7 @@ static int ice_vc_cfg_promiscuous_mode_msg(struct ice_vf *vf, u8 *msg) bool rm_promisc, alluni = false, allmulti = false; struct virtchnl_promisc_info *info = (struct virtchnl_promisc_info *)msg; + struct ice_vsi_vlan_ops *vlan_ops; int mcast_err = 0, ucast_err = 0; struct ice_pf *pf = vf->pf; struct ice_vsi *vsi; @@ -3060,16 +3072,15 @@ static int ice_vc_cfg_promiscuous_mode_msg(struct ice_vf *vf, u8 *msg) rm_promisc = !allmulti && !alluni; - if (vsi->num_vlan || ice_vf_is_port_vlan_ena(vf)) { - if (rm_promisc) - ret = vsi->vlan_ops.ena_rx_filtering(vsi); - else - ret = vsi->vlan_ops.dis_rx_filtering(vsi); - if (ret) { - dev_err(dev, "Failed to configure VLAN pruning in promiscuous mode\n"); - v_ret = VIRTCHNL_STATUS_ERR_PARAM; - goto error_param; - } + vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); + if (rm_promisc) + ret = vlan_ops->ena_rx_filtering(vsi); + else + ret = vlan_ops->dis_rx_filtering(vsi); + if (ret) { + dev_err(dev, "Failed to configure VLAN pruning in promiscuous mode\n"); + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto error_param; } if (!test_bit(ICE_FLAG_VF_TRUE_PROMISC_ENA, pf->flags)) { @@ -3096,7 +3107,8 @@ static int ice_vc_cfg_promiscuous_mode_msg(struct ice_vf *vf, u8 *msg) } else { u8 mcast_m, ucast_m; - if (ice_vf_is_port_vlan_ena(vf) || vsi->num_vlan > 1) { + if (ice_vf_is_port_vlan_ena(vf) || + ice_vsi_has_non_zero_vlans(vsi)) { mcast_m = ICE_MCAST_VLAN_PROMISC_BITS; ucast_m = ICE_UCAST_VLAN_PROMISC_BITS; } else { @@ -4163,6 +4175,27 @@ static bool ice_vf_vlan_offload_ena(u32 caps) return !!(caps & VIRTCHNL_VF_OFFLOAD_VLAN); } +/** + * ice_vf_has_max_vlans - check if VF already has the max allowed VLAN filters + * 
@vf: VF to check against + * @vsi: VF's VSI + * + * If the VF is trusted then the VF is allowed to add as many VLANs as it + * wants to, so return false. + * + * When the VF is untrusted compare the number of non-zero VLANs + 1 to the max + * allowed VLANs for an untrusted VF. Return the result of this comparison. + */ +static bool ice_vf_has_max_vlans(struct ice_vf *vf, struct ice_vsi *vsi) +{ + if (ice_is_vf_trusted(vf)) + return false; + +#define ICE_VF_ADDED_VLAN_ZERO_FLTRS 1 + return ((ice_vsi_num_non_zero_vlans(vsi) + + ICE_VF_ADDED_VLAN_ZERO_FLTRS) >= ICE_MAX_VLAN_PER_VF); +} + /** * ice_vc_process_vlan_msg * @vf: pointer to the VF info @@ -4176,6 +4209,7 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS; struct virtchnl_vlan_filter_list *vfl = (struct virtchnl_vlan_filter_list *)msg; + struct ice_vsi_vlan_ops *vlan_ops; struct ice_pf *pf = vf->pf; bool vlan_promisc = false; struct ice_vsi *vsi; @@ -4217,8 +4251,7 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) goto error_param; } - if (add_v && !ice_is_vf_trusted(vf) && - vsi->num_vlan >= ICE_MAX_VLAN_PER_VF) { + if (add_v && ice_vf_has_max_vlans(vf, vsi)) { dev_info(dev, "VF-%d is not trusted, switch the VF to trusted mode, in order to add more VLAN addresses\n", vf->vf_id); /* There is no need to let VF know about being not trusted, @@ -4237,13 +4270,13 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) test_bit(ICE_FLAG_VF_TRUE_PROMISC_ENA, pf->flags)) vlan_promisc = true; + vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); if (add_v) { for (i = 0; i < vfl->num_elements; i++) { u16 vid = vfl->vlan_id[i]; struct ice_vlan vlan; - if (!ice_is_vf_trusted(vf) && - vsi->num_vlan >= ICE_MAX_VLAN_PER_VF) { + if (ice_vf_has_max_vlans(vf, vsi)) { dev_info(dev, "VF-%d is not trusted, switch the VF to trusted mode, in order to add more VLAN addresses\n", vf->vf_id); /* There is no need to let VF know about being @@ -4261,7 +4294,7 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) continue; vlan = ICE_VLAN(ETH_P_8021Q, vid, 0); - status = vsi->vlan_ops.add_vlan(vsi, &vlan); + status = vsi->inner_vlan_ops.add_vlan(vsi, &vlan); if (status) { v_ret = VIRTCHNL_STATUS_ERR_PARAM; goto error_param; @@ -4270,7 +4303,7 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) /* Enable VLAN pruning when non-zero VLAN is added */ if (!vlan_promisc && vid && !ice_vsi_is_vlan_pruning_ena(vsi)) { - status = vsi->vlan_ops.ena_rx_filtering(vsi); + status = vlan_ops->ena_rx_filtering(vsi); if (status) { v_ret = VIRTCHNL_STATUS_ERR_PARAM; dev_err(dev, "Enable VLAN pruning on VLAN ID: %d failed error-%d\n", @@ -4314,16 +4347,16 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) continue; vlan = ICE_VLAN(ETH_P_8021Q, vid, 0); - status = vsi->vlan_ops.del_vlan(vsi, &vlan); + status = vsi->inner_vlan_ops.del_vlan(vsi, &vlan); if (status) { v_ret = VIRTCHNL_STATUS_ERR_PARAM; goto error_param; } /* Disable VLAN pruning when only VLAN 0 is left */ - if (vsi->num_vlan == 1 && + if (!ice_vsi_has_non_zero_vlans(vsi) && ice_vsi_is_vlan_pruning_ena(vsi)) - status = vsi->vlan_ops.dis_rx_filtering(vsi); + status = vlan_ops->dis_rx_filtering(vsi); /* Disable Unicast/Multicast VLAN promiscuous mode */ if (vlan_promisc) { @@ -4392,7 +4425,7 @@ static int ice_vc_ena_vlan_stripping(struct ice_vf *vf) } vsi = ice_get_vf_vsi(vf); - if (vsi->vlan_ops.ena_stripping(vsi, 
ETH_P_8021Q)) + if (vsi->inner_vlan_ops.ena_stripping(vsi, ETH_P_8021Q)) v_ret = VIRTCHNL_STATUS_ERR_PARAM; error_param: @@ -4427,7 +4460,7 @@ static int ice_vc_dis_vlan_stripping(struct ice_vf *vf) goto error_param; } - if (vsi->vlan_ops.dis_stripping(vsi)) + if (vsi->inner_vlan_ops.dis_stripping(vsi)) v_ret = VIRTCHNL_STATUS_ERR_PARAM; error_param: @@ -4457,9 +4490,9 @@ static int ice_vf_init_vlan_stripping(struct ice_vf *vf) return 0; if (ice_vf_vlan_offload_ena(vf->driver_caps)) - return vsi->vlan_ops.ena_stripping(vsi, ETH_P_8021Q); + return vsi->inner_vlan_ops.ena_stripping(vsi, ETH_P_8021Q); else - return vsi->vlan_ops.dis_stripping(vsi); + return vsi->inner_vlan_ops.dis_stripping(vsi); } static struct ice_vc_vf_ops ice_vc_vf_dflt_ops = { diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h index b06ca1f97833..4110847e0699 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h @@ -211,6 +211,7 @@ int ice_vc_send_msg_to_vf(struct ice_vf *vf, u32 v_opcode, enum virtchnl_status_code v_retval, u8 *msg, u16 msglen); bool ice_vc_isvalid_vsi_id(struct ice_vf *vf, u16 vsi_id); +bool ice_vf_is_port_vlan_ena(struct ice_vf *vf); #else /* CONFIG_PCI_IOV */ static inline void ice_process_vflr_event(struct ice_pf *pf) { } static inline void ice_free_vfs(struct ice_pf *pf) { } @@ -343,5 +344,10 @@ static inline bool ice_is_any_vf_in_promisc(struct ice_pf __always_unused *pf) { return false; } + +static inline bool ice_vf_is_port_vlan_ena(struct ice_vf __always_unused *vf) +{ + return false; +} #endif /* CONFIG_PCI_IOV */ #endif /* _ICE_VIRTCHNL_PF_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c index 0b130505b68a..62a2630d6fab 100644 --- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c @@ -23,7 +23,8 @@ static void print_invalid_tpid(struct ice_vsi *vsi, u16 tpid) */ static bool validate_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) { - if (vlan->tpid != ETH_P_8021Q && (vlan->tpid || vlan->vid)) { + if (vlan->tpid != ETH_P_8021Q && vlan->tpid != ETH_P_8021AD && + vlan->tpid != ETH_P_QINQ1 && (vlan->tpid || vlan->vid)) { print_invalid_tpid(vsi, vlan->tpid); return false; } @@ -366,3 +367,344 @@ int ice_vsi_dis_tx_vlan_filtering(struct ice_vsi *vsi) { return ice_cfg_vlan_antispoof(vsi, false); } + +/** + * tpid_to_vsi_outer_vlan_type - convert from TPID to VSI context based tag_type + * @tpid: tpid used to translate into VSI context based tag_type + * @tag_type: output variable to hold the VSI context based tag type + */ +static int tpid_to_vsi_outer_vlan_type(u16 tpid, u8 *tag_type) +{ + switch (tpid) { + case ETH_P_8021Q: + *tag_type = ICE_AQ_VSI_OUTER_TAG_VLAN_8100; + break; + case ETH_P_8021AD: + *tag_type = ICE_AQ_VSI_OUTER_TAG_STAG; + break; + case ETH_P_QINQ1: + *tag_type = ICE_AQ_VSI_OUTER_TAG_VLAN_9100; + break; + default: + *tag_type = 0; + return -EINVAL; + } + + return 0; +} + +/** + * ice_vsi_ena_outer_stripping - enable outer VLAN stripping + * @vsi: VSI to configure + * @tpid: TPID to enable outer VLAN stripping for + * + * Enable outer VLAN stripping via VSI context. This function should only be + * used if DVM is supported. Also, this function should never be called directly + * as it should be part of ice_vsi_vlan_ops if it's needed. 
+ * + * Since the VSI context only supports a single TPID for insertion and + * stripping, setting the TPID for stripping will affect the TPID for insertion. + * Callers need to be aware of this limitation. + * + * Only modify outer VLAN stripping settings and the VLAN TPID. Outer VLAN + * insertion settings are unmodified. + * + * This enables hardware to strip a VLAN tag with the specified TPID to be + * stripped from the packet and placed in the receive descriptor. + */ +int ice_vsi_ena_outer_stripping(struct ice_vsi *vsi, u16 tpid) +{ + struct ice_hw *hw = &vsi->back->hw; + struct ice_vsi_ctx *ctxt; + u8 tag_type; + int err; + + /* do not allow modifying VLAN stripping when a port VLAN is configured + * on this VSI + */ + if (vsi->info.port_based_outer_vlan) + return 0; + + if (tpid_to_vsi_outer_vlan_type(tpid, &tag_type)) + return -EINVAL; + + ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL); + if (!ctxt) + return -ENOMEM; + + ctxt->info.valid_sections = + cpu_to_le16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID); + /* clear current outer VLAN strip settings */ + ctxt->info.outer_vlan_flags = vsi->info.outer_vlan_flags & + ~(ICE_AQ_VSI_OUTER_VLAN_EMODE_M | ICE_AQ_VSI_OUTER_TAG_TYPE_M); + ctxt->info.outer_vlan_flags |= + ((ICE_AQ_VSI_OUTER_VLAN_EMODE_SHOW_BOTH << + ICE_AQ_VSI_OUTER_VLAN_EMODE_S) | + ((tag_type << ICE_AQ_VSI_OUTER_TAG_TYPE_S) & + ICE_AQ_VSI_OUTER_TAG_TYPE_M)); + + err = ice_update_vsi(hw, vsi->idx, ctxt, NULL); + if (err) + dev_err(ice_pf_to_dev(vsi->back), "update VSI for enabling outer VLAN stripping failed, err %d aq_err %s\n", + err, ice_aq_str(hw->adminq.sq_last_status)); + else + vsi->info.outer_vlan_flags = ctxt->info.outer_vlan_flags; + + kfree(ctxt); + return err; +} + +/** + * ice_vsi_dis_outer_stripping - disable outer VLAN stripping + * @vsi: VSI to configure + * + * Disable outer VLAN stripping via VSI context. This function should only be + * used if DVM is supported. Also, this function should never be called directly + * as it should be part of ice_vsi_vlan_ops if it's needed. + * + * Only modify the outer VLAN stripping settings. The VLAN TPID and outer VLAN + * insertion settings are unmodified. + * + * This tells the hardware to not strip any VLAN tagged packets, thus leaving + * them in the packet. This enables software offloaded VLAN stripping and + * disables hardware offloaded VLAN stripping. + */ +int ice_vsi_dis_outer_stripping(struct ice_vsi *vsi) +{ + struct ice_hw *hw = &vsi->back->hw; + struct ice_vsi_ctx *ctxt; + int err; + + if (vsi->info.port_based_outer_vlan) + return 0; + + ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL); + if (!ctxt) + return -ENOMEM; + + ctxt->info.valid_sections = + cpu_to_le16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID); + /* clear current outer VLAN strip settings */ + ctxt->info.outer_vlan_flags = vsi->info.outer_vlan_flags & + ~ICE_AQ_VSI_OUTER_VLAN_EMODE_M; + ctxt->info.outer_vlan_flags |= ICE_AQ_VSI_OUTER_VLAN_EMODE_NOTHING << + ICE_AQ_VSI_OUTER_VLAN_EMODE_S; + + err = ice_update_vsi(hw, vsi->idx, ctxt, NULL); + if (err) + dev_err(ice_pf_to_dev(vsi->back), "update VSI for disabling outer VLAN stripping failed, err %d aq_err %s\n", + err, ice_aq_str(hw->adminq.sq_last_status)); + else + vsi->info.outer_vlan_flags = ctxt->info.outer_vlan_flags; + + kfree(ctxt); + return err; +} + +/** + * ice_vsi_ena_outer_insertion - enable outer VLAN insertion + * @vsi: VSI to configure + * @tpid: TPID to enable outer VLAN insertion for + * + * Enable outer VLAN insertion via VSI context. This function should only be + * used if DVM is supported. 
Also, this function should never be called directly + * as it should be part of ice_vsi_vlan_ops if it's needed. + * + * Since the VSI context only supports a single TPID for insertion and + * stripping, setting the TPID for insertion will affect the TPID for stripping. + * Callers need to be aware of this limitation. + * + * Only modify outer VLAN insertion settings and the VLAN TPID. Outer VLAN + * stripping settings are unmodified. + * + * This allows a VLAN tag with the specified TPID to be inserted in the transmit + * descriptor. + */ +int ice_vsi_ena_outer_insertion(struct ice_vsi *vsi, u16 tpid) +{ + struct ice_hw *hw = &vsi->back->hw; + struct ice_vsi_ctx *ctxt; + u8 tag_type; + int err; + + if (vsi->info.port_based_outer_vlan) + return 0; + + if (tpid_to_vsi_outer_vlan_type(tpid, &tag_type)) + return -EINVAL; + + ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL); + if (!ctxt) + return -ENOMEM; + + ctxt->info.valid_sections = + cpu_to_le16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID); + /* clear current outer VLAN insertion settings */ + ctxt->info.outer_vlan_flags = vsi->info.outer_vlan_flags & + ~(ICE_AQ_VSI_OUTER_VLAN_PORT_BASED_INSERT | + ICE_AQ_VSI_OUTER_VLAN_BLOCK_TX_DESC | + ICE_AQ_VSI_OUTER_VLAN_TX_MODE_M | + ICE_AQ_VSI_OUTER_TAG_TYPE_M); + ctxt->info.outer_vlan_flags |= + ((ICE_AQ_VSI_OUTER_VLAN_TX_MODE_ALL << + ICE_AQ_VSI_OUTER_VLAN_TX_MODE_S) & + ICE_AQ_VSI_OUTER_VLAN_TX_MODE_M) | + ((tag_type << ICE_AQ_VSI_OUTER_TAG_TYPE_S) & + ICE_AQ_VSI_OUTER_TAG_TYPE_M); + + err = ice_update_vsi(hw, vsi->idx, ctxt, NULL); + if (err) + dev_err(ice_pf_to_dev(vsi->back), "update VSI for enabling outer VLAN insertion failed, err %d aq_err %s\n", + err, ice_aq_str(hw->adminq.sq_last_status)); + else + vsi->info.outer_vlan_flags = ctxt->info.outer_vlan_flags; + + kfree(ctxt); + return err; +} + +/** + * ice_vsi_dis_outer_insertion - disable outer VLAN insertion + * @vsi: VSI to configure + * + * Disable outer VLAN insertion via VSI context. This function should only be + * used if DVM is supported. Also, this function should never be called directly + * as it should be part of ice_vsi_vlan_ops if it's needed. + * + * Only modify the outer VLAN insertion settings. The VLAN TPID and outer VLAN + * settings are unmodified. + * + * This tells the hardware to not allow any VLAN tagged packets in the transmit + * descriptor. This enables software offloaded VLAN insertion and disables + * hardware offloaded VLAN insertion. 
+ */ +int ice_vsi_dis_outer_insertion(struct ice_vsi *vsi) +{ + struct ice_hw *hw = &vsi->back->hw; + struct ice_vsi_ctx *ctxt; + int err; + + if (vsi->info.port_based_outer_vlan) + return 0; + + ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL); + if (!ctxt) + return -ENOMEM; + + ctxt->info.valid_sections = + cpu_to_le16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID); + /* clear current outer VLAN insertion settings */ + ctxt->info.outer_vlan_flags = vsi->info.outer_vlan_flags & + ~(ICE_AQ_VSI_OUTER_VLAN_PORT_BASED_INSERT | + ICE_AQ_VSI_OUTER_VLAN_TX_MODE_M); + ctxt->info.outer_vlan_flags |= + ICE_AQ_VSI_OUTER_VLAN_BLOCK_TX_DESC | + ((ICE_AQ_VSI_OUTER_VLAN_TX_MODE_ALL << + ICE_AQ_VSI_OUTER_VLAN_TX_MODE_S) & + ICE_AQ_VSI_OUTER_VLAN_TX_MODE_M); + + err = ice_update_vsi(hw, vsi->idx, ctxt, NULL); + if (err) + dev_err(ice_pf_to_dev(vsi->back), "update VSI for disabling outer VLAN insertion failed, err %d aq_err %s\n", + err, ice_aq_str(hw->adminq.sq_last_status)); + else + vsi->info.outer_vlan_flags = ctxt->info.outer_vlan_flags; + + kfree(ctxt); + return err; +} + +/** + * __ice_vsi_set_outer_port_vlan - set the outer port VLAN and related settings + * @vsi: VSI to configure + * @vlan_info: packed u16 that contains the VLAN prio and ID + * @tpid: TPID of the port VLAN + * + * Set the port VLAN prio, ID, and TPID. + * + * Enable VLAN pruning so the VSI doesn't receive any traffic that doesn't match + * a VLAN prune rule. The caller should take care to add a VLAN prune rule that + * matches the port VLAN ID and TPID. + * + * Tell hardware to strip outer VLAN tagged packets on receive and don't put + * them in the receive descriptor. VSI(s) in port VLANs should not be aware of + * the port VLAN ID or TPID they are assigned to. + * + * Tell hardware to prevent outer VLAN tag insertion on transmit and only allow + * untagged outer packets from the transmit descriptor. + * + * Also, tell the hardware to insert the port VLAN on transmit. 
+ */ +static int +__ice_vsi_set_outer_port_vlan(struct ice_vsi *vsi, u16 vlan_info, u16 tpid) +{ + struct ice_hw *hw = &vsi->back->hw; + struct ice_vsi_ctx *ctxt; + u8 tag_type; + int err; + + if (tpid_to_vsi_outer_vlan_type(tpid, &tag_type)) + return -EINVAL; + + ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL); + if (!ctxt) + return -ENOMEM; + + ctxt->info = vsi->info; + + ctxt->info.sw_flags2 |= ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA; + + ctxt->info.port_based_outer_vlan = cpu_to_le16(vlan_info); + ctxt->info.outer_vlan_flags = + (ICE_AQ_VSI_OUTER_VLAN_EMODE_SHOW << + ICE_AQ_VSI_OUTER_VLAN_EMODE_S) | + ((tag_type << ICE_AQ_VSI_OUTER_TAG_TYPE_S) & + ICE_AQ_VSI_OUTER_TAG_TYPE_M) | + ICE_AQ_VSI_OUTER_VLAN_BLOCK_TX_DESC | + (ICE_AQ_VSI_OUTER_VLAN_TX_MODE_ACCEPTUNTAGGED << + ICE_AQ_VSI_OUTER_VLAN_TX_MODE_S) | + ICE_AQ_VSI_OUTER_VLAN_PORT_BASED_INSERT; + + ctxt->info.valid_sections = + cpu_to_le16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID | + ICE_AQ_VSI_PROP_SW_VALID); + + err = ice_update_vsi(hw, vsi->idx, ctxt, NULL); + if (err) { + dev_err(ice_pf_to_dev(vsi->back), "update VSI for setting outer port based VLAN failed, err %d aq_err %s\n", + err, ice_aq_str(hw->adminq.sq_last_status)); + } else { + vsi->info.port_based_outer_vlan = ctxt->info.port_based_outer_vlan; + vsi->info.outer_vlan_flags = ctxt->info.outer_vlan_flags; + vsi->info.sw_flags2 = ctxt->info.sw_flags2; + } + + kfree(ctxt); + return err; +} + +/** + * ice_vsi_set_outer_port_vlan - public version of __ice_vsi_set_outer_port_vlan + * @vsi: VSI to configure + * @vlan: ice_vlan structure used to set the port VLAN + * + * Set the outer port VLAN via VSI context. This function should only be + * used if DVM is supported. Also, this function should never be called directly + * as it should be part of ice_vsi_vlan_ops if it's needed. + * + * This function does not support clearing the port VLAN as there is currently + * no use case for this. + * + * Use the ice_vlan structure passed in to set this VSI in a port VLAN. + */ +int ice_vsi_set_outer_port_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) +{ + u16 port_vlan_info; + + if (vlan->prio > (VLAN_PRIO_MASK >> VLAN_PRIO_SHIFT)) + return -EINVAL; + + port_vlan_info = vlan->vid | (vlan->prio << VLAN_PRIO_SHIFT); + + return __ice_vsi_set_outer_port_vlan(vsi, port_vlan_info, vlan->tpid); +} diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h index a10671133e36..f459909490ec 100644 --- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h @@ -23,4 +23,10 @@ int ice_vsi_dis_rx_vlan_filtering(struct ice_vsi *vsi); int ice_vsi_ena_tx_vlan_filtering(struct ice_vsi *vsi); int ice_vsi_dis_tx_vlan_filtering(struct ice_vsi *vsi); +int ice_vsi_ena_outer_stripping(struct ice_vsi *vsi, u16 tpid); +int ice_vsi_dis_outer_stripping(struct ice_vsi *vsi); +int ice_vsi_ena_outer_insertion(struct ice_vsi *vsi, u16 tpid); +int ice_vsi_dis_outer_insertion(struct ice_vsi *vsi); +int ice_vsi_set_outer_port_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan); + #endif /* _ICE_VSI_VLAN_LIB_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.c b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.c index 6a6b49581c70..4a6c850d83ac 100644 --- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.c +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.c @@ -1,20 +1,103 @@ // SPDX-License-Identifier: GPL-2.0 /* Copyright (C) 2019-2021, Intel Corporation. 
*/ -#include "ice_vsi_vlan_ops.h" +#include "ice_pf_vsi_vlan_ops.h" +#include "ice_vf_vsi_vlan_ops.h" +#include "ice_lib.h" #include "ice.h" +static int +op_unsupported_vlan_arg(struct ice_vsi * __always_unused vsi, + struct ice_vlan * __always_unused vlan) +{ + return -EOPNOTSUPP; +} + +static int +op_unsupported_tpid_arg(struct ice_vsi *__always_unused vsi, + u16 __always_unused tpid) +{ + return -EOPNOTSUPP; +} + +static int op_unsupported(struct ice_vsi *__always_unused vsi) +{ + return -EOPNOTSUPP; +} + +/* If any new ops are added to the VSI VLAN ops interface then an unsupported + * implementation should be set here. + */ +static struct ice_vsi_vlan_ops ops_unsupported = { + .add_vlan = op_unsupported_vlan_arg, + .del_vlan = op_unsupported_vlan_arg, + .ena_stripping = op_unsupported_tpid_arg, + .dis_stripping = op_unsupported, + .ena_insertion = op_unsupported_tpid_arg, + .dis_insertion = op_unsupported, + .ena_rx_filtering = op_unsupported, + .dis_rx_filtering = op_unsupported, + .ena_tx_filtering = op_unsupported, + .dis_tx_filtering = op_unsupported, + .set_port_vlan = op_unsupported_vlan_arg, +}; + +/** + * ice_vsi_init_unsupported_vlan_ops - init all VSI VLAN ops to unsupported + * @vsi: VSI to initialize VSI VLAN ops to unsupported for + * + * By default all inner and outer VSI VLAN ops return -EOPNOTSUPP. This was done + * as oppsed to leaving the ops null to prevent unexpected crashes. Instead if + * an unsupported VSI VLAN op is called it will just return -EOPNOTSUPP. + * + */ +static void ice_vsi_init_unsupported_vlan_ops(struct ice_vsi *vsi) +{ + vsi->outer_vlan_ops = ops_unsupported; + vsi->inner_vlan_ops = ops_unsupported; +} + +/** + * ice_vsi_init_vlan_ops - initialize type specific VSI VLAN ops + * @vsi: VSI to initialize ops for + * + * If any VSI types are added and/or require different ops than the PF or VF VSI + * then they will have to add a case here to handle that. Also, VSI type + * specific files should be added in the same manner that was done for PF VSI. + */ void ice_vsi_init_vlan_ops(struct ice_vsi *vsi) { - vsi->vlan_ops.add_vlan = ice_vsi_add_vlan; - vsi->vlan_ops.del_vlan = ice_vsi_del_vlan; - vsi->vlan_ops.ena_stripping = ice_vsi_ena_inner_stripping; - vsi->vlan_ops.dis_stripping = ice_vsi_dis_inner_stripping; - vsi->vlan_ops.ena_insertion = ice_vsi_ena_inner_insertion; - vsi->vlan_ops.dis_insertion = ice_vsi_dis_inner_insertion; - vsi->vlan_ops.ena_rx_filtering = ice_vsi_ena_rx_vlan_filtering; - vsi->vlan_ops.dis_rx_filtering = ice_vsi_dis_rx_vlan_filtering; - vsi->vlan_ops.ena_tx_filtering = ice_vsi_ena_tx_vlan_filtering; - vsi->vlan_ops.dis_tx_filtering = ice_vsi_dis_tx_vlan_filtering; - vsi->vlan_ops.set_port_vlan = ice_vsi_set_inner_port_vlan; + /* Initialize all VSI types to have unsupported VSI VLAN ops */ + ice_vsi_init_unsupported_vlan_ops(vsi); + + switch (vsi->type) { + case ICE_VSI_PF: + case ICE_VSI_SWITCHDEV_CTRL: + ice_pf_vsi_init_vlan_ops(vsi); + break; + case ICE_VSI_VF: + ice_vf_vsi_init_vlan_ops(vsi); + break; + default: + dev_dbg(ice_pf_to_dev(vsi->back), "%s does not support VLAN operations\n", + ice_vsi_type_str(vsi->type)); + break; + } +} + +/** + * ice_get_compat_vsi_vlan_ops - Get VSI VLAN ops based on VLAN mode + * @vsi: VSI used to get the VSI VLAN ops + * + * This function is meant to be used when the caller doesn't know which VLAN ops + * to use (i.e. inner or outer). 
This allows backward compatibility for VLANs + * since most of the Outer VSI VLAN functins are not supported when + * the device is configured in Single VLAN Mode (SVM). + */ +struct ice_vsi_vlan_ops *ice_get_compat_vsi_vlan_ops(struct ice_vsi *vsi) +{ + if (ice_is_dvm_ena(&vsi->back->hw)) + return &vsi->outer_vlan_ops; + else + return &vsi->inner_vlan_ops; } diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h index 76e55b259bc8..30d02d2b8e5f 100644 --- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h @@ -23,6 +23,12 @@ struct ice_vsi_vlan_ops { int (*set_port_vlan)(struct ice_vsi *vsi, struct ice_vlan *vlan); }; +static inline bool ice_is_dvm_ena(struct ice_hw __always_unused *hw) +{ + return false; +} + void ice_vsi_init_vlan_ops(struct ice_vsi *vsi); +struct ice_vsi_vlan_ops *ice_get_compat_vsi_vlan_ops(struct ice_vsi *vsi); #endif /* _ICE_VSI_VLAN_OPS_H_ */ -- 2.20.1 From anthony.l.nguyen at intel.com Tue Nov 30 23:51:39 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Tue, 30 Nov 2021 15:51:39 -0800 Subject: [Intel-wired-lan] [PATCH net-next v2 07/14] ice: Adjust naming for inner VLAN operations In-Reply-To: <20211130235146.28731-1-anthony.l.nguyen@intel.com> References: <20211130235146.28731-1-anthony.l.nguyen@intel.com> Message-ID: <20211130235146.28731-7-anthony.l.nguyen@intel.com> From: Brett Creeley Current operations act on inner VLAN fields. To support double VLAN, outer VLAN operations and functions will be implemented. Add the "inner" naming to existing VLAN operations to distinguish them from the upcoming outer values and functions. Some spacing adjustments are made to align values. Note that the inner is not talking about a tunneled VLAN, but the second VLAN in the packet. For SVM the driver uses inner or single VLAN filtering and offloads and in Double VLAN Mode the driver uses the inner filtering and offloads for SR-IOV VFs in port VLANs in order to support offloading the guest VLAN while a port VLAN is configured. 
Signed-off-by: Brett Creeley Signed-off-by: Tony Nguyen --- .../net/ethernet/intel/ice/ice_adminq_cmd.h | 191 +++++++++--------- drivers/net/ethernet/intel/ice/ice_lib.c | 8 +- drivers/net/ethernet/intel/ice/ice_main.c | 6 +- .../net/ethernet/intel/ice/ice_vsi_vlan_lib.c | 57 +++--- .../net/ethernet/intel/ice/ice_vsi_vlan_lib.h | 10 +- .../net/ethernet/intel/ice/ice_vsi_vlan_ops.c | 10 +- 6 files changed, 140 insertions(+), 142 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h index f3afbba4a66d..b638f9e9ecd9 100644 --- a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h +++ b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h @@ -343,108 +343,113 @@ struct ice_aqc_vsi_props { #define ICE_AQ_VSI_SW_FLAG_SRC_PRUNE BIT(7) u8 sw_flags2; #define ICE_AQ_VSI_SW_FLAG_RX_PRUNE_EN_S 0 -#define ICE_AQ_VSI_SW_FLAG_RX_PRUNE_EN_M \ - (0xF << ICE_AQ_VSI_SW_FLAG_RX_PRUNE_EN_S) +#define ICE_AQ_VSI_SW_FLAG_RX_PRUNE_EN_M (0xF << ICE_AQ_VSI_SW_FLAG_RX_PRUNE_EN_S) #define ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA BIT(0) #define ICE_AQ_VSI_SW_FLAG_LAN_ENA BIT(4) u8 veb_stat_id; #define ICE_AQ_VSI_SW_VEB_STAT_ID_S 0 -#define ICE_AQ_VSI_SW_VEB_STAT_ID_M (0x1F << ICE_AQ_VSI_SW_VEB_STAT_ID_S) +#define ICE_AQ_VSI_SW_VEB_STAT_ID_M (0x1F << ICE_AQ_VSI_SW_VEB_STAT_ID_S) #define ICE_AQ_VSI_SW_VEB_STAT_ID_VALID BIT(5) /* security section */ u8 sec_flags; #define ICE_AQ_VSI_SEC_FLAG_ALLOW_DEST_OVRD BIT(0) #define ICE_AQ_VSI_SEC_FLAG_ENA_MAC_ANTI_SPOOF BIT(2) -#define ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S 4 -#define ICE_AQ_VSI_SEC_TX_PRUNE_ENA_M (0xF << ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S) +#define ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S 4 +#define ICE_AQ_VSI_SEC_TX_PRUNE_ENA_M (0xF << ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S) #define ICE_AQ_VSI_SEC_TX_VLAN_PRUNE_ENA BIT(0) u8 sec_reserved; /* VLAN section */ - __le16 pvid; /* VLANS include priority bits */ - u8 pvlan_reserved[2]; - u8 vlan_flags; -#define ICE_AQ_VSI_VLAN_MODE_S 0 -#define ICE_AQ_VSI_VLAN_MODE_M (0x3 << ICE_AQ_VSI_VLAN_MODE_S) -#define ICE_AQ_VSI_VLAN_MODE_UNTAGGED 0x1 -#define ICE_AQ_VSI_VLAN_MODE_TAGGED 0x2 -#define ICE_AQ_VSI_VLAN_MODE_ALL 0x3 -#define ICE_AQ_VSI_PVLAN_INSERT_PVID BIT(2) -#define ICE_AQ_VSI_VLAN_EMOD_S 3 -#define ICE_AQ_VSI_VLAN_EMOD_M (0x3 << ICE_AQ_VSI_VLAN_EMOD_S) -#define ICE_AQ_VSI_VLAN_EMOD_STR_BOTH (0x0 << ICE_AQ_VSI_VLAN_EMOD_S) -#define ICE_AQ_VSI_VLAN_EMOD_STR_UP (0x1 << ICE_AQ_VSI_VLAN_EMOD_S) -#define ICE_AQ_VSI_VLAN_EMOD_STR (0x2 << ICE_AQ_VSI_VLAN_EMOD_S) -#define ICE_AQ_VSI_VLAN_EMOD_NOTHING (0x3 << ICE_AQ_VSI_VLAN_EMOD_S) - u8 pvlan_reserved2[3]; + __le16 port_based_inner_vlan; /* VLANS include priority bits */ + u8 inner_vlan_reserved[2]; + u8 inner_vlan_flags; +#define ICE_AQ_VSI_INNER_VLAN_TX_MODE_S 0 +#define ICE_AQ_VSI_INNER_VLAN_TX_MODE_M (0x3 << ICE_AQ_VSI_INNER_VLAN_TX_MODE_S) +#define ICE_AQ_VSI_INNER_VLAN_TX_MODE_ACCEPTUNTAGGED 0x1 +#define ICE_AQ_VSI_INNER_VLAN_TX_MODE_ACCEPTTAGGED 0x2 +#define ICE_AQ_VSI_INNER_VLAN_TX_MODE_ALL 0x3 +#define ICE_AQ_VSI_INNER_VLAN_INSERT_PVID BIT(2) +#define ICE_AQ_VSI_INNER_VLAN_EMODE_S 3 +#define ICE_AQ_VSI_INNER_VLAN_EMODE_M (0x3 << ICE_AQ_VSI_INNER_VLAN_EMODE_S) +#define ICE_AQ_VSI_INNER_VLAN_EMODE_STR_BOTH (0x0 << ICE_AQ_VSI_INNER_VLAN_EMODE_S) +#define ICE_AQ_VSI_INNER_VLAN_EMODE_STR_UP (0x1 << ICE_AQ_VSI_INNER_VLAN_EMODE_S) +#define ICE_AQ_VSI_INNER_VLAN_EMODE_STR (0x2 << ICE_AQ_VSI_INNER_VLAN_EMODE_S) +#define ICE_AQ_VSI_INNER_VLAN_EMODE_NOTHING (0x3 << ICE_AQ_VSI_INNER_VLAN_EMODE_S) + u8 inner_vlan_reserved2[3]; /* ingress egress up 
sections */ __le32 ingress_table; /* bitmap, 3 bits per up */ -#define ICE_AQ_VSI_UP_TABLE_UP0_S 0 -#define ICE_AQ_VSI_UP_TABLE_UP0_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP0_S) -#define ICE_AQ_VSI_UP_TABLE_UP1_S 3 -#define ICE_AQ_VSI_UP_TABLE_UP1_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP1_S) -#define ICE_AQ_VSI_UP_TABLE_UP2_S 6 -#define ICE_AQ_VSI_UP_TABLE_UP2_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP2_S) -#define ICE_AQ_VSI_UP_TABLE_UP3_S 9 -#define ICE_AQ_VSI_UP_TABLE_UP3_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP3_S) -#define ICE_AQ_VSI_UP_TABLE_UP4_S 12 -#define ICE_AQ_VSI_UP_TABLE_UP4_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP4_S) -#define ICE_AQ_VSI_UP_TABLE_UP5_S 15 -#define ICE_AQ_VSI_UP_TABLE_UP5_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP5_S) -#define ICE_AQ_VSI_UP_TABLE_UP6_S 18 -#define ICE_AQ_VSI_UP_TABLE_UP6_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP6_S) -#define ICE_AQ_VSI_UP_TABLE_UP7_S 21 -#define ICE_AQ_VSI_UP_TABLE_UP7_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP7_S) +#define ICE_AQ_VSI_UP_TABLE_UP0_S 0 +#define ICE_AQ_VSI_UP_TABLE_UP0_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP0_S) +#define ICE_AQ_VSI_UP_TABLE_UP1_S 3 +#define ICE_AQ_VSI_UP_TABLE_UP1_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP1_S) +#define ICE_AQ_VSI_UP_TABLE_UP2_S 6 +#define ICE_AQ_VSI_UP_TABLE_UP2_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP2_S) +#define ICE_AQ_VSI_UP_TABLE_UP3_S 9 +#define ICE_AQ_VSI_UP_TABLE_UP3_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP3_S) +#define ICE_AQ_VSI_UP_TABLE_UP4_S 12 +#define ICE_AQ_VSI_UP_TABLE_UP4_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP4_S) +#define ICE_AQ_VSI_UP_TABLE_UP5_S 15 +#define ICE_AQ_VSI_UP_TABLE_UP5_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP5_S) +#define ICE_AQ_VSI_UP_TABLE_UP6_S 18 +#define ICE_AQ_VSI_UP_TABLE_UP6_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP6_S) +#define ICE_AQ_VSI_UP_TABLE_UP7_S 21 +#define ICE_AQ_VSI_UP_TABLE_UP7_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP7_S) __le32 egress_table; /* same defines as for ingress table */ /* outer tags section */ - __le16 outer_tag; - u8 outer_tag_flags; -#define ICE_AQ_VSI_OUTER_TAG_MODE_S 0 -#define ICE_AQ_VSI_OUTER_TAG_MODE_M (0x3 << ICE_AQ_VSI_OUTER_TAG_MODE_S) -#define ICE_AQ_VSI_OUTER_TAG_NOTHING 0x0 -#define ICE_AQ_VSI_OUTER_TAG_REMOVE 0x1 -#define ICE_AQ_VSI_OUTER_TAG_COPY 0x2 -#define ICE_AQ_VSI_OUTER_TAG_TYPE_S 2 -#define ICE_AQ_VSI_OUTER_TAG_TYPE_M (0x3 << ICE_AQ_VSI_OUTER_TAG_TYPE_S) -#define ICE_AQ_VSI_OUTER_TAG_NONE 0x0 -#define ICE_AQ_VSI_OUTER_TAG_STAG 0x1 -#define ICE_AQ_VSI_OUTER_TAG_VLAN_8100 0x2 -#define ICE_AQ_VSI_OUTER_TAG_VLAN_9100 0x3 -#define ICE_AQ_VSI_OUTER_TAG_INSERT BIT(4) -#define ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST BIT(6) - u8 outer_tag_reserved; + __le16 port_based_outer_vlan; + u8 outer_vlan_flags; +#define ICE_AQ_VSI_OUTER_VLAN_EMODE_S 0 +#define ICE_AQ_VSI_OUTER_VLAN_EMODE_M (0x3 << ICE_AQ_VSI_OUTER_VLAN_EMODE_S) +#define ICE_AQ_VSI_OUTER_VLAN_EMODE_SHOW_BOTH 0x0 +#define ICE_AQ_VSI_OUTER_VLAN_EMODE_SHOW_UP 0x1 +#define ICE_AQ_VSI_OUTER_VLAN_EMODE_SHOW 0x2 +#define ICE_AQ_VSI_OUTER_VLAN_EMODE_NOTHING 0x3 +#define ICE_AQ_VSI_OUTER_TAG_TYPE_S 2 +#define ICE_AQ_VSI_OUTER_TAG_TYPE_M (0x3 << ICE_AQ_VSI_OUTER_TAG_TYPE_S) +#define ICE_AQ_VSI_OUTER_TAG_NONE 0x0 +#define ICE_AQ_VSI_OUTER_TAG_STAG 0x1 +#define ICE_AQ_VSI_OUTER_TAG_VLAN_8100 0x2 +#define ICE_AQ_VSI_OUTER_TAG_VLAN_9100 0x3 +#define ICE_AQ_VSI_OUTER_VLAN_PORT_BASED_INSERT BIT(4) +#define ICE_AQ_VSI_OUTER_VLAN_TX_MODE_S 5 +#define ICE_AQ_VSI_OUTER_VLAN_TX_MODE_M (0x3 << ICE_AQ_VSI_OUTER_VLAN_TX_MODE_S) +#define ICE_AQ_VSI_OUTER_VLAN_TX_MODE_ACCEPTUNTAGGED 0x1 +#define ICE_AQ_VSI_OUTER_VLAN_TX_MODE_ACCEPTTAGGED 0x2 +#define ICE_AQ_VSI_OUTER_VLAN_TX_MODE_ALL 0x3 +#define 
ICE_AQ_VSI_OUTER_VLAN_BLOCK_TX_DESC BIT(7) + u8 outer_vlan_reserved; /* queue mapping section */ __le16 mapping_flags; -#define ICE_AQ_VSI_Q_MAP_CONTIG 0x0 -#define ICE_AQ_VSI_Q_MAP_NONCONTIG BIT(0) +#define ICE_AQ_VSI_Q_MAP_CONTIG 0x0 +#define ICE_AQ_VSI_Q_MAP_NONCONTIG BIT(0) __le16 q_mapping[16]; -#define ICE_AQ_VSI_Q_S 0 -#define ICE_AQ_VSI_Q_M (0x7FF << ICE_AQ_VSI_Q_S) +#define ICE_AQ_VSI_Q_S 0 +#define ICE_AQ_VSI_Q_M (0x7FF << ICE_AQ_VSI_Q_S) __le16 tc_mapping[8]; -#define ICE_AQ_VSI_TC_Q_OFFSET_S 0 -#define ICE_AQ_VSI_TC_Q_OFFSET_M (0x7FF << ICE_AQ_VSI_TC_Q_OFFSET_S) -#define ICE_AQ_VSI_TC_Q_NUM_S 11 -#define ICE_AQ_VSI_TC_Q_NUM_M (0xF << ICE_AQ_VSI_TC_Q_NUM_S) +#define ICE_AQ_VSI_TC_Q_OFFSET_S 0 +#define ICE_AQ_VSI_TC_Q_OFFSET_M (0x7FF << ICE_AQ_VSI_TC_Q_OFFSET_S) +#define ICE_AQ_VSI_TC_Q_NUM_S 11 +#define ICE_AQ_VSI_TC_Q_NUM_M (0xF << ICE_AQ_VSI_TC_Q_NUM_S) /* queueing option section */ u8 q_opt_rss; -#define ICE_AQ_VSI_Q_OPT_RSS_LUT_S 0 -#define ICE_AQ_VSI_Q_OPT_RSS_LUT_M (0x3 << ICE_AQ_VSI_Q_OPT_RSS_LUT_S) -#define ICE_AQ_VSI_Q_OPT_RSS_LUT_VSI 0x0 -#define ICE_AQ_VSI_Q_OPT_RSS_LUT_PF 0x2 -#define ICE_AQ_VSI_Q_OPT_RSS_LUT_GBL 0x3 -#define ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_S 2 -#define ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_M (0xF << ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_S) -#define ICE_AQ_VSI_Q_OPT_RSS_HASH_S 6 -#define ICE_AQ_VSI_Q_OPT_RSS_HASH_M (0x3 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S) -#define ICE_AQ_VSI_Q_OPT_RSS_TPLZ (0x0 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S) -#define ICE_AQ_VSI_Q_OPT_RSS_SYM_TPLZ (0x1 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S) -#define ICE_AQ_VSI_Q_OPT_RSS_XOR (0x2 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S) -#define ICE_AQ_VSI_Q_OPT_RSS_JHASH (0x3 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S) +#define ICE_AQ_VSI_Q_OPT_RSS_LUT_S 0 +#define ICE_AQ_VSI_Q_OPT_RSS_LUT_M (0x3 << ICE_AQ_VSI_Q_OPT_RSS_LUT_S) +#define ICE_AQ_VSI_Q_OPT_RSS_LUT_VSI 0x0 +#define ICE_AQ_VSI_Q_OPT_RSS_LUT_PF 0x2 +#define ICE_AQ_VSI_Q_OPT_RSS_LUT_GBL 0x3 +#define ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_S 2 +#define ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_M (0xF << ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_S) +#define ICE_AQ_VSI_Q_OPT_RSS_HASH_S 6 +#define ICE_AQ_VSI_Q_OPT_RSS_HASH_M (0x3 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S) +#define ICE_AQ_VSI_Q_OPT_RSS_TPLZ (0x0 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S) +#define ICE_AQ_VSI_Q_OPT_RSS_SYM_TPLZ (0x1 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S) +#define ICE_AQ_VSI_Q_OPT_RSS_XOR (0x2 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S) +#define ICE_AQ_VSI_Q_OPT_RSS_JHASH (0x3 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S) u8 q_opt_tc; -#define ICE_AQ_VSI_Q_OPT_TC_OVR_S 0 -#define ICE_AQ_VSI_Q_OPT_TC_OVR_M (0x1F << ICE_AQ_VSI_Q_OPT_TC_OVR_S) -#define ICE_AQ_VSI_Q_OPT_PROF_TC_OVR BIT(7) +#define ICE_AQ_VSI_Q_OPT_TC_OVR_S 0 +#define ICE_AQ_VSI_Q_OPT_TC_OVR_M (0x1F << ICE_AQ_VSI_Q_OPT_TC_OVR_S) +#define ICE_AQ_VSI_Q_OPT_PROF_TC_OVR BIT(7) u8 q_opt_flags; -#define ICE_AQ_VSI_Q_OPT_PE_FLTR_EN BIT(0) +#define ICE_AQ_VSI_Q_OPT_PE_FLTR_EN BIT(0) u8 q_opt_reserved[3]; /* outer up section */ __le32 outer_up_table; /* same structure and defines as ingress tbl */ @@ -452,27 +457,27 @@ struct ice_aqc_vsi_props { __le16 sect_10_reserved; /* flow director section */ __le16 fd_options; -#define ICE_AQ_VSI_FD_ENABLE BIT(0) -#define ICE_AQ_VSI_FD_TX_AUTO_ENABLE BIT(1) -#define ICE_AQ_VSI_FD_PROG_ENABLE BIT(3) +#define ICE_AQ_VSI_FD_ENABLE BIT(0) +#define ICE_AQ_VSI_FD_TX_AUTO_ENABLE BIT(1) +#define ICE_AQ_VSI_FD_PROG_ENABLE BIT(3) __le16 max_fd_fltr_dedicated; __le16 max_fd_fltr_shared; __le16 fd_def_q; -#define ICE_AQ_VSI_FD_DEF_Q_S 0 -#define ICE_AQ_VSI_FD_DEF_Q_M (0x7FF << ICE_AQ_VSI_FD_DEF_Q_S) -#define 
ICE_AQ_VSI_FD_DEF_GRP_S 12 -#define ICE_AQ_VSI_FD_DEF_GRP_M (0x7 << ICE_AQ_VSI_FD_DEF_GRP_S) +#define ICE_AQ_VSI_FD_DEF_Q_S 0 +#define ICE_AQ_VSI_FD_DEF_Q_M (0x7FF << ICE_AQ_VSI_FD_DEF_Q_S) +#define ICE_AQ_VSI_FD_DEF_GRP_S 12 +#define ICE_AQ_VSI_FD_DEF_GRP_M (0x7 << ICE_AQ_VSI_FD_DEF_GRP_S) __le16 fd_report_opt; -#define ICE_AQ_VSI_FD_REPORT_Q_S 0 -#define ICE_AQ_VSI_FD_REPORT_Q_M (0x7FF << ICE_AQ_VSI_FD_REPORT_Q_S) -#define ICE_AQ_VSI_FD_DEF_PRIORITY_S 12 -#define ICE_AQ_VSI_FD_DEF_PRIORITY_M (0x7 << ICE_AQ_VSI_FD_DEF_PRIORITY_S) -#define ICE_AQ_VSI_FD_DEF_DROP BIT(15) +#define ICE_AQ_VSI_FD_REPORT_Q_S 0 +#define ICE_AQ_VSI_FD_REPORT_Q_M (0x7FF << ICE_AQ_VSI_FD_REPORT_Q_S) +#define ICE_AQ_VSI_FD_DEF_PRIORITY_S 12 +#define ICE_AQ_VSI_FD_DEF_PRIORITY_M (0x7 << ICE_AQ_VSI_FD_DEF_PRIORITY_S) +#define ICE_AQ_VSI_FD_DEF_DROP BIT(15) /* PASID section */ __le32 pasid_id; -#define ICE_AQ_VSI_PASID_ID_S 0 -#define ICE_AQ_VSI_PASID_ID_M (0xFFFFF << ICE_AQ_VSI_PASID_ID_S) -#define ICE_AQ_VSI_PASID_ID_VALID BIT(31) +#define ICE_AQ_VSI_PASID_ID_S 0 +#define ICE_AQ_VSI_PASID_ID_M (0xFFFFF << ICE_AQ_VSI_PASID_ID_S) +#define ICE_AQ_VSI_PASID_ID_VALID BIT(31) u8 reserved[24]; }; diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index 0fff5ec897c9..c8991711b754 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -810,13 +810,13 @@ static void ice_set_dflt_vsi_ctx(struct ice_vsi_ctx *ctxt) ctxt->info.sw_flags = ICE_AQ_VSI_SW_FLAG_SRC_PRUNE; /* Traffic from VSI can be sent to LAN */ ctxt->info.sw_flags2 = ICE_AQ_VSI_SW_FLAG_LAN_ENA; - /* By default bits 3 and 4 in vlan_flags are 0's which results in legacy + /* By default bits 3 and 4 in inner_vlan_flags are 0's which results in legacy * behavior (show VLAN, DEI, and UP) in descriptor. Also, allow all * packets untagged/tagged. 
*/ - ctxt->info.vlan_flags = ((ICE_AQ_VSI_VLAN_MODE_ALL & - ICE_AQ_VSI_VLAN_MODE_M) >> - ICE_AQ_VSI_VLAN_MODE_S); + ctxt->info.inner_vlan_flags = ((ICE_AQ_VSI_INNER_VLAN_TX_MODE_ALL & + ICE_AQ_VSI_INNER_VLAN_TX_MODE_M) >> + ICE_AQ_VSI_INNER_VLAN_TX_MODE_S); /* Have 1:1 UP mapping for both ingress/egress tables */ table |= ICE_UP_TABLE_TRANSLATE(0, 0); table |= ICE_UP_TABLE_TRANSLATE(1, 1); diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c index 8a0684c0ebd0..6843b8e87441 100644 --- a/drivers/net/ethernet/intel/ice/ice_main.c +++ b/drivers/net/ethernet/intel/ice/ice_main.c @@ -4071,8 +4071,8 @@ static void ice_set_safe_mode_vlan_cfg(struct ice_pf *pf) ctxt->info.sw_flags2 &= ~ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA; /* allow all VLANs on Tx and don't strip on Rx */ - ctxt->info.vlan_flags = ICE_AQ_VSI_VLAN_MODE_ALL | - ICE_AQ_VSI_VLAN_EMOD_NOTHING; + ctxt->info.inner_vlan_flags = ICE_AQ_VSI_INNER_VLAN_TX_MODE_ALL | + ICE_AQ_VSI_INNER_VLAN_EMODE_NOTHING; status = ice_update_vsi(hw, vsi->idx, ctxt, NULL); if (status) { @@ -4081,7 +4081,7 @@ static void ice_set_safe_mode_vlan_cfg(struct ice_pf *pf) } else { vsi->info.sec_flags = ctxt->info.sec_flags; vsi->info.sw_flags2 = ctxt->info.sw_flags2; - vsi->info.vlan_flags = ctxt->info.vlan_flags; + vsi->info.inner_vlan_flags = ctxt->info.inner_vlan_flags; } kfree(ctxt); diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c index 6b7feab0b2a1..0b130505b68a 100644 --- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c @@ -100,14 +100,14 @@ static int ice_vsi_manage_vlan_insertion(struct ice_vsi *vsi) return -ENOMEM; /* Here we are configuring the VSI to let the driver add VLAN tags by - * setting vlan_flags to ICE_AQ_VSI_VLAN_MODE_ALL. The actual VLAN tag + * setting inner_vlan_flags to ICE_AQ_VSI_INNER_VLAN_TX_MODE_ALL. The actual VLAN tag * insertion happens in the Tx hot path, in ice_tx_map. */ - ctxt->info.vlan_flags = ICE_AQ_VSI_VLAN_MODE_ALL; + ctxt->info.inner_vlan_flags = ICE_AQ_VSI_INNER_VLAN_TX_MODE_ALL; /* Preserve existing VLAN strip setting */ - ctxt->info.vlan_flags |= (vsi->info.vlan_flags & - ICE_AQ_VSI_VLAN_EMOD_M); + ctxt->info.inner_vlan_flags |= (vsi->info.inner_vlan_flags & + ICE_AQ_VSI_INNER_VLAN_EMODE_M); ctxt->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_VLAN_VALID); @@ -118,7 +118,7 @@ static int ice_vsi_manage_vlan_insertion(struct ice_vsi *vsi) goto out; } - vsi->info.vlan_flags = ctxt->info.vlan_flags; + vsi->info.inner_vlan_flags = ctxt->info.inner_vlan_flags; out: kfree(ctxt); return err; @@ -138,7 +138,7 @@ static int ice_vsi_manage_vlan_stripping(struct ice_vsi *vsi, bool ena) /* do not allow modifying VLAN stripping when a port VLAN is configured * on this VSI */ - if (vsi->info.pvid) + if (vsi->info.port_based_inner_vlan) return 0; ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL); @@ -151,13 +151,13 @@ static int ice_vsi_manage_vlan_stripping(struct ice_vsi *vsi, bool ena) */ if (ena) /* Strip VLAN tag from Rx packet and put it in the desc */ - ctxt->info.vlan_flags = ICE_AQ_VSI_VLAN_EMOD_STR_BOTH; + ctxt->info.inner_vlan_flags = ICE_AQ_VSI_INNER_VLAN_EMODE_STR_BOTH; else /* Disable stripping. 
Leave tag in packet */ - ctxt->info.vlan_flags = ICE_AQ_VSI_VLAN_EMOD_NOTHING; + ctxt->info.inner_vlan_flags = ICE_AQ_VSI_INNER_VLAN_EMODE_NOTHING; /* Allow all packets untagged/tagged */ - ctxt->info.vlan_flags |= ICE_AQ_VSI_VLAN_MODE_ALL; + ctxt->info.inner_vlan_flags |= ICE_AQ_VSI_INNER_VLAN_TX_MODE_ALL; ctxt->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_VLAN_VALID); @@ -168,13 +168,13 @@ static int ice_vsi_manage_vlan_stripping(struct ice_vsi *vsi, bool ena) goto out; } - vsi->info.vlan_flags = ctxt->info.vlan_flags; + vsi->info.inner_vlan_flags = ctxt->info.inner_vlan_flags; out: kfree(ctxt); return err; } -int ice_vsi_ena_stripping(struct ice_vsi *vsi, const u16 tpid) +int ice_vsi_ena_inner_stripping(struct ice_vsi *vsi, const u16 tpid) { if (tpid != ETH_P_8021Q) { print_invalid_tpid(vsi, tpid); @@ -184,12 +184,12 @@ int ice_vsi_ena_stripping(struct ice_vsi *vsi, const u16 tpid) return ice_vsi_manage_vlan_stripping(vsi, true); } -int ice_vsi_dis_stripping(struct ice_vsi *vsi) +int ice_vsi_dis_inner_stripping(struct ice_vsi *vsi) { return ice_vsi_manage_vlan_stripping(vsi, false); } -int ice_vsi_ena_insertion(struct ice_vsi *vsi, const u16 tpid) +int ice_vsi_ena_inner_insertion(struct ice_vsi *vsi, const u16 tpid) { if (tpid != ETH_P_8021Q) { print_invalid_tpid(vsi, tpid); @@ -199,18 +199,17 @@ int ice_vsi_ena_insertion(struct ice_vsi *vsi, const u16 tpid) return ice_vsi_manage_vlan_insertion(vsi); } -int ice_vsi_dis_insertion(struct ice_vsi *vsi) +int ice_vsi_dis_inner_insertion(struct ice_vsi *vsi) { return ice_vsi_manage_vlan_insertion(vsi); } /** - * ice_vsi_manage_pvid - Enable or disable port VLAN for VSI + * __ice_vsi_set_inner_port_vlan - set port VLAN VSI context settings to enable a port VLAN * @vsi: the VSI to update * @pvid_info: VLAN ID and QoS used to set the PVID VSI context field - * @enable: true for enable PVID false for disable */ -static int ice_vsi_manage_pvid(struct ice_vsi *vsi, u16 pvid_info, bool enable) +static int __ice_vsi_set_inner_port_vlan(struct ice_vsi *vsi, u16 pvid_info) { struct ice_hw *hw = &vsi->back->hw; struct ice_aqc_vsi_props *info; @@ -223,18 +222,12 @@ static int ice_vsi_manage_pvid(struct ice_vsi *vsi, u16 pvid_info, bool enable) ctxt->info = vsi->info; info = &ctxt->info; - if (enable) { - info->vlan_flags = ICE_AQ_VSI_VLAN_MODE_UNTAGGED | - ICE_AQ_VSI_PVLAN_INSERT_PVID | - ICE_AQ_VSI_VLAN_EMOD_STR; - info->sw_flags2 |= ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA; - } else { - info->vlan_flags = ICE_AQ_VSI_VLAN_EMOD_NOTHING | - ICE_AQ_VSI_VLAN_MODE_ALL; - info->sw_flags2 &= ~ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA; - } + info->inner_vlan_flags = ICE_AQ_VSI_INNER_VLAN_TX_MODE_ACCEPTUNTAGGED | + ICE_AQ_VSI_INNER_VLAN_INSERT_PVID | + ICE_AQ_VSI_INNER_VLAN_EMODE_STR; + info->sw_flags2 |= ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA; - info->pvid = cpu_to_le16(pvid_info); + info->port_based_inner_vlan = cpu_to_le16(pvid_info); info->valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_VLAN_VALID | ICE_AQ_VSI_PROP_SW_VALID); @@ -245,15 +238,15 @@ static int ice_vsi_manage_pvid(struct ice_vsi *vsi, u16 pvid_info, bool enable) goto out; } - vsi->info.vlan_flags = info->vlan_flags; + vsi->info.inner_vlan_flags = info->inner_vlan_flags; vsi->info.sw_flags2 = info->sw_flags2; - vsi->info.pvid = info->pvid; + vsi->info.port_based_inner_vlan = info->port_based_inner_vlan; out: kfree(ctxt); return ret; } -int ice_vsi_set_port_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) +int ice_vsi_set_inner_port_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) { u16 port_vlan_info; @@ 
-265,7 +258,7 @@ int ice_vsi_set_port_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) port_vlan_info = vlan->vid | (vlan->prio << VLAN_PRIO_SHIFT); - return ice_vsi_manage_pvid(vsi, port_vlan_info, true); + return __ice_vsi_set_inner_port_vlan(vsi, port_vlan_info); } /** diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h index 1bdbf585db7d..a10671133e36 100644 --- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h @@ -12,11 +12,11 @@ struct ice_vsi; int ice_vsi_add_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan); int ice_vsi_del_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan); -int ice_vsi_ena_stripping(struct ice_vsi *vsi, u16 tpid); -int ice_vsi_dis_stripping(struct ice_vsi *vsi); -int ice_vsi_ena_insertion(struct ice_vsi *vsi, u16 tpid); -int ice_vsi_dis_insertion(struct ice_vsi *vsi); -int ice_vsi_set_port_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan); +int ice_vsi_ena_inner_stripping(struct ice_vsi *vsi, u16 tpid); +int ice_vsi_dis_inner_stripping(struct ice_vsi *vsi); +int ice_vsi_ena_inner_insertion(struct ice_vsi *vsi, u16 tpid); +int ice_vsi_dis_inner_insertion(struct ice_vsi *vsi); +int ice_vsi_set_inner_port_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan); int ice_vsi_ena_rx_vlan_filtering(struct ice_vsi *vsi); int ice_vsi_dis_rx_vlan_filtering(struct ice_vsi *vsi); diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.c b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.c index 3bab6c025856..6a6b49581c70 100644 --- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.c +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.c @@ -8,13 +8,13 @@ void ice_vsi_init_vlan_ops(struct ice_vsi *vsi) { vsi->vlan_ops.add_vlan = ice_vsi_add_vlan; vsi->vlan_ops.del_vlan = ice_vsi_del_vlan; - vsi->vlan_ops.ena_stripping = ice_vsi_ena_stripping; - vsi->vlan_ops.dis_stripping = ice_vsi_dis_stripping; - vsi->vlan_ops.ena_insertion = ice_vsi_ena_insertion; - vsi->vlan_ops.dis_insertion = ice_vsi_dis_insertion; + vsi->vlan_ops.ena_stripping = ice_vsi_ena_inner_stripping; + vsi->vlan_ops.dis_stripping = ice_vsi_dis_inner_stripping; + vsi->vlan_ops.ena_insertion = ice_vsi_ena_inner_insertion; + vsi->vlan_ops.dis_insertion = ice_vsi_dis_inner_insertion; vsi->vlan_ops.ena_rx_filtering = ice_vsi_ena_rx_vlan_filtering; vsi->vlan_ops.dis_rx_filtering = ice_vsi_dis_rx_vlan_filtering; vsi->vlan_ops.ena_tx_filtering = ice_vsi_ena_tx_vlan_filtering; vsi->vlan_ops.dis_tx_filtering = ice_vsi_dis_tx_vlan_filtering; - vsi->vlan_ops.set_port_vlan = ice_vsi_set_port_vlan; + vsi->vlan_ops.set_port_vlan = ice_vsi_set_inner_port_vlan; } -- 2.20.1 From anthony.l.nguyen at intel.com Tue Nov 30 23:51:44 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Tue, 30 Nov 2021 15:51:44 -0800 Subject: [Intel-wired-lan] [PATCH net-next v2 12/14] ice: Advertise 802.1ad VLAN filtering and offloads for PF netdev In-Reply-To: <20211130235146.28731-1-anthony.l.nguyen@intel.com> References: <20211130235146.28731-1-anthony.l.nguyen@intel.com> Message-ID: <20211130235146.28731-12-anthony.l.nguyen@intel.com> From: Brett Creeley In order for the driver to support 802.1ad VLAN filtering and offloads, it needs to advertise those VLAN features and also support modifying those VLAN features, so make the necessary changes to ice_set_netdev_features(). By default, enable CTAG insertion/stripping and CTAG filtering for both Single and Double VLAN Modes (SVM/DVM). 
Also, in DVM, enable STAG filtering by default. This is done by setting the feature bits in netdev->features. Also, in DVM, support toggling of STAG insertion/stripping, but don't enable them by default. This is done by setting the feature bits in netdev->hw_features. Since 802.1ad VLAN filtering and offloads are only supported in DVM, make sure they are not enabled by default and that they cannot be enabled during runtime, when the device is in SVM. Add an implementation for the ndo_fix_features() callback. This is needed since the hardware cannot support multiple VLAN ethertypes for VLAN insertion/stripping simultaneously and all supported VLAN filtering must either be enabled or disabled together. Disable inner VLAN stripping by default when DVM is enabled. If a VSI supports stripping the inner VLAN in DVM, then it will have to configure that during runtime. For example if a VF is configured in a port VLAN while DVM is enabled it will be allowed to offload inner VLANs. Signed-off-by: Brett Creeley Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/ice/ice_lib.c | 27 ++- drivers/net/ethernet/intel/ice/ice_main.c | 260 ++++++++++++++++++---- 2 files changed, 238 insertions(+), 49 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index 36507f0dc04e..de37928c2870 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -796,11 +796,12 @@ static void ice_vsi_set_rss_params(struct ice_vsi *vsi) /** * ice_set_dflt_vsi_ctx - Set default VSI context before adding a VSI + * @hw: HW structure used to determine the VLAN mode of the device * @ctxt: the VSI context being set * * This initializes a default VSI context for all sections except the Queues. */ -static void ice_set_dflt_vsi_ctx(struct ice_vsi_ctx *ctxt) +static void ice_set_dflt_vsi_ctx(struct ice_hw *hw, struct ice_vsi_ctx *ctxt) { u32 table = 0; @@ -811,13 +812,27 @@ static void ice_set_dflt_vsi_ctx(struct ice_vsi_ctx *ctxt) ctxt->info.sw_flags = ICE_AQ_VSI_SW_FLAG_SRC_PRUNE; /* Traffic from VSI can be sent to LAN */ ctxt->info.sw_flags2 = ICE_AQ_VSI_SW_FLAG_LAN_ENA; - /* By default bits 3 and 4 in inner_vlan_flags are 0's which results in legacy - * behavior (show VLAN, DEI, and UP) in descriptor. Also, allow all - * packets untagged/tagged. - */ + /* allow all untagged/tagged packets by default on Tx */ ctxt->info.inner_vlan_flags = ((ICE_AQ_VSI_INNER_VLAN_TX_MODE_ALL & ICE_AQ_VSI_INNER_VLAN_TX_MODE_M) >> ICE_AQ_VSI_INNER_VLAN_TX_MODE_S); + /* SVM - by default bits 3 and 4 in inner_vlan_flags are 0's which + * results in legacy behavior (show VLAN, DEI, and UP) in descriptor. 
+ * + * DVM - leave inner VLAN in packet by default + */ + if (ice_is_dvm_ena(hw)) { + ctxt->info.inner_vlan_flags |= + ICE_AQ_VSI_INNER_VLAN_EMODE_NOTHING; + ctxt->info.outer_vlan_flags = + (ICE_AQ_VSI_OUTER_VLAN_TX_MODE_ALL << + ICE_AQ_VSI_OUTER_VLAN_TX_MODE_S) & + ICE_AQ_VSI_OUTER_VLAN_TX_MODE_M; + ctxt->info.outer_vlan_flags |= + (ICE_AQ_VSI_OUTER_TAG_VLAN_8100 << + ICE_AQ_VSI_OUTER_TAG_TYPE_S) & + ICE_AQ_VSI_OUTER_TAG_TYPE_M; + } /* Have 1:1 UP mapping for both ingress/egress tables */ table |= ICE_UP_TABLE_TRANSLATE(0, 0); table |= ICE_UP_TABLE_TRANSLATE(1, 1); @@ -1094,7 +1109,7 @@ static int ice_vsi_init(struct ice_vsi *vsi, bool init_vsi) ~ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA; } - ice_set_dflt_vsi_ctx(ctxt); + ice_set_dflt_vsi_ctx(hw, ctxt); if (test_bit(ICE_FLAG_FD_ENA, pf->flags)) ice_set_fd_vsi_ctx(ctxt, vsi); /* if the switch is in VEB mode, allow VSI loopback */ diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c index 563b597b0a85..851dbd70d809 100644 --- a/drivers/net/ethernet/intel/ice/ice_main.c +++ b/drivers/net/ethernet/intel/ice/ice_main.c @@ -416,7 +416,8 @@ static int ice_vsi_sync_fltr(struct ice_vsi *vsi) IFF_PROMISC; goto out_promisc; } - if (vsi->num_vlan > 1) + if (vsi->current_netdev_flags & + NETIF_F_HW_VLAN_CTAG_FILTER) vlan_ops->ena_rx_filtering(vsi); } } @@ -3240,6 +3241,7 @@ static void ice_set_ops(struct net_device *netdev) static void ice_set_netdev_features(struct net_device *netdev) { struct ice_pf *pf = ice_netdev_to_pf(netdev); + bool is_dvm_ena = ice_is_dvm_ena(&pf->hw); netdev_features_t csumo_features; netdev_features_t vlano_features; netdev_features_t dflt_features; @@ -3266,6 +3268,10 @@ static void ice_set_netdev_features(struct net_device *netdev) NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_CTAG_RX; + /* Enable CTAG/STAG filtering by default in Double VLAN Mode (DVM) */ + if (is_dvm_ena) + vlano_features |= NETIF_F_HW_VLAN_STAG_FILTER; + tso_features = NETIF_F_TSO | NETIF_F_TSO_ECN | NETIF_F_TSO6 | @@ -3297,6 +3303,15 @@ static void ice_set_netdev_features(struct net_device *netdev) tso_features; netdev->vlan_features |= dflt_features | csumo_features | tso_features; + + /* advertise support but don't enable by default since only one type of + * VLAN offload can be enabled at a time (i.e. CTAG or STAG). When one + * type turns on the other has to be turned off. This is enforced by the + * ice_fix_features() ndo callback. 
+ */ + if (is_dvm_ena) + netdev->hw_features |= NETIF_F_HW_VLAN_STAG_RX | + NETIF_F_HW_VLAN_STAG_TX; } /** @@ -3432,13 +3447,6 @@ ice_vlan_rx_add_vid(struct net_device *netdev, __be16 proto, u16 vid) vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); - /* Enable VLAN pruning when a VLAN other than 0 is added */ - if (!ice_vsi_is_vlan_pruning_ena(vsi)) { - ret = vlan_ops->ena_rx_filtering(vsi); - if (ret) - return ret; - } - /* Add a switch rule for this VLAN ID so its corresponding VLAN tagged * packets aren't pruned by the device's internal switch on Rx */ @@ -3481,12 +3489,8 @@ ice_vlan_rx_kill_vid(struct net_device *netdev, __be16 proto, u16 vid) if (ret) return ret; - /* Disable pruning when VLAN 0 is the only VLAN rule */ - if (vsi->num_vlan == 1 && ice_vsi_is_vlan_pruning_ena(vsi)) - vlan_ops->dis_rx_filtering(vsi); - set_bit(ICE_VSI_VLAN_FLTR_CHANGED, vsi->state); - return ret; + return 0; } /** @@ -5596,6 +5600,194 @@ ice_fdb_del(struct ndmsg *ndm, __always_unused struct nlattr *tb[], return err; } +#define NETIF_VLAN_OFFLOAD_FEATURES (NETIF_F_HW_VLAN_CTAG_RX | \ + NETIF_F_HW_VLAN_CTAG_TX | \ + NETIF_F_HW_VLAN_STAG_RX | \ + NETIF_F_HW_VLAN_STAG_TX) + +#define NETIF_VLAN_FILTERING_FEATURES (NETIF_F_HW_VLAN_CTAG_FILTER | \ + NETIF_F_HW_VLAN_STAG_FILTER) + +/** + * ice_fix_features - fix the netdev features flags based on device limitations + * @netdev: ptr to the netdev that flags are being fixed on + * @features: features that need to be checked and possibly fixed + * + * Make sure any fixups are made to features in this callback. This enables the + * driver to not have to check unsupported configurations throughout the driver + * because that's the responsiblity of this callback. + * + * Single VLAN Mode (SVM) Supported Features: + * NETIF_F_HW_VLAN_CTAG_FILTER + * NETIF_F_HW_VLAN_CTAG_RX + * NETIF_F_HW_VLAN_CTAG_TX + * + * Double VLAN Mode (DVM) Supported Features: + * NETIF_F_HW_VLAN_CTAG_FILTER + * NETIF_F_HW_VLAN_CTAG_RX + * NETIF_F_HW_VLAN_CTAG_TX + * + * NETIF_F_HW_VLAN_STAG_FILTER + * NETIF_HW_VLAN_STAG_RX + * NETIF_HW_VLAN_STAG_TX + * + * Features that need fixing: + * Cannot simultaneously enable CTAG and STAG stripping and/or insertion. + * These are mutually exlusive as the VSI context cannot support multiple + * VLAN ethertypes simultaneously for stripping and/or insertion. If this + * is not done, then default to clearing the requested STAG offload + * settings. + * + * All supported filtering has to be enabled or disabled together. For + * example, in DVM, CTAG and STAG filtering have to be enabled and disabled + * together. If this is not done, then default to VLAN filtering disabled. + * These are mutually exclusive as there is currently no way to + * enable/disable VLAN filtering based on VLAN ethertype when using VLAN + * prune rules. 
+ */ +static netdev_features_t +ice_fix_features(struct net_device *netdev, netdev_features_t features) +{ + struct ice_netdev_priv *np = netdev_priv(netdev); + netdev_features_t supported_vlan_filtering; + netdev_features_t requested_vlan_filtering; + struct ice_vsi *vsi = np->vsi; + + requested_vlan_filtering = features & NETIF_VLAN_FILTERING_FEATURES; + + /* make sure supported_vlan_filtering works for both SVM and DVM */ + supported_vlan_filtering = NETIF_F_HW_VLAN_CTAG_FILTER; + if (ice_is_dvm_ena(&vsi->back->hw)) + supported_vlan_filtering |= NETIF_F_HW_VLAN_STAG_FILTER; + + if (requested_vlan_filtering && + requested_vlan_filtering != supported_vlan_filtering) { + if (requested_vlan_filtering & NETIF_F_HW_VLAN_CTAG_FILTER) { + netdev_warn(netdev, "cannot support requested VLAN filtering settings, enabling all supported VLAN filtering settings\n"); + features |= supported_vlan_filtering; + } else { + netdev_warn(netdev, "cannot support requested VLAN filtering settings, clearing all supported VLAN filtering settings\n"); + features &= ~supported_vlan_filtering; + } + } + + if ((features & (NETIF_F_HW_VLAN_CTAG_RX | NETIF_F_HW_VLAN_CTAG_TX)) && + (features & (NETIF_F_HW_VLAN_STAG_RX | NETIF_F_HW_VLAN_STAG_TX))) { + netdev_warn(netdev, "cannot support CTAG and STAG VLAN stripping and/or insertion simultaneously since CTAG and STAG offloads are mutually exclusive, clearing STAG offload settings\n"); + features &= ~(NETIF_F_HW_VLAN_STAG_RX | + NETIF_F_HW_VLAN_STAG_TX); + } + + return features; +} + +/** + * ice_set_vlan_offload_features - set VLAN offload features for the PF VSI + * @vsi: PF's VSI + * @features: features used to determine VLAN offload settings + * + * First, determine the vlan_ethertype based on the VLAN offload bits in + * features. Then determine if stripping and insertion should be enabled or + * disabled. Finally enable or disable VLAN stripping and insertion. + */ +static int +ice_set_vlan_offload_features(struct ice_vsi *vsi, netdev_features_t features) +{ + bool enable_stripping = true, enable_insertion = true; + struct ice_vsi_vlan_ops *vlan_ops; + int strip_err = 0, insert_err = 0; + u16 vlan_ethertype = 0; + + vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); + + if (features & (NETIF_F_HW_VLAN_STAG_RX | NETIF_F_HW_VLAN_STAG_TX)) + vlan_ethertype = ETH_P_8021AD; + else if (features & (NETIF_F_HW_VLAN_CTAG_RX | NETIF_F_HW_VLAN_CTAG_TX)) + vlan_ethertype = ETH_P_8021Q; + + if (!(features & (NETIF_F_HW_VLAN_STAG_RX | NETIF_F_HW_VLAN_CTAG_RX))) + enable_stripping = false; + if (!(features & (NETIF_F_HW_VLAN_STAG_TX | NETIF_F_HW_VLAN_CTAG_TX))) + enable_insertion = false; + + if (enable_stripping) + strip_err = vlan_ops->ena_stripping(vsi, vlan_ethertype); + else + strip_err = vlan_ops->dis_stripping(vsi); + + if (enable_insertion) + insert_err = vlan_ops->ena_insertion(vsi, vlan_ethertype); + else + insert_err = vlan_ops->dis_insertion(vsi); + + if (strip_err || insert_err) + return -EIO; + + return 0; +} + +/** + * ice_set_vlan_filtering_features - set VLAN filtering features for the PF VSI + * @vsi: PF's VSI + * @features: features used to determine VLAN filtering settings + * + * Enable or disable Rx VLAN filtering based on the VLAN filtering bits in the + * features. 
+ */ +static int +ice_set_vlan_filtering_features(struct ice_vsi *vsi, netdev_features_t features) +{ + struct ice_vsi_vlan_ops *vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); + int err = 0; + + /* support Single VLAN Mode (SVM) and Double VLAN Mode (DVM) by checking + * if either bit is set + */ + if (features & + (NETIF_F_HW_VLAN_CTAG_FILTER | NETIF_F_HW_VLAN_STAG_FILTER)) + err = vlan_ops->ena_rx_filtering(vsi); + else + err = vlan_ops->dis_rx_filtering(vsi); + + return err; +} + +/** + * ice_set_vlan_features - set VLAN settings based on suggested feature set + * @netdev: ptr to the netdev being adjusted + * @features: the feature set that the stack is suggesting + * + * Only update VLAN settings if the requested_vlan_features are different than + * the current_vlan_features. + */ +static int +ice_set_vlan_features(struct net_device *netdev, netdev_features_t features) +{ + netdev_features_t current_vlan_features, requested_vlan_features; + struct ice_netdev_priv *np = netdev_priv(netdev); + struct ice_vsi *vsi = np->vsi; + int err; + + current_vlan_features = netdev->features & NETIF_VLAN_OFFLOAD_FEATURES; + requested_vlan_features = features & NETIF_VLAN_OFFLOAD_FEATURES; + if (current_vlan_features ^ requested_vlan_features) { + err = ice_set_vlan_offload_features(vsi, features); + if (err) + return err; + } + + current_vlan_features = netdev->features & + NETIF_VLAN_FILTERING_FEATURES; + requested_vlan_features = features & NETIF_VLAN_FILTERING_FEATURES; + if (current_vlan_features ^ requested_vlan_features) { + err = ice_set_vlan_filtering_features(vsi, features); + if (err) + return err; + } + + return 0; +} + /** * ice_set_features - set the netdev feature flags * @netdev: ptr to the netdev being adjusted @@ -5605,7 +5797,6 @@ static int ice_set_features(struct net_device *netdev, netdev_features_t features) { struct ice_netdev_priv *np = netdev_priv(netdev); - struct ice_vsi_vlan_ops *vlan_ops; struct ice_vsi *vsi = np->vsi; struct ice_pf *pf = vsi->back; int ret = 0; @@ -5622,8 +5813,6 @@ ice_set_features(struct net_device *netdev, netdev_features_t features) return -EBUSY; } - vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); - /* Multiple features can be changed in one call so keep features in * separate if/else statements to guarantee each feature is checked */ @@ -5633,26 +5822,9 @@ ice_set_features(struct net_device *netdev, netdev_features_t features) netdev->features & NETIF_F_RXHASH) ice_vsi_manage_rss_lut(vsi, false); - if ((features & NETIF_F_HW_VLAN_CTAG_RX) && - !(netdev->features & NETIF_F_HW_VLAN_CTAG_RX)) - ret = vlan_ops->ena_stripping(vsi, ETH_P_8021Q); - else if (!(features & NETIF_F_HW_VLAN_CTAG_RX) && - (netdev->features & NETIF_F_HW_VLAN_CTAG_RX)) - ret = vlan_ops->dis_stripping(vsi); - - if ((features & NETIF_F_HW_VLAN_CTAG_TX) && - !(netdev->features & NETIF_F_HW_VLAN_CTAG_TX)) - ret = vlan_ops->ena_insertion(vsi, ETH_P_8021Q); - else if (!(features & NETIF_F_HW_VLAN_CTAG_TX) && - (netdev->features & NETIF_F_HW_VLAN_CTAG_TX)) - ret = vlan_ops->dis_insertion(vsi); - - if ((features & NETIF_F_HW_VLAN_CTAG_FILTER) && - !(netdev->features & NETIF_F_HW_VLAN_CTAG_FILTER)) - ret = vlan_ops->ena_rx_filtering(vsi); - else if (!(features & NETIF_F_HW_VLAN_CTAG_FILTER) && - (netdev->features & NETIF_F_HW_VLAN_CTAG_FILTER)) - ret = vlan_ops->dis_rx_filtering(vsi); + ret = ice_set_vlan_features(netdev, features); + if (ret) + return ret; if ((features & NETIF_F_NTUPLE) && !(netdev->features & NETIF_F_NTUPLE)) { @@ -5676,7 +5848,7 @@ ice_set_features(struct net_device 
*netdev, netdev_features_t features) else clear_bit(ICE_FLAG_CLS_FLOWER, pf->flags); - return ret; + return 0; } /** @@ -5685,14 +5857,15 @@ ice_set_features(struct net_device *netdev, netdev_features_t features) */ static int ice_vsi_vlan_setup(struct ice_vsi *vsi) { - struct ice_vsi_vlan_ops *vlan_ops; + int err; - vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); + err = ice_set_vlan_offload_features(vsi, vsi->netdev->features); + if (err) + return err; - if (vsi->netdev->features & NETIF_F_HW_VLAN_CTAG_RX) - vlan_ops->ena_stripping(vsi, ETH_P_8021Q); - if (vsi->netdev->features & NETIF_F_HW_VLAN_CTAG_TX) - vlan_ops->ena_insertion(vsi, ETH_P_8021Q); + err = ice_set_vlan_filtering_features(vsi, vsi->netdev->features); + if (err) + return err; return ice_vsi_add_vlan_zero(vsi); } @@ -8549,6 +8722,7 @@ static const struct net_device_ops ice_netdev_ops = { .ndo_start_xmit = ice_start_xmit, .ndo_select_queue = ice_select_queue, .ndo_features_check = ice_features_check, + .ndo_fix_features = ice_fix_features, .ndo_set_rx_mode = ice_set_rx_mode, .ndo_set_mac_address = ice_set_mac_address, .ndo_validate_addr = eth_validate_addr, -- 2.20.1 From anthony.l.nguyen at intel.com Tue Nov 30 23:51:43 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Tue, 30 Nov 2021 15:51:43 -0800 Subject: [Intel-wired-lan] [PATCH net-next v2 11/14] ice: Support configuring the device to Double VLAN Mode In-Reply-To: <20211130235146.28731-1-anthony.l.nguyen@intel.com> References: <20211130235146.28731-1-anthony.l.nguyen@intel.com> Message-ID: <20211130235146.28731-11-anthony.l.nguyen@intel.com> In order to support configuring the device in Double VLAN Mode (DVM), the DDP and FW have to support DVM. If both support DVM, the PF that downloads the package needs to update the default recipes, set the VLAN mode, and update boost TCAM entries. To support updating the default recipes in DVM, add support for updating an existing switch recipe's lkup_idx and mask. This is done by first calling the get recipe AQ (0x0292) with the desired recipe ID. Then, if that is successful update one of the lookup indices (lkup_idx) and its associated mask if the mask is valid otherwise the already existing mask will be used. The VLAN mode of the device has to be configured while the global configuration lock is held while downloading the DDP, specifically after the DDP has been downloaded. If supported, the device will default to DVM. 
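For reference, here is a minimal caller sketch of the recipe update helper introduced below in ice_switch.c. This is illustrative only and not part of the patch; it assumes an ice_hw pointer (hw) is in scope, and the field values simply mirror the first entry of the ice_dvm_dflt_recipes table added in ice_vlan_mode.c:

	struct ice_update_recipe_lkup_idx_params params = {
		.rid = ICE_SW_LKUP_VLAN,	/* default recipe being modified */
		.lkup_idx = 1,			/* lookup slot that holds the VLAN ID */
		.fv_idx = 11,			/* field vector index of the outer VLAN ID */
		.ignore_valid = true,
		.mask = 0,
		.mask_valid = false,		/* keep the mask read back from firmware */
	};
	int err;

	err = ice_update_recipe_lkup_idx(hw, &params);
	if (err)
		return err;	/* helper already logged the failure via ice_debug */

Since mask_valid is false here, the helper keeps the mask returned by the get recipe AQ, so the call amounts to a read-modify-write of the single lookup slot selected by lkup_idx.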
Co-developed-by: Dan Nowlin Signed-off-by: Dan Nowlin Signed-off-by: Brett Creeley Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/ice/Makefile | 1 + .../net/ethernet/intel/ice/ice_adminq_cmd.h | 64 ++- drivers/net/ethernet/intel/ice/ice_common.c | 49 +- drivers/net/ethernet/intel/ice/ice_common.h | 3 + .../net/ethernet/intel/ice/ice_flex_pipe.c | 290 ++++++++++-- .../net/ethernet/intel/ice/ice_flex_pipe.h | 13 + .../net/ethernet/intel/ice/ice_flex_type.h | 40 ++ drivers/net/ethernet/intel/ice/ice_main.c | 12 + .../ethernet/intel/ice/ice_pf_vsi_vlan_ops.c | 1 + drivers/net/ethernet/intel/ice/ice_switch.c | 75 +++ drivers/net/ethernet/intel/ice/ice_switch.h | 13 + drivers/net/ethernet/intel/ice/ice_type.h | 5 + .../ethernet/intel/ice/ice_vf_vsi_vlan_ops.c | 1 + .../net/ethernet/intel/ice/ice_vlan_mode.c | 439 ++++++++++++++++++ .../net/ethernet/intel/ice/ice_vlan_mode.h | 13 + .../net/ethernet/intel/ice/ice_vsi_vlan_lib.c | 25 +- .../net/ethernet/intel/ice/ice_vsi_vlan_ops.h | 5 - 17 files changed, 990 insertions(+), 59 deletions(-) create mode 100644 drivers/net/ethernet/intel/ice/ice_vlan_mode.c create mode 100644 drivers/net/ethernet/intel/ice/ice_vlan_mode.h diff --git a/drivers/net/ethernet/intel/ice/Makefile b/drivers/net/ethernet/intel/ice/Makefile index 3ece1df919f8..606ff3522bd4 100644 --- a/drivers/net/ethernet/intel/ice/Makefile +++ b/drivers/net/ethernet/intel/ice/Makefile @@ -23,6 +23,7 @@ ice-y := ice_main.o \ ice_vsi_vlan_lib.o \ ice_fdir.o \ ice_ethtool_fdir.o \ + ice_vlan_mode.o \ ice_flex_pipe.o \ ice_flow.o \ ice_idc.o \ diff --git a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h index b638f9e9ecd9..a23a9ea10751 100644 --- a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h +++ b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h @@ -226,6 +226,15 @@ struct ice_aqc_get_sw_cfg_resp_elem { #define ICE_AQC_GET_SW_CONF_RESP_IS_VF BIT(15) }; +/* Set Port parameters, (direct, 0x0203) */ +struct ice_aqc_set_port_params { + __le16 cmd_flags; +#define ICE_AQC_SET_P_PARAMS_DOUBLE_VLAN_ENA BIT(2) + __le16 bad_frame_vsi; + __le16 swid; + u8 reserved[10]; +}; + /* These resource type defines are used for all switch resource * commands where a resource type is required, such as: * Get Resource Allocation command (indirect 0x0204) @@ -283,6 +292,40 @@ struct ice_aqc_alloc_free_res_elem { struct ice_aqc_res_elem elem[]; }; +/* Request buffer for Set VLAN Mode AQ command (indirect 0x020C) */ +struct ice_aqc_set_vlan_mode { + u8 reserved; + u8 l2tag_prio_tagging; +#define ICE_AQ_VLAN_PRIO_TAG_S 0 +#define ICE_AQ_VLAN_PRIO_TAG_M (0x7 << ICE_AQ_VLAN_PRIO_TAG_S) +#define ICE_AQ_VLAN_PRIO_TAG_NOT_SUPPORTED 0x0 +#define ICE_AQ_VLAN_PRIO_TAG_STAG 0x1 +#define ICE_AQ_VLAN_PRIO_TAG_OUTER_CTAG 0x2 +#define ICE_AQ_VLAN_PRIO_TAG_OUTER_VLAN 0x3 +#define ICE_AQ_VLAN_PRIO_TAG_INNER_CTAG 0x4 +#define ICE_AQ_VLAN_PRIO_TAG_MAX 0x4 +#define ICE_AQ_VLAN_PRIO_TAG_ERROR 0x7 + u8 l2tag_reserved[64]; + u8 rdma_packet; +#define ICE_AQ_VLAN_RDMA_TAG_S 0 +#define ICE_AQ_VLAN_RDMA_TAG_M (0x3F << ICE_AQ_VLAN_RDMA_TAG_S) +#define ICE_AQ_SVM_VLAN_RDMA_PKT_FLAG_SETTING 0x10 +#define ICE_AQ_DVM_VLAN_RDMA_PKT_FLAG_SETTING 0x1A + u8 rdma_reserved[2]; + u8 mng_vlan_prot_id; +#define ICE_AQ_VLAN_MNG_PROTOCOL_ID_OUTER 0x10 +#define ICE_AQ_VLAN_MNG_PROTOCOL_ID_INNER 0x11 + u8 prot_id_reserved[30]; +}; + +/* Response buffer for Get VLAN Mode AQ command (indirect 0x020D) */ +struct ice_aqc_get_vlan_mode { + u8 vlan_mode; +#define ICE_AQ_VLAN_MODE_DVM_ENA BIT(0) + u8 
l2tag_prio_tagging; + u8 reserved[98]; +}; + /* Add VSI (indirect 0x0210) * Update VSI (indirect 0x0211) * Get VSI (indirect 0x0212) @@ -494,9 +537,13 @@ struct ice_aqc_add_get_recipe { struct ice_aqc_recipe_content { u8 rid; +#define ICE_AQ_RECIPE_ID_S 0 +#define ICE_AQ_RECIPE_ID_M (0x3F << ICE_AQ_RECIPE_ID_S) #define ICE_AQ_RECIPE_ID_IS_ROOT BIT(7) #define ICE_AQ_SW_ID_LKUP_IDX 0 u8 lkup_indx[5]; +#define ICE_AQ_RECIPE_LKUP_DATA_S 0 +#define ICE_AQ_RECIPE_LKUP_DATA_M (0x3F << ICE_AQ_RECIPE_LKUP_DATA_S) #define ICE_AQ_RECIPE_LKUP_IGNORE BIT(7) #define ICE_AQ_SW_ID_LKUP_MASK 0x00FF __le16 mask[5]; @@ -507,15 +554,25 @@ struct ice_aqc_recipe_content { u8 rsvd0[3]; u8 act_ctrl_join_priority; u8 act_ctrl_fwd_priority; +#define ICE_AQ_RECIPE_FWD_PRIORITY_S 0 +#define ICE_AQ_RECIPE_FWD_PRIORITY_M (0xF << ICE_AQ_RECIPE_FWD_PRIORITY_S) u8 act_ctrl; +#define ICE_AQ_RECIPE_ACT_NEED_PASS_L2 BIT(0) +#define ICE_AQ_RECIPE_ACT_ALLOW_PASS_L2 BIT(1) #define ICE_AQ_RECIPE_ACT_INV_ACT BIT(2) +#define ICE_AQ_RECIPE_ACT_PRUNE_INDX_S 4 +#define ICE_AQ_RECIPE_ACT_PRUNE_INDX_M (0x3 << ICE_AQ_RECIPE_ACT_PRUNE_INDX_S) u8 rsvd1; __le32 dflt_act; +#define ICE_AQ_RECIPE_DFLT_ACT_S 0 +#define ICE_AQ_RECIPE_DFLT_ACT_M (0x7FFFF << ICE_AQ_RECIPE_DFLT_ACT_S) +#define ICE_AQ_RECIPE_DFLT_ACT_VALID BIT(31) }; struct ice_aqc_recipe_data_elem { u8 recipe_indx; u8 resp_bits; +#define ICE_AQ_RECIPE_WAS_UPDATED BIT(0) u8 rsvd0[2]; u8 recipe_bitmap[8]; u8 rsvd1[4]; @@ -1906,7 +1963,7 @@ struct ice_aqc_get_clear_fw_log { }; /* Download Package (indirect 0x0C40) */ -/* Also used for Update Package (indirect 0x0C42) */ +/* Also used for Update Package (indirect 0x0C41 and 0x0C42) */ struct ice_aqc_download_pkg { u8 flags; #define ICE_AQC_DOWNLOAD_PKG_LAST_BUF 0x01 @@ -2032,6 +2089,7 @@ struct ice_aq_desc { struct ice_aqc_sff_eeprom read_write_sff_param; struct ice_aqc_set_port_id_led set_port_id_led; struct ice_aqc_get_sw_cfg get_sw_conf; + struct ice_aqc_set_port_params set_port_params; struct ice_aqc_sw_rules sw_rules; struct ice_aqc_add_get_recipe add_get_recipe; struct ice_aqc_recipe_to_profile recipe_to_profile; @@ -2135,10 +2193,13 @@ enum ice_adminq_opc { /* internal switch commands */ ice_aqc_opc_get_sw_cfg = 0x0200, + ice_aqc_opc_set_port_params = 0x0203, /* Alloc/Free/Get Resources */ ice_aqc_opc_alloc_res = 0x0208, ice_aqc_opc_free_res = 0x0209, + ice_aqc_opc_set_vlan_mode_parameters = 0x020C, + ice_aqc_opc_get_vlan_mode_parameters = 0x020D, /* VSI commands */ ice_aqc_opc_add_vsi = 0x0210, @@ -2230,6 +2291,7 @@ enum ice_adminq_opc { /* package commands */ ice_aqc_opc_download_pkg = 0x0C40, + ice_aqc_opc_upload_section = 0x0C41, ice_aqc_opc_update_pkg = 0x0C42, ice_aqc_opc_get_pkg_info_list = 0x0C43, diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c index 44ed1c9161dc..ede131189a8f 100644 --- a/drivers/net/ethernet/intel/ice/ice_common.c +++ b/drivers/net/ethernet/intel/ice/ice_common.c @@ -1518,16 +1518,27 @@ ice_aq_send_cmd(struct ice_hw *hw, struct ice_aq_desc *desc, void *buf, /* When a package download is in process (i.e. when the firmware's * Global Configuration Lock resource is held), only the Download - * Package, Get Version, Get Package Info List and Release Resource - * (with resource ID set to Global Config Lock) AdminQ commands are - * allowed; all others must block until the package download completes - * and the Global Config Lock is released. See also - * ice_acquire_global_cfg_lock(). 
+ * Package, Get Version, Get Package Info List, Upload Section, + * Update Package, Set Port Parameters, Get/Set VLAN Mode Parameters, + * Add Recipe, Set Recipes to Profile Association, Get Recipe, and Get + * Recipes to Profile Association, and Release Resource (with resource + * ID set to Global Config Lock) AdminQ commands are allowed; all others + * must block until the package download completes and the Global Config + * Lock is released. See also ice_acquire_global_cfg_lock(). */ switch (le16_to_cpu(desc->opcode)) { case ice_aqc_opc_download_pkg: case ice_aqc_opc_get_pkg_info_list: case ice_aqc_opc_get_ver: + case ice_aqc_opc_upload_section: + case ice_aqc_opc_update_pkg: + case ice_aqc_opc_set_port_params: + case ice_aqc_opc_get_vlan_mode_parameters: + case ice_aqc_opc_set_vlan_mode_parameters: + case ice_aqc_opc_add_recipe: + case ice_aqc_opc_recipe_to_profile: + case ice_aqc_opc_get_recipe: + case ice_aqc_opc_get_recipe_to_profile: break; case ice_aqc_opc_release_res: if (le16_to_cpu(cmd->res_id) == ICE_AQC_RES_ID_GLBL_LOCK) @@ -2737,6 +2748,34 @@ void ice_clear_pxe_mode(struct ice_hw *hw) ice_aq_clear_pxe_mode(hw); } +/** + * ice_aq_set_port_params - set physical port parameters. + * @pi: pointer to the port info struct + * @double_vlan: if set double VLAN is enabled + * @cd: pointer to command details structure or NULL + * + * Set Physical port parameters (0x0203) + */ +int +ice_aq_set_port_params(struct ice_port_info *pi, bool double_vlan, + struct ice_sq_cd *cd) + +{ + struct ice_aqc_set_port_params *cmd; + struct ice_hw *hw = pi->hw; + struct ice_aq_desc desc; + u16 cmd_flags = 0; + + cmd = &desc.params.set_port_params; + + ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_port_params); + if (double_vlan) + cmd_flags |= ICE_AQC_SET_P_PARAMS_DOUBLE_VLAN_ENA; + cmd->cmd_flags = cpu_to_le16(cmd_flags); + + return ice_aq_send_cmd(hw, &desc, NULL, 0, cd); +} + /** * ice_get_link_speed_based_on_phy_type - returns link speed * @phy_type_low: lower part of phy_type diff --git a/drivers/net/ethernet/intel/ice/ice_common.h b/drivers/net/ethernet/intel/ice/ice_common.h index 209a3cc113d4..893333b8b738 100644 --- a/drivers/net/ethernet/intel/ice/ice_common.h +++ b/drivers/net/ethernet/intel/ice/ice_common.h @@ -85,6 +85,9 @@ int ice_aq_send_driver_ver(struct ice_hw *hw, struct ice_driver_ver *dv, struct ice_sq_cd *cd); int +ice_aq_set_port_params(struct ice_port_info *pi, bool double_vlan, + struct ice_sq_cd *cd); +int ice_aq_get_phy_caps(struct ice_port_info *pi, bool qual_mods, u8 report_mode, struct ice_aqc_get_phy_caps_data *caps, struct ice_sq_cd *cd); diff --git a/drivers/net/ethernet/intel/ice/ice_flex_pipe.c b/drivers/net/ethernet/intel/ice/ice_flex_pipe.c index b197d3a72014..434169351052 100644 --- a/drivers/net/ethernet/intel/ice/ice_flex_pipe.c +++ b/drivers/net/ethernet/intel/ice/ice_flex_pipe.c @@ -5,9 +5,17 @@ #include "ice_flex_pipe.h" #include "ice_flow.h" +/* For supporting double VLAN mode, it is necessary to enable or disable certain + * boost tcam entries. The metadata labels names that match the following + * prefixes will be saved to allow enabling double VLAN mode. + */ +#define ICE_DVM_PRE "BOOST_MAC_VLAN_DVM" /* enable these entries */ +#define ICE_SVM_PRE "BOOST_MAC_VLAN_SVM" /* disable these entries */ + /* To support tunneling entries by PF, the package will append the PF number to * the label; for example TNL_VXLAN_PF0, TNL_VXLAN_PF1, TNL_VXLAN_PF2, etc. 
*/ +#define ICE_TNL_PRE "TNL_" static const struct ice_tunnel_type_scan tnls[] = { { TNL_VXLAN, "TNL_VXLAN_PF" }, { TNL_GENEVE, "TNL_GENEVE_PF" }, @@ -525,6 +533,55 @@ ice_enum_labels(struct ice_seg *ice_seg, u32 type, struct ice_pkg_enum *state, return label->name; } +/** + * ice_add_tunnel_hint + * @hw: pointer to the HW structure + * @label_name: label text + * @val: value of the tunnel port boost entry + */ +static void ice_add_tunnel_hint(struct ice_hw *hw, char *label_name, u16 val) +{ + if (hw->tnl.count < ICE_TUNNEL_MAX_ENTRIES) { + u16 i; + + for (i = 0; tnls[i].type != TNL_LAST; i++) { + size_t len = strlen(tnls[i].label_prefix); + + /* Look for matching label start, before continuing */ + if (strncmp(label_name, tnls[i].label_prefix, len)) + continue; + + /* Make sure this label matches our PF. Note that the PF + * character ('0' - '7') will be located where our + * prefix string's null terminator is located. + */ + if ((label_name[len] - '0') == hw->pf_id) { + hw->tnl.tbl[hw->tnl.count].type = tnls[i].type; + hw->tnl.tbl[hw->tnl.count].valid = false; + hw->tnl.tbl[hw->tnl.count].boost_addr = val; + hw->tnl.tbl[hw->tnl.count].port = 0; + hw->tnl.count++; + break; + } + } + } +} + +/** + * ice_add_dvm_hint + * @hw: pointer to the HW structure + * @val: value of the boost entry + * @enable: true if entry needs to be enabled, or false if needs to be disabled + */ +static void ice_add_dvm_hint(struct ice_hw *hw, u16 val, bool enable) +{ + if (hw->dvm_upd.count < ICE_DVM_MAX_ENTRIES) { + hw->dvm_upd.tbl[hw->dvm_upd.count].boost_addr = val; + hw->dvm_upd.tbl[hw->dvm_upd.count].enable = enable; + hw->dvm_upd.count++; + } +} + /** * ice_init_pkg_hints * @hw: pointer to the HW structure @@ -551,32 +608,23 @@ static void ice_init_pkg_hints(struct ice_hw *hw, struct ice_seg *ice_seg) label_name = ice_enum_labels(ice_seg, ICE_SID_LBL_RXPARSER_TMEM, &state, &val); - while (label_name && hw->tnl.count < ICE_TUNNEL_MAX_ENTRIES) { - for (i = 0; tnls[i].type != TNL_LAST; i++) { - size_t len = strlen(tnls[i].label_prefix); + while (label_name) { + if (!strncmp(label_name, ICE_TNL_PRE, strlen(ICE_TNL_PRE))) + /* check for a tunnel entry */ + ice_add_tunnel_hint(hw, label_name, val); - /* Look for matching label start, before continuing */ - if (strncmp(label_name, tnls[i].label_prefix, len)) - continue; + /* check for a dvm mode entry */ + else if (!strncmp(label_name, ICE_DVM_PRE, strlen(ICE_DVM_PRE))) + ice_add_dvm_hint(hw, val, true); - /* Make sure this label matches our PF. Note that the PF - * character ('0' - '7') will be located where our - * prefix string's null terminator is located. 
- */ - if ((label_name[len] - '0') == hw->pf_id) { - hw->tnl.tbl[hw->tnl.count].type = tnls[i].type; - hw->tnl.tbl[hw->tnl.count].valid = false; - hw->tnl.tbl[hw->tnl.count].boost_addr = val; - hw->tnl.tbl[hw->tnl.count].port = 0; - hw->tnl.count++; - break; - } - } + /* check for a svm mode entry */ + else if (!strncmp(label_name, ICE_SVM_PRE, strlen(ICE_SVM_PRE))) + ice_add_dvm_hint(hw, val, false); label_name = ice_enum_labels(NULL, 0, &state, &val); } - /* Cache the appropriate boost TCAM entry pointers */ + /* Cache the appropriate boost TCAM entry pointers for tunnels */ for (i = 0; i < hw->tnl.count; i++) { ice_find_boost_entry(ice_seg, hw->tnl.tbl[i].boost_addr, &hw->tnl.tbl[i].boost_entry); @@ -586,6 +634,11 @@ static void ice_init_pkg_hints(struct ice_hw *hw, struct ice_seg *ice_seg) hw->tnl.valid_count[hw->tnl.tbl[i].type]++; } } + + /* Cache the appropriate boost TCAM entry pointers for DVM and SVM */ + for (i = 0; i < hw->dvm_upd.count; i++) + ice_find_boost_entry(ice_seg, hw->dvm_upd.tbl[i].boost_addr, + &hw->dvm_upd.tbl[i].boost_entry); } /* Key creation */ @@ -876,6 +929,27 @@ ice_aq_download_pkg(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf, return status; } +/** + * ice_aq_upload_section + * @hw: pointer to the hardware structure + * @pkg_buf: the package buffer which will receive the section + * @buf_size: the size of the package buffer + * @cd: pointer to command details structure or NULL + * + * Upload Section (0x0C41) + */ +int +ice_aq_upload_section(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf, + u16 buf_size, struct ice_sq_cd *cd) +{ + struct ice_aq_desc desc; + + ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_upload_section); + desc.flags |= cpu_to_le16(ICE_AQ_FLAG_RD); + + return ice_aq_send_cmd(hw, &desc, pkg_buf, buf_size, cd); +} + /** * ice_aq_update_pkg * @hw: pointer to the hardware structure @@ -960,26 +1034,21 @@ ice_find_seg_in_pkg(struct ice_hw *hw, u32 seg_type, } /** - * ice_update_pkg + * ice_update_pkg_no_lock * @hw: pointer to the hardware structure * @bufs: pointer to an array of buffers * @count: the number of buffers in the array - * - * Obtains change lock and updates package. */ static int -ice_update_pkg(struct ice_hw *hw, struct ice_buf *bufs, u32 count) +ice_update_pkg_no_lock(struct ice_hw *hw, struct ice_buf *bufs, u32 count) { - u32 offset, info, i; - int status; - - status = ice_acquire_change_lock(hw, ICE_RES_WRITE); - if (status) - return status; + int status = 0; + u32 i; for (i = 0; i < count; i++) { struct ice_buf_hdr *bh = (struct ice_buf_hdr *)(bufs + i); bool last = ((i + 1) == count); + u32 offset, info; status = ice_aq_update_pkg(hw, bh, le16_to_cpu(bh->data_end), last, &offset, &info, NULL); @@ -991,6 +1060,27 @@ ice_update_pkg(struct ice_hw *hw, struct ice_buf *bufs, u32 count) } } + return status; +} + +/** + * ice_update_pkg + * @hw: pointer to the hardware structure + * @bufs: pointer to an array of buffers + * @count: the number of buffers in the array + * + * Obtains change lock and updates package. 
+ */ +static int ice_update_pkg(struct ice_hw *hw, struct ice_buf *bufs, u32 count) +{ + int status; + + status = ice_acquire_change_lock(hw, ICE_RES_WRITE); + if (status) + return status; + + status = ice_update_pkg_no_lock(hw, bufs, count); + ice_release_change_lock(hw); return status; @@ -1085,6 +1175,13 @@ ice_dwnld_cfg_bufs(struct ice_hw *hw, struct ice_buf *bufs, u32 count) break; } + if (!status) { + status = ice_set_vlan_mode(hw); + if (status) + ice_debug(hw, ICE_DBG_PKG, "Failed to set VLAN mode: err %d\n", + status); + } + ice_release_global_cfg_lock(hw); return state; @@ -1122,6 +1219,7 @@ static enum ice_ddp_state ice_download_pkg(struct ice_hw *hw, struct ice_seg *ice_seg) { struct ice_buf_table *ice_buf_tbl; + int status; ice_debug(hw, ICE_DBG_PKG, "Segment format version: %d.%d.%d.%d\n", ice_seg->hdr.seg_format_ver.major, @@ -1138,8 +1236,12 @@ ice_download_pkg(struct ice_hw *hw, struct ice_seg *ice_seg) ice_debug(hw, ICE_DBG_PKG, "Seg buf count: %d\n", le32_to_cpu(ice_buf_tbl->buf_count)); - return ice_dwnld_cfg_bufs(hw, ice_buf_tbl->buf_array, - le32_to_cpu(ice_buf_tbl->buf_count)); + status = ice_dwnld_cfg_bufs(hw, ice_buf_tbl->buf_array, + le32_to_cpu(ice_buf_tbl->buf_count)); + + ice_post_pkg_dwnld_vlan_mode_cfg(hw); + + return status; } /** @@ -1902,7 +2004,7 @@ void ice_init_prof_result_bm(struct ice_hw *hw) * * Frees a package buffer */ -static void ice_pkg_buf_free(struct ice_hw *hw, struct ice_buf_build *bld) +void ice_pkg_buf_free(struct ice_hw *hw, struct ice_buf_build *bld) { devm_kfree(ice_hw_to_dev(hw), bld); } @@ -2001,6 +2103,43 @@ ice_pkg_buf_alloc_section(struct ice_buf_build *bld, u32 type, u16 size) return NULL; } +/** + * ice_pkg_buf_alloc_single_section + * @hw: pointer to the HW structure + * @type: the section type value + * @size: the size of the section to reserve (in bytes) + * @section: returns pointer to the section + * + * Allocates a package buffer with a single section. + * Note: all package contents must be in Little Endian form. 
+ */ +struct ice_buf_build * +ice_pkg_buf_alloc_single_section(struct ice_hw *hw, u32 type, u16 size, + void **section) +{ + struct ice_buf_build *buf; + + if (!section) + return NULL; + + buf = ice_pkg_buf_alloc(hw); + if (!buf) + return NULL; + + if (ice_pkg_buf_reserve_section(buf, 1)) + goto ice_pkg_buf_alloc_single_section_err; + + *section = ice_pkg_buf_alloc_section(buf, type, size); + if (!*section) + goto ice_pkg_buf_alloc_single_section_err; + + return buf; + +ice_pkg_buf_alloc_single_section_err: + ice_pkg_buf_free(hw, buf); + return NULL; +} + /** * ice_pkg_buf_get_active_sections * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc()) @@ -2028,7 +2167,7 @@ static u16 ice_pkg_buf_get_active_sections(struct ice_buf_build *bld) * * Return a pointer to the buffer's header */ -static struct ice_buf *ice_pkg_buf(struct ice_buf_build *bld) +struct ice_buf *ice_pkg_buf(struct ice_buf_build *bld) { if (!bld) return NULL; @@ -2064,6 +2203,89 @@ ice_get_open_tunnel_port(struct ice_hw *hw, u16 *port, return res; } +/** + * ice_upd_dvm_boost_entry + * @hw: pointer to the HW structure + * @entry: pointer to double vlan boost entry info + */ +static int +ice_upd_dvm_boost_entry(struct ice_hw *hw, struct ice_dvm_entry *entry) +{ + struct ice_boost_tcam_section *sect_rx, *sect_tx; + int status = -ENOSPC; + struct ice_buf_build *bld; + u8 val, dc, nm; + + bld = ice_pkg_buf_alloc(hw); + if (!bld) + return -ENOMEM; + + /* allocate 2 sections, one for Rx parser, one for Tx parser */ + if (ice_pkg_buf_reserve_section(bld, 2)) + goto ice_upd_dvm_boost_entry_err; + + sect_rx = ice_pkg_buf_alloc_section(bld, ICE_SID_RXPARSER_BOOST_TCAM, + struct_size(sect_rx, tcam, 1)); + if (!sect_rx) + goto ice_upd_dvm_boost_entry_err; + sect_rx->count = cpu_to_le16(1); + + sect_tx = ice_pkg_buf_alloc_section(bld, ICE_SID_TXPARSER_BOOST_TCAM, + struct_size(sect_tx, tcam, 1)); + if (!sect_tx) + goto ice_upd_dvm_boost_entry_err; + sect_tx->count = cpu_to_le16(1); + + /* copy original boost entry to update package buffer */ + memcpy(sect_rx->tcam, entry->boost_entry, sizeof(*sect_rx->tcam)); + + /* re-write the don't care and never match bits accordingly */ + if (entry->enable) { + /* all bits are don't care */ + val = 0x00; + dc = 0xFF; + nm = 0x00; + } else { + /* disable, one never match bit, the rest are don't care */ + val = 0x00; + dc = 0xF7; + nm = 0x08; + } + + ice_set_key((u8 *)§_rx->tcam[0].key, sizeof(sect_rx->tcam[0].key), + &val, NULL, &dc, &nm, 0, sizeof(u8)); + + /* exact copy of entry to Tx section entry */ + memcpy(sect_tx->tcam, sect_rx->tcam, sizeof(*sect_tx->tcam)); + + status = ice_update_pkg_no_lock(hw, ice_pkg_buf(bld), 1); + +ice_upd_dvm_boost_entry_err: + ice_pkg_buf_free(hw, bld); + + return status; +} + +/** + * ice_set_dvm_boost_entries + * @hw: pointer to the HW structure + * + * Enable double vlan by updating the appropriate boost tcam entries. 
+ */ +int ice_set_dvm_boost_entries(struct ice_hw *hw) +{ + int status; + u16 i; + + for (i = 0; i < hw->dvm_upd.count; i++) { + status = ice_upd_dvm_boost_entry(hw, &hw->dvm_upd.tbl[i]); + if (status) + return status; + } + + return 0; +} + /** * ice_tunnel_idx_to_entry - convert linear index to the sparse one * @hw: pointer to the HW structure diff --git a/drivers/net/ethernet/intel/ice/ice_flex_pipe.h b/drivers/net/ethernet/intel/ice/ice_flex_pipe.h index dd602285c78e..4f0b151e9e9c 100644 --- a/drivers/net/ethernet/intel/ice/ice_flex_pipe.h +++ b/drivers/net/ethernet/intel/ice/ice_flex_pipe.h @@ -89,6 +89,12 @@ ice_init_prof_result_bm(struct ice_hw *hw); int ice_get_sw_fv_list(struct ice_hw *hw, u8 *prot_ids, u16 ids_cnt, unsigned long *bm, struct list_head *fv_list); +int +ice_pkg_buf_unreserve_section(struct ice_buf_build *bld, u16 count); +u16 ice_pkg_buf_get_free_space(struct ice_buf_build *bld); +int +ice_aq_upload_section(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf, + u16 buf_size, struct ice_sq_cd *cd); bool ice_get_open_tunnel_port(struct ice_hw *hw, u16 *port, enum ice_tunnel_type type); @@ -96,6 +102,7 @@ int ice_udp_tunnel_set_port(struct net_device *netdev, unsigned int table, unsigned int idx, struct udp_tunnel_info *ti); int ice_udp_tunnel_unset_port(struct net_device *netdev, unsigned int table, unsigned int idx, struct udp_tunnel_info *ti); +int ice_set_dvm_boost_entries(struct ice_hw *hw); /* Rx parser PTYPE functions */ bool ice_hw_ptype_ena(struct ice_hw *hw, u16 ptype); @@ -120,4 +127,10 @@ void ice_clear_hw_tbls(struct ice_hw *hw); void ice_free_hw_tbls(struct ice_hw *hw); int ice_rem_prof(struct ice_hw *hw, enum ice_block blk, u64 id); +struct ice_buf_build * +ice_pkg_buf_alloc_single_section(struct ice_hw *hw, u32 type, u16 size, + void **section); +struct ice_buf *ice_pkg_buf(struct ice_buf_build *bld); +void ice_pkg_buf_free(struct ice_hw *hw, struct ice_buf_build *bld); + #endif /* _ICE_FLEX_PIPE_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_flex_type.h b/drivers/net/ethernet/intel/ice/ice_flex_type.h index fc087e0b5292..5735e9542a49 100644 --- a/drivers/net/ethernet/intel/ice/ice_flex_type.h +++ b/drivers/net/ethernet/intel/ice/ice_flex_type.h @@ -162,6 +162,7 @@ struct ice_meta_sect { #define ICE_SID_RXPARSER_MARKER_PTYPE 55 #define ICE_SID_RXPARSER_BOOST_TCAM 56 +#define ICE_SID_RXPARSER_METADATA_INIT 58 #define ICE_SID_TXPARSER_BOOST_TCAM 66 #define ICE_SID_XLT0_PE 80 @@ -442,6 +443,19 @@ struct ice_tunnel_table { u16 valid_count[__TNL_TYPE_CNT]; }; +struct ice_dvm_entry { + u16 boost_addr; + u16 enable; + struct ice_boost_tcam_entry *boost_entry; +}; + +#define ICE_DVM_MAX_ENTRIES 48 + +struct ice_dvm_table { + struct ice_dvm_entry tbl[ICE_DVM_MAX_ENTRIES]; + u16 count; +}; + struct ice_pkg_es { __le16 count; __le16 offset; @@ -662,4 +676,30 @@ enum ice_prof_type { ICE_PROF_TUN_ALL = 0x6, ICE_PROF_ALL = 0xFF, }; + +/* Number of bits/bytes contained in meta init entry. Note, this should be a + * multiple of 32 bits. 
+ */ +#define ICE_META_INIT_BITS 192 +#define ICE_META_INIT_DW_CNT (ICE_META_INIT_BITS / (sizeof(__le32) * \ + BITS_PER_BYTE)) + +/* The meta init Flag field starts at this bit */ +#define ICE_META_FLAGS_ST 123 + +/* The entry and bit to check for Double VLAN Mode (DVM) support */ +#define ICE_META_VLAN_MODE_ENTRY 0 +#define ICE_META_FLAG_VLAN_MODE 60 +#define ICE_META_VLAN_MODE_BIT (ICE_META_FLAGS_ST + \ + ICE_META_FLAG_VLAN_MODE) + +struct ice_meta_init_entry { + __le32 bm[ICE_META_INIT_DW_CNT]; +}; + +struct ice_meta_init_section { + __le16 count; + __le16 offset; + struct ice_meta_init_entry entry; +}; #endif /* _ICE_FLEX_TYPE_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c index ff2b721e0e45..563b597b0a85 100644 --- a/drivers/net/ethernet/intel/ice/ice_main.c +++ b/drivers/net/ethernet/intel/ice/ice_main.c @@ -3555,12 +3555,17 @@ static int ice_tc_indir_block_register(struct ice_vsi *vsi) static int ice_setup_pf_sw(struct ice_pf *pf) { struct device *dev = ice_pf_to_dev(pf); + bool dvm = ice_is_dvm_ena(&pf->hw); struct ice_vsi *vsi; int status; if (ice_is_reset_in_progress(pf->state)) return -EBUSY; + status = ice_aq_set_port_params(pf->hw.port_info, dvm, NULL); + if (status) + return -EIO; + vsi = ice_pf_vsi_setup(pf, pf->hw.port_info); if (!vsi) return -ENOMEM; @@ -6649,6 +6654,7 @@ static void ice_rebuild(struct ice_pf *pf, enum ice_reset_req reset_type) { struct device *dev = ice_pf_to_dev(pf); struct ice_hw *hw = &pf->hw; + bool dvm; int err; if (test_bit(ICE_DOWN, pf->state)) @@ -6712,6 +6718,12 @@ static void ice_rebuild(struct ice_pf *pf, enum ice_reset_req reset_type) goto err_init_ctrlq; } + dvm = ice_is_dvm_ena(hw); + + err = ice_aq_set_port_params(pf->hw.port_info, dvm, NULL); + if (err) + goto err_init_ctrlq; + err = ice_sched_init_port(hw->port_info); if (err) goto err_sched_init_port; diff --git a/drivers/net/ethernet/intel/ice/ice_pf_vsi_vlan_ops.c b/drivers/net/ethernet/intel/ice/ice_pf_vsi_vlan_ops.c index b00360ca6e92..976a03d3bdd5 100644 --- a/drivers/net/ethernet/intel/ice/ice_pf_vsi_vlan_ops.c +++ b/drivers/net/ethernet/intel/ice/ice_pf_vsi_vlan_ops.c @@ -3,6 +3,7 @@ #include "ice_vsi_vlan_ops.h" #include "ice_vsi_vlan_lib.h" +#include "ice_vlan_mode.h" #include "ice.h" #include "ice_pf_vsi_vlan_ops.h" diff --git a/drivers/net/ethernet/intel/ice/ice_switch.c b/drivers/net/ethernet/intel/ice/ice_switch.c index f851a81a7240..04308e5fa224 100644 --- a/drivers/net/ethernet/intel/ice/ice_switch.c +++ b/drivers/net/ethernet/intel/ice/ice_switch.c @@ -1096,6 +1096,64 @@ ice_aq_get_recipe(struct ice_hw *hw, return status; } +/** + * ice_update_recipe_lkup_idx - update a default recipe based on the lkup_idx + * @hw: pointer to the HW struct + * @params: parameters used to update the default recipe + * + * This function only supports updating default recipes and it only supports + * updating a single recipe based on the lkup_idx at a time. + * + * This is done as a read-modify-write operation. First, get the current recipe + * contents based on the recipe's ID. Then modify the field vector index and + * mask if it's valid at the lkup_idx. Finally, use the add recipe AQ to update + * the pre-existing recipe with the modifications. 
+ */ +int +ice_update_recipe_lkup_idx(struct ice_hw *hw, + struct ice_update_recipe_lkup_idx_params *params) +{ + struct ice_aqc_recipe_data_elem *rcp_list; + u16 num_recps = ICE_MAX_NUM_RECIPES; + int status; + + rcp_list = kcalloc(num_recps, sizeof(*rcp_list), GFP_KERNEL); + if (!rcp_list) + return -ENOMEM; + + /* read current recipe list from firmware */ + rcp_list->recipe_indx = params->rid; + status = ice_aq_get_recipe(hw, rcp_list, &num_recps, params->rid, NULL); + if (status) { + ice_debug(hw, ICE_DBG_SW, "Failed to get recipe %d, status %d\n", + params->rid, status); + goto error_out; + } + + /* only modify existing recipe's lkup_idx and mask if valid, while + * leaving all other fields the same, then update the recipe firmware + */ + rcp_list->content.lkup_indx[params->lkup_idx] = params->fv_idx; + if (params->mask_valid) + rcp_list->content.mask[params->lkup_idx] = + cpu_to_le16(params->mask); + + if (params->ignore_valid) + rcp_list->content.lkup_indx[params->lkup_idx] |= + ICE_AQ_RECIPE_LKUP_IGNORE; + + status = ice_aq_add_recipe(hw, &rcp_list[0], 1, NULL); + if (status) + ice_debug(hw, ICE_DBG_SW, "Failed to update recipe %d lkup_idx %d fv_idx %d mask %d mask_valid %s, status %d\n", + params->rid, params->lkup_idx, params->fv_idx, + params->mask, params->mask_valid ? "true" : "false", + status); + +error_out: + kfree(rcp_list); + return status; +} + /** * ice_aq_map_recipe_to_profile - Map recipe to packet profile * @hw: pointer to the HW struct @@ -3873,6 +3931,23 @@ ice_find_recp(struct ice_hw *hw, struct ice_prot_lkup_ext *lkup_exts, return ICE_MAX_NUM_RECIPES; } +/** + * ice_change_proto_id_to_dvm - change proto id in prot_id_tbl + * + * As protocol id for outer vlan is different in dvm and svm, if dvm is + * supported protocol array record for outer vlan has to be modified to + * reflect the value proper for DVM. 
+ */ +void ice_change_proto_id_to_dvm(void) +{ + u8 i; + + for (i = 0; i < ARRAY_SIZE(ice_prot_id_tbl); i++) + if (ice_prot_id_tbl[i].type == ICE_VLAN_OFOS && + ice_prot_id_tbl[i].protocol_id != ICE_VLAN_OF_HW) + ice_prot_id_tbl[i].protocol_id = ICE_VLAN_OF_HW; +} + /** * ice_prot_type_to_id - get protocol ID from protocol type * @type: protocol type diff --git a/drivers/net/ethernet/intel/ice/ice_switch.h b/drivers/net/ethernet/intel/ice/ice_switch.h index 5000cc8276cd..7b42c51a3eb0 100644 --- a/drivers/net/ethernet/intel/ice/ice_switch.h +++ b/drivers/net/ethernet/intel/ice/ice_switch.h @@ -118,6 +118,15 @@ struct ice_fltr_info { u8 lan_en; /* Indicate if packet can be forwarded to the uplink */ }; +struct ice_update_recipe_lkup_idx_params { + u16 rid; + u16 fv_idx; + bool ignore_valid; + u16 mask; + bool mask_valid; + u8 lkup_idx; +}; + struct ice_adv_lkup_elem { enum ice_protocol_type type; union ice_prot_hdr h_u; /* Header values */ @@ -360,4 +369,8 @@ void ice_rm_all_sw_replay_rule_info(struct ice_hw *hw); int ice_aq_sw_rules(struct ice_hw *hw, void *rule_list, u16 rule_list_sz, u8 num_rules, enum ice_adminq_opc opc, struct ice_sq_cd *cd); +int +ice_update_recipe_lkup_idx(struct ice_hw *hw, + struct ice_update_recipe_lkup_idx_params *params); +void ice_change_proto_id_to_dvm(void); #endif /* _ICE_SWITCH_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_type.h b/drivers/net/ethernet/intel/ice/ice_type.h index ef2ef064a74c..1800aee88b9b 100644 --- a/drivers/net/ethernet/intel/ice/ice_type.h +++ b/drivers/net/ethernet/intel/ice/ice_type.h @@ -14,6 +14,7 @@ #include "ice_flex_type.h" #include "ice_protocol_type.h" #include "ice_sbq_cmd.h" +#include "ice_vlan_mode.h" static inline bool ice_is_tc_ena(unsigned long bitmap, u8 tc) { @@ -919,6 +920,9 @@ struct ice_hw { struct udp_tunnel_nic_shared udp_tunnel_shared; struct udp_tunnel_nic_info udp_tunnel_nic; + /* dvm boost update information */ + struct ice_dvm_table dvm_upd; + /* HW block tables */ struct ice_blk_info blk[ICE_BLK_COUNT]; struct mutex fl_profs_locks[ICE_BLK_COUNT]; /* lock fltr profiles */ @@ -942,6 +946,7 @@ struct ice_hw { struct list_head rss_list_head; struct ice_mbx_snapshot mbx_snapshot; DECLARE_BITMAP(hw_ptype, ICE_FLOW_PTYPE_MAX); + u8 dvm_ena; u16 io_expander_handle; }; diff --git a/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c b/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c index d89577843d68..4be29f97365c 100644 --- a/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c +++ b/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c @@ -3,6 +3,7 @@ #include "ice_vsi_vlan_ops.h" #include "ice_vsi_vlan_lib.h" +#include "ice_vlan_mode.h" #include "ice.h" #include "ice_vf_vsi_vlan_ops.h" #include "ice_virtchnl_pf.h" diff --git a/drivers/net/ethernet/intel/ice/ice_vlan_mode.c b/drivers/net/ethernet/intel/ice/ice_vlan_mode.c new file mode 100644 index 000000000000..1b618de592b7 --- /dev/null +++ b/drivers/net/ethernet/intel/ice/ice_vlan_mode.c @@ -0,0 +1,439 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (C) 2019-2021, Intel Corporation. 
*/ + +#include "ice_common.h" + +/** + * ice_pkg_get_supported_vlan_mode - determine if DDP supports Double VLAN mode + * @hw: pointer to the HW struct + * @dvm: output variable to determine if DDP supports DVM(true) or SVM(false) + */ +static int +ice_pkg_get_supported_vlan_mode(struct ice_hw *hw, bool *dvm) +{ + u16 meta_init_size = sizeof(struct ice_meta_init_section); + struct ice_meta_init_section *sect; + struct ice_buf_build *bld; + int status; + + /* if anything fails, we assume there is no DVM support */ + *dvm = false; + + bld = ice_pkg_buf_alloc_single_section(hw, + ICE_SID_RXPARSER_METADATA_INIT, + meta_init_size, (void **)§); + if (!bld) + return -ENOMEM; + + /* only need to read a single section */ + sect->count = cpu_to_le16(1); + sect->offset = cpu_to_le16(ICE_META_VLAN_MODE_ENTRY); + + status = ice_aq_upload_section(hw, + (struct ice_buf_hdr *)ice_pkg_buf(bld), + ICE_PKG_BUF_SIZE, NULL); + if (!status) { + DECLARE_BITMAP(entry, ICE_META_INIT_BITS); + u32 arr[ICE_META_INIT_DW_CNT]; + u16 i; + + /* convert to host bitmap format */ + for (i = 0; i < ICE_META_INIT_DW_CNT; i++) + arr[i] = le32_to_cpu(sect->entry.bm[i]); + + bitmap_from_arr32(entry, arr, (u16)ICE_META_INIT_BITS); + + /* check if DVM is supported */ + *dvm = test_bit(ICE_META_VLAN_MODE_BIT, entry); + } + + ice_pkg_buf_free(hw, bld); + + return status; +} + +/** + * ice_aq_get_vlan_mode - get the VLAN mode of the device + * @hw: pointer to the HW structure + * @get_params: structure FW fills in based on the current VLAN mode config + * + * Get VLAN Mode Parameters (0x020D) + */ +static int +ice_aq_get_vlan_mode(struct ice_hw *hw, + struct ice_aqc_get_vlan_mode *get_params) +{ + struct ice_aq_desc desc; + + if (!get_params) + return -EINVAL; + + ice_fill_dflt_direct_cmd_desc(&desc, + ice_aqc_opc_get_vlan_mode_parameters); + + return ice_aq_send_cmd(hw, &desc, get_params, sizeof(*get_params), + NULL); +} + +/** + * ice_aq_is_dvm_ena - query FW to check if double VLAN mode is enabled + * @hw: pointer to the HW structure + * + * Returns true if the hardware/firmware is configured in double VLAN mode, + * else return false signaling that the hardware/firmware is configured in + * single VLAN mode. + * + * Also, return false if this call fails for any reason (i.e. firmware doesn't + * support this AQ call). + */ +static bool ice_aq_is_dvm_ena(struct ice_hw *hw) +{ + struct ice_aqc_get_vlan_mode get_params = { 0 }; + int status; + + status = ice_aq_get_vlan_mode(hw, &get_params); + if (status) { + ice_debug(hw, ICE_DBG_AQ, "Failed to get VLAN mode, status %d\n", + status); + return false; + } + + return (get_params.vlan_mode & ICE_AQ_VLAN_MODE_DVM_ENA); +} + +/** + * ice_is_dvm_ena - check if double VLAN mode is enabled + * @hw: pointer to the HW structure + * + * The device is configured in single or double VLAN mode on initialization and + * this cannot be dynamically changed during runtime. Based on this there is no + * need to make an AQ call every time the driver needs to know the VLAN mode. + * Instead, use the cached VLAN mode. + */ +bool ice_is_dvm_ena(struct ice_hw *hw) +{ + return hw->dvm_ena; +} + +/** + * ice_cache_vlan_mode - cache VLAN mode after DDP is downloaded + * @hw: pointer to the HW structure + * + * This is only called after downloading the DDP and after the global + * configuration lock has been released because all ports on a device need to + * cache the VLAN mode. + */ +static void ice_cache_vlan_mode(struct ice_hw *hw) +{ + hw->dvm_ena = ice_aq_is_dvm_ena(hw) ? 
true : false; +} + +/** + * ice_pkg_supports_dvm - find out if DDP supports DVM + * @hw: pointer to the HW structure + */ +static bool ice_pkg_supports_dvm(struct ice_hw *hw) +{ + bool pkg_supports_dvm; + int status; + + status = ice_pkg_get_supported_vlan_mode(hw, &pkg_supports_dvm); + if (status) { + ice_debug(hw, ICE_DBG_PKG, "Failed to get supported VLAN mode, status %d\n", + status); + return false; + } + + return pkg_supports_dvm; +} + +/** + * ice_fw_supports_dvm - find out if FW supports DVM + * @hw: pointer to the HW structure + */ +static bool ice_fw_supports_dvm(struct ice_hw *hw) +{ + struct ice_aqc_get_vlan_mode get_vlan_mode = { 0 }; + int status; + + /* If firmware returns success, then it supports DVM, else it only + * supports SVM + */ + status = ice_aq_get_vlan_mode(hw, &get_vlan_mode); + if (status) { + ice_debug(hw, ICE_DBG_NVM, "Failed to get VLAN mode, status %d\n", + status); + return false; + } + + return true; +} + +/** + * ice_is_dvm_supported - check if Double VLAN Mode is supported + * @hw: pointer to the hardware structure + * + * Returns true if Double VLAN Mode (DVM) is supported and false if only Single + * VLAN Mode (SVM) is supported. In order for DVM to be supported the DDP and + * firmware must support it, otherwise only SVM is supported. This function + * should only be called while the global config lock is held and after the + * package has been successfully downloaded. + */ +static bool ice_is_dvm_supported(struct ice_hw *hw) +{ + if (!ice_pkg_supports_dvm(hw)) { + ice_debug(hw, ICE_DBG_PKG, "DDP doesn't support DVM\n"); + return false; + } + + if (!ice_fw_supports_dvm(hw)) { + ice_debug(hw, ICE_DBG_PKG, "FW doesn't support DVM\n"); + return false; + } + + return true; +} + +#define ICE_EXTERNAL_VLAN_ID_FV_IDX 11 +#define ICE_SW_LKUP_VLAN_LOC_LKUP_IDX 1 +#define ICE_SW_LKUP_VLAN_PKT_FLAGS_LKUP_IDX 2 +#define ICE_SW_LKUP_PROMISC_VLAN_LOC_LKUP_IDX 2 +#define ICE_PKT_FLAGS_0_TO_15_FV_IDX 1 +#define ICE_PKT_FLAGS_0_TO_15_VLAN_FLAGS_MASK 0xD000 +static struct ice_update_recipe_lkup_idx_params ice_dvm_dflt_recipes[] = { + { + /* Update recipe ICE_SW_LKUP_VLAN to filter based on the + * outer/single VLAN in DVM + */ + .rid = ICE_SW_LKUP_VLAN, + .fv_idx = ICE_EXTERNAL_VLAN_ID_FV_IDX, + .ignore_valid = true, + .mask = 0, + .mask_valid = false, /* use pre-existing mask */ + .lkup_idx = ICE_SW_LKUP_VLAN_LOC_LKUP_IDX, + }, + { + /* Update recipe ICE_SW_LKUP_VLAN to filter based on the VLAN + * packet flags to support VLAN filtering on multiple VLAN + * ethertypes (i.e. 
0x8100 and 0x88a8) in DVM + */ + .rid = ICE_SW_LKUP_VLAN, + .fv_idx = ICE_PKT_FLAGS_0_TO_15_FV_IDX, + .ignore_valid = false, + .mask = ICE_PKT_FLAGS_0_TO_15_VLAN_FLAGS_MASK, + .mask_valid = true, + .lkup_idx = ICE_SW_LKUP_VLAN_PKT_FLAGS_LKUP_IDX, + }, + { + /* Update recipe ICE_SW_LKUP_PROMISC_VLAN to filter based on the + * outer/single VLAN in DVM + */ + .rid = ICE_SW_LKUP_PROMISC_VLAN, + .fv_idx = ICE_EXTERNAL_VLAN_ID_FV_IDX, + .ignore_valid = true, + .mask = 0, + .mask_valid = false, /* use pre-existing mask */ + .lkup_idx = ICE_SW_LKUP_PROMISC_VLAN_LOC_LKUP_IDX, + }, +}; + +/** + * ice_dvm_update_dflt_recipes - update default switch recipes in DVM + * @hw: hardware structure used to update the recipes + */ +static int ice_dvm_update_dflt_recipes(struct ice_hw *hw) +{ + unsigned long i; + + for (i = 0; i < ARRAY_SIZE(ice_dvm_dflt_recipes); i++) { + struct ice_update_recipe_lkup_idx_params *params; + int status; + + params = &ice_dvm_dflt_recipes[i]; + + status = ice_update_recipe_lkup_idx(hw, params); + if (status) { + ice_debug(hw, ICE_DBG_INIT, "Failed to update RID %d lkup_idx %d fv_idx %d mask_valid %s mask 0x%04x\n", + params->rid, params->lkup_idx, params->fv_idx, + params->mask_valid ? "true" : "false", + params->mask); + return status; + } + } + + return 0; +} + +/** + * ice_aq_set_vlan_mode - set the VLAN mode of the device + * @hw: pointer to the HW structure + * @set_params: requested VLAN mode configuration + * + * Set VLAN Mode Parameters (0x020C) + */ +static int +ice_aq_set_vlan_mode(struct ice_hw *hw, + struct ice_aqc_set_vlan_mode *set_params) +{ + u8 rdma_packet, mng_vlan_prot_id; + struct ice_aq_desc desc; + + if (!set_params) + return -EINVAL; + + if (set_params->l2tag_prio_tagging > ICE_AQ_VLAN_PRIO_TAG_MAX) + return -EINVAL; + + rdma_packet = set_params->rdma_packet; + if (rdma_packet != ICE_AQ_SVM_VLAN_RDMA_PKT_FLAG_SETTING && + rdma_packet != ICE_AQ_DVM_VLAN_RDMA_PKT_FLAG_SETTING) + return -EINVAL; + + mng_vlan_prot_id = set_params->mng_vlan_prot_id; + if (mng_vlan_prot_id != ICE_AQ_VLAN_MNG_PROTOCOL_ID_OUTER && + mng_vlan_prot_id != ICE_AQ_VLAN_MNG_PROTOCOL_ID_INNER) + return -EINVAL; + + ice_fill_dflt_direct_cmd_desc(&desc, + ice_aqc_opc_set_vlan_mode_parameters); + desc.flags |= cpu_to_le16(ICE_AQ_FLAG_RD); + + return ice_aq_send_cmd(hw, &desc, set_params, sizeof(*set_params), + NULL); +} + +/** + * ice_set_dvm - sets up software and hardware for double VLAN mode + * @hw: pointer to the hardware structure + */ +static int ice_set_dvm(struct ice_hw *hw) +{ + struct ice_aqc_set_vlan_mode params = { 0 }; + int status; + + params.l2tag_prio_tagging = ICE_AQ_VLAN_PRIO_TAG_OUTER_CTAG; + params.rdma_packet = ICE_AQ_DVM_VLAN_RDMA_PKT_FLAG_SETTING; + params.mng_vlan_prot_id = ICE_AQ_VLAN_MNG_PROTOCOL_ID_OUTER; + + status = ice_aq_set_vlan_mode(hw, ¶ms); + if (status) { + ice_debug(hw, ICE_DBG_INIT, "Failed to set double VLAN mode parameters, status %d\n", + status); + return status; + } + + status = ice_dvm_update_dflt_recipes(hw); + if (status) { + ice_debug(hw, ICE_DBG_INIT, "Failed to update default recipes for double VLAN mode, status %d\n", + status); + return status; + } + + status = ice_aq_set_port_params(hw->port_info, true, NULL); + if (status) { + ice_debug(hw, ICE_DBG_INIT, "Failed to set port in double VLAN mode, status %d\n", + status); + return status; + } + + status = ice_set_dvm_boost_entries(hw); + if (status) { + ice_debug(hw, ICE_DBG_INIT, "Failed to set boost TCAM entries for double VLAN mode, status %d\n", + status); + return status; + } + + 
return 0; +} + +/** + * ice_set_svm - set single VLAN mode + * @hw: pointer to the HW structure + */ +static int ice_set_svm(struct ice_hw *hw) +{ + struct ice_aqc_set_vlan_mode *set_params; + int status; + + status = ice_aq_set_port_params(hw->port_info, false, NULL); + if (status) { + ice_debug(hw, ICE_DBG_INIT, "Failed to set port parameters for single VLAN mode\n"); + return status; + } + + set_params = devm_kzalloc(ice_hw_to_dev(hw), sizeof(*set_params), + GFP_KERNEL); + if (!set_params) + return -ENOMEM; + + /* default configuration for SVM configurations */ + set_params->l2tag_prio_tagging = ICE_AQ_VLAN_PRIO_TAG_INNER_CTAG; + set_params->rdma_packet = ICE_AQ_SVM_VLAN_RDMA_PKT_FLAG_SETTING; + set_params->mng_vlan_prot_id = ICE_AQ_VLAN_MNG_PROTOCOL_ID_INNER; + + status = ice_aq_set_vlan_mode(hw, set_params); + if (status) + ice_debug(hw, ICE_DBG_INIT, "Failed to configure port in single VLAN mode\n"); + + devm_kfree(ice_hw_to_dev(hw), set_params); + return status; +} + +/** + * ice_set_vlan_mode + * @hw: pointer to the HW structure + */ +int ice_set_vlan_mode(struct ice_hw *hw) +{ + if (!ice_is_dvm_supported(hw)) + return 0; + + if (!ice_set_dvm(hw)) + return 0; + + return ice_set_svm(hw); +} + +/** + * ice_print_dvm_not_supported - print if DDP and/or FW doesn't support DVM + * @hw: pointer to the HW structure + * + * The purpose of this function is to print that QinQ is not supported due to + * incompatibilty from the DDP and/or FW. This will give a hint to the user to + * update one and/or both components if they expect QinQ functionality. + */ +static void ice_print_dvm_not_supported(struct ice_hw *hw) +{ + bool pkg_supports_dvm = ice_pkg_supports_dvm(hw); + bool fw_supports_dvm = ice_fw_supports_dvm(hw); + + if (!fw_supports_dvm && !pkg_supports_dvm) + dev_info(ice_hw_to_dev(hw), "QinQ functionality cannot be enabled on this device. Update your DDP package and NVM to versions that support QinQ.\n"); + else if (!pkg_supports_dvm) + dev_info(ice_hw_to_dev(hw), "QinQ functionality cannot be enabled on this device. Update your DDP package to a version that supports QinQ.\n"); + else if (!fw_supports_dvm) + dev_info(ice_hw_to_dev(hw), "QinQ functionality cannot be enabled on this device. Update your NVM to a version that supports QinQ.\n"); +} + +/** + * ice_post_pkg_dwnld_vlan_mode_cfg - configure VLAN mode after DDP download + * @hw: pointer to the HW structure + * + * This function is meant to configure any VLAN mode specific functionality + * after the global configuration lock has been released and the DDP has been + * downloaded. + * + * Since only one PF downloads the DDP and configures the VLAN mode there needs + * to be a way to configure the other PFs after the DDP has been downloaded and + * the global configuration lock has been released. All such code should go in + * this function. + */ +void ice_post_pkg_dwnld_vlan_mode_cfg(struct ice_hw *hw) +{ + ice_cache_vlan_mode(hw); + + if (ice_is_dvm_ena(hw)) + ice_change_proto_id_to_dvm(); + else + ice_print_dvm_not_supported(hw); +} diff --git a/drivers/net/ethernet/intel/ice/ice_vlan_mode.h b/drivers/net/ethernet/intel/ice/ice_vlan_mode.h new file mode 100644 index 000000000000..a0fb743d08e2 --- /dev/null +++ b/drivers/net/ethernet/intel/ice/ice_vlan_mode.h @@ -0,0 +1,13 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2019-2021, Intel Corporation. 
*/ + +#ifndef _ICE_VLAN_MODE_H_ +#define _ICE_VLAN_MODE_H_ + +struct ice_hw; + +bool ice_is_dvm_ena(struct ice_hw *hw); +int ice_set_vlan_mode(struct ice_hw *hw); +void ice_post_pkg_dwnld_vlan_mode_cfg(struct ice_hw *hw); + +#endif /* _ICE_VLAN_MODE_H */ diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c index 62a2630d6fab..5b4a0abb4607 100644 --- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c @@ -39,20 +39,20 @@ static bool validate_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) */ int ice_vsi_add_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) { - int err = 0; + int err; if (!validate_vlan(vsi, vlan)) return -EINVAL; - if (!ice_fltr_add_vlan(vsi, vlan)) { - vsi->num_vlan++; - } else { - err = -ENODEV; - dev_err(ice_pf_to_dev(vsi->back), "Failure Adding VLAN %d on VSI %i\n", - vlan->vid, vsi->vsi_num); + err = ice_fltr_add_vlan(vsi, vlan); + if (err && err != -EEXIST) { + dev_err(ice_pf_to_dev(vsi->back), "Failure Adding VLAN %d on VSI %i, status %d\n", + vlan->vid, vsi->vsi_num, err); + return err; } - return err; + vsi->num_vlan++; + return 0; } /** @@ -72,16 +72,13 @@ int ice_vsi_del_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) dev = ice_pf_to_dev(pf); err = ice_fltr_remove_vlan(vsi, vlan); - if (!err) { + if (!err) vsi->num_vlan--; - } else if (err == -ENOENT) { - dev_dbg(dev, "Failed to remove VLAN %d on VSI %i, it does not exist\n", - vlan->vid, vsi->vsi_num); + else if (err == -ENOENT || err == -EBUSY) err = 0; - } else { + else dev_err(dev, "Error removing VLAN %d on VSI %i error: %d\n", vlan->vid, vsi->vsi_num, err); - } return err; } diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h index 30d02d2b8e5f..5b47568f6256 100644 --- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h @@ -23,11 +23,6 @@ struct ice_vsi_vlan_ops { int (*set_port_vlan)(struct ice_vsi *vsi, struct ice_vlan *vlan); }; -static inline bool ice_is_dvm_ena(struct ice_hw __always_unused *hw) -{ - return false; -} - void ice_vsi_init_vlan_ops(struct ice_vsi *vsi); struct ice_vsi_vlan_ops *ice_get_compat_vsi_vlan_ops(struct ice_vsi *vsi); -- 2.20.1 From anthony.l.nguyen at intel.com Tue Nov 30 23:51:42 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Tue, 30 Nov 2021 15:51:42 -0800 Subject: [Intel-wired-lan] [PATCH net-next v2 10/14] ice: Add support for VIRTCHNL_VF_OFFLOAD_VLAN_V2 In-Reply-To: <20211130235146.28731-1-anthony.l.nguyen@intel.com> References: <20211130235146.28731-1-anthony.l.nguyen@intel.com> Message-ID: <20211130235146.28731-10-anthony.l.nguyen@intel.com> From: Brett Creeley Add support for the VF driver to be able to request VIRTCHNL_VF_OFFLOAD_VLAN_V2, negotiate its VLAN capabilities via VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS, add/delete VLAN filters, and enable/disable VLAN offloads. VFs supporting VIRTCHNL_OFFLOAD_VLAN_V2 will be able to use the following virtchnl opcodes: VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS VIRTCHNL_OP_ADD_VLAN_V2 VIRTCHNL_OP_DEL_VLAN_V2 VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2 VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2 VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2 Legacy VF drivers may expect the initial VLAN stripping settings to be configured by the PF, so the PF initializes VLAN stripping based on the VIRTCHNL_OP_GET_VF_RESOURCES opcode. 
However, with VLAN support via VIRTCHNL_VF_OFFLOAD_VLAN_V2, this function is only expected to be used for VFs that only support VIRTCHNL_VF_OFFLOAD_VLAN, which will only be supported when a port VLAN is configured. Update the function based on the new expectations. Also, change the message when the PF can't enable/disable VLAN stripping to a dev_dbg() as this isn't fatal. When a VF isn't in a port VLAN and it only supports VIRTCHNL_VF_OFFLOAD_VLAN when Double VLAN Mode (DVM) is enabled, then the PF needs to reject the VIRTCHNL_VF_OFFLOAD_VLAN capability and configure the VF in software only VLAN mode. To do this add the new function ice_vf_vsi_cfg_legacy_vlan_mode(), which updates the VF's inner and outer ice_vsi_vlan_ops functions and sets up software only VLAN mode. Signed-off-by: Brett Creeley Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/ice/ice_base.c | 1 + .../ethernet/intel/ice/ice_vf_vsi_vlan_ops.c | 115 ++ .../ethernet/intel/ice/ice_vf_vsi_vlan_ops.h | 3 + .../intel/ice/ice_virtchnl_allowlist.c | 10 + .../net/ethernet/intel/ice/ice_virtchnl_pf.c | 1132 ++++++++++++++++- .../net/ethernet/intel/ice/ice_virtchnl_pf.h | 8 + 6 files changed, 1226 insertions(+), 43 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c index 9ca0ae2bb1dc..0dec7c5463eb 100644 --- a/drivers/net/ethernet/intel/ice/ice_base.c +++ b/drivers/net/ethernet/intel/ice/ice_base.c @@ -5,6 +5,7 @@ #include "ice_base.h" #include "ice_lib.h" #include "ice_dcb_lib.h" +#include "ice_virtchnl_pf.h" /** * __ice_vsi_get_qs_contig - Assign a contiguous chunk of queues to VSI diff --git a/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c b/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c index 741b041606a2..d89577843d68 100644 --- a/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c +++ b/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c @@ -14,9 +14,20 @@ noop_vlan_arg(struct ice_vsi __always_unused *vsi, return 0; } +static int +noop_vlan(struct ice_vsi __always_unused *vsi) +{ + return 0; +} + /** * ice_vf_vsi_init_vlan_ops - Initialize default VSI VLAN ops for VF VSI * @vsi: VF's VSI being configured + * + * If Double VLAN Mode (DVM) is enabled, assume that the VF supports the new + * VIRTCHNL_VF_VLAN_OFFLOAD_V2 capability and set up the VLAN ops accordingly. + * If SVM is enabled maintain the same level of VLAN support previous to + * VIRTCHNL_VF_VLAN_OFFLOAD_V2. 
*/ void ice_vf_vsi_init_vlan_ops(struct ice_vsi *vsi) { @@ -44,6 +55,20 @@ void ice_vf_vsi_init_vlan_ops(struct ice_vsi *vsi) vlan_ops = &vsi->inner_vlan_ops; vlan_ops->add_vlan = noop_vlan_arg; vlan_ops->del_vlan = noop_vlan_arg; + vlan_ops->ena_stripping = ice_vsi_ena_inner_stripping; + vlan_ops->dis_stripping = ice_vsi_dis_inner_stripping; + vlan_ops->ena_insertion = ice_vsi_ena_inner_insertion; + vlan_ops->dis_insertion = ice_vsi_dis_inner_insertion; + } else { + vlan_ops->del_vlan = ice_vsi_del_vlan; + vlan_ops->ena_stripping = ice_vsi_ena_outer_stripping; + vlan_ops->dis_stripping = ice_vsi_dis_outer_stripping; + vlan_ops->ena_insertion = ice_vsi_ena_outer_insertion; + vlan_ops->dis_insertion = ice_vsi_dis_outer_insertion; + + /* setup inner VLAN ops */ + vlan_ops = &vsi->inner_vlan_ops; + vlan_ops->ena_stripping = ice_vsi_ena_inner_stripping; vlan_ops->dis_stripping = ice_vsi_dis_inner_stripping; vlan_ops->ena_insertion = ice_vsi_ena_inner_insertion; @@ -70,3 +95,93 @@ void ice_vf_vsi_init_vlan_ops(struct ice_vsi *vsi) } } } + +/** + * ice_vf_vsi_cfg_dvm_legacy_vlan_mode - Config VLAN mode for old VFs in DVM + * @vsi: VF's VSI being configured + * + * This should only be called when Double VLAN Mode (DVM) is enabled, there + * is not a port VLAN enabled on this VF, and the VF negotiates + * VIRTCHNL_VF_OFFLOAD_VLAN. + * + * This function sets up the VF VSI's inner and outer ice_vsi_vlan_ops and also + * initializes software only VLAN mode (i.e. allow all VLANs). Also, use no-op + * implementations for any functions that may be called during the lifetime of + * the VF so these methods do nothing and succeed. + */ +void ice_vf_vsi_cfg_dvm_legacy_vlan_mode(struct ice_vsi *vsi) +{ + struct ice_vf *vf = &vsi->back->vf[vsi->vf_id]; + struct device *dev = ice_pf_to_dev(vf->pf); + struct ice_vsi_vlan_ops *vlan_ops; + + if (!ice_is_dvm_ena(&vsi->back->hw) || ice_vf_is_port_vlan_ena(vf)) + return; + + vlan_ops = &vsi->outer_vlan_ops; + + /* Rx VLAN filtering always disabled to allow software offloaded VLANs + * for VFs that only support VIRTCHNL_VF_OFFLOAD_VLAN and don't have a + * port VLAN configured + */ + vlan_ops->dis_rx_filtering = ice_vsi_dis_rx_vlan_filtering; + /* Don't fail when attempting to enable Rx VLAN filtering */ + vlan_ops->ena_rx_filtering = noop_vlan; + + /* Tx VLAN filtering always disabled to allow software offloaded VLANs + * for VFs that only support VIRTCHNL_VF_OFFLOAD_VLAN and don't have a + * port VLAN configured + */ + vlan_ops->dis_tx_filtering = ice_vsi_dis_tx_vlan_filtering; + /* Don't fail when attempting to enable Tx VLAN filtering */ + vlan_ops->ena_tx_filtering = noop_vlan; + + if (vlan_ops->dis_rx_filtering(vsi)) + dev_dbg(dev, "Failed to disable Rx VLAN filtering for old VF without VIRTCHNL_VF_OFFLOAD_VLAN_V2 support\n"); + if (vlan_ops->dis_tx_filtering(vsi)) + dev_dbg(dev, "Failed to disable Tx VLAN filtering for old VF without VIRTHCNL_VF_OFFLOAD_VLAN_V2 support\n"); + + /* All outer VLAN offloads must be disabled */ + vlan_ops->dis_stripping = ice_vsi_dis_outer_stripping; + vlan_ops->dis_insertion = ice_vsi_dis_outer_insertion; + + if (vlan_ops->dis_stripping(vsi)) + dev_dbg(dev, "Failed to disable outer VLAN stripping for old VF without VIRTCHNL_VF_OFFLOAD_VLAN_V2 support\n"); + + if (vlan_ops->dis_insertion(vsi)) + dev_dbg(dev, "Failed to disable outer VLAN insertion for old VF without VIRTCHNL_VF_OFFLOAD_VLAN_V2 support\n"); + + /* All inner VLAN offloads must be disabled */ + vlan_ops = &vsi->inner_vlan_ops; + + vlan_ops->dis_stripping = 
ice_vsi_dis_outer_stripping; + vlan_ops->dis_insertion = ice_vsi_dis_outer_insertion; + + if (vlan_ops->dis_stripping(vsi)) + dev_dbg(dev, "Failed to disable inner VLAN stripping for old VF without VIRTCHNL_VF_OFFLOAD_VLAN_V2 support\n"); + + if (vlan_ops->dis_insertion(vsi)) + dev_dbg(dev, "Failed to disable inner VLAN insertion for old VF without VIRTCHNL_VF_OFFLOAD_VLAN_V2 support\n"); +} + +/** + * ice_vf_vsi_cfg_svm_legacy_vlan_mode - Config VLAN mode for old VFs in SVM + * @vsi: VF's VSI being configured + * + * This should only be called when Single VLAN Mode (SVM) is enabled, there is + * not a port VLAN enabled on this VF, and the VF negotiates + * VIRTCHNL_VF_OFFLOAD_VLAN. + * + * All of the normal SVM VLAN ops are identical for this case. However, by + * default Rx VLAN filtering should be turned off by default in this case. + */ +void ice_vf_vsi_cfg_svm_legacy_vlan_mode(struct ice_vsi *vsi) +{ + struct ice_vf *vf = &vsi->back->vf[vsi->vf_id]; + + if (ice_is_dvm_ena(&vsi->back->hw) || ice_vf_is_port_vlan_ena(vf)) + return; + + if (vsi->inner_vlan_ops.dis_rx_filtering(vsi)) + dev_dbg(ice_pf_to_dev(vf->pf), "Failed to disable Rx VLAN filtering for old VF with VIRTCHNL_VF_OFFLOAD_VLAN support\n"); +} diff --git a/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.h b/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.h index 8ea13628a5e1..875a4e615f39 100644 --- a/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.h +++ b/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.h @@ -8,6 +8,9 @@ struct ice_vsi; +void ice_vf_vsi_cfg_dvm_legacy_vlan_mode(struct ice_vsi *vsi); +void ice_vf_vsi_cfg_svm_legacy_vlan_mode(struct ice_vsi *vsi); + #ifdef CONFIG_PCI_IOV void ice_vf_vsi_init_vlan_ops(struct ice_vsi *vsi); #else diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_allowlist.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_allowlist.c index 9feebe5f556c..5a82216e7d03 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_allowlist.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_allowlist.c @@ -55,6 +55,15 @@ static const u32 vlan_allowlist_opcodes[] = { VIRTCHNL_OP_ENABLE_VLAN_STRIPPING, VIRTCHNL_OP_DISABLE_VLAN_STRIPPING, }; +/* VIRTCHNL_VF_OFFLOAD_VLAN_V2 */ +static const u32 vlan_v2_allowlist_opcodes[] = { + VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS, VIRTCHNL_OP_ADD_VLAN_V2, + VIRTCHNL_OP_DEL_VLAN_V2, VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2, + VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2, + VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2, + VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2, +}; + /* VIRTCHNL_VF_OFFLOAD_RSS_PF */ static const u32 rss_pf_allowlist_opcodes[] = { VIRTCHNL_OP_CONFIG_RSS_KEY, VIRTCHNL_OP_CONFIG_RSS_LUT, @@ -89,6 +98,7 @@ static const struct allowlist_opcode_info allowlist_opcodes[] = { ALLOW_ITEM(VIRTCHNL_VF_OFFLOAD_RSS_PF, rss_pf_allowlist_opcodes), ALLOW_ITEM(VIRTCHNL_VF_OFFLOAD_ADV_RSS_PF, adv_rss_pf_allowlist_opcodes), ALLOW_ITEM(VIRTCHNL_VF_OFFLOAD_FDIR_PF, fdir_pf_allowlist_opcodes), + ALLOW_ITEM(VIRTCHNL_VF_OFFLOAD_VLAN_V2, vlan_v2_allowlist_opcodes), }; /** diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c index 100c86c8ad9a..de74a2b4f846 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c @@ -11,6 +11,7 @@ #include "ice_virtchnl_allowlist.h" #include "ice_flex_pipe.h" #include "ice_vf_vsi_vlan_ops.h" +#include "ice_vlan.h" #define FIELD_SELECTOR(proto_hdr_field) \ BIT((proto_hdr_field) & PROTO_HDR_FIELD_MASK) @@ -1458,6 +1459,7 @@ static void 
ice_vf_set_initialized(struct ice_vf *vf) clear_bit(ICE_VF_STATE_UC_PROMISC, vf->vf_states); clear_bit(ICE_VF_STATE_DIS, vf->vf_states); set_bit(ICE_VF_STATE_INIT, vf->vf_states); + memset(&vf->vlan_v2_caps, 0, sizeof(vf->vlan_v2_caps)); } /** @@ -2347,8 +2349,33 @@ static int ice_vc_get_vf_res_msg(struct ice_vf *vf, u8 *msg) goto err; } - if (!ice_vf_is_port_vlan_ena(vf)) - vfres->vf_cap_flags |= VIRTCHNL_VF_OFFLOAD_VLAN; + if (vf->driver_caps & VIRTCHNL_VF_OFFLOAD_VLAN_V2) { + /* VLAN offloads based on current device configuration */ + vfres->vf_cap_flags |= VIRTCHNL_VF_OFFLOAD_VLAN_V2; + } else if (vf->driver_caps & VIRTCHNL_VF_OFFLOAD_VLAN) { + /* allow VF to negotiate VIRTCHNL_VF_OFFLOAD explicitly for + * these two conditions, which amounts to guest VLAN filtering + * and offloads being based on the inner VLAN or the + * inner/single VLAN respectively and don't allow VF to + * negotiate VIRTCHNL_VF_OFFLOAD in any other cases + */ + if (ice_is_dvm_ena(&pf->hw) && ice_vf_is_port_vlan_ena(vf)) { + vfres->vf_cap_flags |= VIRTCHNL_VF_OFFLOAD_VLAN; + } else if (!ice_is_dvm_ena(&pf->hw) && + !ice_vf_is_port_vlan_ena(vf)) { + vfres->vf_cap_flags |= VIRTCHNL_VF_OFFLOAD_VLAN; + /* configure backward compatible support for VFs that + * only support VIRTCHNL_VF_OFFLOAD_VLAN, the PF is + * configured in SVM, and no port VLAN is configured + */ + ice_vf_vsi_cfg_svm_legacy_vlan_mode(vsi); + } else if (ice_is_dvm_ena(&pf->hw)) { + /* configure software offloaded VLAN support when DVM + * is enabled, but no port VLAN is enabled + */ + ice_vf_vsi_cfg_dvm_legacy_vlan_mode(vsi); + } + } if (vf->driver_caps & VIRTCHNL_VF_OFFLOAD_RSS_PF) { vfres->vf_cap_flags |= VIRTCHNL_VF_OFFLOAD_RSS_PF; @@ -4175,6 +4202,62 @@ static bool ice_vf_vlan_offload_ena(u32 caps) return !!(caps & VIRTCHNL_VF_OFFLOAD_VLAN); } +/** + * ice_is_vlan_promisc_allowed - check if VLAN promiscuous config is allowed + * @vf: VF used to determine if VLAN promiscuous config is allowed + */ +static bool ice_is_vlan_promisc_allowed(struct ice_vf *vf) +{ + if ((test_bit(ICE_VF_STATE_UC_PROMISC, vf->vf_states) || + test_bit(ICE_VF_STATE_MC_PROMISC, vf->vf_states)) && + test_bit(ICE_FLAG_VF_TRUE_PROMISC_ENA, vf->pf->flags)) + return true; + + return false; +} + +/** + * ice_vf_ena_vlan_promisc - Enable Tx/Rx VLAN promiscuous for the VLAN + * @vsi: VF's VSI used to enable VLAN promiscuous mode + * @vlan: VLAN used to enable VLAN promiscuous + * + * This function should only be called if VLAN promiscuous mode is allowed, + * which can be determined via ice_is_vlan_promisc_allowed(). + */ +static int ice_vf_ena_vlan_promisc(struct ice_vsi *vsi, struct ice_vlan *vlan) +{ + u8 promisc_m = ICE_PROMISC_VLAN_TX | ICE_PROMISC_VLAN_RX; + int status; + + status = ice_fltr_set_vsi_promisc(&vsi->back->hw, vsi->idx, promisc_m, + vlan->vid); + if (status && status != -EEXIST) + return status; + + return 0; +} + +/** + * ice_vf_dis_vlan_promisc - Disable Tx/Rx VLAN promiscuous for the VLAN + * @vsi: VF's VSI used to disable VLAN promiscuous mode for + * @vlan: VLAN used to disable VLAN promiscuous + * + * This function should only be called if VLAN promiscuous mode is allowed, + * which can be determined via ice_is_vlan_promisc_allowed(). 
+ */ +static int ice_vf_dis_vlan_promisc(struct ice_vsi *vsi, struct ice_vlan *vlan) +{ + u8 promisc_m = ICE_PROMISC_VLAN_TX | ICE_PROMISC_VLAN_RX; + int status; + + status = ice_fltr_clear_vsi_promisc(&vsi->back->hw, vsi->idx, promisc_m, + vlan->vid); + if (status && status != -ENOENT) + return status; + + return 0; +} + /** * ice_vf_has_max_vlans - check if VF already has the max allowed VLAN filters * @vf: VF to check against @@ -4209,14 +4292,11 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS; struct virtchnl_vlan_filter_list *vfl = (struct virtchnl_vlan_filter_list *)msg; - struct ice_vsi_vlan_ops *vlan_ops; struct ice_pf *pf = vf->pf; bool vlan_promisc = false; struct ice_vsi *vsi; struct device *dev; - struct ice_hw *hw; int status = 0; - u8 promisc_m; int i; dev = ice_pf_to_dev(pf); @@ -4244,7 +4324,6 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) } } - hw = &pf->hw; vsi = ice_get_vf_vsi(vf); if (!vsi) { v_ret = VIRTCHNL_STATUS_ERR_PARAM; @@ -4260,17 +4339,22 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) goto error_param; } - if (ice_vf_is_port_vlan_ena(vf)) { + /* in DVM a VF can add/delete inner VLAN filters when + * VIRTCHNL_VF_OFFLOAD_VLAN is negotiated, so only reject in SVM + */ + if (ice_vf_is_port_vlan_ena(vf) && !ice_is_dvm_ena(&pf->hw)) { v_ret = VIRTCHNL_STATUS_ERR_PARAM; goto error_param; } - if ((test_bit(ICE_VF_STATE_UC_PROMISC, vf->vf_states) || - test_bit(ICE_VF_STATE_MC_PROMISC, vf->vf_states)) && - test_bit(ICE_FLAG_VF_TRUE_PROMISC_ENA, pf->flags)) - vlan_promisc = true; + /* in DVM VLAN promiscuous is based on the outer VLAN, which would be + * the port VLAN if VIRTCHNL_VF_OFFLOAD_VLAN was negotiated, so only + * allow vlan_promisc = true in SVM and if no port VLAN is configured + */ + vlan_promisc = ice_is_vlan_promisc_allowed(vf) && + !ice_is_dvm_ena(&pf->hw) && + !ice_vf_is_port_vlan_ena(vf); - vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); if (add_v) { for (i = 0; i < vfl->num_elements; i++) { u16 vid = vfl->vlan_id[i]; @@ -4300,23 +4384,16 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) goto error_param; } - /* Enable VLAN pruning when non-zero VLAN is added */ - if (!vlan_promisc && vid && - !ice_vsi_is_vlan_pruning_ena(vsi)) { - status = vlan_ops->ena_rx_filtering(vsi); - if (status) { + /* Enable VLAN filtering on first non-zero VLAN */ + if (!vlan_promisc && vid && !ice_is_dvm_ena(&pf->hw)) { + if (vsi->inner_vlan_ops.ena_rx_filtering(vsi)) { v_ret = VIRTCHNL_STATUS_ERR_PARAM; dev_err(dev, "Enable VLAN pruning on VLAN ID: %d failed error-%d\n", vid, status); goto error_param; } } else if (vlan_promisc) { - /* Enable Ucast/Mcast VLAN promiscuous mode */ - promisc_m = ICE_PROMISC_VLAN_TX | - ICE_PROMISC_VLAN_RX; - - status = ice_set_vsi_promisc(hw, vsi->idx, - promisc_m, vid); + status = ice_vf_ena_vlan_promisc(vsi, &vlan); if (status) { v_ret = VIRTCHNL_STATUS_ERR_PARAM; dev_err(dev, "Enable Unicast/multicast promiscuous mode on VLAN ID:%d failed error-%d\n", @@ -4353,19 +4430,12 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) goto error_param; } - /* Disable VLAN pruning when only VLAN 0 is left */ - if (!ice_vsi_has_non_zero_vlans(vsi) && - ice_vsi_is_vlan_pruning_ena(vsi)) - status = vlan_ops->dis_rx_filtering(vsi); - - /* Disable Unicast/Multicast VLAN promiscuous mode */ - if (vlan_promisc) { - promisc_m = ICE_PROMISC_VLAN_TX | - 
ICE_PROMISC_VLAN_RX; + /* Disable VLAN filtering when only VLAN 0 is left */ + if (!ice_vsi_has_non_zero_vlans(vsi)) + vsi->inner_vlan_ops.dis_rx_filtering(vsi); - ice_clear_vsi_promisc(hw, vsi->idx, - promisc_m, vid); - } + if (vlan_promisc) + ice_vf_dis_vlan_promisc(vsi, &vlan); } } @@ -4472,11 +4542,8 @@ static int ice_vc_dis_vlan_stripping(struct ice_vf *vf) * ice_vf_init_vlan_stripping - enable/disable VLAN stripping on initialization * @vf: VF to enable/disable VLAN stripping for on initialization * - * If the VIRTCHNL_VF_OFFLOAD_VLAN flag is set enable VLAN stripping, else if - * the flag is cleared then we want to disable stripping. For example, the flag - * will be cleared when port VLANs are configured by the administrator before - * passing the VF to the guest or if the AVF driver doesn't support VLAN - * offloads. + * Set the default for VLAN stripping based on whether a port VLAN is configured + * and the current VLAN mode of the device. */ static int ice_vf_init_vlan_stripping(struct ice_vf *vf) { @@ -4485,8 +4552,10 @@ static int ice_vf_init_vlan_stripping(struct ice_vf *vf) if (!vsi) return -EINVAL; - /* don't modify stripping if port VLAN is configured */ - if (ice_vf_is_port_vlan_ena(vf)) + /* don't modify stripping if port VLAN is configured in SVM since the + * port VLAN is based on the inner/single VLAN in SVM + */ + if (ice_vf_is_port_vlan_ena(vf) && !ice_is_dvm_ena(&vsi->back->hw)) return 0; if (ice_vf_vlan_offload_ena(vf->driver_caps)) @@ -4495,6 +4564,955 @@ static int ice_vf_init_vlan_stripping(struct ice_vf *vf) return vsi->inner_vlan_ops.dis_stripping(vsi); } +static u16 ice_vc_get_max_vlan_fltrs(struct ice_vf *vf) +{ + if (vf->trusted) + return VLAN_N_VID; + else + return ICE_MAX_VLAN_PER_VF; +} + +/** + * ice_vf_outer_vlan_not_allowed - check outer VLAN can be used when the device is in DVM + * @vf: VF that being checked for + */ +static bool ice_vf_outer_vlan_not_allowed(struct ice_vf *vf) +{ + if (ice_vf_is_port_vlan_ena(vf)) + return true; + + return false; +} + +/** + * ice_vc_set_dvm_caps - set VLAN capabilities when the device is in DVM + * @vf: VF that capabilities are being set for + * @caps: VLAN capabilities to populate + * + * Determine VLAN capabilities support based on whether a port VLAN is + * configured. If a port VLAN is configured then the VF should use the inner + * filtering/offload capabilities since the port VLAN is using the outer VLAN + * capabilies. 
+ */ +static void +ice_vc_set_dvm_caps(struct ice_vf *vf, struct virtchnl_vlan_caps *caps) +{ + struct virtchnl_vlan_supported_caps *supported_caps; + + if (ice_vf_outer_vlan_not_allowed(vf)) { + /* until support for inner VLAN filtering is added when a port + * VLAN is configured, only support software offloaded inner + * VLANs when a port VLAN is confgured in DVM + */ + supported_caps = &caps->filtering.filtering_support; + supported_caps->inner = VIRTCHNL_VLAN_UNSUPPORTED; + + supported_caps = &caps->offloads.stripping_support; + supported_caps->inner = VIRTCHNL_VLAN_ETHERTYPE_8100 | + VIRTCHNL_VLAN_TOGGLE | + VIRTCHNL_VLAN_TAG_LOCATION_L2TAG1; + supported_caps->outer = VIRTCHNL_VLAN_UNSUPPORTED; + + supported_caps = &caps->offloads.insertion_support; + supported_caps->inner = VIRTCHNL_VLAN_ETHERTYPE_8100 | + VIRTCHNL_VLAN_TOGGLE | + VIRTCHNL_VLAN_TAG_LOCATION_L2TAG1; + supported_caps->outer = VIRTCHNL_VLAN_UNSUPPORTED; + + caps->offloads.ethertype_init = VIRTCHNL_VLAN_ETHERTYPE_8100; + caps->offloads.ethertype_match = + VIRTCHNL_ETHERTYPE_STRIPPING_MATCHES_INSERTION; + } else { + supported_caps = &caps->filtering.filtering_support; + supported_caps->inner = VIRTCHNL_VLAN_UNSUPPORTED; + supported_caps->outer = VIRTCHNL_VLAN_ETHERTYPE_8100 | + VIRTCHNL_VLAN_ETHERTYPE_88A8 | + VIRTCHNL_VLAN_ETHERTYPE_9100 | + VIRTCHNL_VLAN_ETHERTYPE_AND; + caps->filtering.ethertype_init = VIRTCHNL_VLAN_ETHERTYPE_8100 | + VIRTCHNL_VLAN_ETHERTYPE_88A8 | + VIRTCHNL_VLAN_ETHERTYPE_9100; + + supported_caps = &caps->offloads.stripping_support; + supported_caps->inner = VIRTCHNL_VLAN_TOGGLE | + VIRTCHNL_VLAN_ETHERTYPE_8100 | + VIRTCHNL_VLAN_TAG_LOCATION_L2TAG1; + supported_caps->outer = VIRTCHNL_VLAN_TOGGLE | + VIRTCHNL_VLAN_ETHERTYPE_8100 | + VIRTCHNL_VLAN_ETHERTYPE_88A8 | + VIRTCHNL_VLAN_ETHERTYPE_9100 | + VIRTCHNL_VLAN_ETHERTYPE_XOR | + VIRTCHNL_VLAN_TAG_LOCATION_L2TAG2_2; + + supported_caps = &caps->offloads.insertion_support; + supported_caps->inner = VIRTCHNL_VLAN_TOGGLE | + VIRTCHNL_VLAN_ETHERTYPE_8100 | + VIRTCHNL_VLAN_TAG_LOCATION_L2TAG1; + supported_caps->outer = VIRTCHNL_VLAN_TOGGLE | + VIRTCHNL_VLAN_ETHERTYPE_8100 | + VIRTCHNL_VLAN_ETHERTYPE_88A8 | + VIRTCHNL_VLAN_ETHERTYPE_9100 | + VIRTCHNL_VLAN_ETHERTYPE_XOR | + VIRTCHNL_VLAN_TAG_LOCATION_L2TAG2; + + caps->offloads.ethertype_init = VIRTCHNL_VLAN_ETHERTYPE_8100; + + caps->offloads.ethertype_match = + VIRTCHNL_ETHERTYPE_STRIPPING_MATCHES_INSERTION; + } + + caps->filtering.max_filters = ice_vc_get_max_vlan_fltrs(vf); +} + +/** + * ice_vc_set_svm_caps - set VLAN capabilities when the device is in SVM + * @vf: VF that capabilities are being set for + * @caps: VLAN capabilities to populate + * + * Determine VLAN capabilities support based on whether a port VLAN is + * configured. If a port VLAN is configured then the VF does not have any VLAN + * filtering or offload capabilities since the port VLAN is using the inner VLAN + * capabilities in single VLAN mode (SVM). Otherwise allow the VF to use inner + * VLAN fitlering and offload capabilities. 
+ */ +static void +ice_vc_set_svm_caps(struct ice_vf *vf, struct virtchnl_vlan_caps *caps) +{ + struct virtchnl_vlan_supported_caps *supported_caps; + + if (ice_vf_is_port_vlan_ena(vf)) { + supported_caps = &caps->filtering.filtering_support; + supported_caps->inner = VIRTCHNL_VLAN_UNSUPPORTED; + supported_caps->outer = VIRTCHNL_VLAN_UNSUPPORTED; + + supported_caps = &caps->offloads.stripping_support; + supported_caps->inner = VIRTCHNL_VLAN_UNSUPPORTED; + supported_caps->outer = VIRTCHNL_VLAN_UNSUPPORTED; + + supported_caps = &caps->offloads.insertion_support; + supported_caps->inner = VIRTCHNL_VLAN_UNSUPPORTED; + supported_caps->outer = VIRTCHNL_VLAN_UNSUPPORTED; + + caps->offloads.ethertype_init = VIRTCHNL_VLAN_UNSUPPORTED; + caps->offloads.ethertype_match = VIRTCHNL_VLAN_UNSUPPORTED; + caps->filtering.max_filters = 0; + } else { + supported_caps = &caps->filtering.filtering_support; + supported_caps->inner = VIRTCHNL_VLAN_ETHERTYPE_8100; + supported_caps->outer = VIRTCHNL_VLAN_UNSUPPORTED; + caps->filtering.ethertype_init = VIRTCHNL_VLAN_ETHERTYPE_8100; + + supported_caps = &caps->offloads.stripping_support; + supported_caps->inner = VIRTCHNL_VLAN_ETHERTYPE_8100 | + VIRTCHNL_VLAN_TOGGLE | + VIRTCHNL_VLAN_TAG_LOCATION_L2TAG1; + supported_caps->outer = VIRTCHNL_VLAN_UNSUPPORTED; + + supported_caps = &caps->offloads.insertion_support; + supported_caps->inner = VIRTCHNL_VLAN_ETHERTYPE_8100 | + VIRTCHNL_VLAN_TOGGLE | + VIRTCHNL_VLAN_TAG_LOCATION_L2TAG1; + supported_caps->outer = VIRTCHNL_VLAN_UNSUPPORTED; + + caps->offloads.ethertype_init = VIRTCHNL_VLAN_ETHERTYPE_8100; + caps->offloads.ethertype_match = + VIRTCHNL_ETHERTYPE_STRIPPING_MATCHES_INSERTION; + caps->filtering.max_filters = ice_vc_get_max_vlan_fltrs(vf); + } +} + +/** + * ice_vc_get_offload_vlan_v2_caps - determine VF's VLAN capabilities + * @vf: VF to determine VLAN capabilities for + * + * This will only be called if the VF and PF successfully negotiated + * VIRTCHNL_VF_OFFLOAD_VLAN_V2. + * + * Set VLAN capabilities based on the current VLAN mode and whether a port VLAN + * is configured or not. + */ +static int ice_vc_get_offload_vlan_v2_caps(struct ice_vf *vf) +{ + enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS; + struct virtchnl_vlan_caps *caps = NULL; + int err, len = 0; + + if (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + caps = kzalloc(sizeof(*caps), GFP_KERNEL); + if (!caps) { + v_ret = VIRTCHNL_STATUS_ERR_NO_MEMORY; + goto out; + } + len = sizeof(*caps); + + if (ice_is_dvm_ena(&vf->pf->hw)) + ice_vc_set_dvm_caps(vf, caps); + else + ice_vc_set_svm_caps(vf, caps); + + /* store negotiated caps to prevent invalid VF messages */ + memcpy(&vf->vlan_v2_caps, caps, sizeof(*caps)); + +out: + err = ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS, + v_ret, (u8 *)caps, len); + kfree(caps); + return err; +} + +/** + * ice_vc_validate_vlan_tpid - validate VLAN TPID + * @filtering_caps: negotiated/supported VLAN filtering capabilities + * @tpid: VLAN TPID used for validation + * + * Convert the VLAN TPID to a VIRTCHNL_VLAN_ETHERTYPE_* and then compare against + * the negotiated/supported filtering caps to see if the VLAN TPID is valid. 
+ */ +static bool ice_vc_validate_vlan_tpid(u16 filtering_caps, u16 tpid) +{ + enum virtchnl_vlan_support vlan_ethertype = VIRTCHNL_VLAN_UNSUPPORTED; + + switch (tpid) { + case ETH_P_8021Q: + vlan_ethertype = VIRTCHNL_VLAN_ETHERTYPE_8100; + break; + case ETH_P_8021AD: + vlan_ethertype = VIRTCHNL_VLAN_ETHERTYPE_88A8; + break; + case ETH_P_QINQ1: + vlan_ethertype = VIRTCHNL_VLAN_ETHERTYPE_9100; + break; + } + + if (!(filtering_caps & vlan_ethertype)) + return false; + + return true; +} + +/** + * ice_vc_is_valid_vlan - validate the virtchnl_vlan + * @vc_vlan: virtchnl_vlan to validate + * + * If the VLAN TCI and VLAN TPID are 0, then this filter is invalid, so return + * false. Otherwise return true. + */ +static bool ice_vc_is_valid_vlan(struct virtchnl_vlan *vc_vlan) +{ + if (!vc_vlan->tci || !vc_vlan->tpid) + return false; + + return true; +} + +/** + * ice_vc_validate_vlan_filter_list - validate the filter list from the VF + * @vfc: negotiated/supported VLAN filtering capabilities + * @vfl: VLAN filter list from VF to validate + * + * Validate all of the filters in the VLAN filter list from the VF. If any of + * the checks fail then return false. Otherwise return true. + */ +static bool +ice_vc_validate_vlan_filter_list(struct virtchnl_vlan_filtering_caps *vfc, + struct virtchnl_vlan_filter_list_v2 *vfl) +{ + u16 i; + + if (!vfl->num_elements) + return false; + + for (i = 0; i < vfl->num_elements; i++) { + struct virtchnl_vlan_supported_caps *filtering_support = + &vfc->filtering_support; + struct virtchnl_vlan_filter *vlan_fltr = &vfl->filters[i]; + struct virtchnl_vlan *outer = &vlan_fltr->outer; + struct virtchnl_vlan *inner = &vlan_fltr->inner; + + if ((ice_vc_is_valid_vlan(outer) && + filtering_support->outer == VIRTCHNL_VLAN_UNSUPPORTED) || + (ice_vc_is_valid_vlan(inner) && + filtering_support->inner == VIRTCHNL_VLAN_UNSUPPORTED)) + return false; + + if ((outer->tci_mask && + !(filtering_support->outer & VIRTCHNL_VLAN_FILTER_MASK)) || + (inner->tci_mask && + !(filtering_support->inner & VIRTCHNL_VLAN_FILTER_MASK))) + return false; + + if (((outer->tci & VLAN_PRIO_MASK) && + !(filtering_support->outer & VIRTCHNL_VLAN_PRIO)) || + ((inner->tci & VLAN_PRIO_MASK) && + !(filtering_support->inner & VIRTCHNL_VLAN_PRIO))) + return false; + + if ((ice_vc_is_valid_vlan(outer) && + !ice_vc_validate_vlan_tpid(filtering_support->outer, outer->tpid)) || + (ice_vc_is_valid_vlan(inner) && + !ice_vc_validate_vlan_tpid(filtering_support->inner, inner->tpid))) + return false; + } + + return true; +} + +/** + * ice_vc_to_vlan - transform from struct virtchnl_vlan to struct ice_vlan + * @vc_vlan: struct virtchnl_vlan to transform + */ +static struct ice_vlan ice_vc_to_vlan(struct virtchnl_vlan *vc_vlan) +{ + struct ice_vlan vlan = { 0 }; + + vlan.prio = (vc_vlan->tci & VLAN_PRIO_MASK) >> VLAN_PRIO_SHIFT; + vlan.vid = vc_vlan->tci & VLAN_VID_MASK; + vlan.tpid = vc_vlan->tpid; + + return vlan; +} + +/** + * ice_vc_vlan_action - action to perform on the virthcnl_vlan + * @vsi: VF's VSI used to perform the action + * @vlan_action: function to perform the action with (i.e. 
add/del) + * @vlan: VLAN filter to perform the action with + */ +static int +ice_vc_vlan_action(struct ice_vsi *vsi, + int (*vlan_action)(struct ice_vsi *, struct ice_vlan *), + struct ice_vlan *vlan) +{ + int err; + + err = vlan_action(vsi, vlan); + if (err) + return err; + + return 0; +} + +/** + * ice_vc_del_vlans - delete VLAN(s) from the virtchnl filter list + * @vf: VF used to delete the VLAN(s) + * @vsi: VF's VSI used to delete the VLAN(s) + * @vfl: virthchnl filter list used to delete the filters + */ +static int +ice_vc_del_vlans(struct ice_vf *vf, struct ice_vsi *vsi, + struct virtchnl_vlan_filter_list_v2 *vfl) +{ + bool vlan_promisc = ice_is_vlan_promisc_allowed(vf); + int err; + u16 i; + + for (i = 0; i < vfl->num_elements; i++) { + struct virtchnl_vlan_filter *vlan_fltr = &vfl->filters[i]; + struct virtchnl_vlan *vc_vlan; + + vc_vlan = &vlan_fltr->outer; + if (ice_vc_is_valid_vlan(vc_vlan)) { + struct ice_vlan vlan = ice_vc_to_vlan(vc_vlan); + + err = ice_vc_vlan_action(vsi, + vsi->outer_vlan_ops.del_vlan, + &vlan); + if (err) + return err; + + if (vlan_promisc) + ice_vf_dis_vlan_promisc(vsi, &vlan); + } + + vc_vlan = &vlan_fltr->inner; + if (ice_vc_is_valid_vlan(vc_vlan)) { + struct ice_vlan vlan = ice_vc_to_vlan(vc_vlan); + + err = ice_vc_vlan_action(vsi, + vsi->inner_vlan_ops.del_vlan, + &vlan); + if (err) + return err; + + /* no support for VLAN promiscuous on inner VLAN unless + * we are in Single VLAN Mode (SVM) + */ + if (!ice_is_dvm_ena(&vsi->back->hw) && vlan_promisc) + ice_vf_dis_vlan_promisc(vsi, &vlan); + } + } + + return 0; +} + +/** + * ice_vc_remove_vlan_v2_msg - virtchnl handler for VIRTCHNL_OP_DEL_VLAN_V2 + * @vf: VF the message was received from + * @msg: message received from the VF + */ +static int ice_vc_remove_vlan_v2_msg(struct ice_vf *vf, u8 *msg) +{ + struct virtchnl_vlan_filter_list_v2 *vfl = + (struct virtchnl_vlan_filter_list_v2 *)msg; + enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS; + struct ice_vsi *vsi; + + if (!ice_vc_validate_vlan_filter_list(&vf->vlan_v2_caps.filtering, + vfl)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + if (!ice_vc_isvalid_vsi_id(vf, vfl->vport_id)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + vsi = ice_get_vf_vsi(vf); + if (!vsi) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + if (ice_vc_del_vlans(vf, vsi, vfl)) + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + +out: + return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_DEL_VLAN_V2, v_ret, NULL, + 0); +} + +/** + * ice_vc_add_vlans - add VLAN(s) from the virtchnl filter list + * @vf: VF used to add the VLAN(s) + * @vsi: VF's VSI used to add the VLAN(s) + * @vfl: virthchnl filter list used to add the filters + */ +static int +ice_vc_add_vlans(struct ice_vf *vf, struct ice_vsi *vsi, + struct virtchnl_vlan_filter_list_v2 *vfl) +{ + bool vlan_promisc = ice_is_vlan_promisc_allowed(vf); + int err; + u16 i; + + for (i = 0; i < vfl->num_elements; i++) { + struct virtchnl_vlan_filter *vlan_fltr = &vfl->filters[i]; + struct virtchnl_vlan *vc_vlan; + + vc_vlan = &vlan_fltr->outer; + if (ice_vc_is_valid_vlan(vc_vlan)) { + struct ice_vlan vlan = ice_vc_to_vlan(vc_vlan); + + err = ice_vc_vlan_action(vsi, + vsi->outer_vlan_ops.add_vlan, + &vlan); + if (err) + return err; + + if (vlan_promisc) { + err = ice_vf_ena_vlan_promisc(vsi, &vlan); + if (err) + return err; + } + } + + vc_vlan = &vlan_fltr->inner; + if (ice_vc_is_valid_vlan(vc_vlan)) { + struct ice_vlan vlan = ice_vc_to_vlan(vc_vlan); + + err = ice_vc_vlan_action(vsi, + vsi->inner_vlan_ops.add_vlan, 
+ &vlan); + if (err) + return err; + + /* no support for VLAN promiscuous on inner VLAN unless + * we are in Single VLAN Mode (SVM) + */ + if (!ice_is_dvm_ena(&vsi->back->hw) && vlan_promisc) { + err = ice_vf_ena_vlan_promisc(vsi, &vlan); + if (err) + return err; + } + } + } + + return 0; +} + +/** + * ice_vc_validate_add_vlan_filter_list - validate add filter list from the VF + * @vsi: VF VSI used to get number of existing VLAN filters + * @vfc: negotiated/supported VLAN filtering capabilities + * @vfl: VLAN filter list from VF to validate + * + * Validate all of the filters in the VLAN filter list from the VF during the + * VIRTCHNL_OP_ADD_VLAN_V2 opcode. If any of the checks fail then return false. + * Otherwise return true. + */ +static bool +ice_vc_validate_add_vlan_filter_list(struct ice_vsi *vsi, + struct virtchnl_vlan_filtering_caps *vfc, + struct virtchnl_vlan_filter_list_v2 *vfl) +{ + u16 num_requested_filters = vsi->num_vlan + vfl->num_elements; + + if (num_requested_filters > vfc->max_filters) + return false; + + return ice_vc_validate_vlan_filter_list(vfc, vfl); +} + +/** + * ice_vc_add_vlan_v2_msg - virtchnl handler for VIRTCHNL_OP_ADD_VLAN_V2 + * @vf: VF the message was received from + * @msg: message received from the VF + */ +static int ice_vc_add_vlan_v2_msg(struct ice_vf *vf, u8 *msg) +{ + enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS; + struct virtchnl_vlan_filter_list_v2 *vfl = + (struct virtchnl_vlan_filter_list_v2 *)msg; + struct ice_vsi *vsi; + + if (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + if (!ice_vc_isvalid_vsi_id(vf, vfl->vport_id)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + vsi = ice_get_vf_vsi(vf); + if (!vsi) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + if (!ice_vc_validate_add_vlan_filter_list(vsi, + &vf->vlan_v2_caps.filtering, + vfl)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + if (ice_vc_add_vlans(vf, vsi, vfl)) + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + +out: + return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_ADD_VLAN_V2, v_ret, NULL, + 0); +} + +/** + * ice_vc_valid_vlan_setting - validate VLAN setting + * @negotiated_settings: negotiated VLAN settings during VF init + * @ethertype_setting: ethertype(s) requested for the VLAN setting + */ +static bool +ice_vc_valid_vlan_setting(u32 negotiated_settings, u32 ethertype_setting) +{ + if (ethertype_setting && !(negotiated_settings & ethertype_setting)) + return false; + + /* only allow a single VIRTCHNL_VLAN_ETHERTYPE if + * VIRTHCNL_VLAN_ETHERTYPE_AND is not negotiated/supported + */ + if (!(negotiated_settings & VIRTCHNL_VLAN_ETHERTYPE_AND) && + hweight32(ethertype_setting) > 1) + return false; + + /* ability to modify the VLAN setting was not negotiated */ + if (!(negotiated_settings & VIRTCHNL_VLAN_TOGGLE)) + return false; + + return true; +} + +/** + * ice_vc_valid_vlan_setting_msg - validate the VLAN setting message + * @caps: negotiated VLAN settings during VF init + * @msg: message to validate + * + * Used to validate any VLAN virtchnl message sent as a + * virtchnl_vlan_setting structure. Validates the message against the + * negotiated/supported caps during VF driver init. 
+ */ +static bool +ice_vc_valid_vlan_setting_msg(struct virtchnl_vlan_supported_caps *caps, + struct virtchnl_vlan_setting *msg) +{ + if ((!msg->outer_ethertype_setting && + !msg->inner_ethertype_setting) || + (!caps->outer && !caps->inner)) + return false; + + if (msg->outer_ethertype_setting && + !ice_vc_valid_vlan_setting(caps->outer, + msg->outer_ethertype_setting)) + return false; + + if (msg->inner_ethertype_setting && + !ice_vc_valid_vlan_setting(caps->inner, + msg->inner_ethertype_setting)) + return false; + + return true; +} + +/** + * ice_vc_get_tpid - transform from VIRTCHNL_VLAN_ETHERTYPE_* to VLAN TPID + * @ethertype_setting: VIRTCHNL_VLAN_ETHERTYPE_* used to get VLAN TPID + * @tpid: VLAN TPID to populate + */ +static int ice_vc_get_tpid(u32 ethertype_setting, u16 *tpid) +{ + switch (ethertype_setting) { + case VIRTCHNL_VLAN_ETHERTYPE_8100: + *tpid = ETH_P_8021Q; + break; + case VIRTCHNL_VLAN_ETHERTYPE_88A8: + *tpid = ETH_P_8021AD; + break; + case VIRTCHNL_VLAN_ETHERTYPE_9100: + *tpid = ETH_P_QINQ1; + break; + default: + *tpid = 0; + return -EINVAL; + } + + return 0; +} + +/** + * ice_vc_ena_vlan_offload - enable VLAN offload based on the ethertype_setting + * @vsi: VF's VSI used to enable the VLAN offload + * @ena_offload: function used to enable the VLAN offload + * @ethertype_setting: VIRTCHNL_VLAN_ETHERTYPE_* to enable offloads for + */ +static int +ice_vc_ena_vlan_offload(struct ice_vsi *vsi, + int (*ena_offload)(struct ice_vsi *vsi, u16 tpid), + u32 ethertype_setting) +{ + u16 tpid; + int err; + + err = ice_vc_get_tpid(ethertype_setting, &tpid); + if (err) + return err; + + err = ena_offload(vsi, tpid); + if (err) + return err; + + return 0; +} + +#define ICE_L2TSEL_QRX_CONTEXT_REG_IDX 3 +#define ICE_L2TSEL_BIT_OFFSET 23 +enum ice_l2tsel { + ICE_L2TSEL_EXTRACT_FIRST_TAG_L2TAG2_2ND, + ICE_L2TSEL_EXTRACT_FIRST_TAG_L2TAG1, +}; + +/** + * ice_vsi_update_l2tsel - update l2tsel field for all Rx rings on this VSI + * @vsi: VSI used to update l2tsel on + * @l2tsel: l2tsel setting requested + * + * Use the l2tsel setting to update all of the Rx queue context bits for l2tsel. + * This will modify which descriptor field the first offloaded VLAN will be + * stripped into. 
+ */ +static void ice_vsi_update_l2tsel(struct ice_vsi *vsi, enum ice_l2tsel l2tsel) +{ + struct ice_hw *hw = &vsi->back->hw; + u32 l2tsel_bit; + int i; + + if (l2tsel == ICE_L2TSEL_EXTRACT_FIRST_TAG_L2TAG2_2ND) + l2tsel_bit = 0; + else + l2tsel_bit = BIT(ICE_L2TSEL_BIT_OFFSET); + + for (i = 0; i < vsi->alloc_rxq; i++) { + u16 pfq = vsi->rxq_map[i]; + u32 qrx_context_offset; + u32 regval; + + qrx_context_offset = + QRX_CONTEXT(ICE_L2TSEL_QRX_CONTEXT_REG_IDX, pfq); + + regval = rd32(hw, qrx_context_offset); + regval &= ~BIT(ICE_L2TSEL_BIT_OFFSET); + regval |= l2tsel_bit; + wr32(hw, qrx_context_offset, regval); + } +} + +/** + * ice_vc_ena_vlan_stripping_v2_msg + * @vf: VF the message was received from + * @msg: message received from the VF + * + * virthcnl handler for VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 + */ +static int ice_vc_ena_vlan_stripping_v2_msg(struct ice_vf *vf, u8 *msg) +{ + enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS; + struct virtchnl_vlan_supported_caps *stripping_support; + struct virtchnl_vlan_setting *strip_msg = + (struct virtchnl_vlan_setting *)msg; + u32 ethertype_setting; + struct ice_vsi *vsi; + + if (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + if (!ice_vc_isvalid_vsi_id(vf, strip_msg->vport_id)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + vsi = ice_get_vf_vsi(vf); + if (!vsi) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + stripping_support = &vf->vlan_v2_caps.offloads.stripping_support; + if (!ice_vc_valid_vlan_setting_msg(stripping_support, strip_msg)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + ethertype_setting = strip_msg->outer_ethertype_setting; + if (ethertype_setting) { + if (ice_vc_ena_vlan_offload(vsi, + vsi->outer_vlan_ops.ena_stripping, + ethertype_setting)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } else { + enum ice_l2tsel l2tsel = + ICE_L2TSEL_EXTRACT_FIRST_TAG_L2TAG2_2ND; + + /* PF tells the VF that the outer VLAN tag is always + * extracted to VIRTCHNL_VLAN_TAG_LOCATION_L2TAG2_2 and + * inner is always extracted to + * VIRTCHNL_VLAN_TAG_LOCATION_L2TAG1. This is needed to + * support outer stripping so the first tag always ends + * up in L2TAG2_2ND and the second/inner tag, if + * enabled, is extracted in L2TAG1. 
+ */ + ice_vsi_update_l2tsel(vsi, l2tsel); + } + } + + ethertype_setting = strip_msg->inner_ethertype_setting; + if (ethertype_setting && + ice_vc_ena_vlan_offload(vsi, vsi->inner_vlan_ops.ena_stripping, + ethertype_setting)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + +out: + return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2, v_ret, NULL, 0); +} + +/** + * ice_vc_dis_vlan_stripping_v2_msg + * @vf: VF the message was received from + * @msg: message received from the VF + * + * virthcnl handler for VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2 + */ +static int ice_vc_dis_vlan_stripping_v2_msg(struct ice_vf *vf, u8 *msg) +{ + enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS; + struct virtchnl_vlan_supported_caps *stripping_support; + struct virtchnl_vlan_setting *strip_msg = + (struct virtchnl_vlan_setting *)msg; + u32 ethertype_setting; + struct ice_vsi *vsi; + + if (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + if (!ice_vc_isvalid_vsi_id(vf, strip_msg->vport_id)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + vsi = ice_get_vf_vsi(vf); + if (!vsi) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + stripping_support = &vf->vlan_v2_caps.offloads.stripping_support; + if (!ice_vc_valid_vlan_setting_msg(stripping_support, strip_msg)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + ethertype_setting = strip_msg->outer_ethertype_setting; + if (ethertype_setting) { + if (vsi->outer_vlan_ops.dis_stripping(vsi)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } else { + enum ice_l2tsel l2tsel = + ICE_L2TSEL_EXTRACT_FIRST_TAG_L2TAG1; + + /* PF tells the VF that the outer VLAN tag is always + * extracted to VIRTCHNL_VLAN_TAG_LOCATION_L2TAG2_2 and + * inner is always extracted to + * VIRTCHNL_VLAN_TAG_LOCATION_L2TAG1. This is needed to + * support inner stripping while outer stripping is + * disabled so that the first and only tag is extracted + * in L2TAG1. 
+ */ + ice_vsi_update_l2tsel(vsi, l2tsel); + } + } + + ethertype_setting = strip_msg->inner_ethertype_setting; + if (ethertype_setting && vsi->inner_vlan_ops.dis_stripping(vsi)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + +out: + return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2, v_ret, NULL, 0); +} + +/** + * ice_vc_ena_vlan_insertion_v2_msg + * @vf: VF the message was received from + * @msg: message received from the VF + * + * virthcnl handler for VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2 + */ +static int ice_vc_ena_vlan_insertion_v2_msg(struct ice_vf *vf, u8 *msg) +{ + enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS; + struct virtchnl_vlan_supported_caps *insertion_support; + struct virtchnl_vlan_setting *insertion_msg = + (struct virtchnl_vlan_setting *)msg; + u32 ethertype_setting; + struct ice_vsi *vsi; + + if (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + if (!ice_vc_isvalid_vsi_id(vf, insertion_msg->vport_id)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + vsi = ice_get_vf_vsi(vf); + if (!vsi) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + insertion_support = &vf->vlan_v2_caps.offloads.insertion_support; + if (!ice_vc_valid_vlan_setting_msg(insertion_support, insertion_msg)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + ethertype_setting = insertion_msg->outer_ethertype_setting; + if (ethertype_setting && + ice_vc_ena_vlan_offload(vsi, vsi->outer_vlan_ops.ena_insertion, + ethertype_setting)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + ethertype_setting = insertion_msg->inner_ethertype_setting; + if (ethertype_setting && + ice_vc_ena_vlan_offload(vsi, vsi->inner_vlan_ops.ena_insertion, + ethertype_setting)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + +out: + return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2, v_ret, NULL, 0); +} + +/** + * ice_vc_dis_vlan_insertion_v2_msg + * @vf: VF the message was received from + * @msg: message received from the VF + * + * virthcnl handler for VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2 + */ +static int ice_vc_dis_vlan_insertion_v2_msg(struct ice_vf *vf, u8 *msg) +{ + enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS; + struct virtchnl_vlan_supported_caps *insertion_support; + struct virtchnl_vlan_setting *insertion_msg = + (struct virtchnl_vlan_setting *)msg; + u32 ethertype_setting; + struct ice_vsi *vsi; + + if (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + if (!ice_vc_isvalid_vsi_id(vf, insertion_msg->vport_id)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + vsi = ice_get_vf_vsi(vf); + if (!vsi) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + insertion_support = &vf->vlan_v2_caps.offloads.insertion_support; + if (!ice_vc_valid_vlan_setting_msg(insertion_support, insertion_msg)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + ethertype_setting = insertion_msg->outer_ethertype_setting; + if (ethertype_setting && vsi->outer_vlan_ops.dis_insertion(vsi)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + ethertype_setting = insertion_msg->inner_ethertype_setting; + if (ethertype_setting && vsi->inner_vlan_ops.dis_insertion(vsi)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + +out: + return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2, v_ret, NULL, 0); +} + static struct ice_vc_vf_ops ice_vc_vf_dflt_ops = { .get_ver_msg = 
ice_vc_get_ver_msg, .get_vf_res_msg = ice_vc_get_vf_res_msg, @@ -4517,6 +5535,13 @@ static struct ice_vc_vf_ops ice_vc_vf_dflt_ops = { .handle_rss_cfg_msg = ice_vc_handle_rss_cfg, .add_fdir_fltr_msg = ice_vc_add_fdir_fltr, .del_fdir_fltr_msg = ice_vc_del_fdir_fltr, + .get_offload_vlan_v2_caps = ice_vc_get_offload_vlan_v2_caps, + .add_vlan_v2_msg = ice_vc_add_vlan_v2_msg, + .remove_vlan_v2_msg = ice_vc_remove_vlan_v2_msg, + .ena_vlan_stripping_v2_msg = ice_vc_ena_vlan_stripping_v2_msg, + .dis_vlan_stripping_v2_msg = ice_vc_dis_vlan_stripping_v2_msg, + .ena_vlan_insertion_v2_msg = ice_vc_ena_vlan_insertion_v2_msg, + .dis_vlan_insertion_v2_msg = ice_vc_dis_vlan_insertion_v2_msg, }; void ice_vc_set_dflt_vf_ops(struct ice_vc_vf_ops *ops) @@ -4745,7 +5770,7 @@ void ice_vc_process_vf_msg(struct ice_pf *pf, struct ice_rq_event_info *event) case VIRTCHNL_OP_GET_VF_RESOURCES: err = ops->get_vf_res_msg(vf, msg); if (ice_vf_init_vlan_stripping(vf)) - dev_err(dev, "Failed to initialize VLAN stripping for VF %d\n", + dev_dbg(dev, "Failed to initialize VLAN stripping for VF %d\n", vf->vf_id); ice_vc_notify_vf_link_state(vf); break; @@ -4810,6 +5835,27 @@ void ice_vc_process_vf_msg(struct ice_pf *pf, struct ice_rq_event_info *event) case VIRTCHNL_OP_DEL_RSS_CFG: err = ops->handle_rss_cfg_msg(vf, msg, false); break; + case VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS: + err = ops->get_offload_vlan_v2_caps(vf); + break; + case VIRTCHNL_OP_ADD_VLAN_V2: + err = ops->add_vlan_v2_msg(vf, msg); + break; + case VIRTCHNL_OP_DEL_VLAN_V2: + err = ops->remove_vlan_v2_msg(vf, msg); + break; + case VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2: + err = ops->ena_vlan_stripping_v2_msg(vf, msg); + break; + case VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2: + err = ops->dis_vlan_stripping_v2_msg(vf, msg); + break; + case VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2: + err = ops->ena_vlan_insertion_v2_msg(vf, msg); + break; + case VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2: + err = ops->dis_vlan_insertion_v2_msg(vf, msg); + break; case VIRTCHNL_OP_UNKNOWN: default: dev_err(dev, "Unsupported opcode %d from VF %d\n", v_opcode, diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h index 4110847e0699..4f4961043638 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h @@ -95,6 +95,13 @@ struct ice_vc_vf_ops { int (*handle_rss_cfg_msg)(struct ice_vf *vf, u8 *msg, bool add); int (*add_fdir_fltr_msg)(struct ice_vf *vf, u8 *msg); int (*del_fdir_fltr_msg)(struct ice_vf *vf, u8 *msg); + int (*get_offload_vlan_v2_caps)(struct ice_vf *vf); + int (*add_vlan_v2_msg)(struct ice_vf *vf, u8 *msg); + int (*remove_vlan_v2_msg)(struct ice_vf *vf, u8 *msg); + int (*ena_vlan_stripping_v2_msg)(struct ice_vf *vf, u8 *msg); + int (*dis_vlan_stripping_v2_msg)(struct ice_vf *vf, u8 *msg); + int (*ena_vlan_insertion_v2_msg)(struct ice_vf *vf, u8 *msg); + int (*dis_vlan_insertion_v2_msg)(struct ice_vf *vf, u8 *msg); }; /* VF information structure */ @@ -121,6 +128,7 @@ struct ice_vf { DECLARE_BITMAP(txq_ena, ICE_MAX_RSS_QS_PER_VF); DECLARE_BITMAP(rxq_ena, ICE_MAX_RSS_QS_PER_VF); struct ice_vlan port_vlan_info; /* Port VLAN ID, QoS, and TPID */ + struct virtchnl_vlan_caps vlan_v2_caps; u8 pf_set_mac:1; /* VF MAC address set by VMM admin */ u8 trusted:1; u8 spoofchk:1; -- 2.20.1 From kuba at kernel.org Wed Dec 1 00:51:10 2021 From: kuba at kernel.org (Jakub Kicinski) Date: Tue, 30 Nov 2021 16:51:10 -0800 Subject: [Intel-wired-lan] [PATCH net] igb: fix deadlock caused by 
taking RTNL in RPM resume path In-Reply-To: References: <6bb28d2f-4884-7696-0582-c26c35534bae@gmail.com> <20211129171712.500e37cb@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com> <6edc23a1-5907-3a41-7b46-8d53c5664a56@gmail.com> <20211130091206.488a541f@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com> Message-ID: <20211130165110.291af62a@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com> On Tue, 30 Nov 2021 22:35:27 +0100 Heiner Kallweit wrote: > On 30.11.2021 18:12, Jakub Kicinski wrote: > >> Not sure whether igb uses RPM the same way as r8169. There the device > >> is runtime-suspended (D3hot) w/o link. Once cable is plugged in the PHY > >> triggers a PME, and PCI core runtime-resumes the device (MAC). > >> In this case RTNL isn't held by the caller. Therefore I don't think > >> it's safe to assume that all callers hold RTNL. > > > > No, no - I meant to leave the locking in but add ASSERT_RTNL() to catch > > if rpm == true && rtnl_held() == false. > > > This is a valid case. Maybe it's not my day today, I still don't get > how we would benefit from adding an ASSERT_RTNL(). > > Based on the following I think that RPM resume and device open() > can't collide, because RPM resume is finished before open() > starts its actual work. > > static int __igb_open(struct net_device *netdev, bool resuming) > { > ... > if (!resuming) > pm_runtime_get_sync(&pdev->dev); Ah, thanks, gotta start looking at the code before I say things.. From kuba at kernel.org Wed Dec 1 03:09:26 2021 From: kuba at kernel.org (Jakub Kicinski) Date: Tue, 30 Nov 2021 19:09:26 -0800 Subject: [Intel-wired-lan] [RFC PATCH 0/4] r8169: support dash In-Reply-To: <918d75ea873a453ab2ba588a35d66ab6@realtek.com> References: <20211129101315.16372-381-nic_swsd@realtek.com> <20211129095947.547a765f@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com> <918d75ea873a453ab2ba588a35d66ab6@realtek.com> Message-ID: <20211130190926.7c1d735d@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com> On Wed, 1 Dec 2021 02:57:00 +0000 Hayes Wang wrote: > Jakub Kicinski > > Sent: Tuesday, November 30, 2021 2:00 AM > > Subject: Re: [RFC PATCH 0/4] r8169: support dash > > > > On Mon, 29 Nov 2021 18:13:11 +0800 Hayes Wang wrote: > > > These patches are used to support dash for RTL8111EP and > > > RTL8111FP(RTL81117). > > > > If I understand correctly DASH is a DMTF standard for remote control. > > > > Since it's a standard I think we should have a common way of > > configuring it across drivers. > > Excuse me. I am not familiar with it. > What document or sample code could I start? > > > Is enable/disable the only configuration > > that we will need? > > I don't think I could answer it before I understand the above way > you mentioned. > > > We don't use sysfs too much these days, can we move the knob to > > devlink, please? (If we only need an on/off switch generic devlink param > > should be fine). > > Thanks. I would study devlink. I'm not sure how relevant it will be to you but this is the documentation we have: https://www.kernel.org/doc/html/latest/networking/devlink/index.html https://www.kernel.org/doc/html/latest/networking/devlink/devlink-params.html You'll need to add a generic parameter (define + a short description) like 325e0d0aa683 ("devlink: Add 'enable_iwarp' generic device param") In terms of driver changes I think the most relevant example to you will be: drivers/net/ethernet/ti/cpsw_new.c You need to call devlink_alloc(), devlink_register and devlink_params_register() (and the inverse functions). 
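To make that concrete, here is a rough sketch of what the registration could look like in r8169. Everything below is illustrative rather than tested code: it assumes a new generic "enable_dash" parameter (DEVLINK_PARAM_GENERIC_ID_ENABLE_DASH plus its name/type defines) is first added to include/net/devlink.h and devlink-params.rst, the same way enable_iwarp was, and rtl_dash_is_enabled()/rtl_dash_enable() stand in for whatever helpers the driver already has internally. It targets the current net-next API, where devlink_alloc() takes the struct device and devlink_register() returns void and is called after the params are registered.

#include <net/devlink.h>

struct rtl_devlink_priv {
	struct rtl8169_private *tp;	/* back-pointer to the driver private data */
};

static int rtl_dl_dash_get(struct devlink *dl, u32 id,
			   struct devlink_param_gset_ctx *ctx)
{
	struct rtl_devlink_priv *priv = devlink_priv(dl);

	/* report the current DASH state; helper name is a placeholder */
	ctx->val.vbool = rtl_dash_is_enabled(priv->tp);
	return 0;
}

static int rtl_dl_dash_set(struct devlink *dl, u32 id,
			   struct devlink_param_gset_ctx *ctx)
{
	struct rtl_devlink_priv *priv = devlink_priv(dl);

	/* enable/disable DASH in firmware; helper name is a placeholder */
	return rtl_dash_enable(priv->tp, ctx->val.vbool);
}

static const struct devlink_param rtl_devlink_params[] = {
	/* ENABLE_DASH is the new generic ID that would have to be defined */
	DEVLINK_PARAM_GENERIC(ENABLE_DASH,
			      BIT(DEVLINK_PARAM_CMODE_RUNTIME),
			      rtl_dl_dash_get, rtl_dl_dash_set, NULL),
};

static const struct devlink_ops rtl_devlink_ops = {};

static int rtl_devlink_setup(struct rtl8169_private *tp, struct device *dev)
{
	struct rtl_devlink_priv *priv;
	struct devlink *dl;
	int err;

	dl = devlink_alloc(&rtl_devlink_ops, sizeof(*priv), dev);
	if (!dl)
		return -ENOMEM;

	priv = devlink_priv(dl);
	priv->tp = tp;

	err = devlink_params_register(dl, rtl_devlink_params,
				      ARRAY_SIZE(rtl_devlink_params));
	if (err) {
		devlink_free(dl);
		return err;
	}

	devlink_register(dl);	/* returns void as of net-next, call it last */
	return 0;
}

The teardown path would mirror this: devlink_unregister(), devlink_params_unregister(), then devlink_free(). With that in place, something like "devlink dev param set pci/0000:01:00.0 name enable_dash value true cmode runtime" is roughly what the user-visible knob would look like.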
From lkp at intel.com Wed Dec 1 07:00:29 2021 From: lkp at intel.com (kernel test robot) Date: Wed, 01 Dec 2021 15:00:29 +0800 Subject: [Intel-wired-lan] [tnguy-next-queue:40GbE] BUILD SUCCESS 64430f70ba6fcd5872ac190f4ae3ddee3f48f00d Message-ID: <61a71d8d.E5+5nLAb2vvC3tFw%lkp@intel.com> tree/branch: https://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue.git 40GbE branch HEAD: 64430f70ba6fcd5872ac190f4ae3ddee3f48f00d iavf: Fix displaying queue statistics shown by ethtool elapsed time: 725m configs tested: 118 configs skipped: 3 The following configs have been built successfully. More configs may be tested in the coming days. gcc tested configs: arm defconfig arm allyesconfig arm allmodconfig arm64 allyesconfig arm64 defconfig i386 randconfig-c001-20211130 powerpc maple_defconfig sh se7722_defconfig sh titan_defconfig mips maltaup_xpa_defconfig arm colibri_pxa270_defconfig arm vt8500_v6_v7_defconfig mips rt305x_defconfig m68k mvme16x_defconfig arm xcep_defconfig powerpc ppc6xx_defconfig powerpc warp_defconfig powerpc bamboo_defconfig arc alldefconfig m68k m5272c3_defconfig arm qcom_defconfig sh se7712_defconfig arm randconfig-c002-20211129 arm randconfig-c002-20211128 ia64 defconfig ia64 allmodconfig ia64 allyesconfig m68k defconfig m68k allyesconfig m68k allmodconfig nios2 defconfig arc allyesconfig nds32 allnoconfig nds32 defconfig nios2 allyesconfig csky defconfig alpha defconfig alpha allyesconfig arc defconfig sh allmodconfig h8300 allyesconfig xtensa allyesconfig parisc defconfig s390 allmodconfig parisc allyesconfig s390 defconfig s390 allyesconfig i386 allyesconfig sparc allyesconfig sparc defconfig i386 defconfig i386 debian-10.3-kselftests i386 debian-10.3 mips allmodconfig mips allyesconfig powerpc allyesconfig powerpc allmodconfig powerpc allnoconfig i386 randconfig-a005-20211130 i386 randconfig-a002-20211130 i386 randconfig-a006-20211130 i386 randconfig-a004-20211130 i386 randconfig-a003-20211130 i386 randconfig-a001-20211130 x86_64 randconfig-a011-20211128 x86_64 randconfig-a014-20211128 x86_64 randconfig-a012-20211128 x86_64 randconfig-a016-20211128 x86_64 randconfig-a013-20211128 x86_64 randconfig-a015-20211128 i386 randconfig-a015-20211128 i386 randconfig-a016-20211128 i386 randconfig-a013-20211128 i386 randconfig-a012-20211128 i386 randconfig-a014-20211128 i386 randconfig-a011-20211128 arc randconfig-r043-20211128 s390 randconfig-r044-20211128 riscv randconfig-r042-20211128 riscv nommu_k210_defconfig riscv allyesconfig riscv nommu_virt_defconfig riscv allnoconfig riscv defconfig riscv rv32_defconfig riscv allmodconfig um x86_64_defconfig um i386_defconfig x86_64 allyesconfig x86_64 defconfig x86_64 rhel-8.3 x86_64 rhel-8.3-func x86_64 kexec x86_64 rhel-8.3-kselftests clang tested configs: x86_64 randconfig-a001-20211128 x86_64 randconfig-a006-20211128 x86_64 randconfig-a003-20211128 x86_64 randconfig-a005-20211128 x86_64 randconfig-a004-20211128 x86_64 randconfig-a002-20211128 x86_64 randconfig-a014-20211130 x86_64 randconfig-a013-20211130 x86_64 randconfig-a012-20211130 x86_64 randconfig-a011-20211130 x86_64 randconfig-a016-20211130 x86_64 randconfig-a015-20211130 i386 randconfig-a013-20211129 i386 randconfig-a012-20211129 i386 randconfig-a011-20211129 i386 randconfig-a015-20211129 i386 randconfig-a014-20211129 i386 randconfig-a016-20211129 hexagon randconfig-r045-20211129 hexagon randconfig-r041-20211129 s390 randconfig-r044-20211129 riscv randconfig-r042-20211129 hexagon randconfig-r045-20211128 hexagon randconfig-r041-20211128 --- 0-DAY CI Kernel Test Service, 
Intel Corporation https://lists.01.org/hyperkitty/list/kbuild-all at lists.01.org From lkp at intel.com Wed Dec 1 07:31:46 2021 From: lkp at intel.com (kernel test robot) Date: Wed, 01 Dec 2021 15:31:46 +0800 Subject: [Intel-wired-lan] [tnguy-next-queue:100GbE] BUILD SUCCESS 244714da8d5d088faa2e0e32ba84ca1913a093ef Message-ID: <61a724e2.E1bMfAI0ToCgQ59Y%lkp@intel.com> tree/branch: https://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue.git 100GbE branch HEAD: 244714da8d5d088faa2e0e32ba84ca1913a093ef net/ice: Remove unused enum elapsed time: 757m configs tested: 126 configs skipped: 3 The following configs have been built successfully. More configs may be tested in the coming days. gcc tested configs: arm allmodconfig arm defconfig arm64 allyesconfig arm64 defconfig arm allyesconfig i386 randconfig-c001-20211130 um alldefconfig mips ci20_defconfig sh rsk7201_defconfig powerpc maple_defconfig sh se7722_defconfig sh titan_defconfig mips maltaup_xpa_defconfig arm colibri_pxa270_defconfig sh polaris_defconfig sh landisk_defconfig ia64 bigsur_defconfig powerpc stx_gp3_defconfig mips lemote2f_defconfig m68k defconfig arm vt8500_v6_v7_defconfig mips rt305x_defconfig m68k mvme16x_defconfig arm xcep_defconfig powerpc ppc6xx_defconfig powerpc warp_defconfig arc vdk_hs38_defconfig sh shmin_defconfig arm bcm2835_defconfig arm randconfig-c002-20211129 arm randconfig-c002-20211128 ia64 allmodconfig ia64 defconfig ia64 allyesconfig m68k allyesconfig m68k allmodconfig nios2 defconfig arc allyesconfig nds32 allnoconfig nds32 defconfig nios2 allyesconfig csky defconfig alpha defconfig alpha allyesconfig xtensa allyesconfig h8300 allyesconfig arc defconfig sh allmodconfig parisc defconfig s390 allyesconfig s390 allmodconfig parisc allyesconfig s390 defconfig i386 allyesconfig sparc allyesconfig sparc defconfig i386 defconfig i386 debian-10.3-kselftests i386 debian-10.3 mips allyesconfig mips allmodconfig powerpc allyesconfig powerpc allmodconfig powerpc allnoconfig i386 randconfig-a005-20211130 i386 randconfig-a002-20211130 i386 randconfig-a006-20211130 i386 randconfig-a004-20211130 i386 randconfig-a003-20211130 i386 randconfig-a001-20211130 i386 randconfig-a001-20211129 i386 randconfig-a002-20211129 i386 randconfig-a005-20211129 i386 randconfig-a004-20211129 i386 randconfig-a003-20211129 i386 randconfig-a006-20211129 x86_64 randconfig-a011-20211128 x86_64 randconfig-a014-20211128 x86_64 randconfig-a012-20211128 x86_64 randconfig-a016-20211128 x86_64 randconfig-a013-20211128 x86_64 randconfig-a015-20211128 i386 randconfig-a015-20211128 i386 randconfig-a016-20211128 i386 randconfig-a013-20211128 i386 randconfig-a012-20211128 i386 randconfig-a014-20211128 i386 randconfig-a011-20211128 arc randconfig-r043-20211129 riscv nommu_k210_defconfig riscv allyesconfig riscv nommu_virt_defconfig riscv allnoconfig riscv defconfig riscv rv32_defconfig riscv allmodconfig um x86_64_defconfig um i386_defconfig x86_64 allyesconfig x86_64 rhel-8.3-kselftests x86_64 defconfig x86_64 rhel-8.3 x86_64 rhel-8.3-func x86_64 kexec clang tested configs: x86_64 randconfig-a001-20211128 x86_64 randconfig-a003-20211128 x86_64 randconfig-a005-20211128 x86_64 randconfig-a004-20211128 x86_64 randconfig-a002-20211128 x86_64 randconfig-a006-20211128 x86_64 randconfig-a014-20211130 x86_64 randconfig-a016-20211130 x86_64 randconfig-a013-20211130 x86_64 randconfig-a012-20211130 x86_64 randconfig-a015-20211130 x86_64 randconfig-a011-20211130 i386 randconfig-a015-20211129 i386 randconfig-a016-20211129 i386 randconfig-a013-20211129 i386 
randconfig-a012-20211129 i386 randconfig-a014-20211129 i386 randconfig-a011-20211129 hexagon randconfig-r045-20211129 hexagon randconfig-r041-20211129 s390 randconfig-r044-20211129 riscv randconfig-r042-20211129 --- 0-DAY CI Kernel Test Service, Intel Corporation https://lists.01.org/hyperkitty/list/kbuild-all at lists.01.org From sassmann at kpanic.de Wed Dec 1 08:14:34 2021 From: sassmann at kpanic.de (Stefan Assmann) Date: Wed, 1 Dec 2021 09:14:34 +0100 Subject: [Intel-wired-lan] [PATCH net] iavf: do not override the adapter state in the watchdog task (again) Message-ID: <20211201081434.3977672-1-sassmann@kpanic.de> The watchdog task incorrectly changes the state to __IAVF_RESETTING, instead of letting the reset task take care of that. This was already resolved by 22c8fd71d3a5 iavf: do not override the adapter state in the watchdog task but the problem was reintroduced by the recent code refactoring in 45eebd62999d. Fixes: 45eebd62999d ("iavf: Refactor iavf state machine tracking") Signed-off-by: Stefan Assmann --- drivers/net/ethernet/intel/iavf/iavf_main.c | 1 - 1 file changed, 1 deletion(-) diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c index 14934a7a13ef..360dfb7594cb 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_main.c +++ b/drivers/net/ethernet/intel/iavf/iavf_main.c @@ -2085,7 +2085,6 @@ static void iavf_watchdog_task(struct work_struct *work) /* check for hw reset */ reg_val = rd32(hw, IAVF_VF_ARQLEN1) & IAVF_VF_ARQLEN1_ARQENABLE_MASK; if (!reg_val) { - iavf_change_state(adapter, __IAVF_RESETTING); adapter->flags |= IAVF_FLAG_RESET_PENDING; adapter->aq_required = 0; adapter->current_op = VIRTCHNL_OP_UNKNOWN; -- 2.31.1 From lkp at intel.com Wed Dec 1 10:08:27 2021 From: lkp at intel.com (kernel test robot) Date: Wed, 01 Dec 2021 18:08:27 +0800 Subject: [Intel-wired-lan] [tnguy-net-queue:dev-queue] BUILD SUCCESS 605ac048e635ee6418555e65ab9228de5e1524b8 Message-ID: <61a7499b./f+5cFsG0VDQwm93%lkp@intel.com> tree/branch: https://git.kernel.org/pub/scm/linux/kernel/git/tnguy/net-queue.git dev-queue branch HEAD: 605ac048e635ee6418555e65ab9228de5e1524b8 i40e: Fix queues reservation for XDP elapsed time: 729m configs tested: 161 configs skipped: 3 The following configs have been built successfully. More configs may be tested in the coming days. 
gcc tested configs: arm defconfig arm64 allyesconfig arm64 defconfig arm allyesconfig arm allmodconfig i386 randconfig-c001-20211128 sh se7750_defconfig powerpc pasemi_defconfig sh alldefconfig arm imx_v6_v7_defconfig powerpc acadia_defconfig m68k m5272c3_defconfig sh polaris_defconfig powerpc adder875_defconfig sh edosk7760_defconfig powerpc holly_defconfig arm footbridge_defconfig xtensa cadence_csp_defconfig sh rsk7264_defconfig sh ecovec24-romimage_defconfig arm mmp2_defconfig arm tegra_defconfig mips ci20_defconfig sh se7206_defconfig sh apsh4a3a_defconfig parisc generic-32bit_defconfig arm s3c2410_defconfig arm corgi_defconfig s390 alldefconfig arm multi_v4t_defconfig sh landisk_defconfig ia64 bigsur_defconfig powerpc stx_gp3_defconfig mips lemote2f_defconfig mips workpad_defconfig nios2 alldefconfig arm socfpga_defconfig mips decstation_defconfig arm vf610m4_defconfig arm mainstone_defconfig powerpc pq2fads_defconfig alpha alldefconfig powerpc mpc8313_rdb_defconfig mips rm200_defconfig arm randconfig-c002-20211128 ia64 allmodconfig ia64 defconfig ia64 allyesconfig m68k allmodconfig m68k defconfig m68k allyesconfig nios2 defconfig arc allyesconfig nds32 allnoconfig csky defconfig alpha defconfig nds32 defconfig alpha allyesconfig nios2 allyesconfig xtensa allyesconfig h8300 allyesconfig arc defconfig sh allmodconfig parisc defconfig s390 allyesconfig s390 allmodconfig parisc allyesconfig s390 defconfig i386 allyesconfig sparc allyesconfig sparc defconfig i386 defconfig i386 debian-10.3-kselftests i386 debian-10.3 mips allyesconfig mips allmodconfig powerpc allyesconfig powerpc allmodconfig powerpc allnoconfig x86_64 randconfig-a006-20211201 x86_64 randconfig-a005-20211201 x86_64 randconfig-a001-20211201 x86_64 randconfig-a002-20211201 x86_64 randconfig-a004-20211201 x86_64 randconfig-a003-20211201 i386 randconfig-a005-20211130 i386 randconfig-a002-20211130 i386 randconfig-a006-20211130 i386 randconfig-a004-20211130 i386 randconfig-a003-20211130 i386 randconfig-a001-20211130 i386 randconfig-a001-20211201 i386 randconfig-a005-20211201 i386 randconfig-a003-20211201 i386 randconfig-a002-20211201 i386 randconfig-a006-20211201 i386 randconfig-a004-20211201 x86_64 randconfig-a011-20211128 x86_64 randconfig-a014-20211128 x86_64 randconfig-a012-20211128 x86_64 randconfig-a016-20211128 x86_64 randconfig-a013-20211128 x86_64 randconfig-a015-20211128 i386 randconfig-a015-20211128 i386 randconfig-a016-20211128 i386 randconfig-a013-20211128 i386 randconfig-a012-20211128 i386 randconfig-a014-20211128 i386 randconfig-a011-20211128 arc randconfig-r043-20211128 s390 randconfig-r044-20211128 riscv randconfig-r042-20211128 riscv nommu_k210_defconfig riscv allyesconfig riscv nommu_virt_defconfig riscv allnoconfig riscv defconfig riscv rv32_defconfig riscv allmodconfig x86_64 rhel-8.3-kselftests um x86_64_defconfig um i386_defconfig x86_64 allyesconfig x86_64 defconfig x86_64 rhel-8.3 x86_64 rhel-8.3-func x86_64 kexec clang tested configs: arm randconfig-c002-20211201 x86_64 randconfig-c007-20211201 riscv randconfig-c006-20211201 mips randconfig-c004-20211201 i386 randconfig-c001-20211201 powerpc randconfig-c003-20211201 s390 randconfig-c005-20211201 s390 randconfig-c005-20211128 i386 randconfig-c001-20211128 riscv randconfig-c006-20211128 arm randconfig-c002-20211128 powerpc randconfig-c003-20211128 x86_64 randconfig-c007-20211128 mips randconfig-c004-20211128 i386 randconfig-a001-20211128 i386 randconfig-a002-20211128 i386 randconfig-a006-20211128 i386 randconfig-a005-20211128 i386 
randconfig-a004-20211128 i386 randconfig-a003-20211128 i386 randconfig-a011-20211130 i386 randconfig-a015-20211130 i386 randconfig-a012-20211130 i386 randconfig-a013-20211130 i386 randconfig-a014-20211130 i386 randconfig-a016-20211130 x86_64 randconfig-a001-20211128 x86_64 randconfig-a006-20211128 x86_64 randconfig-a003-20211128 x86_64 randconfig-a005-20211128 x86_64 randconfig-a004-20211128 x86_64 randconfig-a002-20211128 hexagon randconfig-r045-20211129 hexagon randconfig-r041-20211129 s390 randconfig-r044-20211129 riscv randconfig-r042-20211129 --- 0-DAY CI Kernel Test Service, Intel Corporation https://lists.01.org/hyperkitty/list/kbuild-all at lists.01.org From lkp at intel.com Wed Dec 1 10:53:56 2021 From: lkp at intel.com (kernel test robot) Date: Wed, 01 Dec 2021 18:53:56 +0800 Subject: [Intel-wired-lan] [tnguy-next-queue:1GbE] BUILD SUCCESS f51b5e2b5943d26387d2abb463bce2b4bd0a4a8d Message-ID: <61a75444.gcwuawwpH8Rfe5RE%lkp@intel.com> tree/branch: https://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue.git 1GbE branch HEAD: f51b5e2b5943d26387d2abb463bce2b4bd0a4a8d igc: enable XDP metadata in driver elapsed time: 959m configs tested: 134 configs skipped: 3 The following configs have been built successfully. More configs may be tested in the coming days. gcc tested configs: arm defconfig arm64 allyesconfig arm64 defconfig arm allyesconfig arm allmodconfig i386 randconfig-c001-20211130 powerpc adder875_defconfig sh edosk7760_defconfig powerpc holly_defconfig arm footbridge_defconfig xtensa cadence_csp_defconfig sh polaris_defconfig powerpc maple_defconfig sh se7722_defconfig sh titan_defconfig mips maltaup_xpa_defconfig arm colibri_pxa270_defconfig ia64 bigsur_defconfig powerpc stx_gp3_defconfig mips lemote2f_defconfig sh landisk_defconfig arm vt8500_v6_v7_defconfig mips rt305x_defconfig m68k mvme16x_defconfig arm xcep_defconfig powerpc ppc6xx_defconfig powerpc warp_defconfig arm mainstone_defconfig alpha alldefconfig powerpc mpc8313_rdb_defconfig mips rm200_defconfig powerpc pq2fads_defconfig arm randconfig-c002-20211129 arm randconfig-c002-20211128 ia64 defconfig ia64 allmodconfig ia64 allyesconfig m68k defconfig m68k allyesconfig m68k allmodconfig nios2 defconfig arc allyesconfig nds32 allnoconfig nds32 defconfig nios2 allyesconfig csky defconfig alpha defconfig alpha allyesconfig xtensa allyesconfig h8300 allyesconfig arc defconfig sh allmodconfig parisc defconfig s390 allyesconfig s390 allmodconfig parisc allyesconfig s390 defconfig i386 allyesconfig sparc allyesconfig sparc defconfig i386 defconfig i386 debian-10.3-kselftests i386 debian-10.3 mips allyesconfig mips allmodconfig powerpc allyesconfig powerpc allmodconfig powerpc allnoconfig x86_64 randconfig-a006-20211201 x86_64 randconfig-a005-20211201 x86_64 randconfig-a001-20211201 x86_64 randconfig-a002-20211201 x86_64 randconfig-a004-20211201 x86_64 randconfig-a003-20211201 i386 randconfig-a005-20211130 i386 randconfig-a002-20211130 i386 randconfig-a006-20211130 i386 randconfig-a004-20211130 i386 randconfig-a003-20211130 i386 randconfig-a001-20211130 x86_64 randconfig-a011-20211128 x86_64 randconfig-a014-20211128 x86_64 randconfig-a012-20211128 x86_64 randconfig-a016-20211128 x86_64 randconfig-a013-20211128 x86_64 randconfig-a015-20211128 i386 randconfig-a015-20211128 i386 randconfig-a016-20211128 i386 randconfig-a013-20211128 i386 randconfig-a012-20211128 i386 randconfig-a014-20211128 i386 randconfig-a011-20211128 arc randconfig-r043-20211128 s390 randconfig-r044-20211128 riscv randconfig-r042-20211128 riscv 
nommu_k210_defconfig riscv allyesconfig riscv nommu_virt_defconfig riscv allnoconfig riscv defconfig riscv rv32_defconfig riscv allmodconfig um x86_64_defconfig um i386_defconfig x86_64 allyesconfig x86_64 rhel-8.3-kselftests x86_64 defconfig x86_64 rhel-8.3 x86_64 rhel-8.3-func x86_64 kexec clang tested configs: x86_64 randconfig-a001-20211128 x86_64 randconfig-a006-20211128 x86_64 randconfig-a003-20211128 x86_64 randconfig-a005-20211128 x86_64 randconfig-a004-20211128 x86_64 randconfig-a002-20211128 i386 randconfig-a001-20211128 i386 randconfig-a002-20211128 i386 randconfig-a003-20211128 i386 randconfig-a005-20211128 i386 randconfig-a006-20211128 i386 randconfig-a004-20211128 i386 randconfig-a011-20211130 i386 randconfig-a015-20211130 i386 randconfig-a012-20211130 i386 randconfig-a013-20211130 i386 randconfig-a014-20211130 i386 randconfig-a016-20211130 hexagon randconfig-r045-20211129 hexagon randconfig-r041-20211129 s390 randconfig-r044-20211129 riscv randconfig-r042-20211129 hexagon randconfig-r045-20211128 hexagon randconfig-r041-20211128 --- 0-DAY CI Kernel Test Service, Intel Corporation https://lists.01.org/hyperkitty/list/kbuild-all at lists.01.org From lkp at intel.com Wed Dec 1 13:12:57 2021 From: lkp at intel.com (kernel test robot) Date: Wed, 01 Dec 2021 21:12:57 +0800 Subject: [Intel-wired-lan] [tnguy-next-queue:dev-queue] BUILD SUCCESS aa9e3fb7a6d237a034d97ae4daccce6b7e44392e Message-ID: <61a774d9.0x5ubOilEJvgNWmF%lkp@intel.com> tree/branch: https://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue.git dev-queue branch HEAD: aa9e3fb7a6d237a034d97ae4daccce6b7e44392e iavf: Restrict maximum VLAN filters for VIRTCHNL_VF_OFFLOAD_VLAN_V2 elapsed time: 1025m configs tested: 139 configs skipped: 3 The following configs have been built successfully. More configs may be tested in the coming days. 
gcc tested configs: arm defconfig arm64 allyesconfig arm64 defconfig arm allyesconfig arm allmodconfig um alldefconfig mips ci20_defconfig sh rsk7201_defconfig h8300 allyesconfig mips rs90_defconfig arm netwinder_defconfig arm versatile_defconfig arm magician_defconfig powerpc currituck_defconfig sh polaris_defconfig sh landisk_defconfig ia64 bigsur_defconfig powerpc stx_gp3_defconfig mips lemote2f_defconfig m68k defconfig arm qcom_defconfig arc alldefconfig m68k m5272c3_defconfig powerpc bamboo_defconfig sh se7712_defconfig arm randconfig-c002-20211128 ia64 allmodconfig ia64 defconfig ia64 allyesconfig m68k allmodconfig m68k allyesconfig nios2 defconfig arc allyesconfig nds32 allnoconfig nds32 defconfig nios2 allyesconfig csky defconfig alpha defconfig alpha allyesconfig xtensa allyesconfig arc defconfig sh allmodconfig parisc defconfig s390 allyesconfig s390 allmodconfig parisc allyesconfig s390 defconfig i386 defconfig i386 debian-10.3-kselftests i386 debian-10.3 i386 allyesconfig sparc defconfig sparc allyesconfig mips allmodconfig mips allyesconfig powerpc allyesconfig powerpc allmodconfig powerpc allnoconfig i386 randconfig-a005-20211130 i386 randconfig-a002-20211130 i386 randconfig-a006-20211130 i386 randconfig-a004-20211130 i386 randconfig-a003-20211130 i386 randconfig-a001-20211130 i386 randconfig-a001-20211129 i386 randconfig-a002-20211129 i386 randconfig-a006-20211129 i386 randconfig-a005-20211129 i386 randconfig-a004-20211129 i386 randconfig-a003-20211129 x86_64 randconfig-a011-20211128 x86_64 randconfig-a014-20211128 x86_64 randconfig-a012-20211128 x86_64 randconfig-a016-20211128 x86_64 randconfig-a013-20211128 x86_64 randconfig-a015-20211128 i386 randconfig-a015-20211128 i386 randconfig-a016-20211128 i386 randconfig-a013-20211128 i386 randconfig-a012-20211128 i386 randconfig-a014-20211128 i386 randconfig-a011-20211128 arc randconfig-r043-20211130 arc randconfig-r043-20211128 s390 randconfig-r044-20211128 riscv randconfig-r042-20211128 riscv nommu_k210_defconfig riscv allyesconfig riscv nommu_virt_defconfig riscv allnoconfig riscv defconfig riscv rv32_defconfig riscv allmodconfig um x86_64_defconfig um i386_defconfig x86_64 allyesconfig x86_64 rhel-8.3-kselftests x86_64 defconfig x86_64 rhel-8.3 x86_64 rhel-8.3-func x86_64 kexec clang tested configs: x86_64 randconfig-a001-20211128 x86_64 randconfig-a006-20211128 x86_64 randconfig-a003-20211128 x86_64 randconfig-a005-20211128 x86_64 randconfig-a004-20211128 x86_64 randconfig-a002-20211128 i386 randconfig-a001-20211128 i386 randconfig-a002-20211128 i386 randconfig-a006-20211128 i386 randconfig-a005-20211128 i386 randconfig-a004-20211128 i386 randconfig-a003-20211128 x86_64 randconfig-a014-20211130 x86_64 randconfig-a016-20211130 x86_64 randconfig-a013-20211130 x86_64 randconfig-a012-20211130 x86_64 randconfig-a015-20211130 x86_64 randconfig-a011-20211130 x86_64 randconfig-a016-20211201 x86_64 randconfig-a011-20211201 x86_64 randconfig-a013-20211201 x86_64 randconfig-a015-20211201 x86_64 randconfig-a012-20211201 x86_64 randconfig-a014-20211201 i386 randconfig-a015-20211129 i386 randconfig-a016-20211129 i386 randconfig-a013-20211129 i386 randconfig-a012-20211129 i386 randconfig-a014-20211129 i386 randconfig-a011-20211129 hexagon randconfig-r045-20211129 hexagon randconfig-r041-20211129 s390 randconfig-r044-20211129 riscv randconfig-r042-20211129 hexagon randconfig-r045-20211130 s390 randconfig-r044-20211130 hexagon randconfig-r041-20211130 riscv randconfig-r042-20211130 --- 0-DAY CI Kernel Test Service, Intel Corporation 
https://lists.01.org/hyperkitty/list/kbuild-all at lists.01.org From hayeswang at realtek.com Wed Dec 1 02:57:00 2021 From: hayeswang at realtek.com (Hayes Wang) Date: Wed, 1 Dec 2021 02:57:00 +0000 Subject: [Intel-wired-lan] [RFC PATCH 0/4] r8169: support dash In-Reply-To: <20211129095947.547a765f@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com> References: <20211129101315.16372-381-nic_swsd@realtek.com> <20211129095947.547a765f@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com> Message-ID: <918d75ea873a453ab2ba588a35d66ab6@realtek.com> Jakub Kicinski > Sent: Tuesday, November 30, 2021 2:00 AM > Subject: Re: [RFC PATCH 0/4] r8169: support dash > > On Mon, 29 Nov 2021 18:13:11 +0800 Hayes Wang wrote: > > These patches are used to support dash for RTL8111EP and > > RTL8111FP(RTL81117). > > If I understand correctly DASH is a DMTF standard for remote control. > > Since it's a standard I think we should have a common way of > configuring it across drivers. Excuse me. I am not familiar with it. What document or sample code could I start? > Is enable/disable the only configuration > that we will need? I don't think I could answer it before I understand the above way you mentioned. > We don't use sysfs too much these days, can we move the knob to > devlink, please? (If we only need an on/off switch generic devlink param > should be fine). Thanks. I would study devlink. Best Regards, Hayes From regressions at leemhuis.info Wed Dec 1 11:45:38 2021 From: regressions at leemhuis.info (Thorsten Leemhuis) Date: Wed, 1 Dec 2021 12:45:38 +0100 Subject: [Intel-wired-lan] [REGRESSION] Kernel 5.15 reboots / freezes upon ifup/ifdown In-Reply-To: <227af6b0692a0a57f5fb349d4d9c914301753209.camel@gmx.de> References: <924175a188159f4e03bd69908a91e606b574139b.camel@gmx.de> <8119066974f099aa11f08a4dad3653ac0ba32cd6.camel@gmx.de> <20211124153449.72c9cfcd@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com> <87a6htm4aj.fsf@intel.com> <227af6b0692a0a57f5fb349d4d9c914301753209.camel@gmx.de> Message-ID: Hi, this is your Linux kernel regression tracker speaking. On 25.11.21 09:41, Stefan Dietrich wrote: > > thanks - this was spot-on: disabling CONFIG_PCIE_PTM resolves the issue > for latest 5.15.4 (stable from git) for both manual and network-manager > NIC configuration. > > Let me know if I may assist in debugging this further. What is the status here? There afaics hasn't been any progress since nearly a week. Vinicius, do you still have this on your radar? Or was there some progress? Or is this really related to another issue, as Jakub suspected? Then it might be solved by the patch here: https://bugzilla.kernel.org/show_bug.cgi?id=215129 Ciao, Thorsten > On Wed, 2021-11-24 at 17:07 -0800, Vinicius Costa Gomes wrote: >> Hi Stefan, >> >> Jakub Kicinski writes: >> >>> On Wed, 24 Nov 2021 18:20:40 +0100 Stefan Dietrich wrote: >>>> Hi all, >>>> >>>> six exciting hours and a lot of learning later, here it is. >>>> Symptomatically, the critical commit appears for me between >>>> 5.14.21- >>>> 051421-generic and 5.15.0-051500rc2-generic - I did not find an >>>> amd64 >>>> build for rc1. >>>> >>>> Please see the git-bisect output below and let me know how I may >>>> further assist in debugging! >>> >>> Well, let's CC those involved, shall we? :) >>> >>> Thanks for working thru the bisection! 
>>> >>>> a90ec84837325df4b9a6798c2cc0df202b5680bd is the first bad commit >>>> commit a90ec84837325df4b9a6798c2cc0df202b5680bd >>>> Author: Vinicius Costa Gomes >>>> Date: Mon Jul 26 20:36:57 2021 -0700 >>>> >>>> igc: Add support for PTP getcrosststamp() >> >> Oh! That's interesting. >> >> Can you try disabling CONFIG_PCIE_PTM in your kernel config? If it >> works, then it's a point in favor that this commit is indeed the >> problematic one. >> >> I am still trying to think of what could be causing the lockup you >> are >> seeing. >> >> P.S.: As a Linux kernel regression tracker I'm getting a lot of reports on my table. I can only look briefly into most of them. Unfortunately therefore I sometimes will get things wrong or miss something important. I hope that's not the case here; if you think it is, don't hesitate to tell me about it in a public reply. That's in everyone's interest, as what I wrote above might be misleading to everyone reading this; any suggestion I gave they thus might sent someone reading this down the wrong rabbit hole, which none of us wants. BTW, I have no personal interest in this issue, which is tracked using regzbot, my Linux kernel regression tracking bot (https://linux-regtracking.leemhuis.info/regzbot/). I'm only posting this mail to get things rolling again and hence don't need to be CC on all further activities wrt to this regression. #regzbot poke From dima.ruinskiy at intel.com Wed Dec 1 16:38:06 2021 From: dima.ruinskiy at intel.com (Ruinskiy, Dima) Date: Wed, 1 Dec 2021 18:38:06 +0200 Subject: [Intel-wired-lan] [External] Re: [PATCH 3/3] Revert "e1000e: Add handshake with the CSME to support S0ix" In-Reply-To: <3fad0b95-fe97-8c4a-3ca9-3ed2a9fa2134@lenovo.com> References: <20211122161927.874291-1-kai.heng.feng@canonical.com> <20211122161927.874291-3-kai.heng.feng@canonical.com> <0ba36a30-95d3-a5f4-93c2-443cf2259756@intel.com> <3fad0b95-fe97-8c4a-3ca9-3ed2a9fa2134@lenovo.com> Message-ID: On 30/11/2021 17:52, Mark Pearson wrote: > Hi Sasha > > On 2021-11-28 08:23, Sasha Neftin wrote: >> On 11/22/2021 18:19, Kai-Heng Feng wrote: >>> This reverts commit 3e55d231716ea361b1520b801c6778c4c48de102. >>> >>> Bugzilla: >>> https://bugzilla.kernel.org/show_bug.cgi?id=214821>>> >>> Signed-off-by: Kai-Heng Feng >>> --- > >>> >> Hello Kai-Heng, >> I believe it is the wrong approach. Reverting this patch will put >> corporate systems in an unpredictable state. SW will perform s0ix flow >> independent to CSME. (The CSME firmware will continue run >> independently.) LAN controller could be in an unknown state. >> Please, afford us to continue to debug the problem (it is could be >> incredible complexity) >> >> You always can skip the s0ix flow on problematic corporate systems by >> using privilege flag: ethtool --set-priv-flags enp0s31f6 s0ix-enabled off >> >> Also, there is no impact on consumer systems. >> Sasha > > I know we've discussed this offline, and your team are working on the > correct fix but I wanted to check based on your comments above that "it > was complex". I thought, and maybe misunderstood, that it was going to > be relatively simple to disable the change for older CPUs - which is the > biggest problem caused by the patch. > > Right now it's breaking networking for folk who happen to have a vPro > Tigerlake (and I believe even potentially Cometlake or older) system. I > think the impact of that could potentially be quite severe. 
> > I understand not wanting to revert the change for the ADL platforms I > believe this is targeting and to fix this instead - but your comment > made me nervous that Linux users on older Intel based platforms are in > for a long and painful wait - it is likely a lot of users.... > > Can you or Dima confirm the fix for older platforms will be available > soon? I appreciate the ADL platform might take a bit more work and time > to get right. > > Thanks > Mark > Hi Mark, What we currently see is that the issue manifests itself similarly on ADL and TGL platforms. Thus, the fix will likely be the same for both. If we cannot find a proper fix soon, we will provide a workaround (for example by temporary disabling the feature on vPro platforms until we do have a fix). This can be done without reverting the patch series, and I don't see much value in selectively disabling it for CML/TGL while leaving it on for ADL, unless our ongoing debug shows otherwise. --Dima From vinicius.gomes at intel.com Wed Dec 1 17:47:52 2021 From: vinicius.gomes at intel.com (Vinicius Costa Gomes) Date: Wed, 01 Dec 2021 09:47:52 -0800 Subject: [Intel-wired-lan] [REGRESSION] Kernel 5.15 reboots / freezes upon ifup/ifdown In-Reply-To: References: <924175a188159f4e03bd69908a91e606b574139b.camel@gmx.de> <8119066974f099aa11f08a4dad3653ac0ba32cd6.camel@gmx.de> <20211124153449.72c9cfcd@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com> <87a6htm4aj.fsf@intel.com> <227af6b0692a0a57f5fb349d4d9c914301753209.camel@gmx.de> Message-ID: <87r1awtdx3.fsf@intel.com> Hi, Thorsten Leemhuis writes: > Hi, this is your Linux kernel regression tracker speaking. > > On 25.11.21 09:41, Stefan Dietrich wrote: >> >> thanks - this was spot-on: disabling CONFIG_PCIE_PTM resolves the issue >> for latest 5.15.4 (stable from git) for both manual and network-manager >> NIC configuration. >> >> Let me know if I may assist in debugging this further. > > What is the status here? There afaics hasn't been any progress since > nearly a week. > > Vinicius, do you still have this on your radar? Or was there some progress? > > Or is this really related to another issue, as Jakub suspected? Then it > might be solved by the patch here: > > https://bugzilla.kernel.org/show_bug.cgi?id=215129 What I am thinking right now is that we are facing a similar problem as the bug above, only in the igc driver. The difference is that it's the PCIe PTM messages (from the PCIe root) that are triggering the deadlock in the suspend/resume path in igc. I will produce a patch in a few moments, very similar to the one in the bug report, let's see if it helps. > > Ciao, Thorsten > >> On Wed, 2021-11-24 at 17:07 -0800, Vinicius Costa Gomes wrote: >>> Hi Stefan, >>> >>> Jakub Kicinski writes: >>> >>>> On Wed, 24 Nov 2021 18:20:40 +0100 Stefan Dietrich wrote: >>>>> Hi all, >>>>> >>>>> six exciting hours and a lot of learning later, here it is. >>>>> Symptomatically, the critical commit appears for me between >>>>> 5.14.21- >>>>> 051421-generic and 5.15.0-051500rc2-generic - I did not find an >>>>> amd64 >>>>> build for rc1. >>>>> >>>>> Please see the git-bisect output below and let me know how I may >>>>> further assist in debugging! >>>> >>>> Well, let's CC those involved, shall we? :) >>>> >>>> Thanks for working thru the bisection! 
>>>> >>>>> a90ec84837325df4b9a6798c2cc0df202b5680bd is the first bad commit >>>>> commit a90ec84837325df4b9a6798c2cc0df202b5680bd >>>>> Author: Vinicius Costa Gomes >>>>> Date: Mon Jul 26 20:36:57 2021 -0700 >>>>> >>>>> igc: Add support for PTP getcrosststamp() >>> >>> Oh! That's interesting. >>> >>> Can you try disabling CONFIG_PCIE_PTM in your kernel config? If it >>> works, then it's a point in favor that this commit is indeed the >>> problematic one. >>> >>> I am still trying to think of what could be causing the lockup you >>> are >>> seeing. >>> >>> > > P.S.: As a Linux kernel regression tracker I'm getting a lot of reports > on my table. I can only look briefly into most of them. Unfortunately > therefore I sometimes will get things wrong or miss something important. > I hope that's not the case here; if you think it is, don't hesitate to > tell me about it in a public reply. That's in everyone's interest, as > what I wrote above might be misleading to everyone reading this; any > suggestion I gave they thus might sent someone reading this down the > wrong rabbit hole, which none of us wants. > > BTW, I have no personal interest in this issue, which is tracked using > regzbot, my Linux kernel regression tracking bot > (https://linux-regtracking.leemhuis.info/regzbot/). I'm only posting > this mail to get things rolling again and hence don't need to be CC on > all further activities wrt to this regression. > > #regzbot poke -- Vinicius From maciej.machnikowski at intel.com Wed Dec 1 18:02:04 2021 From: maciej.machnikowski at intel.com (Maciej Machnikowski) Date: Wed, 1 Dec 2021 19:02:04 +0100 Subject: [Intel-wired-lan] [PATCH v4 net-next 0/4] Add ethtool interface for SyncE Message-ID: <20211201180208.640179-1-maciej.machnikowski@intel.com> Synchronous Ethernet networks use a physical layer clock to syntonize the frequency across different network elements. Basic SyncE node defined in the ITU-T G.8264 consist of an Ethernet Equipment Clock (EEC) and have the ability to recover synchronization from the synchronization inputs - either traffic interfaces or external frequency sources. The EEC can synchronize its frequency (syntonize) to any of those sources. It is also able to select synchronization source through priority tables and synchronization status messaging. 
It also provides the necessary filtering and holdover capabilities.

This patch series introduces a basic interface for reading and
configuring recovered clocks on a SyncE-capable device.

v4:
 - Dropped EEC_STATE reporting (TBD: DPLL subsystem)
 - moved recovered clock configuration to ethtool netlink
v3:
 - remove RTM_GETRCLKRANGE
 - return state of all possible pins in the RTM_GETRCLKSTATE
 - clarify documentation
v2:
 - improved documentation
 - fixed kdoc warning

RFC history:
v2:
 - removed whitespace changes
 - fix issues reported by test robot
v3:
 - Changed naming from SyncE to EEC
 - Clarify cover letter and commit message for patch 1
v4:
 - Removed sync_source and pin_idx info
 - Changed one structure to attributes
 - Added EEC_SRC_PORT flag to indicate that the EEC is synchronized
   to the recovered clock of a port that returns the state
v5:
 - add EEC source as an optional attribute
 - implement support for recovered clocks
 - align states returned by EEC to ITU-T G.781
v6:
 - fix EEC clock state reporting
 - add documentation
 - fix descriptions in code comments

Maciej Machnikowski (4):
  ice: add support detecting features based on netlist
  ethtool: Add ability to configure recovered clock for SyncE feature
  ice: add support for monitoring SyncE DPLL state
  ice: add support for SyncE recovered clocks

 Documentation/networking/ethtool-netlink.rst |  67 +++++
 drivers/net/ethernet/intel/ice/ice.h         |   7 +
 .../net/ethernet/intel/ice/ice_adminq_cmd.h  |  70 ++++-
 drivers/net/ethernet/intel/ice/ice_common.c  | 224 +++++++++++++++
 drivers/net/ethernet/intel/ice/ice_common.h  |  20 +-
 drivers/net/ethernet/intel/ice/ice_devids.h  |   3 +
 drivers/net/ethernet/intel/ice/ice_ethtool.c |  97 +++++++
 drivers/net/ethernet/intel/ice/ice_lib.c     |   6 +-
 drivers/net/ethernet/intel/ice/ice_ptp.c     |  35 +++
 drivers/net/ethernet/intel/ice/ice_ptp_hw.c  |  49 ++++
 drivers/net/ethernet/intel/ice/ice_ptp_hw.h  |  36 +++
 drivers/net/ethernet/intel/ice/ice_type.h    |   1 +
 include/linux/ethtool.h                      |   9 +
 include/uapi/linux/ethtool_netlink.h         |  21 ++
 net/ethtool/Makefile                         |   3 +-
 net/ethtool/netlink.c                        |  20 ++
 net/ethtool/netlink.h                        |   4 +
 net/ethtool/synce.c                          | 267 ++++++++++++++++++
 18 files changed, 935 insertions(+), 4 deletions(-)
 create mode 100644 net/ethtool/synce.c

-- 
2.26.3

From maciej.machnikowski at intel.com Wed Dec 1 18:02:05 2021
From: maciej.machnikowski at intel.com (Maciej Machnikowski)
Date: Wed, 1 Dec 2021 19:02:05 +0100
Subject: [Intel-wired-lan] [PATCH v4 net-next 1/4] ice: add support detecting features based on netlist
In-Reply-To: <20211201180208.640179-1-maciej.machnikowski@intel.com>
References: <20211201180208.640179-1-maciej.machnikowski@intel.com>
Message-ID: <20211201180208.640179-2-maciej.machnikowski@intel.com>

Add new functions to check the netlist of a given board for:
 - Recovered Clock device,
 - Clock Generation Unit,
 - Clock Multiplexer,

Initialize feature bits depending on detected components.
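As an illustration of how these bits are meant to be consumed, SyncE-related
paths elsewhere in the driver would be expected to check them before touching
any CGU or recovered-clock registers. The sketch below is illustrative only:
ice_is_feature_supported() and ice_pf_to_dev() are existing ice helpers, while
ice_example_setup_synce() and the log message are made up for the example.

    static void ice_example_setup_synce(struct ice_pf *pf)
    {
            /* Only wire up recovered-clock support when the netlist
             * reported the required components during probe.
             */
            if (!ice_is_feature_supported(pf, ICE_F_PHY_RCLK) ||
                !ice_is_feature_supported(pf, ICE_F_CGU)) {
                    dev_dbg(ice_pf_to_dev(pf),
                            "no CGU/PHY recovered clock in netlist, skipping SyncE setup\n");
                    return;
            }

            /* register recovered-clock ops, start DPLL monitoring, ... */
    }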
Signed-off-by: Maciej Machnikowski --- drivers/net/ethernet/intel/ice/ice.h | 2 + .../net/ethernet/intel/ice/ice_adminq_cmd.h | 7 +- drivers/net/ethernet/intel/ice/ice_common.c | 123 ++++++++++++++++++ drivers/net/ethernet/intel/ice/ice_common.h | 9 ++ drivers/net/ethernet/intel/ice/ice_lib.c | 6 +- drivers/net/ethernet/intel/ice/ice_ptp_hw.c | 1 + drivers/net/ethernet/intel/ice/ice_type.h | 1 + 7 files changed, 147 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h index b67ad51cbcc9..cb6b4c53584b 100644 --- a/drivers/net/ethernet/intel/ice/ice.h +++ b/drivers/net/ethernet/intel/ice/ice.h @@ -183,6 +183,8 @@ enum ice_feature { ICE_F_DSCP, + ICE_F_CGU, + ICE_F_PHY_RCLK, ICE_F_SMA_CTRL, ICE_F_MAX }; diff --git a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h index 4eef3488d86f..339c2a86f680 100644 --- a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h +++ b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h @@ -1297,6 +1297,8 @@ struct ice_aqc_link_topo_params { #define ICE_AQC_LINK_TOPO_NODE_TYPE_CAGE 6 #define ICE_AQC_LINK_TOPO_NODE_TYPE_MEZZ 7 #define ICE_AQC_LINK_TOPO_NODE_TYPE_ID_EEPROM 8 +#define ICE_AQC_LINK_TOPO_NODE_TYPE_CLK_CTRL 9 +#define ICE_AQC_LINK_TOPO_NODE_TYPE_CLK_MUX 10 #define ICE_AQC_LINK_TOPO_NODE_CTX_S 4 #define ICE_AQC_LINK_TOPO_NODE_CTX_M \ (0xF << ICE_AQC_LINK_TOPO_NODE_CTX_S) @@ -1333,7 +1335,10 @@ struct ice_aqc_link_topo_addr { struct ice_aqc_get_link_topo { struct ice_aqc_link_topo_addr addr; u8 node_part_num; -#define ICE_AQC_GET_LINK_TOPO_NODE_NR_PCA9575 0x21 +#define ICE_AQC_GET_LINK_TOPO_NODE_NR_PCA9575 0x21 +#define ICE_ACQ_GET_LINK_TOPO_NODE_NR_ZL30632_80032 0x24 +#define ICE_ACQ_GET_LINK_TOPO_NODE_NR_PKVL 0x31 +#define ICE_ACQ_GET_LINK_TOPO_NODE_NR_GEN_CLK_MUX 0x47 u8 rsvd[9]; }; diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c index b3066d0fea8b..35903b282885 100644 --- a/drivers/net/ethernet/intel/ice/ice_common.c +++ b/drivers/net/ethernet/intel/ice/ice_common.c @@ -274,6 +274,79 @@ ice_aq_get_link_topo_handle(struct ice_port_info *pi, u8 node_type, return ice_aq_send_cmd(pi->hw, &desc, NULL, 0, cd); } +/** + * ice_aq_get_netlist_node + * @hw: pointer to the hw struct + * @cmd: get_link_topo AQ structure + * @node_part_number: output node part number if node found + * @node_handle: output node handle parameter if node found + */ +enum ice_status +ice_aq_get_netlist_node(struct ice_hw *hw, struct ice_aqc_get_link_topo *cmd, + u8 *node_part_number, u16 *node_handle) +{ + struct ice_aq_desc desc; + + ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_link_topo); + desc.params.get_link_topo = *cmd; + + if (ice_aq_send_cmd(hw, &desc, NULL, 0, NULL)) + return ICE_ERR_NOT_SUPPORTED; + + if (node_handle) + *node_handle = + le16_to_cpu(desc.params.get_link_topo.addr.handle); + if (node_part_number) + *node_part_number = desc.params.get_link_topo.node_part_num; + + return ICE_SUCCESS; +} + +#define MAX_NETLIST_SIZE 10 +/** + * ice_find_netlist_node + * @hw: pointer to the hw struct + * @node_type_ctx: type of netlist node to look for + * @node_part_number: node part number to look for + * @node_handle: output parameter if node found - optional + * + * Find and return the node handle for a given node type and part number in the + * netlist. When found ICE_SUCCESS is returned, ICE_ERR_DOES_NOT_EXIST + * otherwise. If @node_handle provided, it would be set to found node handle. 
+ */ +enum ice_status +ice_find_netlist_node(struct ice_hw *hw, u8 node_type_ctx, u8 node_part_number, + u16 *node_handle) +{ + struct ice_aqc_get_link_topo cmd; + u8 rec_node_part_number; + enum ice_status status; + u16 rec_node_handle; + u8 idx; + + for (idx = 0; idx < MAX_NETLIST_SIZE; idx++) { + memset(&cmd, 0, sizeof(cmd)); + + cmd.addr.topo_params.node_type_ctx = + (node_type_ctx << ICE_AQC_LINK_TOPO_NODE_TYPE_S); + cmd.addr.topo_params.index = idx; + + status = ice_aq_get_netlist_node(hw, &cmd, + &rec_node_part_number, + &rec_node_handle); + if (status) + return status; + + if (rec_node_part_number == node_part_number) { + if (node_handle) + *node_handle = rec_node_handle; + return ICE_SUCCESS; + } + } + + return ICE_ERR_DOES_NOT_EXIST; +} + /** * ice_is_media_cage_present * @pi: port information structure @@ -5083,3 +5156,53 @@ bool ice_fw_supports_report_dflt_cfg(struct ice_hw *hw) } return false; } + +/** + * ice_is_phy_rclk_present_e810t + * @hw: pointer to the hw struct + * + * Check if the PHY Recovered Clock device is present in the netlist + */ +bool ice_is_phy_rclk_present_e810t(struct ice_hw *hw) +{ + if (ice_find_netlist_node(hw, ICE_AQC_LINK_TOPO_NODE_TYPE_CLK_CTRL, + ICE_ACQ_GET_LINK_TOPO_NODE_NR_PKVL, NULL)) + return false; + + return true; +} + +/** + * ice_is_cgu_present_e810t + * @hw: pointer to the hw struct + * + * Check if the Clock Generation Unit (CGU) device is present in the netlist + */ +bool ice_is_cgu_present_e810t(struct ice_hw *hw) +{ + if (!ice_find_netlist_node(hw, ICE_AQC_LINK_TOPO_NODE_TYPE_CLK_CTRL, + ICE_ACQ_GET_LINK_TOPO_NODE_NR_ZL30632_80032, + NULL)) { + hw->cgu_part_number = + ICE_ACQ_GET_LINK_TOPO_NODE_NR_ZL30632_80032; + return true; + } + return false; +} + +/** + * ice_is_clock_mux_present_e810t + * @hw: pointer to the hw struct + * + * Check if the Clock Multiplexer device is present in the netlist + */ +bool ice_is_clock_mux_present_e810t(struct ice_hw *hw) +{ + if (ice_find_netlist_node(hw, ICE_AQC_LINK_TOPO_NODE_TYPE_CLK_MUX, + ICE_ACQ_GET_LINK_TOPO_NODE_NR_GEN_CLK_MUX, + NULL)) + return false; + + return true; +} + diff --git a/drivers/net/ethernet/intel/ice/ice_common.h b/drivers/net/ethernet/intel/ice/ice_common.h index 65c1b3244264..b20a5c085246 100644 --- a/drivers/net/ethernet/intel/ice/ice_common.h +++ b/drivers/net/ethernet/intel/ice/ice_common.h @@ -89,6 +89,12 @@ ice_aq_get_phy_caps(struct ice_port_info *pi, bool qual_mods, u8 report_mode, struct ice_aqc_get_phy_caps_data *caps, struct ice_sq_cd *cd); enum ice_status +ice_aq_get_netlist_node(struct ice_hw *hw, struct ice_aqc_get_link_topo *cmd, + u8 *node_part_number, u16 *node_handle); +enum ice_status +ice_find_netlist_node(struct ice_hw *hw, u8 node_type_ctx, u8 node_part_number, + u16 *node_handle); +enum ice_status ice_aq_list_caps(struct ice_hw *hw, void *buf, u16 buf_size, u32 *cap_count, enum ice_adminq_opc opc, struct ice_sq_cd *cd); enum ice_status @@ -206,4 +212,7 @@ bool ice_fw_supports_lldp_fltr_ctrl(struct ice_hw *hw); enum ice_status ice_lldp_fltr_add_remove(struct ice_hw *hw, u16 vsi_num, bool add); bool ice_fw_supports_report_dflt_cfg(struct ice_hw *hw); +bool ice_is_phy_rclk_present_e810t(struct ice_hw *hw); +bool ice_is_cgu_present_e810t(struct ice_hw *hw); +bool ice_is_clock_mux_present_e810t(struct ice_hw *hw); #endif /* _ICE_COMMON_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index 09a3297cd63c..18c30b2912e3 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ 
b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -4188,8 +4188,12 @@ void ice_init_feature_support(struct ice_pf *pf) case ICE_DEV_ID_E810C_QSFP: case ICE_DEV_ID_E810C_SFP: ice_set_feature_support(pf, ICE_F_DSCP); - if (ice_is_e810t(&pf->hw)) + if (ice_is_clock_mux_present_e810t(&pf->hw)) ice_set_feature_support(pf, ICE_F_SMA_CTRL); + if (ice_is_phy_rclk_present_e810t(&pf->hw)) + ice_set_feature_support(pf, ICE_F_PHY_RCLK); + if (ice_is_cgu_present_e810t(&pf->hw)) + ice_set_feature_support(pf, ICE_F_CGU); break; default: break; diff --git a/drivers/net/ethernet/intel/ice/ice_ptp_hw.c b/drivers/net/ethernet/intel/ice/ice_ptp_hw.c index 29f947c0cd2e..aa257db36765 100644 --- a/drivers/net/ethernet/intel/ice/ice_ptp_hw.c +++ b/drivers/net/ethernet/intel/ice/ice_ptp_hw.c @@ -800,3 +800,4 @@ bool ice_is_pca9575_present(struct ice_hw *hw) return !status && handle; } + diff --git a/drivers/net/ethernet/intel/ice/ice_type.h b/drivers/net/ethernet/intel/ice/ice_type.h index 9e0c2923c62e..a9dc16641bd4 100644 --- a/drivers/net/ethernet/intel/ice/ice_type.h +++ b/drivers/net/ethernet/intel/ice/ice_type.h @@ -920,6 +920,7 @@ struct ice_hw { struct list_head rss_list_head; struct ice_mbx_snapshot mbx_snapshot; u16 io_expander_handle; + u8 cgu_part_number; }; /* Statistics collected by each port, VSI, VEB, and S-channel */ -- 2.26.3 From maciej.machnikowski at intel.com Wed Dec 1 18:02:06 2021 From: maciej.machnikowski at intel.com (Maciej Machnikowski) Date: Wed, 1 Dec 2021 19:02:06 +0100 Subject: [Intel-wired-lan] [PATCH v4 net-next 2/4] ethtool: Add ability to configure recovered clock for SyncE feature In-Reply-To: <20211201180208.640179-1-maciej.machnikowski@intel.com> References: <20211201180208.640179-1-maciej.machnikowski@intel.com> Message-ID: <20211201180208.640179-3-maciej.machnikowski@intel.com> Add netlink ethtool messages: - ETHTOOL_MSG_RCLK_GET - ETHTOOL_MSG_RCLK_SET Required for controling basic SyncE functionality - configuration of recovered reference clocks from PHY port on netdevice supporting SyncE. 
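The per-message semantics are described below. On the driver side, the patch
only asks for three new ethtool_ops callbacks (get_rclk_range, get_rclk_state,
set_rclk_out; see the include/linux/ethtool.h hunk further down). A minimal,
hypothetical implementation could look roughly like the sketch below - the
foo_* names and the enable-bit helpers are invented for the example and do not
refer to any real driver:

    static int foo_get_rclk_range(struct net_device *dev, u32 *min_idx,
                                  u32 *max_idx, struct netlink_ext_ack *extack)
    {
            *min_idx = 0;   /* first recovered-clock output pin */
            *max_idx = 1;   /* e.g. two RCLK output pins on this part */
            return 0;
    }

    static int foo_get_rclk_state(struct net_device *dev, u32 out_idx,
                                  bool *ena, struct netlink_ext_ack *extack)
    {
            /* report whether the pin currently outputs the recovered clock */
            *ena = foo_read_rclk_enable_bit(netdev_priv(dev), out_idx);
            return 0;
    }

    static int foo_set_rclk_out(struct net_device *dev, u32 out_idx, bool ena,
                                struct netlink_ext_ack *extack)
    {
            /* enable/disable feeding the PHY recovered clock onto the pin */
            return foo_write_rclk_enable_bit(netdev_priv(dev), out_idx, ena);
    }

    static const struct ethtool_ops foo_ethtool_ops = {
            /* ... existing callbacks ... */
            .get_rclk_range = foo_get_rclk_range,
            .get_rclk_state = foo_get_rclk_state,
            .set_rclk_out   = foo_set_rclk_out,
    };

In the core (net/ethtool/synce.c below), RCLK_GET calls get_rclk_state when a
pin index is supplied and get_rclk_range otherwise, while RCLK_SET validates
the requested index against the reported range before calling set_rclk_out.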
- ETHTOOL_MSG_RCLK_GET - will read the current state if pin index is given or will return allowed range if it's not - ETHTOOL_MSG_RCLK_SET - will configure given recovered clock pin to output recovered clock Co-developed-by: Arkadiusz Kubalewski Signed-off-by: Arkadiusz Kubalewski Signed-off-by: Maciej Machnikowski --- Documentation/networking/ethtool-netlink.rst | 67 +++++ include/linux/ethtool.h | 9 + include/uapi/linux/ethtool_netlink.h | 21 ++ net/ethtool/Makefile | 3 +- net/ethtool/netlink.c | 20 ++ net/ethtool/netlink.h | 4 + net/ethtool/synce.c | 267 +++++++++++++++++++ 7 files changed, 390 insertions(+), 1 deletion(-) create mode 100644 net/ethtool/synce.c diff --git a/Documentation/networking/ethtool-netlink.rst b/Documentation/networking/ethtool-netlink.rst index 9d98e0511249..51db9338785a 100644 --- a/Documentation/networking/ethtool-netlink.rst +++ b/Documentation/networking/ethtool-netlink.rst @@ -220,6 +220,8 @@ Userspace to kernel: ``ETHTOOL_MSG_PHC_VCLOCKS_GET`` get PHC virtual clocks info ``ETHTOOL_MSG_MODULE_SET`` set transceiver module parameters ``ETHTOOL_MSG_MODULE_GET`` get transceiver module parameters + ``ETHTOOL_RCLK_GET`` get recovered clock parameters + ``ETHTOOL_RCLK_SET`` set recovered clock parameters ===================================== ================================= Kernel to userspace: @@ -260,6 +262,8 @@ Kernel to userspace: ``ETHTOOL_MSG_STATS_GET_REPLY`` standard statistics ``ETHTOOL_MSG_PHC_VCLOCKS_GET_REPLY`` PHC virtual clocks info ``ETHTOOL_MSG_MODULE_GET_REPLY`` transceiver module parameters + ``ETHTOOL_MSG_RCLK_GET_REPLY`` reference recovered clock config + ``ETHTOOL_MSG_RCLK_NTF`` reference recovered clock config ======================================== ================================= ``GET`` requests are sent by userspace applications to retrieve device @@ -1598,6 +1602,67 @@ For SFF-8636 modules, low power mode is forced by the host according to table For CMIS modules, low power mode is forced by the host according to table 6-12 in revision 5.0 of the specification. +RCLK_GET +======== + +Get status of an output pin for PHY recovered frequency clock. + +Request contents: + + ====================================== ====== ========================== + ``ETHTOOL_A_RCLK_HEADER`` nested request header + ``ETHTOOL_A_RCLK_OUT_PIN_IDX`` u32 index of a pin + ====================================== ====== ========================== + +Kernel response contents: + + ====================================== ====== ========================== + ``ETHTOOL_A_RCLK_OUT_PIN_IDX`` u32 index of a pin + ``ETHTOOL_A_RCLK_PIN_FLAGS`` u32 state of a pin + ``ETHTOOL_A_RCLK_RANGE_MIN_PIN`` u32 min index of RCLK pins + ``ETHTOOL_A_RCLK_RANGE_MAX_PIN`` u32 max index of RCLK pins + ====================================== ====== ========================== + +Supported device can have mulitple reference recover clock pins available +to be used as source of frequency for a DPLL. +Once a pin on given port is enabled. The PHY recovered frequency is being +fed onto that pin, and can be used by DPLL to synchonize with its signal. +Pins don't have to start with index equal 0 - device can also have different +external sources pins. + +The ``ETHTOOL_A_RCLK_OUT_PIN_IDX`` is optional parameter. If present in +the RCLK_GET request, the ``ETHTOOL_A_RCLK_PIN_ENABLED`` is provided in a +response, it contatins state of the pin pointed by the index. Values are: + +.. 
kernel-doc:: include/uapi/linux/ethtool.h + :identifiers: ethtool_rclk_pin_state + +If ``ETHTOOL_A_RCLK_OUT_PIN_IDX`` is not present in the RCLK_GET request, +the range of available pins is returned: +``ETHTOOL_A_RCLK_RANGE_MIN_PIN`` is lowest possible index of a pin available +for recovering frequency from PHY. +``ETHTOOL_A_RCLK_RANGE_MAX_PIN`` is highest possible index of a pin available +for recovering frequency from PHY. + +RCLK_SET +========== + +Set status of an output pin for PHY recovered frequency clock. + +Request contents: + + ====================================== ====== ======================== + ``ETHTOOL_A_RCLK_HEADER`` nested request header + ``ETHTOOL_A_RCLK_OUT_PIN_IDX`` u32 index of a pin + ``ETHTOOL_A_RCLK_PIN_FLAGS`` u32 requested state + ====================================== ====== ======================== + +``ETHTOOL_A_RCLK_OUT_PIN_IDX`` is a index of a pin for which the change of +state is requested. Values of ``ETHTOOL_A_RCLK_PIN_ENABLED`` are: + +.. kernel-doc:: include/uapi/linux/ethtool.h + :identifiers: ethtool_rclk_pin_state + Request translation =================== @@ -1699,4 +1764,6 @@ are netlink only. n/a ``ETHTOOL_MSG_PHC_VCLOCKS_GET`` n/a ``ETHTOOL_MSG_MODULE_GET`` n/a ``ETHTOOL_MSG_MODULE_SET`` + n/a ``ETHTOOL_MSG_RCLK_GET`` + n/a ``ETHTOOL_MSG_RCLK_SET`` =================================== ===================================== diff --git a/include/linux/ethtool.h b/include/linux/ethtool.h index a26f37a27167..e344e5153f9b 100644 --- a/include/linux/ethtool.h +++ b/include/linux/ethtool.h @@ -614,6 +614,9 @@ struct ethtool_module_power_mode_params { * plugged-in. * @set_module_power_mode: Set the power mode policy for the plug-in module * used by the network device. + * @get_rclk_range: Get range of valid Reference Clock input pins for the port. + * @get_rclk_state: Get state of Reference Clock input signal pin. + * @set_rclk_out: Enable/disable Reference Clock input signal pin. * * All operations are optional (i.e. the function pointer may be set * to %NULL) and callers must take this into account. 
Callers must @@ -750,6 +753,12 @@ struct ethtool_ops { int (*set_module_power_mode)(struct net_device *dev, const struct ethtool_module_power_mode_params *params, struct netlink_ext_ack *extack); + int (*get_rclk_range)(struct net_device *dev, u32 *min_idx, + u32 *max_idx, struct netlink_ext_ack *extack); + int (*get_rclk_state)(struct net_device *dev, u32 out_idx, + bool *ena, struct netlink_ext_ack *extack); + int (*set_rclk_out)(struct net_device *dev, u32 out_idx, bool ena, + struct netlink_ext_ack *extack); }; int ethtool_check_ops(const struct ethtool_ops *ops); diff --git a/include/uapi/linux/ethtool_netlink.h b/include/uapi/linux/ethtool_netlink.h index cca6e474a085..9a860f052a36 100644 --- a/include/uapi/linux/ethtool_netlink.h +++ b/include/uapi/linux/ethtool_netlink.h @@ -49,6 +49,8 @@ enum { ETHTOOL_MSG_PHC_VCLOCKS_GET, ETHTOOL_MSG_MODULE_GET, ETHTOOL_MSG_MODULE_SET, + ETHTOOL_MSG_RCLK_GET, + ETHTOOL_MSG_RCLK_SET, /* add new constants above here */ __ETHTOOL_MSG_USER_CNT, @@ -94,6 +96,8 @@ enum { ETHTOOL_MSG_PHC_VCLOCKS_GET_REPLY, ETHTOOL_MSG_MODULE_GET_REPLY, ETHTOOL_MSG_MODULE_NTF, + ETHTOOL_MSG_RCLK_GET_REPLY, + ETHTOOL_MSG_RCLK_NTF, /* add new constants above here */ __ETHTOOL_MSG_KERNEL_CNT, @@ -853,6 +857,23 @@ enum { ETHTOOL_A_MODULE_MAX = (__ETHTOOL_A_MODULE_CNT - 1) }; +/* REF CLK */ + +enum { + ETHTOOL_A_RCLK_UNSPEC, + ETHTOOL_A_RCLK_HEADER, /* nest - _A_HEADER_* */ + ETHTOOL_A_RCLK_OUT_PIN_IDX, /* u32 */ + ETHTOOL_A_RCLK_PIN_FLAGS, /* u32 */ + ETHTOOL_A_RCLK_PIN_MIN, /* u32 */ + ETHTOOL_A_RCLK_PIN_MAX, /* u32 */ + + /* add new constants above here */ + __ETHTOOL_A_RCLK_CNT, + ETHTOOL_A_RCLK_MAX = (__ETHTOOL_A_RCLK_CNT - 1) +}; + +#define ETHTOOL_RCLK_PIN_FLAGS_ENA (1 << 0) + /* generic netlink info */ #define ETHTOOL_GENL_NAME "ethtool" #define ETHTOOL_GENL_VERSION 1 diff --git a/net/ethtool/Makefile b/net/ethtool/Makefile index b76432e70e6b..dd6de311a9c2 100644 --- a/net/ethtool/Makefile +++ b/net/ethtool/Makefile @@ -7,4 +7,5 @@ obj-$(CONFIG_ETHTOOL_NETLINK) += ethtool_nl.o ethtool_nl-y := netlink.o bitset.o strset.o linkinfo.o linkmodes.o \ linkstate.o debug.o wol.o features.o privflags.o rings.o \ channels.o coalesce.o pause.o eee.o tsinfo.o cabletest.o \ - tunnels.o fec.o eeprom.o stats.o phc_vclocks.o module.o + tunnels.o fec.o eeprom.o stats.o phc_vclocks.o module.o \ + synce.o diff --git a/net/ethtool/netlink.c b/net/ethtool/netlink.c index 38b44c0291b1..76ee82c687fe 100644 --- a/net/ethtool/netlink.c +++ b/net/ethtool/netlink.c @@ -283,6 +283,7 @@ ethnl_default_requests[__ETHTOOL_MSG_USER_CNT] = { [ETHTOOL_MSG_STATS_GET] = ðnl_stats_request_ops, [ETHTOOL_MSG_PHC_VCLOCKS_GET] = ðnl_phc_vclocks_request_ops, [ETHTOOL_MSG_MODULE_GET] = ðnl_module_request_ops, + [ETHTOOL_MSG_RCLK_GET] = ðnl_rclk_request_ops, }; static struct ethnl_dump_ctx *ethnl_dump_context(struct netlink_callback *cb) @@ -595,6 +596,7 @@ ethnl_default_notify_ops[ETHTOOL_MSG_KERNEL_MAX + 1] = { [ETHTOOL_MSG_EEE_NTF] = ðnl_eee_request_ops, [ETHTOOL_MSG_FEC_NTF] = ðnl_fec_request_ops, [ETHTOOL_MSG_MODULE_NTF] = ðnl_module_request_ops, + [ETHTOOL_MSG_RCLK_NTF] = ðnl_rclk_request_ops, }; /* default notification handler */ @@ -689,6 +691,7 @@ static const ethnl_notify_handler_t ethnl_notify_handlers[] = { [ETHTOOL_MSG_EEE_NTF] = ethnl_default_notify, [ETHTOOL_MSG_FEC_NTF] = ethnl_default_notify, [ETHTOOL_MSG_MODULE_NTF] = ethnl_default_notify, + [ETHTOOL_MSG_RCLK_NTF] = ethnl_default_notify, }; void ethtool_notify(struct net_device *dev, unsigned int cmd, const void *data) @@ -1018,6 +1021,23 @@ 
static const struct genl_ops ethtool_genl_ops[] = { .policy = ethnl_module_set_policy, .maxattr = ARRAY_SIZE(ethnl_module_set_policy) - 1, }, + { + .cmd = ETHTOOL_MSG_RCLK_GET, + .flags = GENL_UNS_ADMIN_PERM, + .doit = ethnl_default_doit, + .start = ethnl_default_start, + .dumpit = ethnl_default_dumpit, + .done = ethnl_default_done, + .policy = ethnl_rclk_get_policy, + .maxattr = ARRAY_SIZE(ethnl_rclk_get_policy) - 1, + }, + { + .cmd = ETHTOOL_MSG_RCLK_SET, + .flags = GENL_UNS_ADMIN_PERM, + .doit = ethnl_set_rclk, + .policy = ethnl_rclk_set_policy, + .maxattr = ARRAY_SIZE(ethnl_rclk_set_policy) - 1, + }, }; static const struct genl_multicast_group ethtool_nl_mcgrps[] = { diff --git a/net/ethtool/netlink.h b/net/ethtool/netlink.h index 490598e5eedd..bdeb559f0db7 100644 --- a/net/ethtool/netlink.h +++ b/net/ethtool/netlink.h @@ -338,6 +338,7 @@ extern const struct ethnl_request_ops ethnl_module_eeprom_request_ops; extern const struct ethnl_request_ops ethnl_stats_request_ops; extern const struct ethnl_request_ops ethnl_phc_vclocks_request_ops; extern const struct ethnl_request_ops ethnl_module_request_ops; +extern const struct ethnl_request_ops ethnl_rclk_request_ops; extern const struct nla_policy ethnl_header_policy[ETHTOOL_A_HEADER_FLAGS + 1]; extern const struct nla_policy ethnl_header_policy_stats[ETHTOOL_A_HEADER_FLAGS + 1]; @@ -376,6 +377,8 @@ extern const struct nla_policy ethnl_stats_get_policy[ETHTOOL_A_STATS_GROUPS + 1 extern const struct nla_policy ethnl_phc_vclocks_get_policy[ETHTOOL_A_PHC_VCLOCKS_HEADER + 1]; extern const struct nla_policy ethnl_module_get_policy[ETHTOOL_A_MODULE_HEADER + 1]; extern const struct nla_policy ethnl_module_set_policy[ETHTOOL_A_MODULE_POWER_MODE_POLICY + 1]; +extern const struct nla_policy ethnl_rclk_get_policy[ETHTOOL_A_RCLK_OUT_PIN_IDX + 1]; +extern const struct nla_policy ethnl_rclk_set_policy[ETHTOOL_A_RCLK_PIN_FLAGS + 1]; int ethnl_set_linkinfo(struct sk_buff *skb, struct genl_info *info); int ethnl_set_linkmodes(struct sk_buff *skb, struct genl_info *info); @@ -395,6 +398,7 @@ int ethnl_tunnel_info_start(struct netlink_callback *cb); int ethnl_tunnel_info_dumpit(struct sk_buff *skb, struct netlink_callback *cb); int ethnl_set_fec(struct sk_buff *skb, struct genl_info *info); int ethnl_set_module(struct sk_buff *skb, struct genl_info *info); +int ethnl_set_rclk(struct sk_buff *skb, struct genl_info *info); extern const char stats_std_names[__ETHTOOL_STATS_CNT][ETH_GSTRING_LEN]; extern const char stats_eth_phy_names[__ETHTOOL_A_STATS_ETH_PHY_CNT][ETH_GSTRING_LEN]; diff --git a/net/ethtool/synce.c b/net/ethtool/synce.c new file mode 100644 index 000000000000..f4ebb4c57d4d --- /dev/null +++ b/net/ethtool/synce.c @@ -0,0 +1,267 @@ +// SPDX-License-Identifier: GPL-2.0-only + +#include +#include "netlink.h" + +struct rclk_out_pin_info { + u32 idx; + bool valid; +}; + +struct rclk_request_data { + struct ethnl_req_info base; + struct rclk_out_pin_info out_pin; +}; + +struct rclk_pin_state_info { + u32 range_min; + u32 range_max; + u32 flags; + u32 idx; +}; + +struct rclk_reply_data { + struct ethnl_reply_data base; + struct rclk_pin_state_info pin_state; +}; + +#define RCLK_REPDATA(__reply_base) \ + container_of(__reply_base, struct rclk_reply_data, base) + +#define RCLK_REQDATA(__req_base) \ + container_of(__req_base, struct rclk_request_data, base) + +/* RCLK_GET */ + +const struct nla_policy +ethnl_rclk_get_policy[ETHTOOL_A_RCLK_OUT_PIN_IDX + 1] = { + [ETHTOOL_A_RCLK_HEADER] = NLA_POLICY_NESTED(ethnl_header_policy), + [ETHTOOL_A_RCLK_OUT_PIN_IDX] = 
{ .type = NLA_U32 }, +}; + +static int rclk_parse_request(struct ethnl_req_info *req_base, + struct nlattr **tb, + struct netlink_ext_ack *extack) +{ + struct rclk_request_data *req = RCLK_REQDATA(req_base); + + if (tb[ETHTOOL_A_RCLK_OUT_PIN_IDX]) { + req->out_pin.idx = nla_get_u32(tb[ETHTOOL_A_RCLK_OUT_PIN_IDX]); + req->out_pin.valid = true; + } + + return 0; +} + +static int rclk_state_get(struct net_device *dev, + struct rclk_reply_data *data, + struct netlink_ext_ack *extack, + u32 out_idx) +{ + const struct ethtool_ops *ops = dev->ethtool_ops; + bool pin_state; + int ret; + + if (!ops->get_rclk_state) + return -EOPNOTSUPP; + + ret = ops->get_rclk_state(dev, out_idx, &pin_state, extack); + if (ret) + return ret; + + data->pin_state.flags = pin_state ? ETHTOOL_RCLK_PIN_FLAGS_ENA : 0; + data->pin_state.idx = out_idx; + + return ret; +} + +static int rclk_range_get(struct net_device *dev, + struct rclk_reply_data *data, + struct netlink_ext_ack *extack) +{ + const struct ethtool_ops *ops = dev->ethtool_ops; + u32 min_idx, max_idx; + int ret; + + if (!ops->get_rclk_range) + return -EOPNOTSUPP; + + ret = ops->get_rclk_range(dev, &min_idx, &max_idx, extack); + if (ret) + return ret; + + data->pin_state.range_min = min_idx; + data->pin_state.range_max = max_idx; + + return ret; +} + +static int rclk_prepare_data(const struct ethnl_req_info *req_base, + struct ethnl_reply_data *reply_base, + struct genl_info *info) +{ + struct rclk_reply_data *reply = RCLK_REPDATA(reply_base); + struct rclk_request_data *request = RCLK_REQDATA(req_base); + struct netlink_ext_ack *extack = info ? info->extack : NULL; + struct net_device *dev = reply_base->dev; + int ret; + + memset(&reply->pin_state, 0, sizeof(reply->pin_state)); + ret = ethnl_ops_begin(dev); + if (ret < 0) + return ret; + + if (request->out_pin.valid) + ret = rclk_state_get(dev, reply, extack, + request->out_pin.idx); + else + ret = rclk_range_get(dev, reply, extack); + + ethnl_ops_complete(dev); + + return ret; +} + +static int rclk_fill_reply(struct sk_buff *skb, + const struct ethnl_req_info *req_base, + const struct ethnl_reply_data *reply_base) +{ + const struct rclk_reply_data *reply = RCLK_REPDATA(reply_base); + const struct rclk_request_data *request = RCLK_REQDATA(req_base); + + if (request->out_pin.valid) { + if (nla_put_u32(skb, ETHTOOL_A_RCLK_PIN_FLAGS, + reply->pin_state.flags)) + return -EMSGSIZE; + if (nla_put_u32(skb, ETHTOOL_A_RCLK_OUT_PIN_IDX, + reply->pin_state.idx)) + return -EMSGSIZE; + } else { + if (nla_put_u32(skb, ETHTOOL_A_RCLK_PIN_MIN, + reply->pin_state.range_min)) + return -EMSGSIZE; + if (nla_put_u32(skb, ETHTOOL_A_RCLK_PIN_MAX, + reply->pin_state.range_max)) + return -EMSGSIZE; + } + + return 0; +} + +static int rclk_reply_size(const struct ethnl_req_info *req_base, + const struct ethnl_reply_data *reply_base) +{ + const struct rclk_request_data *request = RCLK_REQDATA(req_base); + + if (request->out_pin.valid) + return nla_total_size(sizeof(u32)) + /* ETHTOOL_A_RCLK_PIN_FLAGS */ + nla_total_size(sizeof(u32)); /* ETHTOOL_A_RCLK_OUT_PIN_IDX */ + else + return nla_total_size(sizeof(u32)) + /* ETHTOOL_A_RCLK_PIN_MIN */ + nla_total_size(sizeof(u32)); /* ETHTOOL_A_RCLK_PIN_MAX */ +} + +const struct ethnl_request_ops ethnl_rclk_request_ops = { + .request_cmd = ETHTOOL_MSG_RCLK_GET, + .reply_cmd = ETHTOOL_MSG_RCLK_GET_REPLY, + .hdr_attr = ETHTOOL_A_RCLK_HEADER, + .req_info_size = sizeof(struct rclk_request_data), + .reply_data_size = sizeof(struct rclk_reply_data), + + .parse_request = rclk_parse_request, + 
.prepare_data = rclk_prepare_data, + .reply_size = rclk_reply_size, + .fill_reply = rclk_fill_reply, +}; + +/* RCLK SET */ + +const struct nla_policy +ethnl_rclk_set_policy[ETHTOOL_A_RCLK_PIN_FLAGS + 1] = { + [ETHTOOL_A_RCLK_HEADER] = NLA_POLICY_NESTED(ethnl_header_policy), + [ETHTOOL_A_RCLK_OUT_PIN_IDX] = { .type = NLA_U32 }, + [ETHTOOL_A_RCLK_PIN_FLAGS] = { .type = NLA_U32 }, +}; + +static int rclk_set_state(struct net_device *dev, struct nlattr **tb, + bool *p_mod, struct netlink_ext_ack *extack) +{ + const struct ethtool_ops *ops = dev->ethtool_ops; + bool old_state, new_state; + u32 min_idx, max_idx; + u32 out_idx; + int ret; + + if (!tb[ETHTOOL_A_RCLK_PIN_FLAGS] && + !tb[ETHTOOL_A_RCLK_OUT_PIN_IDX]) + return 0; + + if (!ops->set_rclk_out || !ops->get_rclk_range) { + NL_SET_ERR_MSG_ATTR(extack, + tb[ETHTOOL_A_RCLK_PIN_FLAGS], + "Setting recovered clock state is not supported by this device"); + return -EOPNOTSUPP; + } + + ret = ops->get_rclk_range(dev, &min_idx, &max_idx, extack); + if (ret) + return ret; + + out_idx = nla_get_u32(tb[ETHTOOL_A_RCLK_OUT_PIN_IDX]); + if (out_idx < min_idx || out_idx > max_idx) { + NL_SET_ERR_MSG_ATTR(extack, + tb[ETHTOOL_A_RCLK_OUT_PIN_IDX], + "Requested recovered clock pin index is out of range"); + return -EINVAL; + } + + ret = ops->get_rclk_state(dev, out_idx, &old_state, extack); + if (ret < 0) + return ret; + + new_state = !!(nla_get_u32(tb[ETHTOOL_A_RCLK_PIN_FLAGS]) & + ETHTOOL_RCLK_PIN_FLAGS_ENA); + + /* If state changed - flag need for sending the notification */ + *p_mod = old_state != new_state; + + return ops->set_rclk_out(dev, out_idx, new_state, extack); +} + +int ethnl_set_rclk(struct sk_buff *skb, struct genl_info *info) +{ + struct ethnl_req_info req_info = {}; + struct nlattr **tb = info->attrs; + struct net_device *dev; + bool mod = false; + int ret; + + ret = ethnl_parse_header_dev_get(&req_info, tb[ETHTOOL_A_RCLK_HEADER], + genl_info_net(info), info->extack, + true); + if (ret < 0) + return ret; + dev = req_info.dev; + + rtnl_lock(); + ret = ethnl_ops_begin(dev); + if (ret < 0) + goto out_rtnl; + + ret = rclk_set_state(dev, tb, &mod, info->extack); + if (ret < 0) + goto out_ops; + + if (!mod) + goto out_ops; + + ethtool_notify(dev, ETHTOOL_MSG_RCLK_NTF, NULL); + +out_ops: + ethnl_ops_complete(dev); +out_rtnl: + rtnl_unlock(); + dev_put(dev); + return ret; +} + -- 2.26.3 From maciej.machnikowski at intel.com Wed Dec 1 18:02:07 2021 From: maciej.machnikowski at intel.com (Maciej Machnikowski) Date: Wed, 1 Dec 2021 19:02:07 +0100 Subject: [Intel-wired-lan] [PATCH v4 net-next 3/4] ice: add support for monitoring SyncE DPLL state In-Reply-To: <20211201180208.640179-1-maciej.machnikowski@intel.com> References: <20211201180208.640179-1-maciej.machnikowski@intel.com> Message-ID: <20211201180208.640179-4-maciej.machnikowski@intel.com> Implement SyncE DPLL monitoring for E810-T devices. Poll loop will periodically check the state of the DPLL and cache it in the pf structure. State changes will be logged in the system log. 
Signed-off-by: Maciej Machnikowski --- drivers/net/ethernet/intel/ice/ice.h | 5 ++ .../net/ethernet/intel/ice/ice_adminq_cmd.h | 34 +++++++++++++ drivers/net/ethernet/intel/ice/ice_common.c | 36 ++++++++++++++ drivers/net/ethernet/intel/ice/ice_common.h | 5 +- drivers/net/ethernet/intel/ice/ice_devids.h | 3 ++ drivers/net/ethernet/intel/ice/ice_ptp.c | 35 ++++++++++++++ drivers/net/ethernet/intel/ice/ice_ptp_hw.c | 48 +++++++++++++++++++ drivers/net/ethernet/intel/ice/ice_ptp_hw.h | 34 +++++++++++++ 8 files changed, 199 insertions(+), 1 deletion(-) diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h index cb6b4c53584b..2dcc8fd6dff5 100644 --- a/drivers/net/ethernet/intel/ice/ice.h +++ b/drivers/net/ethernet/intel/ice/ice.h @@ -607,6 +607,11 @@ struct ice_pf { #define ICE_VF_AGG_NODE_ID_START 65 #define ICE_MAX_VF_AGG_NODES 32 struct ice_agg_node vf_agg_node[ICE_MAX_VF_AGG_NODES]; + + enum ice_eec_state synce_dpll_state; + u8 synce_dpll_pin; + enum ice_eec_state ptp_dpll_state; + u8 ptp_dpll_pin; }; struct ice_netdev_priv { diff --git a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h index 339c2a86f680..11226af7a9a4 100644 --- a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h +++ b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h @@ -1808,6 +1808,36 @@ struct ice_aqc_add_rdma_qset_data { struct ice_aqc_add_tx_rdma_qset_entry rdma_qsets[]; }; +/* Get CGU DPLL status (direct 0x0C66) */ +struct ice_aqc_get_cgu_dpll_status { + u8 dpll_num; + u8 ref_state; +#define ICE_AQC_GET_CGU_DPLL_STATUS_REF_SW_LOS BIT(0) +#define ICE_AQC_GET_CGU_DPLL_STATUS_REF_SW_SCM BIT(1) +#define ICE_AQC_GET_CGU_DPLL_STATUS_REF_SW_CFM BIT(2) +#define ICE_AQC_GET_CGU_DPLL_STATUS_REF_SW_GST BIT(3) +#define ICE_AQC_GET_CGU_DPLL_STATUS_REF_SW_PFM BIT(4) +#define ICE_AQC_GET_CGU_DPLL_STATUS_REF_SW_ESYNC BIT(6) +#define ICE_AQC_GET_CGU_DPLL_STATUS_FAST_LOCK_EN BIT(7) + __le16 dpll_state; +#define ICE_AQC_GET_CGU_DPLL_STATUS_STATE_LOCK BIT(0) +#define ICE_AQC_GET_CGU_DPLL_STATUS_STATE_HO BIT(1) +#define ICE_AQC_GET_CGU_DPLL_STATUS_STATE_HO_READY BIT(2) +#define ICE_AQC_GET_CGU_DPLL_STATUS_STATE_FLHIT BIT(5) +#define ICE_AQC_GET_CGU_DPLL_STATUS_STATE_PSLHIT BIT(7) +#define ICE_AQC_GET_CGU_DPLL_STATUS_STATE_CLK_REF_SHIFT 8 +#define ICE_AQC_GET_CGU_DPLL_STATUS_STATE_CLK_REF_SEL \ + ICE_M(0x1F, ICE_AQC_GET_CGU_DPLL_STATUS_STATE_CLK_REF_SHIFT) +#define ICE_AQC_GET_CGU_DPLL_STATUS_STATE_MODE_SHIFT 13 +#define ICE_AQC_GET_CGU_DPLL_STATUS_STATE_MODE \ + ICE_M(0x7, ICE_AQC_GET_CGU_DPLL_STATUS_STATE_MODE_SHIFT) + __le32 phase_offset_h; + __le32 phase_offset_l; + u8 eec_mode; + u8 rsvd[1]; + __le16 node_handle; +}; + /* Configure Firmware Logging Command (indirect 0xFF09) * Logging Information Read Response (indirect 0xFF10) * Note: The 0xFF10 command has no input parameters. 
@@ -2039,6 +2069,7 @@ struct ice_aq_desc { struct ice_aqc_fw_logging fw_logging; struct ice_aqc_get_clear_fw_log get_clear_fw_log; struct ice_aqc_download_pkg download_pkg; + struct ice_aqc_get_cgu_dpll_status get_cgu_dpll_status; struct ice_aqc_driver_shared_params drv_shared_params; struct ice_aqc_set_mac_lb set_mac_lb; struct ice_aqc_alloc_free_res_cmd sw_res_ctrl; @@ -2205,6 +2236,9 @@ enum ice_adminq_opc { ice_aqc_opc_update_pkg = 0x0C42, ice_aqc_opc_get_pkg_info_list = 0x0C43, + /* 1588/SyncE commands/events */ + ice_aqc_opc_get_cgu_dpll_status = 0x0C66, + ice_aqc_opc_driver_shared_params = 0x0C90, /* Standalone Commands/Events */ diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c index 35903b282885..8069141ac105 100644 --- a/drivers/net/ethernet/intel/ice/ice_common.c +++ b/drivers/net/ethernet/intel/ice/ice_common.c @@ -4644,6 +4644,42 @@ ice_dis_vsi_rdma_qset(struct ice_port_info *pi, u16 count, u32 *qset_teid, return ice_status_to_errno(status); } +/** + * ice_aq_get_cgu_dpll_status + * @hw: pointer to the HW struct + * @dpll_num: DPLL index + * @ref_state: Reference clock state + * @dpll_state: DPLL state + * @phase_offset: Phase offset in ps + * @eec_mode: EEC_mode + * + * Get CGU DPLL status (0x0C66) + */ +enum ice_status +ice_aq_get_cgu_dpll_status(struct ice_hw *hw, u8 dpll_num, u8 *ref_state, + u16 *dpll_state, u64 *phase_offset, u8 *eec_mode) +{ + struct ice_aqc_get_cgu_dpll_status *cmd; + struct ice_aq_desc desc; + enum ice_status status; + + ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_cgu_dpll_status); + cmd = &desc.params.get_cgu_dpll_status; + cmd->dpll_num = dpll_num; + + status = ice_aq_send_cmd(hw, &desc, NULL, 0, NULL); + if (!status) { + *ref_state = cmd->ref_state; + *dpll_state = le16_to_cpu(cmd->dpll_state); + *phase_offset = le32_to_cpu(cmd->phase_offset_h); + *phase_offset <<= 32; + *phase_offset += le32_to_cpu(cmd->phase_offset_l); + *eec_mode = cmd->eec_mode; + } + + return status; +} + /** * ice_replay_pre_init - replay pre initialization * @hw: pointer to the HW struct diff --git a/drivers/net/ethernet/intel/ice/ice_common.h b/drivers/net/ethernet/intel/ice/ice_common.h index b20a5c085246..aaed388a40a8 100644 --- a/drivers/net/ethernet/intel/ice/ice_common.h +++ b/drivers/net/ethernet/intel/ice/ice_common.h @@ -106,6 +106,7 @@ enum ice_status ice_aq_manage_mac_write(struct ice_hw *hw, const u8 *mac_addr, u8 flags, struct ice_sq_cd *cd); bool ice_is_e810(struct ice_hw *hw); +bool ice_is_e810t(struct ice_hw *hw); enum ice_status ice_clear_pf_cfg(struct ice_hw *hw); enum ice_status ice_aq_set_phy_cfg(struct ice_hw *hw, struct ice_port_info *pi, @@ -162,6 +163,9 @@ ice_cfg_vsi_rdma(struct ice_port_info *pi, u16 vsi_handle, u16 tc_bitmap, int ice_ena_vsi_rdma_qset(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 *rdma_qset, u16 num_qsets, u32 *qset_teid); +enum ice_status +ice_aq_get_cgu_dpll_status(struct ice_hw *hw, u8 dpll_num, u8 *ref_state, + u16 *dpll_state, u64 *phase_offset, u8 *eec_mode); int ice_dis_vsi_rdma_qset(struct ice_port_info *pi, u16 count, u32 *qset_teid, u16 *q_id); @@ -189,7 +193,6 @@ ice_stat_update40(struct ice_hw *hw, u32 reg, bool prev_stat_loaded, void ice_stat_update32(struct ice_hw *hw, u32 reg, bool prev_stat_loaded, u64 *prev_stat, u64 *cur_stat); -bool ice_is_e810t(struct ice_hw *hw); enum ice_status ice_sched_query_elem(struct ice_hw *hw, u32 node_teid, struct ice_aqc_txsched_elem_data *buf); diff --git a/drivers/net/ethernet/intel/ice/ice_devids.h 
b/drivers/net/ethernet/intel/ice/ice_devids.h index 61dd2f18dee8..0b654d417d29 100644 --- a/drivers/net/ethernet/intel/ice/ice_devids.h +++ b/drivers/net/ethernet/intel/ice/ice_devids.h @@ -58,4 +58,7 @@ /* Intel(R) Ethernet Connection E822-L 1GbE */ #define ICE_DEV_ID_E822L_SGMII 0x189A +#define ICE_SUBDEV_ID_E810T 0x000E +#define ICE_SUBDEV_ID_E810T2 0x000F + #endif /* _ICE_DEVIDS_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_ptp.c b/drivers/net/ethernet/intel/ice/ice_ptp.c index bf7247c6f58e..bb502c19d53a 100644 --- a/drivers/net/ethernet/intel/ice/ice_ptp.c +++ b/drivers/net/ethernet/intel/ice/ice_ptp.c @@ -1766,6 +1766,36 @@ static void ice_ptp_tx_tstamp_cleanup(struct ice_ptp_tx *tx) } } +static void ice_handle_cgu_state(struct ice_pf *pf) +{ + enum ice_eec_state cgu_state; + u8 pin; + + cgu_state = ice_get_zl_dpll_state(&pf->hw, ICE_CGU_DPLL_SYNCE, &pin); + if (pf->synce_dpll_state != cgu_state) { + pf->synce_dpll_state = cgu_state; + pf->synce_dpll_pin = pin; + + dev_warn(ice_pf_to_dev(pf), + " state changed to: %d, pin %d", + ICE_CGU_DPLL_SYNCE, + pf->synce_dpll_state, + pin); + } + + cgu_state = ice_get_zl_dpll_state(&pf->hw, ICE_CGU_DPLL_PTP, &pin); + if (pf->ptp_dpll_state != cgu_state) { + pf->ptp_dpll_state = cgu_state; + pf->ptp_dpll_pin = pin; + + dev_warn(ice_pf_to_dev(pf), + " state changed to: %d, pin %d", + ICE_CGU_DPLL_PTP, + pf->ptp_dpll_state, + pin); + } +} + static void ice_ptp_periodic_work(struct kthread_work *work) { struct ice_ptp *ptp = container_of(work, struct ice_ptp, work.work); @@ -1774,6 +1804,10 @@ static void ice_ptp_periodic_work(struct kthread_work *work) if (!test_bit(ICE_FLAG_PTP, pf->flags)) return; + if (ice_is_feature_supported(pf, ICE_F_CGU) && + pf->hw.func_caps.ts_func_info.src_tmr_owned) + ice_handle_cgu_state(pf); + ice_ptp_update_cached_phctime(pf); ice_ptp_tx_tstamp_cleanup(&pf->ptp.port.tx); @@ -1958,3 +1992,4 @@ void ice_ptp_release(struct ice_pf *pf) dev_info(ice_pf_to_dev(pf), "Removed PTP clock\n"); } + diff --git a/drivers/net/ethernet/intel/ice/ice_ptp_hw.c b/drivers/net/ethernet/intel/ice/ice_ptp_hw.c index aa257db36765..b4300bf3e4ce 100644 --- a/drivers/net/ethernet/intel/ice/ice_ptp_hw.c +++ b/drivers/net/ethernet/intel/ice/ice_ptp_hw.c @@ -375,6 +375,54 @@ static int ice_ptp_port_cmd_e810(struct ice_hw *hw, enum ice_ptp_tmr_cmd cmd) return 0; } +/** + * ice_get_zl_dpll_state - get the state of the DPLL + * @hw: pointer to the hw struct + * @dpll_idx: Index of internal DPLL unit + * @pin: pointer to a buffer for returning currently active pin + * + * This function will read the state of the DPLL(dpll_idx). If optional + * parameter pin is given it'll be used to retrieve currently active pin. 
+ * + * Return: state of the DPLL + */ +enum ice_eec_state +ice_get_zl_dpll_state(struct ice_hw *hw, u8 dpll_idx, u8 *pin) +{ + enum ice_status status; + u64 phase_offset; + u16 dpll_state; + u8 ref_state; + u8 eec_mode; + + if (dpll_idx >= ICE_CGU_DPLL_MAX) + return ICE_EEC_STATE_INVALID; + + status = ice_aq_get_cgu_dpll_status(hw, dpll_idx, &ref_state, + &dpll_state, &phase_offset, + &eec_mode); + if (status) + return ICE_EEC_STATE_INVALID; + + if (pin) { + /* current ref pin in dpll_state_refsel_status_X register */ + *pin = (dpll_state & + ICE_AQC_GET_CGU_DPLL_STATUS_STATE_CLK_REF_SEL) >> + ICE_AQC_GET_CGU_DPLL_STATUS_STATE_CLK_REF_SHIFT; + } + + if (dpll_state & ICE_AQC_GET_CGU_DPLL_STATUS_STATE_LOCK) { + if (dpll_state & ICE_AQC_GET_CGU_DPLL_STATUS_STATE_HO_READY) + return ICE_EEC_STATE_LOCKED_HO_ACQ; + else + return ICE_EEC_STATE_LOCKED; + } else if ((dpll_state & ICE_AQC_GET_CGU_DPLL_STATUS_STATE_HO) && + (dpll_state & ICE_AQC_GET_CGU_DPLL_STATUS_STATE_HO_READY)) { + return ICE_EEC_STATE_HOLDOVER; + } + return ICE_EEC_STATE_FREERUN; +} + /* Device agnostic functions * * The following functions implement useful behavior to hide the differences diff --git a/drivers/net/ethernet/intel/ice/ice_ptp_hw.h b/drivers/net/ethernet/intel/ice/ice_ptp_hw.h index b2984b5c22c1..28b04ec40bae 100644 --- a/drivers/net/ethernet/intel/ice/ice_ptp_hw.h +++ b/drivers/net/ethernet/intel/ice/ice_ptp_hw.h @@ -12,6 +12,18 @@ enum ice_ptp_tmr_cmd { READ_TIME }; +enum ice_eec_state { + ICE_EEC_STATE_INVALID = 0, /* state is not valid */ + ICE_EEC_STATE_FREERUN, /* clock is free-running */ + ICE_EEC_STATE_LOCKED, /* clock is locked to the reference, + * but the holdover memory is not valid + */ + ICE_EEC_STATE_LOCKED_HO_ACQ, /* clock is locked to the reference + * and holdover memory is valid + */ + ICE_EEC_STATE_HOLDOVER, /* clock is in holdover mode */ +}; + /* Increment value to generate nanoseconds in the GLTSYN_TIME_L register for * the E810 devices. Based off of a PLL with an 812.5 MHz frequency. 
*/ @@ -33,6 +45,8 @@ int ice_ptp_init_phy_e810(struct ice_hw *hw); int ice_read_sma_ctrl_e810t(struct ice_hw *hw, u8 *data); int ice_write_sma_ctrl_e810t(struct ice_hw *hw, u8 data); bool ice_is_pca9575_present(struct ice_hw *hw); +enum ice_eec_state +ice_get_zl_dpll_state(struct ice_hw *hw, u8 dpll_idx, u8 *pin); #define PFTSYN_SEM_BYTES 4 @@ -98,4 +112,24 @@ bool ice_is_pca9575_present(struct ice_hw *hw); #define ICE_SMA_MAX_BIT_E810T 7 #define ICE_PCA9575_P1_OFFSET 8 +enum ice_e810t_cgu_dpll { + ICE_CGU_DPLL_SYNCE, + ICE_CGU_DPLL_PTP, + ICE_CGU_DPLL_MAX +}; + +enum ice_e810t_cgu_pins { + REF0P, + REF0N, + REF1P, + REF1N, + REF2P, + REF2N, + REF3P, + REF3N, + REF4P, + REF4N, + NUM_E810T_CGU_PINS +}; + #endif /* _ICE_PTP_HW_H_ */ -- 2.26.3 From maciej.machnikowski at intel.com Wed Dec 1 18:02:08 2021 From: maciej.machnikowski at intel.com (Maciej Machnikowski) Date: Wed, 1 Dec 2021 19:02:08 +0100 Subject: [Intel-wired-lan] [PATCH v4 net-next 4/4] ice: add support for SyncE recovered clocks In-Reply-To: <20211201180208.640179-1-maciej.machnikowski@intel.com> References: <20211201180208.640179-1-maciej.machnikowski@intel.com> Message-ID: <20211201180208.640179-5-maciej.machnikowski@intel.com> Implement ethtool netlink functions for handling SyncE recovered clocks configuration on ice driver: - ETHTOOL_MSG_RCLK_SET - ETHTOOL_MSG_RCLK_GET Co-developed-by: Arkadiusz Kubalewski Signed-off-by: Arkadiusz Kubalewski Signed-off-by: Maciej Machnikowski --- .../net/ethernet/intel/ice/ice_adminq_cmd.h | 29 ++++++ drivers/net/ethernet/intel/ice/ice_common.c | 65 +++++++++++++ drivers/net/ethernet/intel/ice/ice_common.h | 6 ++ drivers/net/ethernet/intel/ice/ice_ethtool.c | 97 +++++++++++++++++++ drivers/net/ethernet/intel/ice/ice_ptp_hw.h | 2 + 5 files changed, 199 insertions(+) diff --git a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h index 11226af7a9a4..aed03200bb99 100644 --- a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h +++ b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h @@ -1281,6 +1281,31 @@ struct ice_aqc_set_mac_lb { u8 reserved[15]; }; +/* Set PHY recovered clock output (direct 0x0630) */ +struct ice_aqc_set_phy_rec_clk_out { + u8 phy_output; + u8 port_num; + u8 flags; +#define ICE_AQC_SET_PHY_REC_CLK_OUT_OUT_EN BIT(0) +#define ICE_AQC_SET_PHY_REC_CLK_OUT_CURR_PORT 0xFF + u8 rsvd; + __le32 freq; + u8 rsvd2[6]; + __le16 node_handle; +}; + +/* Get PHY recovered clock output (direct 0x0631) */ +struct ice_aqc_get_phy_rec_clk_out { + u8 phy_output; + u8 port_num; + u8 flags; +#define ICE_AQC_GET_PHY_REC_CLK_OUT_OUT_EN BIT(0) + u8 rsvd; + __le32 freq; + u8 rsvd2[6]; + __le16 node_handle; +}; + struct ice_aqc_link_topo_params { u8 lport_num; u8 lport_num_valid; @@ -2033,6 +2058,8 @@ struct ice_aq_desc { struct ice_aqc_get_phy_caps get_phy; struct ice_aqc_set_phy_cfg set_phy; struct ice_aqc_restart_an restart_an; + struct ice_aqc_set_phy_rec_clk_out set_phy_rec_clk_out; + struct ice_aqc_get_phy_rec_clk_out get_phy_rec_clk_out; struct ice_aqc_gpio read_write_gpio; struct ice_aqc_sff_eeprom read_write_sff_param; struct ice_aqc_set_port_id_led set_port_id_led; @@ -2188,6 +2215,8 @@ enum ice_adminq_opc { ice_aqc_opc_get_link_status = 0x0607, ice_aqc_opc_set_event_mask = 0x0613, ice_aqc_opc_set_mac_lb = 0x0620, + ice_aqc_opc_set_phy_rec_clk_out = 0x0630, + ice_aqc_opc_get_phy_rec_clk_out = 0x0631, ice_aqc_opc_get_link_topo = 0x06E0, ice_aqc_opc_set_port_id_led = 0x06E9, ice_aqc_opc_set_gpio = 0x06EC, diff --git 
a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c index 8069141ac105..29d302ea1e56 100644 --- a/drivers/net/ethernet/intel/ice/ice_common.c +++ b/drivers/net/ethernet/intel/ice/ice_common.c @@ -5242,3 +5242,68 @@ bool ice_is_clock_mux_present_e810t(struct ice_hw *hw) return true; } +/** + * ice_aq_set_phy_rec_clk_out - set RCLK phy out + * @hw: pointer to the HW struct + * @phy_output: PHY reference clock output pin + * @enable: GPIO state to be applied + * @freq: PHY output frequency + * + * Set CGU reference priority (0x0630) + * Return 0 on success or negative value on failure. + */ +enum ice_status +ice_aq_set_phy_rec_clk_out(struct ice_hw *hw, u8 phy_output, bool enable, + u32 *freq) +{ + struct ice_aqc_set_phy_rec_clk_out *cmd; + struct ice_aq_desc desc; + enum ice_status status; + + ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_phy_rec_clk_out); + cmd = &desc.params.set_phy_rec_clk_out; + cmd->phy_output = phy_output; + cmd->port_num = ICE_AQC_SET_PHY_REC_CLK_OUT_CURR_PORT; + cmd->flags = enable & ICE_AQC_SET_PHY_REC_CLK_OUT_OUT_EN; + cmd->freq = cpu_to_le32(*freq); + + status = ice_aq_send_cmd(hw, &desc, NULL, 0, NULL); + if (!status) + *freq = le32_to_cpu(cmd->freq); + + return status; +} + +/** + * ice_aq_get_phy_rec_clk_out + * @hw: pointer to the HW struct + * @phy_output: PHY reference clock output pin + * @port_num: Port number + * @flags: PHY flags + * @freq: PHY output frequency + * + * Get PHY recovered clock output (0x0631) + */ +enum ice_status +ice_aq_get_phy_rec_clk_out(struct ice_hw *hw, u8 phy_output, u8 *port_num, + u8 *flags, u32 *freq) +{ + struct ice_aqc_get_phy_rec_clk_out *cmd; + struct ice_aq_desc desc; + enum ice_status status; + + ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_phy_rec_clk_out); + cmd = &desc.params.get_phy_rec_clk_out; + cmd->phy_output = phy_output; + cmd->port_num = *port_num; + + status = ice_aq_send_cmd(hw, &desc, NULL, 0, NULL); + if (!status) { + *port_num = cmd->port_num; + *flags = cmd->flags; + *freq = le32_to_cpu(cmd->freq); + } + + return status; +} + diff --git a/drivers/net/ethernet/intel/ice/ice_common.h b/drivers/net/ethernet/intel/ice/ice_common.h index aaed388a40a8..8a99c8364173 100644 --- a/drivers/net/ethernet/intel/ice/ice_common.h +++ b/drivers/net/ethernet/intel/ice/ice_common.h @@ -166,6 +166,12 @@ ice_ena_vsi_rdma_qset(struct ice_port_info *pi, u16 vsi_handle, u8 tc, enum ice_status ice_aq_get_cgu_dpll_status(struct ice_hw *hw, u8 dpll_num, u8 *ref_state, u16 *dpll_state, u64 *phase_offset, u8 *eec_mode); +enum ice_status +ice_aq_set_phy_rec_clk_out(struct ice_hw *hw, u8 phy_output, bool enable, + u32 *freq); +enum ice_status +ice_aq_get_phy_rec_clk_out(struct ice_hw *hw, u8 phy_output, u8 *port_num, + u8 *flags, u32 *freq); int ice_dis_vsi_rdma_qset(struct ice_port_info *pi, u16 count, u32 *qset_teid, u16 *q_id); diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c index 5af2faaa21e1..c9e16bb9470e 100644 --- a/drivers/net/ethernet/intel/ice/ice_ethtool.c +++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c @@ -4076,6 +4076,100 @@ ice_get_module_eeprom(struct net_device *netdev, return 0; } +/** + * ice_get_rclk_range - get range of recovered clock indices + * @netdev: network interface device structure + * @min_idx: min rclk index + * @max_idx: max rclk index + * @ena_mask: bitmask of pin states + * @extack: netlink extended ack + */ +static int +ice_get_rclk_range(struct net_device *netdev, u32 *min_idx, u32 
*max_idx, + struct netlink_ext_ack *extack) +{ + struct ice_netdev_priv *np = netdev_priv(netdev); + struct ice_vsi *vsi = np->vsi; + struct ice_pf *pf = vsi->back; + + if (!ice_is_feature_supported(pf, ICE_F_CGU)) + return -EOPNOTSUPP; + + *min_idx = 0; + *max_idx = ICE_RCLK_PIN_MAX; + + return 0; +} + +/** + * ice_get_rclk_state - get state of a recovered frequency output pin + * @netdev: network interface device structure + * @out_idx: index of a questioned pin + * @ena: returned state of a pin + * @extack: netlink extended ack + */ +static int +ice_get_rclk_state(struct net_device *netdev, u32 out_idx, + bool *ena, struct netlink_ext_ack *extack) +{ + u8 port_num = ICE_AQC_SET_PHY_REC_CLK_OUT_CURR_PORT, flags; + struct ice_netdev_priv *np = netdev_priv(netdev); + struct ice_vsi *vsi = np->vsi; + struct ice_pf *pf = vsi->back; + u32 freq; + int ret; + + if (!ice_is_feature_supported(pf, ICE_F_CGU)) + return -EOPNOTSUPP; + + if (out_idx > ICE_RCLK_PIN_MAX) + return -EINVAL; + + ret = ice_aq_get_phy_rec_clk_out(&pf->hw, out_idx, + &port_num, &flags, &freq); + if (ret) + return ret; + + if (flags & ICE_AQC_GET_PHY_REC_CLK_OUT_OUT_EN) + *ena = true; + else + *ena = false; + + return ret; +} + +/** + * ice_set_rclk_out - enable/disable recovered clock redirection to the + * output pin + * @netdev: network interface device structure + * @out_idx: index of pin being configured + * @ena: requested state of a pin + * @extack: netlink extended ack + */ +static int +ice_set_rclk_out(struct net_device *netdev, u32 out_idx, bool ena, + struct netlink_ext_ack *extack) +{ + struct ice_netdev_priv *np = netdev_priv(netdev); + struct ice_vsi *vsi = np->vsi; + struct ice_pf *pf = vsi->back; + enum ice_status ret; + u32 freq; + + if (!ice_is_feature_supported(pf, ICE_F_CGU)) + return -EOPNOTSUPP; + + if (out_idx > ICE_RCLK_PIN_MAX) + return -EINVAL; + + ret = ice_aq_set_phy_rec_clk_out(&pf->hw, out_idx, + ena, &freq); + if (ret) + return ret; + + return ret; +} + static const struct ethtool_ops ice_ethtool_ops = { .supported_coalesce_params = ETHTOOL_COALESCE_USECS | ETHTOOL_COALESCE_USE_ADAPTIVE | @@ -4121,6 +4215,9 @@ static const struct ethtool_ops ice_ethtool_ops = { .set_fecparam = ice_set_fecparam, .get_module_info = ice_get_module_info, .get_module_eeprom = ice_get_module_eeprom, + .get_rclk_range = ice_get_rclk_range, + .get_rclk_state = ice_get_rclk_state, + .set_rclk_out = ice_set_rclk_out, }; static const struct ethtool_ops ice_ethtool_safe_mode_ops = { diff --git a/drivers/net/ethernet/intel/ice/ice_ptp_hw.h b/drivers/net/ethernet/intel/ice/ice_ptp_hw.h index 28b04ec40bae..865ca680b62e 100644 --- a/drivers/net/ethernet/intel/ice/ice_ptp_hw.h +++ b/drivers/net/ethernet/intel/ice/ice_ptp_hw.h @@ -132,4 +132,6 @@ enum ice_e810t_cgu_pins { NUM_E810T_CGU_PINS }; +#define ICE_RCLK_PIN_MAX (REF1N - REF1P) + #endif /* _ICE_PTP_HW_H_ */ -- 2.26.3 From vinicius.gomes at intel.com Wed Dec 1 18:57:31 2021 From: vinicius.gomes at intel.com (Vinicius Costa Gomes) Date: Wed, 1 Dec 2021 10:57:31 -0800 Subject: [Intel-wired-lan] [PATCH] igc: Avoid possible deadlock during suspend/resume In-Reply-To: <87r1awtdx3.fsf@intel.com> References: <87r1awtdx3.fsf@intel.com> Message-ID: <20211201185731.236130-1-vinicius.gomes@intel.com> Inspired by: https://bugzilla.kernel.org/show_bug.cgi?id=215129 Signed-off-by: Vinicius Costa Gomes --- Just to see if it's indeed the same problem as the bug report above. 
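The suspicion, roughly: runtime resume can be triggered from a path that
already holds the RTNL lock, and the unconditional rtnl_lock() in
igc_resume() then deadlocks against its own caller. A minimal sketch of
that recursion, assuming such a call chain exists (illustration only,
based on the bugzilla report above and not on a confirmed trace; the
function below is hypothetical and not part of the driver):

#include <linux/netdevice.h>
#include <linux/pm_runtime.h>
#include <linux/rtnetlink.h>

/* Hypothetical caller, for illustration only -- not part of this patch. */
static int example_open_path(struct net_device *netdev)
{
	rtnl_lock();		/* e.g. a dev_open() path already owns RTNL */

	/* Runtime-resume the device. With the old code this ends up in
	 * igc_runtime_resume() -> igc_resume() -> rtnl_lock(), so the same
	 * task tries to take RTNL a second time and blocks forever.
	 */
	pm_runtime_get_sync(netdev->dev.parent);

	/* ... bring the interface up ... */

	rtnl_unlock();
	return 0;
}

With the __igc_resume(dev, rpm) split below, the runtime-PM path skips the
inner rtnl_lock()/rtnl_unlock() pair, while the system-wide igc_resume()
still takes RTNL itself.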
drivers/net/ethernet/intel/igc/igc_main.c | 19 +++++++++++++------ 1 file changed, 13 insertions(+), 6 deletions(-) diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c index 0e19b4d02e62..c58bf557a2a1 100644 --- a/drivers/net/ethernet/intel/igc/igc_main.c +++ b/drivers/net/ethernet/intel/igc/igc_main.c @@ -6619,7 +6619,7 @@ static void igc_deliver_wake_packet(struct net_device *netdev) netif_rx(skb); } -static int __maybe_unused igc_resume(struct device *dev) +static int __maybe_unused __igc_resume(struct device *dev, bool rpm) { struct pci_dev *pdev = to_pci_dev(dev); struct net_device *netdev = pci_get_drvdata(pdev); @@ -6661,20 +6661,27 @@ static int __maybe_unused igc_resume(struct device *dev) wr32(IGC_WUS, ~0); - rtnl_lock(); + if (!rpm) + rtnl_lock(); if (!err && netif_running(netdev)) err = __igc_open(netdev, true); if (!err) netif_device_attach(netdev); - rtnl_unlock(); + if (!rpm) + rtnl_unlock(); return err; } static int __maybe_unused igc_runtime_resume(struct device *dev) { - return igc_resume(dev); + return __igc_resume(dev, true); +} + +static int __maybe_unused igc_resume(struct device *dev) +{ + return __igc_resume(dev, false); } static int __maybe_unused igc_suspend(struct device *dev) @@ -6738,7 +6745,7 @@ static pci_ers_result_t igc_io_error_detected(struct pci_dev *pdev, * @pdev: Pointer to PCI device * * Restart the card from scratch, as if from a cold-boot. Implementation - * resembles the first-half of the igc_resume routine. + * resembles the first-half of the __igc_resume routine. **/ static pci_ers_result_t igc_io_slot_reset(struct pci_dev *pdev) { @@ -6777,7 +6784,7 @@ static pci_ers_result_t igc_io_slot_reset(struct pci_dev *pdev) * * This callback is called when the error recovery driver tells us that * its OK to resume normal operation. Implementation resembles the - * second-half of the igc_resume routine. + * second-half of the __igc_resume routine. */ static void igc_io_resume(struct pci_dev *pdev) { -- 2.33.1 From markpearson at lenovo.com Wed Dec 1 19:00:40 2021 From: markpearson at lenovo.com (Mark Pearson) Date: Wed, 1 Dec 2021 14:00:40 -0500 Subject: [Intel-wired-lan] [External] Re: [PATCH 3/3] Revert "e1000e: Add handshake with the CSME to support S0ix" In-Reply-To: References: <20211122161927.874291-1-kai.heng.feng@canonical.com> <20211122161927.874291-3-kai.heng.feng@canonical.com> <0ba36a30-95d3-a5f4-93c2-443cf2259756@intel.com> <3fad0b95-fe97-8c4a-3ca9-3ed2a9fa2134@lenovo.com> Message-ID: <809af77d-493a-cba4-a1fe-def12dabe602@lenovo.com> On 2021-12-01 11:38, Ruinskiy, Dima wrote: > On 30/11/2021 17:52, Mark Pearson wrote: >> Hi Sasha >> >> On 2021-11-28 08:23, Sasha Neftin wrote: >>> On 11/22/2021 18:19, Kai-Heng Feng wrote: >>>> This reverts commit 3e55d231716ea361b1520b801c6778c4c48de102. >>>> >>>> Bugzilla: >>>> https://bugzilla.kernel.org/show_bug.cgi?id=214821>>>>> >>>> Signed-off-by: Kai-Heng Feng >>>> --- >> >>>> >>> Hello Kai-Heng, >>> I believe it is the wrong approach. Reverting this patch will put >>> corporate systems in an unpredictable state. SW will perform s0ix flow >>> independent to CSME. (The CSME firmware will continue run >>> independently.) LAN controller could be in an unknown state. 
>>> Please, afford us to continue to debug the problem (it is could be >>> incredible complexity) >>> >>> You always can skip the s0ix flow on problematic corporate systems by >>> using privilege flag: ethtool --set-priv-flags enp0s31f6 s0ix-enabled >>> off >>> >>> Also, there is no impact on consumer systems. >>> Sasha >> >> I know we've discussed this offline, and your team are working on the >> correct fix but I wanted to check based on your comments above that "it >> was complex". I thought, and maybe misunderstood, that it was going to >> be relatively simple to disable the change for older CPUs - which is the >> biggest problem caused by the patch. >> >> Right now it's breaking networking for folk who happen to have a vPro >> Tigerlake (and I believe even potentially Cometlake or older) system. I >> think the impact of that could potentially be quite severe. >> >> I understand not wanting to revert the change for the ADL platforms I >> believe this is targeting and to fix this instead - but your comment >> made me nervous that Linux users on older Intel based platforms are in >> for a long and painful wait - it is likely a lot of users.... >> >> Can you or Dima confirm the fix for older platforms will be available >> soon? I appreciate the ADL platform might take a bit more work and time >> to get right. >> >> Thanks >> Mark >> > Hi Mark, > > What we currently see is that the issue manifests itself similarly on > ADL and TGL platforms. Thus, the fix will likely be the same for both. > > If we cannot find a proper fix soon, we will provide a workaround (for > example by temporary disabling the feature on vPro platforms until we do > have a fix). > > This can be done without reverting the patch series, and I don't see > much value in selectively disabling it for CML/TGL while leaving it on > for ADL, unless our ongoing debug shows otherwise. > Got it - thanks Dima. As a note - the obvious advantage of selectively disabling for CML/TGL is there is a ton of those platforms out there in users hands, whereas the ADL platforms won't be landing for a few more months (at least in our case). I'm OK if the fixes take a touch longer with ADL (though we'll want them soon so they have time to make it upstream and down into the distro's) - but there's going to be a lot of unhappy Intel users as soon as they start picking up the updates (that are landing in some distro's) and finding that networking is broken. I'd expect TGL/CML to be a priority... Keep us posted when the fix is ready please. 
Mark From kuba at kernel.org Thu Dec 2 01:56:15 2021 From: kuba at kernel.org (Jakub Kicinski) Date: Wed, 1 Dec 2021 17:56:15 -0800 Subject: [Intel-wired-lan] [PATCH v4 net-next 4/4] ice: add support for SyncE recovered clocks In-Reply-To: <20211201180208.640179-5-maciej.machnikowski@intel.com> References: <20211201180208.640179-1-maciej.machnikowski@intel.com> <20211201180208.640179-5-maciej.machnikowski@intel.com> Message-ID: <20211201175615.4b403560@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com> On Wed, 1 Dec 2021 19:02:08 +0100 Maciej Machnikowski wrote: > Implement ethtool netlink functions for handling SyncE recovered clocks > configuration on ice driver: > - ETHTOOL_MSG_RCLK_SET > - ETHTOOL_MSG_RCLK_GET > > Co-developed-by: Arkadiusz Kubalewski > Signed-off-by: Arkadiusz Kubalewski > Signed-off-by: Maciej Machnikowski drivers/net/ethernet/intel/ice/ice_ethtool.c:4090: warning: Excess function parameter 'ena_mask' description in 'ice_get_rclk_range' drivers/net/ethernet/intel/ice/ice_dcb_nl.c:66:6: warning: variable 'bwcfg' set but not used [-Wunused-but-set-variable] int bwcfg = 0, bwrec = 0; ^ From gregkh at linuxfoundation.org Thu Dec 2 06:41:34 2021 From: gregkh at linuxfoundation.org (Greg KH) Date: Thu, 2 Dec 2021 07:41:34 +0100 Subject: [Intel-wired-lan] [PATCH] igc: Avoid possible deadlock during suspend/resume In-Reply-To: <20211201185731.236130-1-vinicius.gomes@intel.com> References: <87r1awtdx3.fsf@intel.com> <20211201185731.236130-1-vinicius.gomes@intel.com> Message-ID: On Wed, Dec 01, 2021 at 10:57:31AM -0800, Vinicius Costa Gomes wrote: > Inspired by: > https://bugzilla.kernel.org/show_bug.cgi?id=215129 > This changelog does not say anything at all, sorry. Please explain what is happening here as the kernel documentation asks you to. > Signed-off-by: Vinicius Costa Gomes > --- > Just to see if it's indeed the same problem as the bug report above. This is not the correct way to submit patches for inclusion in the stable kernel tree. Please read: https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html for how to do this properly. From vinicius.gomes at intel.com Thu Dec 2 06:50:36 2021 From: vinicius.gomes at intel.com (Vinicius Costa Gomes) Date: Wed, 01 Dec 2021 22:50:36 -0800 Subject: [Intel-wired-lan] [PATCH] igc: Avoid possible deadlock during suspend/resume In-Reply-To: References: <87r1awtdx3.fsf@intel.com> <20211201185731.236130-1-vinicius.gomes@intel.com> Message-ID: <87ilw7ts8z.fsf@intel.com> Greg KH writes: > On Wed, Dec 01, 2021 at 10:57:31AM -0800, Vinicius Costa Gomes wrote: >> Inspired by: >> https://bugzilla.kernel.org/show_bug.cgi?id=215129 >> > > This changelog does not say anything at all, sorry. Please explain what > is happening here as the kernel documentation asks you to. It was intended as just some patch for the reporter to try while narrowing the problem down. Sorry for the noise. I should have thought about removing stable from CC. 
Thank you, -- Vinicius From lkp at intel.com Thu Dec 2 11:10:26 2021 From: lkp at intel.com (kernel test robot) Date: Thu, 02 Dec 2021 19:10:26 +0800 Subject: [Intel-wired-lan] [tnguy-net-queue:40GbE] BUILD SUCCESS 27b0f1485fe9a68d52554a85af81440644fc7675 Message-ID: <61a8a9a2.itOdIRqAdSI66fpg%lkp@intel.com> tree/branch: https://git.kernel.org/pub/scm/linux/kernel/git/tnguy/net-queue.git 40GbE branch HEAD: 27b0f1485fe9a68d52554a85af81440644fc7675 i40e: Fix NULL pointer dereference in i40e_dbg_dump_desc elapsed time: 748m configs tested: 164 configs skipped: 3 The following configs have been built successfully. More configs may be tested in the coming days. gcc tested configs: arm64 defconfig arm allyesconfig arm defconfig arm64 allyesconfig arm allmodconfig i386 randconfig-c001-20211202 arm corgi_defconfig sparc sparc64_defconfig powerpc warp_defconfig arm pxa910_defconfig powerpc g5_defconfig arc nsim_700_defconfig openrisc simple_smp_defconfig powerpc xes_mpc85xx_defconfig powerpc linkstation_defconfig sh apsh4ad0a_defconfig arm versatile_defconfig sh sh7710voipgw_defconfig riscv nommu_k210_defconfig arm moxart_defconfig sh allmodconfig arm lubbock_defconfig xtensa iss_defconfig arm ixp4xx_defconfig powerpc mpc8540_ads_defconfig sh r7780mp_defconfig sh apsh4a3a_defconfig xtensa smp_lx200_defconfig arm hackkit_defconfig arm mv78xx0_defconfig s390 defconfig nds32 alldefconfig sh landisk_defconfig sh edosk7760_defconfig m68k m5275evb_defconfig arm tct_hammer_defconfig mips bcm63xx_defconfig s390 zfcpdump_defconfig xtensa common_defconfig arm jornada720_defconfig nds32 defconfig arm pleb_defconfig arm pxa168_defconfig m68k hp300_defconfig m68k q40_defconfig arm sama5_defconfig powerpc lite5200b_defconfig arm spear3xx_defconfig powerpc mgcoge_defconfig arm cns3420vb_defconfig arm hisi_defconfig um defconfig sh edosk7705_defconfig arm h5000_defconfig arm cm_x300_defconfig powerpc microwatt_defconfig arm s3c2410_defconfig sh se7619_defconfig arm randconfig-c002-20211202 ia64 allmodconfig ia64 defconfig ia64 allyesconfig m68k allmodconfig m68k defconfig m68k allyesconfig nios2 defconfig arc allyesconfig nds32 allnoconfig nios2 allyesconfig csky defconfig alpha defconfig alpha allyesconfig xtensa allyesconfig h8300 allyesconfig arc defconfig parisc defconfig s390 allyesconfig s390 allmodconfig parisc allyesconfig i386 allyesconfig i386 defconfig i386 debian-10.3-kselftests i386 debian-10.3 sparc allyesconfig sparc defconfig mips allyesconfig mips allmodconfig powerpc allyesconfig powerpc allmodconfig powerpc allnoconfig i386 randconfig-a001-20211129 i386 randconfig-a002-20211129 i386 randconfig-a006-20211129 i386 randconfig-a005-20211129 i386 randconfig-a004-20211129 i386 randconfig-a003-20211129 x86_64 randconfig-a016-20211202 x86_64 randconfig-a011-20211202 x86_64 randconfig-a013-20211202 x86_64 randconfig-a014-20211202 x86_64 randconfig-a012-20211202 x86_64 randconfig-a015-20211202 i386 randconfig-a016-20211202 i386 randconfig-a013-20211202 i386 randconfig-a011-20211202 i386 randconfig-a014-20211202 i386 randconfig-a012-20211202 i386 randconfig-a015-20211202 arc randconfig-r043-20211129 arc randconfig-r043-20211128 s390 randconfig-r044-20211128 riscv randconfig-r042-20211128 riscv allyesconfig riscv nommu_virt_defconfig riscv allnoconfig riscv defconfig riscv rv32_defconfig riscv allmodconfig x86_64 rhel-8.3-kselftests um x86_64_defconfig um i386_defconfig x86_64 allyesconfig x86_64 defconfig x86_64 rhel-8.3 x86_64 rhel-8.3-func x86_64 kexec clang tested configs: arm randconfig-c002-20211202 
x86_64 randconfig-c007-20211202 riscv randconfig-c006-20211202 i386 randconfig-c001-20211202 powerpc randconfig-c003-20211202 s390 randconfig-c005-20211202 x86_64 randconfig-a006-20211202 x86_64 randconfig-a005-20211202 x86_64 randconfig-a001-20211202 x86_64 randconfig-a002-20211202 x86_64 randconfig-a004-20211202 x86_64 randconfig-a003-20211202 i386 randconfig-a001-20211128 i386 randconfig-a002-20211128 i386 randconfig-a006-20211128 i386 randconfig-a005-20211128 i386 randconfig-a004-20211128 i386 randconfig-a003-20211128 i386 randconfig-a001-20211202 i386 randconfig-a005-20211202 i386 randconfig-a002-20211202 i386 randconfig-a003-20211202 i386 randconfig-a006-20211202 i386 randconfig-a004-20211202 x86_64 randconfig-a014-20211130 x86_64 randconfig-a013-20211130 x86_64 randconfig-a012-20211130 x86_64 randconfig-a015-20211130 x86_64 randconfig-a011-20211130 x86_64 randconfig-a016-20211130 i386 randconfig-a013-20211201 i386 randconfig-a016-20211201 i386 randconfig-a011-20211201 i386 randconfig-a014-20211201 i386 randconfig-a012-20211201 i386 randconfig-a015-20211201 hexagon randconfig-r045-20211129 hexagon randconfig-r041-20211129 s390 randconfig-r044-20211129 riscv randconfig-r042-20211129 --- 0-DAY CI Kernel Test Service, Intel Corporation https://lists.01.org/hyperkitty/list/kbuild-all at lists.01.org From lkp at intel.com Thu Dec 2 11:21:32 2021 From: lkp at intel.com (kernel test robot) Date: Thu, 02 Dec 2021 19:21:32 +0800 Subject: [Intel-wired-lan] [tnguy-next-queue:dev-queue] BUILD SUCCESS 99b52d8ae980f329a6b1c3f2cb76eb31c800a684 Message-ID: <61a8ac3c.oSz+5pmGTd74L69W%lkp@intel.com> tree/branch: https://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue.git dev-queue branch HEAD: 99b52d8ae980f329a6b1c3f2cb76eb31c800a684 ice: Add ability for PF admin to enable VF VLAN pruning elapsed time: 1072m configs tested: 139 configs skipped: 3 The following configs have been built successfully. More configs may be tested in the coming days. 
gcc tested configs: arm64 allyesconfig arm defconfig arm64 defconfig arm allyesconfig arm allmodconfig i386 randconfig-c001-20211202 arm pcm027_defconfig powerpc microwatt_defconfig m68k m5208evb_defconfig m68k m5407c3_defconfig mips rb532_defconfig powerpc makalu_defconfig sh se7721_defconfig arc haps_hs_defconfig arm exynos_defconfig arm ixp4xx_defconfig xtensa defconfig sh alldefconfig powerpc tqm5200_defconfig sh sh03_defconfig powerpc pseries_defconfig arm milbeaut_m10v_defconfig arm iop32x_defconfig sh magicpanelr2_defconfig arm randconfig-c002-20211202 ia64 allmodconfig ia64 defconfig ia64 allyesconfig m68k allmodconfig m68k defconfig m68k allyesconfig nds32 defconfig csky defconfig alpha defconfig alpha allyesconfig nios2 allyesconfig nios2 defconfig arc allyesconfig nds32 allnoconfig xtensa allyesconfig h8300 allyesconfig arc defconfig sh allmodconfig parisc defconfig s390 allyesconfig s390 allmodconfig parisc allyesconfig s390 defconfig i386 allyesconfig sparc allyesconfig sparc defconfig i386 defconfig i386 debian-10.3-kselftests i386 debian-10.3 mips allmodconfig mips allyesconfig powerpc allyesconfig powerpc allmodconfig powerpc allnoconfig i386 randconfig-a002-20211130 i386 randconfig-a004-20211130 i386 randconfig-a003-20211130 i386 randconfig-a001-20211130 i386 randconfig-a005-20211130 i386 randconfig-a006-20211130 x86_64 randconfig-a016-20211202 x86_64 randconfig-a011-20211202 x86_64 randconfig-a013-20211202 x86_64 randconfig-a014-20211202 x86_64 randconfig-a012-20211202 x86_64 randconfig-a015-20211202 x86_64 randconfig-a011-20211128 x86_64 randconfig-a014-20211128 x86_64 randconfig-a012-20211128 x86_64 randconfig-a016-20211128 x86_64 randconfig-a013-20211128 x86_64 randconfig-a015-20211128 i386 randconfig-a016-20211202 i386 randconfig-a013-20211202 i386 randconfig-a011-20211202 i386 randconfig-a014-20211202 i386 randconfig-a012-20211202 i386 randconfig-a015-20211202 arc randconfig-r043-20211128 s390 randconfig-r044-20211128 riscv randconfig-r042-20211128 riscv nommu_k210_defconfig riscv allyesconfig riscv nommu_virt_defconfig riscv allnoconfig riscv defconfig riscv rv32_defconfig riscv allmodconfig um x86_64_defconfig um i386_defconfig x86_64 allyesconfig x86_64 rhel-8.3-kselftests x86_64 defconfig x86_64 rhel-8.3 x86_64 rhel-8.3-func x86_64 kexec clang tested configs: x86_64 randconfig-a006-20211202 x86_64 randconfig-a005-20211202 x86_64 randconfig-a001-20211202 x86_64 randconfig-a002-20211202 x86_64 randconfig-a004-20211202 x86_64 randconfig-a003-20211202 x86_64 randconfig-a001-20211128 x86_64 randconfig-a003-20211128 x86_64 randconfig-a004-20211128 x86_64 randconfig-a002-20211128 x86_64 randconfig-a006-20211128 x86_64 randconfig-a005-20211128 i386 randconfig-a001-20211202 i386 randconfig-a005-20211202 i386 randconfig-a002-20211202 i386 randconfig-a003-20211202 i386 randconfig-a006-20211202 i386 randconfig-a004-20211202 i386 randconfig-a001-20211128 i386 randconfig-a002-20211128 i386 randconfig-a006-20211128 i386 randconfig-a005-20211128 i386 randconfig-a004-20211128 i386 randconfig-a003-20211128 i386 randconfig-a015-20211129 i386 randconfig-a016-20211129 i386 randconfig-a013-20211129 i386 randconfig-a012-20211129 i386 randconfig-a014-20211129 i386 randconfig-a011-20211129 hexagon randconfig-r045-20211129 hexagon randconfig-r041-20211129 s390 randconfig-r044-20211129 riscv randconfig-r042-20211129 hexagon randconfig-r045-20211202 hexagon randconfig-r041-20211202 hexagon randconfig-r045-20211128 hexagon randconfig-r041-20211128 --- 0-DAY CI Kernel Test Service, Intel 
Corporation https://lists.01.org/hyperkitty/list/kbuild-all at lists.01.org From lkp at intel.com Thu Dec 2 11:48:51 2021 From: lkp at intel.com (kernel test robot) Date: Thu, 2 Dec 2021 19:48:51 +0800 Subject: [Intel-wired-lan] [tnguy-next-queue:dev-queue 108/111] drivers/net/ethernet/intel/ice/ice_vlan_mode.c:96:31: error: 'ICE_DBG_AQ' undeclared; did you mean 'ICE_DBG_LAN'? Message-ID: <202112021957.1KmfBjqc-lkp@intel.com> tree: https://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue.git dev-queue head: 99b52d8ae980f329a6b1c3f2cb76eb31c800a684 commit: 3f419c30541088b1a1b8a7a7197d82c21ba3898c [108/111] ice: Support configuring the device to Double VLAN Mode config: m68k-randconfig-r014-20211202 (https://download.01.org/0day-ci/archive/20211202/202112021957.1KmfBjqc-lkp at intel.com/config) compiler: m68k-linux-gcc (GCC) 11.2.0 reproduce (this is a W=1 build): wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross chmod +x ~/bin/make.cross # https://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue.git/commit/?id=3f419c30541088b1a1b8a7a7197d82c21ba3898c git remote add tnguy-next-queue https://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue.git git fetch --no-tags tnguy-next-queue dev-queue git checkout 3f419c30541088b1a1b8a7a7197d82c21ba3898c # save the config file to linux build tree mkdir build_dir COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0 make.cross O=build_dir ARCH=m68k SHELL=/bin/bash If you fix the issue, kindly add following tag as appropriate Reported-by: kernel test robot All errors (new ones prefixed by >>): In file included from drivers/net/ethernet/intel/ice/ice_type.h:11, from drivers/net/ethernet/intel/ice/ice.h:58, from drivers/net/ethernet/intel/ice/ice_common.h:7, from drivers/net/ethernet/intel/ice/ice_vlan_mode.c:4: drivers/net/ethernet/intel/ice/ice_vlan_mode.c: In function 'ice_aq_is_dvm_ena': >> drivers/net/ethernet/intel/ice/ice_vlan_mode.c:96:31: error: 'ICE_DBG_AQ' undeclared (first use in this function); did you mean 'ICE_DBG_LAN'? 
96 | ice_debug(hw, ICE_DBG_AQ, "Failed to get VLAN mode, status %d\n", | ^~~~~~~~~~ drivers/net/ethernet/intel/ice/ice_osdep.h:42:14: note: in definition of macro 'ice_debug' 42 | if ((type) & (hw)->debug_mask) \ | ^~~~ drivers/net/ethernet/intel/ice/ice_vlan_mode.c:96:31: note: each undeclared identifier is reported only once for each function it appears in 96 | ice_debug(hw, ICE_DBG_AQ, "Failed to get VLAN mode, status %d\n", | ^~~~~~~~~~ drivers/net/ethernet/intel/ice/ice_osdep.h:42:14: note: in definition of macro 'ice_debug' 42 | if ((type) & (hw)->debug_mask) \ | ^~~~ vim +96 drivers/net/ethernet/intel/ice/ice_vlan_mode.c 3 > 4 #include "ice_common.h" 5 6 /** 7 * ice_pkg_get_supported_vlan_mode - determine if DDP supports Double VLAN mode 8 * @hw: pointer to the HW struct 9 * @dvm: output variable to determine if DDP supports DVM(true) or SVM(false) 10 */ 11 static int 12 ice_pkg_get_supported_vlan_mode(struct ice_hw *hw, bool *dvm) 13 { 14 u16 meta_init_size = sizeof(struct ice_meta_init_section); 15 struct ice_meta_init_section *sect; 16 struct ice_buf_build *bld; 17 int status; 18 19 /* if anything fails, we assume there is no DVM support */ 20 *dvm = false; 21 22 bld = ice_pkg_buf_alloc_single_section(hw, 23 ICE_SID_RXPARSER_METADATA_INIT, 24 meta_init_size, (void **)§); 25 if (!bld) 26 return -ENOMEM; 27 28 /* only need to read a single section */ 29 sect->count = cpu_to_le16(1); 30 sect->offset = cpu_to_le16(ICE_META_VLAN_MODE_ENTRY); 31 32 status = ice_aq_upload_section(hw, 33 (struct ice_buf_hdr *)ice_pkg_buf(bld), 34 ICE_PKG_BUF_SIZE, NULL); 35 if (!status) { 36 DECLARE_BITMAP(entry, ICE_META_INIT_BITS); 37 u32 arr[ICE_META_INIT_DW_CNT]; 38 u16 i; 39 40 /* convert to host bitmap format */ 41 for (i = 0; i < ICE_META_INIT_DW_CNT; i++) 42 arr[i] = le32_to_cpu(sect->entry.bm[i]); 43 44 bitmap_from_arr32(entry, arr, (u16)ICE_META_INIT_BITS); 45 46 /* check if DVM is supported */ 47 *dvm = test_bit(ICE_META_VLAN_MODE_BIT, entry); 48 } 49 50 ice_pkg_buf_free(hw, bld); 51 52 return status; 53 } 54 55 /** 56 * ice_aq_get_vlan_mode - get the VLAN mode of the device 57 * @hw: pointer to the HW structure 58 * @get_params: structure FW fills in based on the current VLAN mode config 59 * 60 * Get VLAN Mode Parameters (0x020D) 61 */ 62 static int 63 ice_aq_get_vlan_mode(struct ice_hw *hw, 64 struct ice_aqc_get_vlan_mode *get_params) 65 { 66 struct ice_aq_desc desc; 67 68 if (!get_params) 69 return -EINVAL; 70 71 ice_fill_dflt_direct_cmd_desc(&desc, 72 ice_aqc_opc_get_vlan_mode_parameters); 73 74 return ice_aq_send_cmd(hw, &desc, get_params, sizeof(*get_params), 75 NULL); 76 } 77 78 /** 79 * ice_aq_is_dvm_ena - query FW to check if double VLAN mode is enabled 80 * @hw: pointer to the HW structure 81 * 82 * Returns true if the hardware/firmware is configured in double VLAN mode, 83 * else return false signaling that the hardware/firmware is configured in 84 * single VLAN mode. 85 * 86 * Also, return false if this call fails for any reason (i.e. firmware doesn't 87 * support this AQ call). 
88 */ 89 static bool ice_aq_is_dvm_ena(struct ice_hw *hw) 90 { 91 struct ice_aqc_get_vlan_mode get_params = { 0 }; 92 int status; 93 94 status = ice_aq_get_vlan_mode(hw, &get_params); 95 if (status) { > 96 ice_debug(hw, ICE_DBG_AQ, "Failed to get VLAN mode, status %d\n", 97 status); 98 return false; 99 } 100 101 return (get_params.vlan_mode & ICE_AQ_VLAN_MODE_DVM_ENA); 102 } 103 --- 0-DAY CI Kernel Test Service, Intel Corporation https://lists.01.org/hyperkitty/list/kbuild-all at lists.01.org From lkp at intel.com Thu Dec 2 11:48:55 2021 From: lkp at intel.com (kernel test robot) Date: Thu, 2 Dec 2021 19:48:55 +0800 Subject: [Intel-wired-lan] [PATCH v4 net-next 2/4] ethtool: Add ability to configure recovered clock for SyncE feature In-Reply-To: <20211201180208.640179-3-maciej.machnikowski@intel.com> References: <20211201180208.640179-3-maciej.machnikowski@intel.com> Message-ID: <202112021948.p1Sqfiw5-lkp@intel.com> Hi Maciej, Thank you for the patch! Perhaps something to improve: [auto build test WARNING on net-next/master] url: https://github.com/0day-ci/linux/commits/Maciej-Machnikowski/Add-ethtool-interface-for-SyncE/20211202-021915 base: https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git 23ea630f86c70cbe6691f9f839e7b6742f0e9ad3 reproduce: make htmldocs If you fix the issue, kindly add following tag as appropriate Reported-by: kernel test robot All warnings (new ones prefixed by >>): include/uapi/linux/ethtool.h:1: warning: 'ethtool_rclk_pin_state' not found vim +/ethtool_rclk_pin_state +1 include/uapi/linux/ethtool.h 6f52b16c5b29b8 Greg Kroah-Hartman 2017-11-01 @1 /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ 607ca46e97a1b6 David Howells 2012-10-13 2 /* 607ca46e97a1b6 David Howells 2012-10-13 3 * ethtool.h: Defines for Linux ethtool. 607ca46e97a1b6 David Howells 2012-10-13 4 * 607ca46e97a1b6 David Howells 2012-10-13 5 * Copyright (C) 1998 David S. Miller (davem at redhat.com) 607ca46e97a1b6 David Howells 2012-10-13 6 * Copyright 2001 Jeff Garzik 607ca46e97a1b6 David Howells 2012-10-13 7 * Portions Copyright 2001 Sun Microsystems (thockin at sun.com) 607ca46e97a1b6 David Howells 2012-10-13 8 * Portions Copyright 2002 Intel (eli.kupermann at intel.com, 607ca46e97a1b6 David Howells 2012-10-13 9 * christopher.leech at intel.com, 607ca46e97a1b6 David Howells 2012-10-13 10 * scott.feldman at intel.com) 607ca46e97a1b6 David Howells 2012-10-13 11 * Portions Copyright (C) Sun Microsystems 2008 607ca46e97a1b6 David Howells 2012-10-13 12 */ 607ca46e97a1b6 David Howells 2012-10-13 13 --- 0-DAY CI Kernel Test Service, Intel Corporation https://lists.01.org/hyperkitty/list/kbuild-all at lists.01.org From karen.sornek at intel.com Thu Dec 2 11:52:01 2021 From: karen.sornek at intel.com (Sornek, Karen) Date: Thu, 2 Dec 2021 12:52:01 +0100 Subject: [Intel-wired-lan] [PATCH net v2] i40e: Fix for failed to init adminq while VF reset Message-ID: <20211202115201.1304422-1-karen.sornek@intel.com> From: Karen Sornek Fix for failed to init adminq: -53 while VF is resetting via MAC address changing procedure. Added sync module to avoid reading deadbeef value in reinit adminq during software reset. Without this patch it is possible to trigger VF reset procedure during reinit adminq. This resulted in an incorrect reading of value from the AQP registers and generated the -53 error. 
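To illustrate the race being closed here (a reconstruction from the
description above, not a captured trace): the PF could write VPGEN_VFRTRIG
and start a new VF reset while the VF driver was still re-initializing its
AdminQ, so the VF's register reads returned the deadbeef reset pattern and
the AdminQ init failed with -53. The new i40e_sync_vfr_reset() helper polls
the ADMINQ bit in I40E_VFINT_ICR0_ENA() up to I40E_VFR_WAIT_COUNT (100)
times with usleep_range(100, 200) between reads, i.e. it waits at most on
the order of 100 * 200 us = 20 ms for the previous reset to finish before
giving up and logging "Reset VF %d never finished".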
Fixes: 5c3c48ac6bf5 ("i40e: implement virtual device interface") Signed-off-by: Grzegorz Szczurek Signed-off-by: Karen Sornek --- v2: Added "Fixes" tag --- .../net/ethernet/intel/i40e/i40e_register.h | 3 ++ .../ethernet/intel/i40e/i40e_virtchnl_pf.c | 44 ++++++++++++++++++- .../ethernet/intel/i40e/i40e_virtchnl_pf.h | 1 + 3 files changed, 46 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/intel/i40e/i40e_register.h b/drivers/net/ethernet/intel/i40e/i40e_register.h index 8d0588a27a05..1908eed4fa5e 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_register.h +++ b/drivers/net/ethernet/intel/i40e/i40e_register.h @@ -413,6 +413,9 @@ #define I40E_VFINT_DYN_CTLN(_INTVF) (0x00024800 + ((_INTVF) * 4)) /* _i=0...511 */ /* Reset: VFR */ #define I40E_VFINT_DYN_CTLN_CLEARPBA_SHIFT 1 #define I40E_VFINT_DYN_CTLN_CLEARPBA_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTLN_CLEARPBA_SHIFT) +#define I40E_VFINT_ICR0_ADMINQ_SHIFT 30 +#define I40E_VFINT_ICR0_ADMINQ_MASK I40E_MASK(0x1, I40E_VFINT_ICR0_ADMINQ_SHIFT) +#define I40E_VFINT_ICR0_ENA(_VF) (0x0002C000 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: CORER */ #define I40E_VPINT_AEQCTL(_VF) (0x0002B800 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: CORER */ #define I40E_VPINT_AEQCTL_MSIX_INDX_SHIFT 0 #define I40E_VPINT_AEQCTL_ITR_INDX_SHIFT 11 diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c index 3efc6926d308..d4c6914d2347 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c +++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c @@ -1379,6 +1379,32 @@ static i40e_status i40e_config_vf_promiscuous_mode(struct i40e_vf *vf, return aq_ret; } +/** + * i40e_sync_vfr_reset + * @hw: pointer to hw struct + * @vf_id: VF identifier + * + * Before trigger hardware reset, we need to know if no other process has + * reserved the hardware for any reset operations. This check is done by + * examining the status of the RSTAT1 register used to signal the reset. + **/ +static int i40e_sync_vfr_reset(struct i40e_hw *hw, int vf_id) +{ + u32 reg; + int i; + + for (i = 0; i < I40E_VFR_WAIT_COUNT; i++) { + reg = rd32(hw, I40E_VFINT_ICR0_ENA(vf_id)) & + I40E_VFINT_ICR0_ADMINQ_MASK; + if (reg) + return 0; + + usleep_range(100, 200); + } + + return -EAGAIN; +} + /** * i40e_trigger_vf_reset * @vf: pointer to the VF structure @@ -1393,9 +1419,11 @@ static void i40e_trigger_vf_reset(struct i40e_vf *vf, bool flr) struct i40e_pf *pf = vf->pf; struct i40e_hw *hw = &pf->hw; u32 reg, reg_idx, bit_idx; + bool vf_active; + u32 radq; /* warn the VF */ - clear_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states); + vf_active = test_and_clear_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states); /* Disable VF's configuration API during reset. The flag is re-enabled * in i40e_alloc_vf_res(), when it's safe again to access VF's VSI. @@ -1409,7 +1437,19 @@ static void i40e_trigger_vf_reset(struct i40e_vf *vf, bool flr) * just need to clean up, so don't hit the VFRTRIG register. */ if (!flr) { - /* reset VF using VPGEN_VFRTRIG reg */ + /* Sync VFR reset before trigger next one */ + radq = rd32(hw, I40E_VFINT_ICR0_ENA(vf->vf_id)) & + I40E_VFINT_ICR0_ADMINQ_MASK; + if (vf_active && !radq) + /* waiting for finish reset by virtual driver */ + if (i40e_sync_vfr_reset(hw, vf->vf_id)) + dev_info(&pf->pdev->dev, + "Reset VF %d never finished\n", + vf->vf_id); + + /* Reset VF using VPGEN_VFRTRIG reg. It is also setting + * in progress state in rstat1 register. 
+ */ reg = rd32(hw, I40E_VPGEN_VFRTRIG(vf->vf_id)); reg |= I40E_VPGEN_VFRTRIG_VFSWR_MASK; wr32(hw, I40E_VPGEN_VFRTRIG(vf->vf_id), reg); diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h index 6aa35c8c9091..8135bd6a1c0a 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h +++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h @@ -19,6 +19,7 @@ #define I40E_MAX_VF_PROMISC_FLAGS 3 #define I40E_VF_STATE_WAIT_COUNT 20 +#define I40E_VFR_WAIT_COUNT 100 #define I40E_VF_RESET_TIME_MIN 30000000 /* time in nsec */ /* Various queue ctrls */ -- 2.27.0 From idosch at idosch.org Thu Dec 2 12:43:39 2021 From: idosch at idosch.org (Ido Schimmel) Date: Thu, 2 Dec 2021 14:43:39 +0200 Subject: [Intel-wired-lan] [PATCH v4 net-next 2/4] ethtool: Add ability to configure recovered clock for SyncE feature In-Reply-To: <20211201180208.640179-3-maciej.machnikowski@intel.com> References: <20211201180208.640179-1-maciej.machnikowski@intel.com> <20211201180208.640179-3-maciej.machnikowski@intel.com> Message-ID: On Wed, Dec 01, 2021 at 07:02:06PM +0100, Maciej Machnikowski wrote: > +RCLK_GET > +======== > + > +Get status of an output pin for PHY recovered frequency clock. > + > +Request contents: > + > + ====================================== ====== ========================== > + ``ETHTOOL_A_RCLK_HEADER`` nested request header > + ``ETHTOOL_A_RCLK_OUT_PIN_IDX`` u32 index of a pin > + ====================================== ====== ========================== > + > +Kernel response contents: > + > + ====================================== ====== ========================== > + ``ETHTOOL_A_RCLK_OUT_PIN_IDX`` u32 index of a pin > + ``ETHTOOL_A_RCLK_PIN_FLAGS`` u32 state of a pin > + ``ETHTOOL_A_RCLK_RANGE_MIN_PIN`` u32 min index of RCLK pins > + ``ETHTOOL_A_RCLK_RANGE_MAX_PIN`` u32 max index of RCLK pins > + ====================================== ====== ========================== > + > +Supported device can have mulitple reference recover clock pins available s/mulitple/multiple/ > +to be used as source of frequency for a DPLL. > +Once a pin on given port is enabled. The PHY recovered frequency is being > +fed onto that pin, and can be used by DPLL to synchonize with its signal. s/synchonize/synchronize/ Please run a spell checker on documentation > +Pins don't have to start with index equal 0 - device can also have different > +external sources pins. > + > +The ``ETHTOOL_A_RCLK_OUT_PIN_IDX`` is optional parameter. If present in > +the RCLK_GET request, the ``ETHTOOL_A_RCLK_PIN_ENABLED`` is provided in a The `ETHTOOL_A_RCLK_PIN_ENABLED` attribute is no where to be found in this submission > +response, it contatins state of the pin pointed by the index. Values are: s/contatins/contains/ > + > +.. kernel-doc:: include/uapi/linux/ethtool.h > + :identifiers: ethtool_rclk_pin_state This structure is also no where to be found > + > +If ``ETHTOOL_A_RCLK_OUT_PIN_IDX`` is not present in the RCLK_GET request, > +the range of available pins is returned: > +``ETHTOOL_A_RCLK_RANGE_MIN_PIN`` is lowest possible index of a pin available > +for recovering frequency from PHY. > +``ETHTOOL_A_RCLK_RANGE_MAX_PIN`` is highest possible index of a pin available > +for recovering frequency from PHY. > + > +RCLK_SET > +========== > + > +Set status of an output pin for PHY recovered frequency clock. 
> + > +Request contents: > + > + ====================================== ====== ======================== > + ``ETHTOOL_A_RCLK_HEADER`` nested request header > + ``ETHTOOL_A_RCLK_OUT_PIN_IDX`` u32 index of a pin > + ``ETHTOOL_A_RCLK_PIN_FLAGS`` u32 requested state > + ====================================== ====== ======================== > + > +``ETHTOOL_A_RCLK_OUT_PIN_IDX`` is a index of a pin for which the change of > +state is requested. Values of ``ETHTOOL_A_RCLK_PIN_ENABLED`` are: > + > +.. kernel-doc:: include/uapi/linux/ethtool.h > + :identifiers: ethtool_rclk_pin_state Same. Looking at the diagram from the previous submission [1]: ??????????????????????? ? RX ? TX ? 1 ? ports ? ports ? 1 ??????????? ? ??????? 2 ? ? ? ? 2 ????????? ? ? ??????? 3 ? ? ? ? ? 3 ??????? ? ? ? ??????? ? ? ? ? ? ? ? ?????? ? ? ? \____/ ? ? ??????????????????????? 1? 2? ? RCLK out? ? ? TX CLK in ? ? ? ??????????????????? ? ? ? SEC ? ? ? ??????????????????? Given a netdev (1, 2 or 3 in the diagram), the RCLK_SET message allows me to redirect the frequency recovered from this netdev to the EEC via either pin 1, pin 2 or both. Given a netdev, the RCLK_GET message allows me to query the range of pins (RCLK out 1-2 in the diagram) through which the frequency can be fed into the EEC. Questions: 1. The query for all the above netdevs will return the same range of pins. How does user space know that these are the same pins? That is, how does user space know that RCLK_SET message to redirect the frequency recovered from netdev 1 to pin 1 will be overridden by the same message but for netdev 2? 2. How does user space know the mapping between a netdev and an EEC? That is, how does user space know that RCLK_SET message for netdev 1 will cause the Tx frequency of netdev 2 to change according to the frequency recovered from netdev 1? 3. If user space sends two RCLK_SET messages to redirect the frequency recovered from netdev 1 to RCLK out 1 and from netdev 2 to RCLK out 2, how does it know which recovered frequency is actually used an input to the EEC? 4. Why these pins are represented as attributes of a netdev and not as attributes of the EEC? That is, why are they represented as output pins of the PHY as opposed to input pins of the EEC? 5. What is the problem with the following model? - The EEC is a separate object with following attributes: * State: Invalid / Freerun / Locked / etc * Sources: Netdev / external / etc * Potentially more - Notifications are emitted to user space when the state of the EEC changes. Drivers will either poll the state from the device or get interrupts - The mapping from netdev to EEC is queried via ethtool [1] https://lore.kernel.org/netdev/20211110114448.2792314-1-maciej.machnikowski at intel.com/ From roots at gmx.de Thu Dec 2 08:34:03 2021 From: roots at gmx.de (Stefan Dietrich) Date: Thu, 02 Dec 2021 09:34:03 +0100 Subject: [Intel-wired-lan] [PATCH] igc: Avoid possible deadlock during suspend/resume In-Reply-To: <20211201185731.236130-1-vinicius.gomes@intel.com> References: <87r1awtdx3.fsf@intel.com> <20211201185731.236130-1-vinicius.gomes@intel.com> Message-ID: <5a4b31d43d9bf32e518188f3ef84c433df3a18b1.camel@gmx.de> Hi Vinicius, thanks for the patch - unfortunately it did not solve the issue and I am still getting reboots/lockups. 
Cheers, Stefan On Wed, 2021-12-01 at 10:57 -0800, Vinicius Costa Gomes wrote: > Inspired by: > https://bugzilla.kernel.org/show_bug.cgi?id=215129 > > Signed-off-by: Vinicius Costa Gomes > --- > Just to see if it's indeed the same problem as the bug report above. > > drivers/net/ethernet/intel/igc/igc_main.c | 19 +++++++++++++------ > 1 file changed, 13 insertions(+), 6 deletions(-) > > diff --git a/drivers/net/ethernet/intel/igc/igc_main.c > b/drivers/net/ethernet/intel/igc/igc_main.c > index 0e19b4d02e62..c58bf557a2a1 100644 > --- a/drivers/net/ethernet/intel/igc/igc_main.c > +++ b/drivers/net/ethernet/intel/igc/igc_main.c > @@ -6619,7 +6619,7 @@ static void igc_deliver_wake_packet(struct > net_device *netdev) > netif_rx(skb); > } > > -static int __maybe_unused igc_resume(struct device *dev) > +static int __maybe_unused __igc_resume(struct device *dev, bool rpm) > { > struct pci_dev *pdev = to_pci_dev(dev); > struct net_device *netdev = pci_get_drvdata(pdev); > @@ -6661,20 +6661,27 @@ static int __maybe_unused igc_resume(struct > device *dev) > > wr32(IGC_WUS, ~0); > > - rtnl_lock(); > + if (!rpm) > + rtnl_lock(); > if (!err && netif_running(netdev)) > err = __igc_open(netdev, true); > > if (!err) > netif_device_attach(netdev); > - rtnl_unlock(); > + if (!rpm) > + rtnl_unlock(); > > return err; > } > > static int __maybe_unused igc_runtime_resume(struct device *dev) > { > - return igc_resume(dev); > + return __igc_resume(dev, true); > +} > + > +static int __maybe_unused igc_resume(struct device *dev) > +{ > + return __igc_resume(dev, false); > } > > static int __maybe_unused igc_suspend(struct device *dev) > @@ -6738,7 +6745,7 @@ static pci_ers_result_t > igc_io_error_detected(struct pci_dev *pdev, > * @pdev: Pointer to PCI device > * > * Restart the card from scratch, as if from a cold-boot. > Implementation > - * resembles the first-half of the igc_resume routine. > + * resembles the first-half of the __igc_resume routine. > **/ > static pci_ers_result_t igc_io_slot_reset(struct pci_dev *pdev) > { > @@ -6777,7 +6784,7 @@ static pci_ers_result_t > igc_io_slot_reset(struct pci_dev *pdev) > * > * This callback is called when the error recovery driver tells us > that > * its OK to resume normal operation. Implementation resembles the > - * second-half of the igc_resume routine. > + * second-half of the __igc_resume routine. > */ > static void igc_io_resume(struct pci_dev *pdev) > { From maciej.machnikowski at intel.com Thu Dec 2 15:17:06 2021 From: maciej.machnikowski at intel.com (Machnikowski, Maciej) Date: Thu, 2 Dec 2021 15:17:06 +0000 Subject: [Intel-wired-lan] [PATCH v4 net-next 2/4] ethtool: Add ability to configure recovered clock for SyncE feature In-Reply-To: References: <20211201180208.640179-1-maciej.machnikowski@intel.com> <20211201180208.640179-3-maciej.machnikowski@intel.com> Message-ID: > -----Original Message----- > From: Ido Schimmel > Sent: Thursday, December 2, 2021 1:44 PM > To: Machnikowski, Maciej > Subject: Re: [PATCH v4 net-next 2/4] ethtool: Add ability to configure > recovered clock for SyncE feature > > On Wed, Dec 01, 2021 at 07:02:06PM +0100, Maciej Machnikowski wrote: > > +RCLK_GET > > +======== > > + > > +Get status of an output pin for PHY recovered frequency clock. 
> > + > > +Request contents: > > + > > + ====================================== ====== > ========================== > > + ``ETHTOOL_A_RCLK_HEADER`` nested request header > > + ``ETHTOOL_A_RCLK_OUT_PIN_IDX`` u32 index of a pin > > + ====================================== ====== > ========================== > > + > > +Kernel response contents: > > + > > + ====================================== ====== > ========================== > > + ``ETHTOOL_A_RCLK_OUT_PIN_IDX`` u32 index of a pin > > + ``ETHTOOL_A_RCLK_PIN_FLAGS`` u32 state of a pin > > + ``ETHTOOL_A_RCLK_RANGE_MIN_PIN`` u32 min index of RCLK pins > > + ``ETHTOOL_A_RCLK_RANGE_MAX_PIN`` u32 max index of RCLK > pins > > + ====================================== ====== > ========================== > > + > > +Supported device can have mulitple reference recover clock pins available > > s/mulitple/multiple/ > > > +to be used as source of frequency for a DPLL. > > +Once a pin on given port is enabled. The PHY recovered frequency is being > > +fed onto that pin, and can be used by DPLL to synchonize with its signal. > > s/synchonize/synchronize/ > > Please run a spell checker on documentation > > > +Pins don't have to start with index equal 0 - device can also have different > > +external sources pins. > > + > > +The ``ETHTOOL_A_RCLK_OUT_PIN_IDX`` is optional parameter. If present > in > > +the RCLK_GET request, the ``ETHTOOL_A_RCLK_PIN_ENABLED`` is > provided in a > > The `ETHTOOL_A_RCLK_PIN_ENABLED` attribute is no where to be found in > this submission > > > +response, it contatins state of the pin pointed by the index. Values are: > > s/contatins/contains/ > > > + > > +.. kernel-doc:: include/uapi/linux/ethtool.h > > + :identifiers: ethtool_rclk_pin_state > > This structure is also no where to be found > > > + > > +If ``ETHTOOL_A_RCLK_OUT_PIN_IDX`` is not present in the RCLK_GET > request, > > +the range of available pins is returned: > > +``ETHTOOL_A_RCLK_RANGE_MIN_PIN`` is lowest possible index of a pin > available > > +for recovering frequency from PHY. > > +``ETHTOOL_A_RCLK_RANGE_MAX_PIN`` is highest possible index of a pin > available > > +for recovering frequency from PHY. > > + > > +RCLK_SET > > +========== > > + > > +Set status of an output pin for PHY recovered frequency clock. > > + > > +Request contents: > > + > > + ====================================== ====== > ======================== > > + ``ETHTOOL_A_RCLK_HEADER`` nested request header > > + ``ETHTOOL_A_RCLK_OUT_PIN_IDX`` u32 index of a pin > > + ``ETHTOOL_A_RCLK_PIN_FLAGS`` u32 requested state > > + ====================================== ====== > ======================== > > + > > +``ETHTOOL_A_RCLK_OUT_PIN_IDX`` is a index of a pin for which the > change of > > +state is requested. Values of ``ETHTOOL_A_RCLK_PIN_ENABLED`` are: > > + > > +.. kernel-doc:: include/uapi/linux/ethtool.h > > + :identifiers: ethtool_rclk_pin_state > > Same. Done - rewritten the manual > Looking at the diagram from the previous submission [1]: > > ??????????????????????? > ? RX ? TX ? > 1 ? ports ? ports ? 1 > ??????????? ? ??????? > 2 ? ? ? ? 2 > ????????? ? ? ??????? > 3 ? ? ? ? ? 3 > ??????? ? ? ? ??????? > ? ? ? ? ? ? > ? ?????? ? ? > ? \____/ ? ? > ??????????????????????? > 1? 2? ? > RCLK out? ? ? TX CLK in > ? ? ? > ??????????????????? > ? ? > ? SEC ? > ? ? > ??????????????????? > > Given a netdev (1, 2 or 3 in the diagram), the RCLK_SET message allows > me to redirect the frequency recovered from this netdev to the EEC via > either pin 1, pin 2 or both. 
> > Given a netdev, the RCLK_GET message allows me to query the range of > pins (RCLK out 1-2 in the diagram) through which the frequency can be > fed into the EEC. > > Questions: > > 1. The query for all the above netdevs will return the same range of > pins. How does user space know that these are the same pins? That is, > how does user space know that RCLK_SET message to redirect the frequency > recovered from netdev 1 to pin 1 will be overridden by the same message > but for netdev 2? We don't have a way to do so right now. When we have EEC subsystem in place the right thing to do will be to add EEC input index and EEC index as additional arguments > 2. How does user space know the mapping between a netdev and an EEC? > That is, how does user space know that RCLK_SET message for netdev 1 > will cause the Tx frequency of netdev 2 to change according to the > frequency recovered from netdev 1? Ditto - currently we don't have any entity to link the pins to ATM, but we can address that in userspace just like PTP pins are used now > 3. If user space sends two RCLK_SET messages to redirect the frequency > recovered from netdev 1 to RCLK out 1 and from netdev 2 to RCLK out 2, > how does it know which recovered frequency is actually used an input to > the EEC? > > 4. Why these pins are represented as attributes of a netdev and not as > attributes of the EEC? That is, why are they represented as output pins > of the PHY as opposed to input pins of the EEC? They are 2 separate beings. Recovered clock outputs are controlled separately from EEC inputs. If we mix them it'll be hard to control everything especially that a single EEC can support multiple devices. Also if we make those pins attributes of the EEC it'll become extremally hard to map them to netdevs and control them from the userspace app that will receive the ESMC message with a given QL level on netdev X. > 5. What is the problem with the following model? > > - The EEC is a separate object with following attributes: > * State: Invalid / Freerun / Locked / etc > * Sources: Netdev / external / etc > * Potentially more > > - Notifications are emitted to user space when the state of the EEC > changes. Drivers will either poll the state from the device or get > interrupts > > - The mapping from netdev to EEC is queried via ethtool Yep - that will be part of the EEC (DPLL) subsystem > [1] https://lore.kernel.org/netdev/20211110114448.2792314-1- > maciej.machnikowski at intel.com/ From idosch at idosch.org Thu Dec 2 16:35:42 2021 From: idosch at idosch.org (Ido Schimmel) Date: Thu, 2 Dec 2021 18:35:42 +0200 Subject: [Intel-wired-lan] [PATCH v4 net-next 2/4] ethtool: Add ability to configure recovered clock for SyncE feature In-Reply-To: References: <20211201180208.640179-1-maciej.machnikowski@intel.com> <20211201180208.640179-3-maciej.machnikowski@intel.com> Message-ID: On Thu, Dec 02, 2021 at 03:17:06PM +0000, Machnikowski, Maciej wrote: > > -----Original Message----- > > From: Ido Schimmel > > Sent: Thursday, December 2, 2021 1:44 PM > > To: Machnikowski, Maciej > > Subject: Re: [PATCH v4 net-next 2/4] ethtool: Add ability to configure > > recovered clock for SyncE feature > > > > On Wed, Dec 01, 2021 at 07:02:06PM +0100, Maciej Machnikowski wrote: > > > +RCLK_GET > > > +======== > > > + > > > +Get status of an output pin for PHY recovered frequency clock. 
> > > + > > > +Request contents: > > > + > > > + ====================================== ====== > > ========================== > > > + ``ETHTOOL_A_RCLK_HEADER`` nested request header > > > + ``ETHTOOL_A_RCLK_OUT_PIN_IDX`` u32 index of a pin > > > + ====================================== ====== > > ========================== > > > + > > > +Kernel response contents: > > > + > > > + ====================================== ====== > > ========================== > > > + ``ETHTOOL_A_RCLK_OUT_PIN_IDX`` u32 index of a pin > > > + ``ETHTOOL_A_RCLK_PIN_FLAGS`` u32 state of a pin > > > + ``ETHTOOL_A_RCLK_RANGE_MIN_PIN`` u32 min index of RCLK pins > > > + ``ETHTOOL_A_RCLK_RANGE_MAX_PIN`` u32 max index of RCLK > > pins > > > + ====================================== ====== > > ========================== > > > + > > > +Supported device can have mulitple reference recover clock pins available > > > > s/mulitple/multiple/ > > > > > +to be used as source of frequency for a DPLL. > > > +Once a pin on given port is enabled. The PHY recovered frequency is being > > > +fed onto that pin, and can be used by DPLL to synchonize with its signal. > > > > s/synchonize/synchronize/ > > > > Please run a spell checker on documentation > > > > > +Pins don't have to start with index equal 0 - device can also have different > > > +external sources pins. > > > + > > > +The ``ETHTOOL_A_RCLK_OUT_PIN_IDX`` is optional parameter. If present > > in > > > +the RCLK_GET request, the ``ETHTOOL_A_RCLK_PIN_ENABLED`` is > > provided in a > > > > The `ETHTOOL_A_RCLK_PIN_ENABLED` attribute is no where to be found in > > this submission > > > > > +response, it contatins state of the pin pointed by the index. Values are: > > > > s/contatins/contains/ > > > > > + > > > +.. kernel-doc:: include/uapi/linux/ethtool.h > > > + :identifiers: ethtool_rclk_pin_state > > > > This structure is also no where to be found > > > > > + > > > +If ``ETHTOOL_A_RCLK_OUT_PIN_IDX`` is not present in the RCLK_GET > > request, > > > +the range of available pins is returned: > > > +``ETHTOOL_A_RCLK_RANGE_MIN_PIN`` is lowest possible index of a pin > > available > > > +for recovering frequency from PHY. > > > +``ETHTOOL_A_RCLK_RANGE_MAX_PIN`` is highest possible index of a pin > > available > > > +for recovering frequency from PHY. > > > + > > > +RCLK_SET > > > +========== > > > + > > > +Set status of an output pin for PHY recovered frequency clock. > > > + > > > +Request contents: > > > + > > > + ====================================== ====== > > ======================== > > > + ``ETHTOOL_A_RCLK_HEADER`` nested request header > > > + ``ETHTOOL_A_RCLK_OUT_PIN_IDX`` u32 index of a pin > > > + ``ETHTOOL_A_RCLK_PIN_FLAGS`` u32 requested state > > > + ====================================== ====== > > ======================== > > > + > > > +``ETHTOOL_A_RCLK_OUT_PIN_IDX`` is a index of a pin for which the > > change of > > > +state is requested. Values of ``ETHTOOL_A_RCLK_PIN_ENABLED`` are: > > > + > > > +.. kernel-doc:: include/uapi/linux/ethtool.h > > > + :identifiers: ethtool_rclk_pin_state > > > > Same. > > Done - rewritten the manual > > > Looking at the diagram from the previous submission [1]: > > > > ??????????????????????? > > ? RX ? TX ? > > 1 ? ports ? ports ? 1 > > ??????????? ? ??????? > > 2 ? ? ? ? 2 > > ????????? ? ? ??????? > > 3 ? ? ? ? ? 3 > > ??????? ? ? ? ??????? > > ? ? ? ? ? ? > > ? ?????? ? ? > > ? \____/ ? ? > > ??????????????????????? > > 1? 2? ? > > RCLK out? ? ? TX CLK in > > ? ? ? > > ??????????????????? > > ? ? > > ? SEC ? > > ? ? 
> > ??????????????????? > > > > Given a netdev (1, 2 or 3 in the diagram), the RCLK_SET message allows > > me to redirect the frequency recovered from this netdev to the EEC via > > either pin 1, pin 2 or both. > > > > Given a netdev, the RCLK_GET message allows me to query the range of > > pins (RCLK out 1-2 in the diagram) through which the frequency can be > > fed into the EEC. > > > > Questions: > > > > 1. The query for all the above netdevs will return the same range of > > pins. How does user space know that these are the same pins? That is, > > how does user space know that RCLK_SET message to redirect the frequency > > recovered from netdev 1 to pin 1 will be overridden by the same message > > but for netdev 2? > > We don't have a way to do so right now. When we have EEC subsystem in place > the right thing to do will be to add EEC input index and EEC index as additional > arguments > > > 2. How does user space know the mapping between a netdev and an EEC? > > That is, how does user space know that RCLK_SET message for netdev 1 > > will cause the Tx frequency of netdev 2 to change according to the > > frequency recovered from netdev 1? > > Ditto - currently we don't have any entity to link the pins to ATM, > but we can address that in userspace just like PTP pins are used now > > > 3. If user space sends two RCLK_SET messages to redirect the frequency > > recovered from netdev 1 to RCLK out 1 and from netdev 2 to RCLK out 2, > > how does it know which recovered frequency is actually used an input to > > the EEC? User space doesn't know this as well? > > > > 4. Why these pins are represented as attributes of a netdev and not as > > attributes of the EEC? That is, why are they represented as output pins > > of the PHY as opposed to input pins of the EEC? > > They are 2 separate beings. Recovered clock outputs are controlled > separately from EEC inputs. Separate how? What does it mean that they are controlled separately? In which sense? That redirection of recovered frequency to pin is controlled via PHY registers whereas priority setting between EEC inputs is controlled via EEC registers? If so, this is an implementation detail of a specific design. It is not of any importance to user space. > If we mix them it'll be hard to control everything especially that a > single EEC can support multiple devices. Hard how? Please provide concrete examples. What do you mean by "multiple devices"? A multi-port adapter with a single EEC or something else? > Also if we make those pins attributes of the EEC it'll become extremally hard > to map them to netdevs and control them from the userspace app that will > receive the ESMC message with a given QL level on netdev X. Hard how? What is the problem with something like: # eec set source 1 type netdev dev swp1 The EEC object should be registered by the same entity that registers the netdevs whose Tx frequency is controlled by the EEC, the MAC driver. > > > 5. What is the problem with the following model? > > > > - The EEC is a separate object with following attributes: > > * State: Invalid / Freerun / Locked / etc > > * Sources: Netdev / external / etc > > * Potentially more > > > > - Notifications are emitted to user space when the state of the EEC > > changes. Drivers will either poll the state from the device or get > > interrupts > > > > - The mapping from netdev to EEC is queried via ethtool > > Yep - that will be part of the EEC (DPLL) subsystem This model avoids all the problems I pointed out in the current proposal. 
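To make the model concrete: the initial uAPI surface could be as small as
something along these lines (every name below is made up purely for
illustration, none of it exists today):

	enum eec_state {
		EEC_STATE_INVALID,
		EEC_STATE_FREERUN,
		EEC_STATE_LOCKED,
	};

	enum eec_source_type {
		EEC_SOURCE_NETDEV,
		EEC_SOURCE_EXTERNAL,
	};

plus a netlink notification carrying the EEC index and its new state whenever
the state changes, and an ethtool attribute mapping a netdev to its EEC index.
User space tooling (like the "eec set source ..." example above) can then grow
on top of that.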
> > > [1] https://lore.kernel.org/netdev/20211110114448.2792314-1- > > maciej.machnikowski at intel.com/ From anthony.l.nguyen at intel.com Thu Dec 2 16:38:44 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Thu, 2 Dec 2021 08:38:44 -0800 Subject: [Intel-wired-lan] [PATCH net-next v3 06/14] ice: Use the proto argument for VLAN ops In-Reply-To: <20211202163852.36436-1-anthony.l.nguyen@intel.com> References: <20211202163852.36436-1-anthony.l.nguyen@intel.com> Message-ID: <20211202163852.36436-6-anthony.l.nguyen@intel.com> From: Brett Creeley Currently the proto argument is unused. This is because the driver only supports 802.1Q VLAN filtering. This policy is enforced via netdev features that the driver sets up when configuring the netdev, so the proto argument won't ever be anything other than 802.1Q. However, this will allow for future iterations of the driver to seemlessly support 802.1ad filtering. Begin using the proto argument and extend the related structures to support its use. Signed-off-by: Brett Creeley Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/ice/ice_fltr.c | 2 + drivers/net/ethernet/intel/ice/ice_lib.c | 2 +- drivers/net/ethernet/intel/ice/ice_main.c | 22 ++++----- drivers/net/ethernet/intel/ice/ice_switch.c | 5 ++ drivers/net/ethernet/intel/ice/ice_switch.h | 2 + .../net/ethernet/intel/ice/ice_virtchnl_pf.c | 10 ++-- .../net/ethernet/intel/ice/ice_virtchnl_pf.h | 2 +- drivers/net/ethernet/intel/ice/ice_vlan.h | 3 +- .../net/ethernet/intel/ice/ice_vsi_vlan_lib.c | 48 ++++++++++++++++++- .../net/ethernet/intel/ice/ice_vsi_vlan_lib.h | 4 +- .../net/ethernet/intel/ice/ice_vsi_vlan_ops.h | 4 +- 11 files changed, 78 insertions(+), 26 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_fltr.c b/drivers/net/ethernet/intel/ice/ice_fltr.c index 8f543851e39f..67044556b5bd 100644 --- a/drivers/net/ethernet/intel/ice/ice_fltr.c +++ b/drivers/net/ethernet/intel/ice/ice_fltr.c @@ -220,6 +220,8 @@ ice_fltr_add_vlan_to_list(struct ice_vsi *vsi, struct list_head *list, info.fltr_act = ICE_FWD_TO_VSI; info.vsi_handle = vsi->idx; info.l_data.vlan.vlan_id = vlan->vid; + info.l_data.vlan.tpid = vlan->tpid; + info.l_data.vlan.tpid_valid = true; return ice_fltr_add_entry_to_list(ice_pf_to_dev(vsi->back), &info, list); diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index 55a2aef54922..0fff5ec897c9 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -3880,7 +3880,7 @@ int ice_vsi_add_vlan_zero(struct ice_vsi *vsi) { struct ice_vlan vlan; - vlan = ICE_VLAN(0, 0); + vlan = ICE_VLAN(0, 0, 0); return vsi->vlan_ops.add_vlan(vsi, &vlan); } diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c index 8669858d104c..8a0684c0ebd0 100644 --- a/drivers/net/ethernet/intel/ice/ice_main.c +++ b/drivers/net/ethernet/intel/ice/ice_main.c @@ -3410,14 +3410,13 @@ ice_lb_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi) /** * ice_vlan_rx_add_vid - Add a VLAN ID filter to HW offload * @netdev: network interface to be adjusted - * @proto: unused protocol + * @proto: VLAN TPID * @vid: VLAN ID to be added * * net_device_ops implementation for adding VLAN IDs */ static int -ice_vlan_rx_add_vid(struct net_device *netdev, __always_unused __be16 proto, - u16 vid) +ice_vlan_rx_add_vid(struct net_device *netdev, __be16 proto, u16 vid) { struct ice_netdev_priv *np = netdev_priv(netdev); struct ice_vsi *vsi = np->vsi; @@ -3438,7 +3437,7 @@ 
ice_vlan_rx_add_vid(struct net_device *netdev, __always_unused __be16 proto, /* Add a switch rule for this VLAN ID so its corresponding VLAN tagged * packets aren't pruned by the device's internal switch on Rx */ - vlan = ICE_VLAN(vid, 0); + vlan = ICE_VLAN(be16_to_cpu(proto), vid, 0); ret = vsi->vlan_ops.add_vlan(vsi, &vlan); if (!ret) set_bit(ICE_VSI_VLAN_FLTR_CHANGED, vsi->state); @@ -3449,14 +3448,13 @@ ice_vlan_rx_add_vid(struct net_device *netdev, __always_unused __be16 proto, /** * ice_vlan_rx_kill_vid - Remove a VLAN ID filter from HW offload * @netdev: network interface to be adjusted - * @proto: unused protocol + * @proto: VLAN TPID * @vid: VLAN ID to be removed * * net_device_ops implementation for removing VLAN IDs */ static int -ice_vlan_rx_kill_vid(struct net_device *netdev, __always_unused __be16 proto, - u16 vid) +ice_vlan_rx_kill_vid(struct net_device *netdev, __be16 proto, u16 vid) { struct ice_netdev_priv *np = netdev_priv(netdev); struct ice_vsi *vsi = np->vsi; @@ -3470,7 +3468,7 @@ ice_vlan_rx_kill_vid(struct net_device *netdev, __always_unused __be16 proto, /* Make sure VLAN delete is successful before updating VLAN * information */ - vlan = ICE_VLAN(vid, 0); + vlan = ICE_VLAN(be16_to_cpu(proto), vid, 0); ret = vsi->vlan_ops.del_vlan(vsi, &vlan); if (ret) return ret; @@ -5621,14 +5619,14 @@ ice_set_features(struct net_device *netdev, netdev_features_t features) if ((features & NETIF_F_HW_VLAN_CTAG_RX) && !(netdev->features & NETIF_F_HW_VLAN_CTAG_RX)) - ret = vsi->vlan_ops.ena_stripping(vsi); + ret = vsi->vlan_ops.ena_stripping(vsi, ETH_P_8021Q); else if (!(features & NETIF_F_HW_VLAN_CTAG_RX) && (netdev->features & NETIF_F_HW_VLAN_CTAG_RX)) ret = vsi->vlan_ops.dis_stripping(vsi); if ((features & NETIF_F_HW_VLAN_CTAG_TX) && !(netdev->features & NETIF_F_HW_VLAN_CTAG_TX)) - ret = vsi->vlan_ops.ena_insertion(vsi); + ret = vsi->vlan_ops.ena_insertion(vsi, ETH_P_8021Q); else if (!(features & NETIF_F_HW_VLAN_CTAG_TX) && (netdev->features & NETIF_F_HW_VLAN_CTAG_TX)) ret = vsi->vlan_ops.dis_insertion(vsi); @@ -5674,9 +5672,9 @@ static int ice_vsi_vlan_setup(struct ice_vsi *vsi) int ret = 0; if (vsi->netdev->features & NETIF_F_HW_VLAN_CTAG_RX) - ret = vsi->vlan_ops.ena_stripping(vsi); + ret = vsi->vlan_ops.ena_stripping(vsi, ETH_P_8021Q); if (vsi->netdev->features & NETIF_F_HW_VLAN_CTAG_TX) - ret = vsi->vlan_ops.ena_insertion(vsi); + ret = vsi->vlan_ops.ena_insertion(vsi, ETH_P_8021Q); return ret; } diff --git a/drivers/net/ethernet/intel/ice/ice_switch.c b/drivers/net/ethernet/intel/ice/ice_switch.c index f998fcddc789..f851a81a7240 100644 --- a/drivers/net/ethernet/intel/ice/ice_switch.c +++ b/drivers/net/ethernet/intel/ice/ice_switch.c @@ -1539,6 +1539,7 @@ ice_fill_sw_rule(struct ice_hw *hw, struct ice_fltr_info *f_info, struct ice_aqc_sw_rules_elem *s_rule, enum ice_adminq_opc opc) { u16 vlan_id = ICE_MAX_VLAN_ID + 1; + u16 vlan_tpid = ETH_P_8021Q; void *daddr = NULL; u16 eth_hdr_sz; u8 *eth_hdr; @@ -1611,6 +1612,8 @@ ice_fill_sw_rule(struct ice_hw *hw, struct ice_fltr_info *f_info, break; case ICE_SW_LKUP_VLAN: vlan_id = f_info->l_data.vlan.vlan_id; + if (f_info->l_data.vlan.tpid_valid) + vlan_tpid = f_info->l_data.vlan.tpid; if (f_info->fltr_act == ICE_FWD_TO_VSI || f_info->fltr_act == ICE_FWD_TO_VSI_LIST) { act |= ICE_SINGLE_ACT_PRUNE; @@ -1653,6 +1656,8 @@ ice_fill_sw_rule(struct ice_hw *hw, struct ice_fltr_info *f_info, if (!(vlan_id > ICE_MAX_VLAN_ID)) { off = (__force __be16 *)(eth_hdr + ICE_ETH_VLAN_TCI_OFFSET); *off = cpu_to_be16(vlan_id); + off = (__force __be16 
*)(eth_hdr + ICE_ETH_ETHTYPE_OFFSET); + *off = cpu_to_be16(vlan_tpid); } /* Create the switch rule with the final dummy Ethernet header */ diff --git a/drivers/net/ethernet/intel/ice/ice_switch.h b/drivers/net/ethernet/intel/ice/ice_switch.h index 4fb1a7ae5dbb..5000cc8276cd 100644 --- a/drivers/net/ethernet/intel/ice/ice_switch.h +++ b/drivers/net/ethernet/intel/ice/ice_switch.h @@ -77,6 +77,8 @@ struct ice_fltr_info { } mac_vlan; struct { u16 vlan_id; + u16 tpid; + u8 tpid_valid; } vlan; /* Set lkup_type as ICE_SW_LKUP_ETHERTYPE * if just using ethertype as filter. Set lkup_type as diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c index 4971e547432c..e576cd201a48 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c @@ -4139,7 +4139,7 @@ ice_set_vf_port_vlan(struct net_device *netdev, int vf_id, u16 vlan_id, u8 qos, mutex_lock(&vf->cfg_lock); - vf->port_vlan_info = ICE_VLAN(vlan_id, qos); + vf->port_vlan_info = ICE_VLAN(ETH_P_8021Q, vlan_id, qos); if (ice_vf_is_port_vlan_ena(vf)) dev_info(dev, "Setting VLAN %u, QoS %u on VF %d\n", vlan_id, qos, vf_id); @@ -4260,7 +4260,7 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) if (!vid) continue; - vlan = ICE_VLAN(vid, 0); + vlan = ICE_VLAN(ETH_P_8021Q, vid, 0); status = vsi->vlan_ops.add_vlan(vsi, &vlan); if (status) { v_ret = VIRTCHNL_STATUS_ERR_PARAM; @@ -4313,7 +4313,7 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) if (!vid) continue; - vlan = ICE_VLAN(vid, 0); + vlan = ICE_VLAN(ETH_P_8021Q, vid, 0); status = vsi->vlan_ops.del_vlan(vsi, &vlan); if (status) { v_ret = VIRTCHNL_STATUS_ERR_PARAM; @@ -4392,7 +4392,7 @@ static int ice_vc_ena_vlan_stripping(struct ice_vf *vf) } vsi = ice_get_vf_vsi(vf); - if (vsi->vlan_ops.ena_stripping(vsi)) + if (vsi->vlan_ops.ena_stripping(vsi, ETH_P_8021Q)) v_ret = VIRTCHNL_STATUS_ERR_PARAM; error_param: @@ -4457,7 +4457,7 @@ static int ice_vf_init_vlan_stripping(struct ice_vf *vf) return 0; if (ice_vf_vlan_offload_ena(vf->driver_caps)) - return vsi->vlan_ops.ena_stripping(vsi); + return vsi->vlan_ops.ena_stripping(vsi, ETH_P_8021Q); else return vsi->vlan_ops.dis_stripping(vsi); } diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h index 5079a3b72698..b06ca1f97833 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h @@ -120,7 +120,7 @@ struct ice_vf { struct ice_time_mac legacy_last_added_umac; DECLARE_BITMAP(txq_ena, ICE_MAX_RSS_QS_PER_VF); DECLARE_BITMAP(rxq_ena, ICE_MAX_RSS_QS_PER_VF); - struct ice_vlan port_vlan_info; /* Port VLAN ID and QoS */ + struct ice_vlan port_vlan_info; /* Port VLAN ID, QoS, and TPID */ u8 pf_set_mac:1; /* VF MAC address set by VMM admin */ u8 trusted:1; u8 spoofchk:1; diff --git a/drivers/net/ethernet/intel/ice/ice_vlan.h b/drivers/net/ethernet/intel/ice/ice_vlan.h index 3fad0cba2da6..bc4550a03173 100644 --- a/drivers/net/ethernet/intel/ice/ice_vlan.h +++ b/drivers/net/ethernet/intel/ice/ice_vlan.h @@ -8,10 +8,11 @@ #include "ice_type.h" struct ice_vlan { + u16 tpid; u16 vid; u8 prio; }; -#define ICE_VLAN(vid, prio) ((struct ice_vlan){ vid, prio }) +#define ICE_VLAN(tpid, vid, prio) ((struct ice_vlan){ tpid, vid, prio }) #endif /* _ICE_VLAN_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c index 
74b6dec0744b..6b7feab0b2a1 100644 --- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c @@ -6,6 +6,31 @@ #include "ice_fltr.h" #include "ice.h" +static void print_invalid_tpid(struct ice_vsi *vsi, u16 tpid) +{ + dev_err(ice_pf_to_dev(vsi->back), "%s %d specified invalid VLAN tpid 0x%04x\n", + ice_vsi_type_str(vsi->type), vsi->idx, tpid); +} + +/** + * validate_vlan - check if the ice_vlan passed in is valid + * @vsi: VSI used for printing error message + * @vlan: ice_vlan structure to validate + * + * Return true if the VLAN TPID is valid or if the VLAN TPID is 0 and the VLAN + * VID is 0, which allows for non-zero VLAN filters with the specified VLAN TPID + * and untagged VLAN 0 filters to be added to the prune list respectively. + */ +static bool validate_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) +{ + if (vlan->tpid != ETH_P_8021Q && (vlan->tpid || vlan->vid)) { + print_invalid_tpid(vsi, vlan->tpid); + return false; + } + + return true; +} + /** * ice_vsi_add_vlan - default add VLAN implementation for all VSI types * @vsi: VSI being configured @@ -15,6 +40,9 @@ int ice_vsi_add_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) { int err = 0; + if (!validate_vlan(vsi, vlan)) + return -EINVAL; + if (!ice_fltr_add_vlan(vsi, vlan)) { vsi->num_vlan++; } else { @@ -37,6 +65,9 @@ int ice_vsi_del_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) struct device *dev; int err; + if (!validate_vlan(vsi, vlan)) + return -EINVAL; + dev = ice_pf_to_dev(pf); err = ice_fltr_remove_vlan(vsi, vlan); @@ -143,8 +174,13 @@ static int ice_vsi_manage_vlan_stripping(struct ice_vsi *vsi, bool ena) return err; } -int ice_vsi_ena_stripping(struct ice_vsi *vsi) +int ice_vsi_ena_stripping(struct ice_vsi *vsi, const u16 tpid) { + if (tpid != ETH_P_8021Q) { + print_invalid_tpid(vsi, tpid); + return -EINVAL; + } + return ice_vsi_manage_vlan_stripping(vsi, true); } @@ -153,8 +189,13 @@ int ice_vsi_dis_stripping(struct ice_vsi *vsi) return ice_vsi_manage_vlan_stripping(vsi, false); } -int ice_vsi_ena_insertion(struct ice_vsi *vsi) +int ice_vsi_ena_insertion(struct ice_vsi *vsi, const u16 tpid) { + if (tpid != ETH_P_8021Q) { + print_invalid_tpid(vsi, tpid); + return -EINVAL; + } + return ice_vsi_manage_vlan_insertion(vsi); } @@ -216,6 +257,9 @@ int ice_vsi_set_port_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) { u16 port_vlan_info; + if (vlan->tpid != ETH_P_8021Q) + return -EINVAL; + if (vlan->prio > 7) return -EINVAL; diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h index a0305007896c..1bdbf585db7d 100644 --- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h @@ -12,9 +12,9 @@ struct ice_vsi; int ice_vsi_add_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan); int ice_vsi_del_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan); -int ice_vsi_ena_stripping(struct ice_vsi *vsi); +int ice_vsi_ena_stripping(struct ice_vsi *vsi, u16 tpid); int ice_vsi_dis_stripping(struct ice_vsi *vsi); -int ice_vsi_ena_insertion(struct ice_vsi *vsi); +int ice_vsi_ena_insertion(struct ice_vsi *vsi, u16 tpid); int ice_vsi_dis_insertion(struct ice_vsi *vsi); int ice_vsi_set_port_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan); diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h index c944f04acd3c..76e55b259bc8 100644 --- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h +++ 
b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h @@ -12,9 +12,9 @@ struct ice_vsi; struct ice_vsi_vlan_ops { int (*add_vlan)(struct ice_vsi *vsi, struct ice_vlan *vlan); int (*del_vlan)(struct ice_vsi *vsi, struct ice_vlan *vlan); - int (*ena_stripping)(struct ice_vsi *vsi); + int (*ena_stripping)(struct ice_vsi *vsi, const u16 tpid); int (*dis_stripping)(struct ice_vsi *vsi); - int (*ena_insertion)(struct ice_vsi *vsi); + int (*ena_insertion)(struct ice_vsi *vsi, const u16 tpid); int (*dis_insertion)(struct ice_vsi *vsi); int (*ena_rx_filtering)(struct ice_vsi *vsi); int (*dis_rx_filtering)(struct ice_vsi *vsi); -- 2.20.1 From anthony.l.nguyen at intel.com Thu Dec 2 16:38:52 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Thu, 2 Dec 2021 08:38:52 -0800 Subject: [Intel-wired-lan] [PATCH net-next v3 14/14] ice: Add ability for PF admin to enable VF VLAN pruning In-Reply-To: <20211202163852.36436-1-anthony.l.nguyen@intel.com> References: <20211202163852.36436-1-anthony.l.nguyen@intel.com> Message-ID: <20211202163852.36436-14-anthony.l.nguyen@intel.com> From: Brett Creeley VFs by default are able to see all tagged traffic regardless of trust and VLAN filters. Based on legacy devices (i.e. ixgbe, i40e), customers expect VFs to receive all VLAN tagged traffic with a matching destination MAC. Add an ethtool private flag 'vf-vlan-pruning' and set the default to off so VFs will receive all VLAN traffic directed towards them. When the flag is turned on, VF will only be able to receive untagged traffic or traffic with VLAN tags it has created interfaces for. Also, the flag cannot be changed while any VFs are allocated. This was done to simplify the implementation. So, if this flag is needed, then the PF admin must enable it. If the user tries to enable the flag while VFs are active, then print an unsupported message with the vf-vlan-pruning flag included. In case multiple flags were specified, this makes it clear to the user which flag failed. 
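For example, assuming no VFs have been allocated yet (the interface name below
is only a placeholder), the PF admin would enable pruning before spawning VFs:

	ethtool --set-priv-flags <ethX> vf-vlan-pruning on
	echo 2 > /sys/class/net/<ethX>/device/sriov_numvfs

Leaving the flag at its default (off) keeps the legacy behavior, i.e. VFs
receive all VLAN tagged traffic with a matching destination MAC.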
Signed-off-by: Brett Creeley Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/ice/ice.h | 1 + drivers/net/ethernet/intel/ice/ice_ethtool.c | 9 +++++++++ .../ethernet/intel/ice/ice_vf_vsi_vlan_ops.c | 18 ++++++++++++++++-- .../net/ethernet/intel/ice/ice_virtchnl_pf.c | 14 ++++++++++++++ 4 files changed, 40 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h index 14aaca8dbbb7..dc86f2562e0f 100644 --- a/drivers/net/ethernet/intel/ice/ice.h +++ b/drivers/net/ethernet/intel/ice/ice.h @@ -486,6 +486,7 @@ enum ice_pf_flags { ICE_FLAG_LEGACY_RX, ICE_FLAG_VF_TRUE_PROMISC_ENA, ICE_FLAG_MDD_AUTO_RESET_VF, + ICE_FLAG_VF_VLAN_PRUNING, ICE_FLAG_LINK_LENIENT_MODE_ENA, ICE_FLAG_GNSS, /* GNSS successfully initialized */ ICE_PF_FLAGS_NBITS /* must be last */ diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c index e2e3ef7fba7f..28ead0b4712f 100644 --- a/drivers/net/ethernet/intel/ice/ice_ethtool.c +++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c @@ -164,6 +164,7 @@ static const struct ice_priv_flag ice_gstrings_priv_flags[] = { ICE_PRIV_FLAG("vf-true-promisc-support", ICE_FLAG_VF_TRUE_PROMISC_ENA), ICE_PRIV_FLAG("mdd-auto-reset-vf", ICE_FLAG_MDD_AUTO_RESET_VF), + ICE_PRIV_FLAG("vf-vlan-pruning", ICE_FLAG_VF_VLAN_PRUNING), ICE_PRIV_FLAG("legacy-rx", ICE_FLAG_LEGACY_RX), }; @@ -1295,6 +1296,14 @@ static int ice_set_priv_flags(struct net_device *netdev, u32 flags) change_bit(ICE_FLAG_VF_TRUE_PROMISC_ENA, pf->flags); ret = -EAGAIN; } + + if (test_bit(ICE_FLAG_VF_VLAN_PRUNING, change_flags) && + pf->num_alloc_vfs) { + dev_err(dev, "vf-vlan-pruning: VLAN pruning cannot be changed while VFs are active.\n"); + /* toggle bit back to previous state */ + change_bit(ICE_FLAG_VF_VLAN_PRUNING, pf->flags); + ret = -EOPNOTSUPP; + } ethtool_exit: clear_bit(ICE_FLAG_ETHTOOL_CTXT, pf->flags); return ret; diff --git a/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c b/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c index 4be29f97365c..39f2d36cabba 100644 --- a/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c +++ b/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c @@ -43,7 +43,6 @@ void ice_vf_vsi_init_vlan_ops(struct ice_vsi *vsi) /* outer VLAN ops regardless of port VLAN config */ vlan_ops->add_vlan = ice_vsi_add_vlan; - vlan_ops->ena_rx_filtering = ice_vsi_ena_rx_vlan_filtering; vlan_ops->dis_rx_filtering = ice_vsi_dis_rx_vlan_filtering; vlan_ops->ena_tx_filtering = ice_vsi_ena_tx_vlan_filtering; vlan_ops->dis_tx_filtering = ice_vsi_dis_tx_vlan_filtering; @@ -51,6 +50,8 @@ void ice_vf_vsi_init_vlan_ops(struct ice_vsi *vsi) if (ice_vf_is_port_vlan_ena(vf)) { /* setup outer VLAN ops */ vlan_ops->set_port_vlan = ice_vsi_set_outer_port_vlan; + vlan_ops->ena_rx_filtering = + ice_vsi_ena_rx_vlan_filtering; /* setup inner VLAN ops */ vlan_ops = &vsi->inner_vlan_ops; @@ -61,6 +62,12 @@ void ice_vf_vsi_init_vlan_ops(struct ice_vsi *vsi) vlan_ops->ena_insertion = ice_vsi_ena_inner_insertion; vlan_ops->dis_insertion = ice_vsi_dis_inner_insertion; } else { + if (!test_bit(ICE_FLAG_VF_VLAN_PRUNING, pf->flags)) + vlan_ops->ena_rx_filtering = noop_vlan; + else + vlan_ops->ena_rx_filtering = + ice_vsi_ena_rx_vlan_filtering; + vlan_ops->del_vlan = ice_vsi_del_vlan; vlan_ops->ena_stripping = ice_vsi_ena_outer_stripping; vlan_ops->dis_stripping = ice_vsi_dis_outer_stripping; @@ -80,14 +87,21 @@ void ice_vf_vsi_init_vlan_ops(struct ice_vsi *vsi) /* inner VLAN ops regardless of port VLAN config */ 
vlan_ops->add_vlan = ice_vsi_add_vlan; - vlan_ops->ena_rx_filtering = ice_vsi_ena_rx_vlan_filtering; vlan_ops->dis_rx_filtering = ice_vsi_dis_rx_vlan_filtering; vlan_ops->ena_tx_filtering = ice_vsi_ena_tx_vlan_filtering; vlan_ops->dis_tx_filtering = ice_vsi_dis_tx_vlan_filtering; if (ice_vf_is_port_vlan_ena(vf)) { vlan_ops->set_port_vlan = ice_vsi_set_inner_port_vlan; + vlan_ops->ena_rx_filtering = + ice_vsi_ena_rx_vlan_filtering; } else { + if (!test_bit(ICE_FLAG_VF_VLAN_PRUNING, pf->flags)) + vlan_ops->ena_rx_filtering = noop_vlan; + else + vlan_ops->ena_rx_filtering = + ice_vsi_ena_rx_vlan_filtering; + vlan_ops->del_vlan = ice_vsi_del_vlan; vlan_ops->ena_stripping = ice_vsi_ena_inner_stripping; vlan_ops->dis_stripping = ice_vsi_dis_inner_stripping; diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c index f1802de98b82..674d27c1a81d 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c @@ -807,6 +807,11 @@ static int ice_vf_rebuild_host_vlan_cfg(struct ice_vf *vf, struct ice_vsi *vsi) return err; } + err = vlan_ops->ena_rx_filtering(vsi); + if (err) + dev_warn(dev, "failed to enable Rx VLAN filtering for VF %d VSI %d during VF rebuild, error %d\n", + vf->vf_id, vsi->idx, err); + return 0; } @@ -1791,6 +1796,7 @@ static void ice_vc_notify_vf_reset(struct ice_vf *vf) */ static int ice_init_vf_vsi_res(struct ice_vf *vf) { + struct ice_vsi_vlan_ops *vlan_ops; struct ice_pf *pf = vf->pf; u8 broadcast[ETH_ALEN]; struct ice_vsi *vsi; @@ -1811,6 +1817,14 @@ static int ice_init_vf_vsi_res(struct ice_vf *vf) goto release_vsi; } + vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); + err = vlan_ops->ena_rx_filtering(vsi); + if (err) { + dev_warn(dev, "Failed to enable Rx VLAN filtering for VF %d\n", + vf->vf_id); + goto release_vsi; + } + eth_broadcast_addr(broadcast); err = ice_fltr_add_mac(vsi, broadcast, ICE_FWD_TO_VSI); if (err) { -- 2.20.1 From anthony.l.nguyen at intel.com Thu Dec 2 16:38:40 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Thu, 2 Dec 2021 08:38:40 -0800 Subject: [Intel-wired-lan] [PATCH net-next v3 02/14] ice: Add helper function for adding VLAN 0 In-Reply-To: <20211202163852.36436-1-anthony.l.nguyen@intel.com> References: <20211202163852.36436-1-anthony.l.nguyen@intel.com> Message-ID: <20211202163852.36436-2-anthony.l.nguyen@intel.com> From: Brett Creeley There are multiple places where VLAN 0 is being added. Create a function to be called in order to minimize changes as the implementation is expanded to support double VLAN and avoid duplicated code. 
Signed-off-by: Brett Creeley Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/ice/ice_eswitch.c | 4 ++-- drivers/net/ethernet/intel/ice/ice_lib.c | 11 ++++++++++- drivers/net/ethernet/intel/ice/ice_lib.h | 2 +- drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c | 2 +- 4 files changed, 14 insertions(+), 5 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch.c b/drivers/net/ethernet/intel/ice/ice_eswitch.c index a737c54c4895..291748553800 100644 --- a/drivers/net/ethernet/intel/ice/ice_eswitch.c +++ b/drivers/net/ethernet/intel/ice/ice_eswitch.c @@ -127,7 +127,7 @@ static int ice_eswitch_setup_env(struct ice_pf *pf) __dev_mc_unsync(uplink_netdev, NULL); netif_addr_unlock_bh(uplink_netdev); - if (ice_vsi_add_vlan(uplink_vsi, 0, ICE_FWD_TO_VSI)) + if (ice_vsi_add_vlan_zero(uplink_vsi)) goto err_def_rx; if (!ice_is_dflt_vsi_in_use(uplink_vsi->vsw)) { @@ -231,7 +231,7 @@ static int ice_eswitch_setup_reprs(struct ice_pf *pf) goto err; } - if (ice_vsi_add_vlan(vsi, 0, ICE_FWD_TO_VSI)) { + if (ice_vsi_add_vlan_zero(vsi)) { ice_fltr_add_mac_and_broadcast(vsi, vf->hw_lan_addr.addr, ICE_FWD_TO_VSI); diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index 2db3cd6d8907..cc135792834e 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -2621,7 +2621,7 @@ ice_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi, * so this handles those cases (i.e. adding the PF to a bridge * without the 8021q module loaded). */ - ret = ice_vsi_add_vlan(vsi, 0, ICE_FWD_TO_VSI); + ret = ice_vsi_add_vlan_zero(vsi); if (ret) goto unroll_clear_rings; @@ -4069,6 +4069,15 @@ int ice_set_link(struct ice_vsi *vsi, bool ena) return 0; } +/** + * ice_vsi_add_vlan_zero - add VLAN 0 filter(s) for this VSI + * @vsi: VSI used to add VLAN filters + */ +int ice_vsi_add_vlan_zero(struct ice_vsi *vsi) +{ + return ice_vsi_add_vlan(vsi, 0, ICE_FWD_TO_VSI); +} + /** * ice_is_feature_supported * @pf: pointer to the struct ice_pf instance diff --git a/drivers/net/ethernet/intel/ice/ice_lib.h b/drivers/net/ethernet/intel/ice/ice_lib.h index 9fdd95dd5a14..28e0f1147c82 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.h +++ b/drivers/net/ethernet/intel/ice/ice_lib.h @@ -133,7 +133,7 @@ void ice_vsi_ctx_clear_antispoof(struct ice_vsi_ctx *ctx); void ice_vsi_ctx_set_allow_override(struct ice_vsi_ctx *ctx); void ice_vsi_ctx_clear_allow_override(struct ice_vsi_ctx *ctx); - +int ice_vsi_add_vlan_zero(struct ice_vsi *vsi); bool ice_is_feature_supported(struct ice_pf *pf, enum ice_feature f); void ice_clear_feature_support(struct ice_pf *pf, enum ice_feature f); void ice_init_feature_support(struct ice_pf *pf); diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c index f947d936def3..ab03010c822d 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c @@ -1855,7 +1855,7 @@ static int ice_init_vf_vsi_res(struct ice_vf *vf) if (!vsi) return -ENOMEM; - err = ice_vsi_add_vlan(vsi, 0, ICE_FWD_TO_VSI); + err = ice_vsi_add_vlan_zero(vsi); if (err) { dev_warn(dev, "Failed to add VLAN 0 filter for VF %d\n", vf->vf_id); -- 2.20.1 From anthony.l.nguyen at intel.com Thu Dec 2 16:38:39 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Thu, 2 Dec 2021 08:38:39 -0800 Subject: [Intel-wired-lan] [PATCH net-next v3 01/14] ice: Refactor spoofcheck configuration functions Message-ID: 
<20211202163852.36436-1-anthony.l.nguyen@intel.com> From: Brett Creeley Add functions to configure Tx VLAN antispoof based on iproute configuration and/or VLAN mode and VF driver support. This is needed later so the driver can control when it can be configured. Also, add functions that can be used to enable and disable MAC and VLAN spoofcheck. Move spoofchk configuration during VSI setup into the SR-IOV initialization path and into the post VSI rebuild flow for VF VSIs. Signed-off-by: Brett Creeley Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/ice/ice_lib.c | 19 --- .../net/ethernet/intel/ice/ice_virtchnl_pf.c | 159 ++++++++++++++---- 2 files changed, 128 insertions(+), 50 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index 5ef959769104..2db3cd6d8907 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -1125,25 +1125,6 @@ static int ice_vsi_init(struct ice_vsi *vsi, bool init_vsi) cpu_to_le16(ICE_AQ_VSI_PROP_RXQ_MAP_VALID); } - /* enable/disable MAC and VLAN anti-spoof when spoofchk is on/off - * respectively - */ - if (vsi->type == ICE_VSI_VF) { - ctxt->info.valid_sections |= - cpu_to_le16(ICE_AQ_VSI_PROP_SECURITY_VALID); - if (pf->vf[vsi->vf_id].spoofchk) { - ctxt->info.sec_flags |= - ICE_AQ_VSI_SEC_FLAG_ENA_MAC_ANTI_SPOOF | - (ICE_AQ_VSI_SEC_TX_VLAN_PRUNE_ENA << - ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S); - } else { - ctxt->info.sec_flags &= - ~(ICE_AQ_VSI_SEC_FLAG_ENA_MAC_ANTI_SPOOF | - (ICE_AQ_VSI_SEC_TX_VLAN_PRUNE_ENA << - ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S)); - } - } - /* Allow control frames out of main VSI */ if (vsi->type == ICE_VSI_PF) { ctxt->info.sec_flags |= ICE_AQ_VSI_SEC_FLAG_ALLOW_DEST_OVRD; diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c index 8f2045b7c29f..f947d936def3 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c @@ -837,6 +837,114 @@ static int ice_vf_rebuild_host_vlan_cfg(struct ice_vf *vf) return 0; } +static int ice_cfg_vlan_antispoof(struct ice_vsi *vsi, bool enable) +{ + struct ice_vsi_ctx *ctx; + int err; + + ctx = kzalloc(sizeof(*ctx), GFP_KERNEL); + if (!ctx) + return -ENOMEM; + + ctx->info.sec_flags = vsi->info.sec_flags; + ctx->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_SECURITY_VALID); + + if (enable) + ctx->info.sec_flags |= ICE_AQ_VSI_SEC_TX_VLAN_PRUNE_ENA << + ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S; + else + ctx->info.sec_flags &= ~(ICE_AQ_VSI_SEC_TX_VLAN_PRUNE_ENA << + ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S); + + err = ice_update_vsi(&vsi->back->hw, vsi->idx, ctx, NULL); + if (err) + dev_err(ice_pf_to_dev(vsi->back), "Failed to configure Tx VLAN anti-spoof %s for VSI %d, error %d\n", + enable ? 
"ON" : "OFF", vsi->vsi_num, err); + else + vsi->info.sec_flags = ctx->info.sec_flags; + + kfree(ctx); + + return err; +} + +static int ice_cfg_mac_antispoof(struct ice_vsi *vsi, bool enable) +{ + struct ice_vsi_ctx *ctx; + int err; + + ctx = kzalloc(sizeof(*ctx), GFP_KERNEL); + if (!ctx) + return -ENOMEM; + + ctx->info.sec_flags = vsi->info.sec_flags; + ctx->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_SECURITY_VALID); + + if (enable) + ctx->info.sec_flags |= ICE_AQ_VSI_SEC_FLAG_ENA_MAC_ANTI_SPOOF; + else + ctx->info.sec_flags &= ~ICE_AQ_VSI_SEC_FLAG_ENA_MAC_ANTI_SPOOF; + + err = ice_update_vsi(&vsi->back->hw, vsi->idx, ctx, NULL); + if (err) + dev_err(ice_pf_to_dev(vsi->back), "Failed to configure Tx MAC anti-spoof %s for VSI %d, error %d\n", + enable ? "ON" : "OFF", vsi->vsi_num, err); + else + vsi->info.sec_flags = ctx->info.sec_flags; + + kfree(ctx); + + return err; +} + +/** + * ice_vsi_ena_spoofchk - enable Tx spoof checking for this VSI + * @vsi: VSI to enable Tx spoof checking for + */ +static int ice_vsi_ena_spoofchk(struct ice_vsi *vsi) +{ + int err; + + err = ice_cfg_vlan_antispoof(vsi, true); + if (err) + return err; + + return ice_cfg_mac_antispoof(vsi, true); +} + +/** + * ice_vsi_dis_spoofchk - disable Tx spoof checking for this VSI + * @vsi: VSI to disable Tx spoof checking for + */ +static int ice_vsi_dis_spoofchk(struct ice_vsi *vsi) +{ + int err; + + err = ice_cfg_vlan_antispoof(vsi, false); + if (err) + return err; + + return ice_cfg_mac_antispoof(vsi, false); +} + +/** + * ice_vf_set_spoofchk_cfg - apply Tx spoof checking setting + * @vf: VF set spoofchk for + * @vsi: VSI associated to the VF + */ +static int +ice_vf_set_spoofchk_cfg(struct ice_vf *vf, struct ice_vsi *vsi) +{ + int err; + + if (vf->spoofchk) + err = ice_vsi_ena_spoofchk(vsi); + else + err = ice_vsi_dis_spoofchk(vsi); + + return err; +} + /** * ice_vf_rebuild_host_mac_cfg - add broadcast and the VF's perm_addr/LAA * @vf: VF to add MAC filters for @@ -1344,6 +1452,10 @@ static void ice_vf_rebuild_host_cfg(struct ice_vf *vf) dev_err(dev, "failed to rebuild Tx rate limiting configuration for VF %u\n", vf->vf_id); + if (ice_vf_set_spoofchk_cfg(vf, vsi)) + dev_err(dev, "failed to rebuild spoofchk configuration for VF %d\n", + vf->vf_id); + /* rebuild aggregator node config for main VF VSI */ ice_vf_rebuild_aggregator_node_cfg(vsi); } @@ -1758,6 +1870,13 @@ static int ice_init_vf_vsi_res(struct ice_vf *vf) goto release_vsi; } + err = ice_vf_set_spoofchk_cfg(vf, vsi); + if (err) { + dev_warn(dev, "Failed to initialize spoofchk setting for VF %d\n", + vf->vf_id); + goto release_vsi; + } + vf->num_mac = 1; return 0; @@ -2891,7 +3010,6 @@ int ice_set_vf_spoofchk(struct net_device *netdev, int vf_id, bool ena) { struct ice_netdev_priv *np = netdev_priv(netdev); struct ice_pf *pf = np->vsi->back; - struct ice_vsi_ctx *ctx; struct ice_vsi *vf_vsi; struct device *dev; struct ice_vf *vf; @@ -2924,37 +3042,16 @@ int ice_set_vf_spoofchk(struct net_device *netdev, int vf_id, bool ena) return 0; } - ctx = kzalloc(sizeof(*ctx), GFP_KERNEL); - if (!ctx) - return -ENOMEM; - - ctx->info.sec_flags = vf_vsi->info.sec_flags; - ctx->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_SECURITY_VALID); - if (ena) { - ctx->info.sec_flags |= - ICE_AQ_VSI_SEC_FLAG_ENA_MAC_ANTI_SPOOF | - (ICE_AQ_VSI_SEC_TX_VLAN_PRUNE_ENA << - ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S); - } else { - ctx->info.sec_flags &= - ~(ICE_AQ_VSI_SEC_FLAG_ENA_MAC_ANTI_SPOOF | - (ICE_AQ_VSI_SEC_TX_VLAN_PRUNE_ENA << - ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S)); - } - - ret = 
ice_update_vsi(&pf->hw, vf_vsi->idx, ctx, NULL); - if (ret) { - dev_err(dev, "Failed to %sable spoofchk on VF %d VSI %d\n error %d\n", - ena ? "en" : "dis", vf->vf_id, vf_vsi->vsi_num, ret); - goto out; - } - - /* only update spoofchk state and VSI context on success */ - vf_vsi->info.sec_flags = ctx->info.sec_flags; - vf->spoofchk = ena; + if (ena) + ret = ice_vsi_ena_spoofchk(vf_vsi); + else + ret = ice_vsi_dis_spoofchk(vf_vsi); + if (ret) + dev_err(dev, "Failed to set spoofchk %s for VF %d VSI %d\n error %d\n", + ena ? "ON" : "OFF", vf->vf_id, vf_vsi->vsi_num, ret); + else + vf->spoofchk = ena; -out: - kfree(ctx); return ret; } -- 2.20.1 From anthony.l.nguyen at intel.com Thu Dec 2 16:38:48 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Thu, 2 Dec 2021 08:38:48 -0800 Subject: [Intel-wired-lan] [PATCH net-next v3 10/14] ice: Add support for VIRTCHNL_VF_OFFLOAD_VLAN_V2 In-Reply-To: <20211202163852.36436-1-anthony.l.nguyen@intel.com> References: <20211202163852.36436-1-anthony.l.nguyen@intel.com> Message-ID: <20211202163852.36436-10-anthony.l.nguyen@intel.com> From: Brett Creeley Add support for the VF driver to be able to request VIRTCHNL_VF_OFFLOAD_VLAN_V2, negotiate its VLAN capabilities via VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS, add/delete VLAN filters, and enable/disable VLAN offloads. VFs supporting VIRTCHNL_OFFLOAD_VLAN_V2 will be able to use the following virtchnl opcodes: VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS VIRTCHNL_OP_ADD_VLAN_V2 VIRTCHNL_OP_DEL_VLAN_V2 VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2 VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2 VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2 Legacy VF drivers may expect the initial VLAN stripping settings to be configured by the PF, so the PF initializes VLAN stripping based on the VIRTCHNL_OP_GET_VF_RESOURCES opcode. However, with VLAN support via VIRTCHNL_VF_OFFLOAD_VLAN_V2, this function is only expected to be used for VFs that only support VIRTCHNL_VF_OFFLOAD_VLAN, which will only be supported when a port VLAN is configured. Update the function based on the new expectations. Also, change the message when the PF can't enable/disable VLAN stripping to a dev_dbg() as this isn't fatal. When a VF isn't in a port VLAN and it only supports VIRTCHNL_VF_OFFLOAD_VLAN when Double VLAN Mode (DVM) is enabled, then the PF needs to reject the VIRTCHNL_VF_OFFLOAD_VLAN capability and configure the VF in software only VLAN mode. To do this add the new function ice_vf_vsi_cfg_legacy_vlan_mode(), which updates the VF's inner and outer ice_vsi_vlan_ops functions and sets up software only VLAN mode. 
Signed-off-by: Brett Creeley Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/ice/ice_base.c | 1 + .../ethernet/intel/ice/ice_vf_vsi_vlan_ops.c | 115 ++ .../ethernet/intel/ice/ice_vf_vsi_vlan_ops.h | 3 + .../intel/ice/ice_virtchnl_allowlist.c | 10 + .../net/ethernet/intel/ice/ice_virtchnl_pf.c | 1132 ++++++++++++++++- .../net/ethernet/intel/ice/ice_virtchnl_pf.h | 8 + 6 files changed, 1226 insertions(+), 43 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c index 9ca0ae2bb1dc..0dec7c5463eb 100644 --- a/drivers/net/ethernet/intel/ice/ice_base.c +++ b/drivers/net/ethernet/intel/ice/ice_base.c @@ -5,6 +5,7 @@ #include "ice_base.h" #include "ice_lib.h" #include "ice_dcb_lib.h" +#include "ice_virtchnl_pf.h" /** * __ice_vsi_get_qs_contig - Assign a contiguous chunk of queues to VSI diff --git a/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c b/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c index 741b041606a2..d89577843d68 100644 --- a/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c +++ b/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c @@ -14,9 +14,20 @@ noop_vlan_arg(struct ice_vsi __always_unused *vsi, return 0; } +static int +noop_vlan(struct ice_vsi __always_unused *vsi) +{ + return 0; +} + /** * ice_vf_vsi_init_vlan_ops - Initialize default VSI VLAN ops for VF VSI * @vsi: VF's VSI being configured + * + * If Double VLAN Mode (DVM) is enabled, assume that the VF supports the new + * VIRTCHNL_VF_VLAN_OFFLOAD_V2 capability and set up the VLAN ops accordingly. + * If SVM is enabled maintain the same level of VLAN support previous to + * VIRTCHNL_VF_VLAN_OFFLOAD_V2. */ void ice_vf_vsi_init_vlan_ops(struct ice_vsi *vsi) { @@ -44,6 +55,20 @@ void ice_vf_vsi_init_vlan_ops(struct ice_vsi *vsi) vlan_ops = &vsi->inner_vlan_ops; vlan_ops->add_vlan = noop_vlan_arg; vlan_ops->del_vlan = noop_vlan_arg; + vlan_ops->ena_stripping = ice_vsi_ena_inner_stripping; + vlan_ops->dis_stripping = ice_vsi_dis_inner_stripping; + vlan_ops->ena_insertion = ice_vsi_ena_inner_insertion; + vlan_ops->dis_insertion = ice_vsi_dis_inner_insertion; + } else { + vlan_ops->del_vlan = ice_vsi_del_vlan; + vlan_ops->ena_stripping = ice_vsi_ena_outer_stripping; + vlan_ops->dis_stripping = ice_vsi_dis_outer_stripping; + vlan_ops->ena_insertion = ice_vsi_ena_outer_insertion; + vlan_ops->dis_insertion = ice_vsi_dis_outer_insertion; + + /* setup inner VLAN ops */ + vlan_ops = &vsi->inner_vlan_ops; + vlan_ops->ena_stripping = ice_vsi_ena_inner_stripping; vlan_ops->dis_stripping = ice_vsi_dis_inner_stripping; vlan_ops->ena_insertion = ice_vsi_ena_inner_insertion; @@ -70,3 +95,93 @@ void ice_vf_vsi_init_vlan_ops(struct ice_vsi *vsi) } } } + +/** + * ice_vf_vsi_cfg_dvm_legacy_vlan_mode - Config VLAN mode for old VFs in DVM + * @vsi: VF's VSI being configured + * + * This should only be called when Double VLAN Mode (DVM) is enabled, there + * is not a port VLAN enabled on this VF, and the VF negotiates + * VIRTCHNL_VF_OFFLOAD_VLAN. + * + * This function sets up the VF VSI's inner and outer ice_vsi_vlan_ops and also + * initializes software only VLAN mode (i.e. allow all VLANs). Also, use no-op + * implementations for any functions that may be called during the lifetime of + * the VF so these methods do nothing and succeed. 
+ */ +void ice_vf_vsi_cfg_dvm_legacy_vlan_mode(struct ice_vsi *vsi) +{ + struct ice_vf *vf = &vsi->back->vf[vsi->vf_id]; + struct device *dev = ice_pf_to_dev(vf->pf); + struct ice_vsi_vlan_ops *vlan_ops; + + if (!ice_is_dvm_ena(&vsi->back->hw) || ice_vf_is_port_vlan_ena(vf)) + return; + + vlan_ops = &vsi->outer_vlan_ops; + + /* Rx VLAN filtering always disabled to allow software offloaded VLANs + * for VFs that only support VIRTCHNL_VF_OFFLOAD_VLAN and don't have a + * port VLAN configured + */ + vlan_ops->dis_rx_filtering = ice_vsi_dis_rx_vlan_filtering; + /* Don't fail when attempting to enable Rx VLAN filtering */ + vlan_ops->ena_rx_filtering = noop_vlan; + + /* Tx VLAN filtering always disabled to allow software offloaded VLANs + * for VFs that only support VIRTCHNL_VF_OFFLOAD_VLAN and don't have a + * port VLAN configured + */ + vlan_ops->dis_tx_filtering = ice_vsi_dis_tx_vlan_filtering; + /* Don't fail when attempting to enable Tx VLAN filtering */ + vlan_ops->ena_tx_filtering = noop_vlan; + + if (vlan_ops->dis_rx_filtering(vsi)) + dev_dbg(dev, "Failed to disable Rx VLAN filtering for old VF without VIRTCHNL_VF_OFFLOAD_VLAN_V2 support\n"); + if (vlan_ops->dis_tx_filtering(vsi)) + dev_dbg(dev, "Failed to disable Tx VLAN filtering for old VF without VIRTHCNL_VF_OFFLOAD_VLAN_V2 support\n"); + + /* All outer VLAN offloads must be disabled */ + vlan_ops->dis_stripping = ice_vsi_dis_outer_stripping; + vlan_ops->dis_insertion = ice_vsi_dis_outer_insertion; + + if (vlan_ops->dis_stripping(vsi)) + dev_dbg(dev, "Failed to disable outer VLAN stripping for old VF without VIRTCHNL_VF_OFFLOAD_VLAN_V2 support\n"); + + if (vlan_ops->dis_insertion(vsi)) + dev_dbg(dev, "Failed to disable outer VLAN insertion for old VF without VIRTCHNL_VF_OFFLOAD_VLAN_V2 support\n"); + + /* All inner VLAN offloads must be disabled */ + vlan_ops = &vsi->inner_vlan_ops; + + vlan_ops->dis_stripping = ice_vsi_dis_outer_stripping; + vlan_ops->dis_insertion = ice_vsi_dis_outer_insertion; + + if (vlan_ops->dis_stripping(vsi)) + dev_dbg(dev, "Failed to disable inner VLAN stripping for old VF without VIRTCHNL_VF_OFFLOAD_VLAN_V2 support\n"); + + if (vlan_ops->dis_insertion(vsi)) + dev_dbg(dev, "Failed to disable inner VLAN insertion for old VF without VIRTCHNL_VF_OFFLOAD_VLAN_V2 support\n"); +} + +/** + * ice_vf_vsi_cfg_svm_legacy_vlan_mode - Config VLAN mode for old VFs in SVM + * @vsi: VF's VSI being configured + * + * This should only be called when Single VLAN Mode (SVM) is enabled, there is + * not a port VLAN enabled on this VF, and the VF negotiates + * VIRTCHNL_VF_OFFLOAD_VLAN. + * + * All of the normal SVM VLAN ops are identical for this case. However, by + * default Rx VLAN filtering should be turned off by default in this case. 
+ */ +void ice_vf_vsi_cfg_svm_legacy_vlan_mode(struct ice_vsi *vsi) +{ + struct ice_vf *vf = &vsi->back->vf[vsi->vf_id]; + + if (ice_is_dvm_ena(&vsi->back->hw) || ice_vf_is_port_vlan_ena(vf)) + return; + + if (vsi->inner_vlan_ops.dis_rx_filtering(vsi)) + dev_dbg(ice_pf_to_dev(vf->pf), "Failed to disable Rx VLAN filtering for old VF with VIRTCHNL_VF_OFFLOAD_VLAN support\n"); +} diff --git a/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.h b/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.h index 8ea13628a5e1..875a4e615f39 100644 --- a/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.h +++ b/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.h @@ -8,6 +8,9 @@ struct ice_vsi; +void ice_vf_vsi_cfg_dvm_legacy_vlan_mode(struct ice_vsi *vsi); +void ice_vf_vsi_cfg_svm_legacy_vlan_mode(struct ice_vsi *vsi); + #ifdef CONFIG_PCI_IOV void ice_vf_vsi_init_vlan_ops(struct ice_vsi *vsi); #else diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_allowlist.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_allowlist.c index 9feebe5f556c..5a82216e7d03 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_allowlist.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_allowlist.c @@ -55,6 +55,15 @@ static const u32 vlan_allowlist_opcodes[] = { VIRTCHNL_OP_ENABLE_VLAN_STRIPPING, VIRTCHNL_OP_DISABLE_VLAN_STRIPPING, }; +/* VIRTCHNL_VF_OFFLOAD_VLAN_V2 */ +static const u32 vlan_v2_allowlist_opcodes[] = { + VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS, VIRTCHNL_OP_ADD_VLAN_V2, + VIRTCHNL_OP_DEL_VLAN_V2, VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2, + VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2, + VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2, + VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2, +}; + /* VIRTCHNL_VF_OFFLOAD_RSS_PF */ static const u32 rss_pf_allowlist_opcodes[] = { VIRTCHNL_OP_CONFIG_RSS_KEY, VIRTCHNL_OP_CONFIG_RSS_LUT, @@ -89,6 +98,7 @@ static const struct allowlist_opcode_info allowlist_opcodes[] = { ALLOW_ITEM(VIRTCHNL_VF_OFFLOAD_RSS_PF, rss_pf_allowlist_opcodes), ALLOW_ITEM(VIRTCHNL_VF_OFFLOAD_ADV_RSS_PF, adv_rss_pf_allowlist_opcodes), ALLOW_ITEM(VIRTCHNL_VF_OFFLOAD_FDIR_PF, fdir_pf_allowlist_opcodes), + ALLOW_ITEM(VIRTCHNL_VF_OFFLOAD_VLAN_V2, vlan_v2_allowlist_opcodes), }; /** diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c index 100c86c8ad9a..de74a2b4f846 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c @@ -11,6 +11,7 @@ #include "ice_virtchnl_allowlist.h" #include "ice_flex_pipe.h" #include "ice_vf_vsi_vlan_ops.h" +#include "ice_vlan.h" #define FIELD_SELECTOR(proto_hdr_field) \ BIT((proto_hdr_field) & PROTO_HDR_FIELD_MASK) @@ -1458,6 +1459,7 @@ static void ice_vf_set_initialized(struct ice_vf *vf) clear_bit(ICE_VF_STATE_UC_PROMISC, vf->vf_states); clear_bit(ICE_VF_STATE_DIS, vf->vf_states); set_bit(ICE_VF_STATE_INIT, vf->vf_states); + memset(&vf->vlan_v2_caps, 0, sizeof(vf->vlan_v2_caps)); } /** @@ -2347,8 +2349,33 @@ static int ice_vc_get_vf_res_msg(struct ice_vf *vf, u8 *msg) goto err; } - if (!ice_vf_is_port_vlan_ena(vf)) - vfres->vf_cap_flags |= VIRTCHNL_VF_OFFLOAD_VLAN; + if (vf->driver_caps & VIRTCHNL_VF_OFFLOAD_VLAN_V2) { + /* VLAN offloads based on current device configuration */ + vfres->vf_cap_flags |= VIRTCHNL_VF_OFFLOAD_VLAN_V2; + } else if (vf->driver_caps & VIRTCHNL_VF_OFFLOAD_VLAN) { + /* allow VF to negotiate VIRTCHNL_VF_OFFLOAD explicitly for + * these two conditions, which amounts to guest VLAN filtering + * and offloads being based on the inner VLAN or the + * inner/single 
VLAN respectively and don't allow VF to + * negotiate VIRTCHNL_VF_OFFLOAD in any other cases + */ + if (ice_is_dvm_ena(&pf->hw) && ice_vf_is_port_vlan_ena(vf)) { + vfres->vf_cap_flags |= VIRTCHNL_VF_OFFLOAD_VLAN; + } else if (!ice_is_dvm_ena(&pf->hw) && + !ice_vf_is_port_vlan_ena(vf)) { + vfres->vf_cap_flags |= VIRTCHNL_VF_OFFLOAD_VLAN; + /* configure backward compatible support for VFs that + * only support VIRTCHNL_VF_OFFLOAD_VLAN, the PF is + * configured in SVM, and no port VLAN is configured + */ + ice_vf_vsi_cfg_svm_legacy_vlan_mode(vsi); + } else if (ice_is_dvm_ena(&pf->hw)) { + /* configure software offloaded VLAN support when DVM + * is enabled, but no port VLAN is enabled + */ + ice_vf_vsi_cfg_dvm_legacy_vlan_mode(vsi); + } + } if (vf->driver_caps & VIRTCHNL_VF_OFFLOAD_RSS_PF) { vfres->vf_cap_flags |= VIRTCHNL_VF_OFFLOAD_RSS_PF; @@ -4175,6 +4202,62 @@ static bool ice_vf_vlan_offload_ena(u32 caps) return !!(caps & VIRTCHNL_VF_OFFLOAD_VLAN); } +/** + * ice_is_vlan_promisc_allowed - check if VLAN promiscuous config is allowed + * @vf: VF used to determine if VLAN promiscuous config is allowed + */ +static bool ice_is_vlan_promisc_allowed(struct ice_vf *vf) +{ + if ((test_bit(ICE_VF_STATE_UC_PROMISC, vf->vf_states) || + test_bit(ICE_VF_STATE_MC_PROMISC, vf->vf_states)) && + test_bit(ICE_FLAG_VF_TRUE_PROMISC_ENA, vf->pf->flags)) + return true; + + return false; +} + +/** + * ice_vf_ena_vlan_promisc - Enable Tx/Rx VLAN promiscuous for the VLAN + * @vsi: VF's VSI used to enable VLAN promiscuous mode + * @vlan: VLAN used to enable VLAN promiscuous + * + * This function should only be called if VLAN promiscuous mode is allowed, + * which can be determined via ice_is_vlan_promisc_allowed(). + */ +static int ice_vf_ena_vlan_promisc(struct ice_vsi *vsi, struct ice_vlan *vlan) +{ + u8 promisc_m = ICE_PROMISC_VLAN_TX | ICE_PROMISC_VLAN_RX; + int status; + + status = ice_fltr_set_vsi_promisc(&vsi->back->hw, vsi->idx, promisc_m, + vlan->vid); + if (status && status != -EEXIST) + return status; + + return 0; +} + +/** + * ice_vf_dis_vlan_promisc - Disable Tx/Rx VLAN promiscuous for the VLAN + * @vsi: VF's VSI used to disable VLAN promiscuous mode for + * @vlan: VLAN used to disable VLAN promiscuous + * + * This function should only be called if VLAN promiscuous mode is allowed, + * which can be determined via ice_is_vlan_promisc_allowed(). 
+ */ +static int ice_vf_dis_vlan_promisc(struct ice_vsi *vsi, struct ice_vlan *vlan) +{ + u8 promisc_m = ICE_PROMISC_VLAN_TX | ICE_PROMISC_VLAN_RX; + int status; + + status = ice_fltr_clear_vsi_promisc(&vsi->back->hw, vsi->idx, promisc_m, + vlan->vid); + if (status && status != -ENOENT) + return status; + + return 0; +} + /** * ice_vf_has_max_vlans - check if VF already has the max allowed VLAN filters * @vf: VF to check against @@ -4209,14 +4292,11 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS; struct virtchnl_vlan_filter_list *vfl = (struct virtchnl_vlan_filter_list *)msg; - struct ice_vsi_vlan_ops *vlan_ops; struct ice_pf *pf = vf->pf; bool vlan_promisc = false; struct ice_vsi *vsi; struct device *dev; - struct ice_hw *hw; int status = 0; - u8 promisc_m; int i; dev = ice_pf_to_dev(pf); @@ -4244,7 +4324,6 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) } } - hw = &pf->hw; vsi = ice_get_vf_vsi(vf); if (!vsi) { v_ret = VIRTCHNL_STATUS_ERR_PARAM; @@ -4260,17 +4339,22 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) goto error_param; } - if (ice_vf_is_port_vlan_ena(vf)) { + /* in DVM a VF can add/delete inner VLAN filters when + * VIRTCHNL_VF_OFFLOAD_VLAN is negotiated, so only reject in SVM + */ + if (ice_vf_is_port_vlan_ena(vf) && !ice_is_dvm_ena(&pf->hw)) { v_ret = VIRTCHNL_STATUS_ERR_PARAM; goto error_param; } - if ((test_bit(ICE_VF_STATE_UC_PROMISC, vf->vf_states) || - test_bit(ICE_VF_STATE_MC_PROMISC, vf->vf_states)) && - test_bit(ICE_FLAG_VF_TRUE_PROMISC_ENA, pf->flags)) - vlan_promisc = true; + /* in DVM VLAN promiscuous is based on the outer VLAN, which would be + * the port VLAN if VIRTCHNL_VF_OFFLOAD_VLAN was negotiated, so only + * allow vlan_promisc = true in SVM and if no port VLAN is configured + */ + vlan_promisc = ice_is_vlan_promisc_allowed(vf) && + !ice_is_dvm_ena(&pf->hw) && + !ice_vf_is_port_vlan_ena(vf); - vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); if (add_v) { for (i = 0; i < vfl->num_elements; i++) { u16 vid = vfl->vlan_id[i]; @@ -4300,23 +4384,16 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) goto error_param; } - /* Enable VLAN pruning when non-zero VLAN is added */ - if (!vlan_promisc && vid && - !ice_vsi_is_vlan_pruning_ena(vsi)) { - status = vlan_ops->ena_rx_filtering(vsi); - if (status) { + /* Enable VLAN filtering on first non-zero VLAN */ + if (!vlan_promisc && vid && !ice_is_dvm_ena(&pf->hw)) { + if (vsi->inner_vlan_ops.ena_rx_filtering(vsi)) { v_ret = VIRTCHNL_STATUS_ERR_PARAM; dev_err(dev, "Enable VLAN pruning on VLAN ID: %d failed error-%d\n", vid, status); goto error_param; } } else if (vlan_promisc) { - /* Enable Ucast/Mcast VLAN promiscuous mode */ - promisc_m = ICE_PROMISC_VLAN_TX | - ICE_PROMISC_VLAN_RX; - - status = ice_set_vsi_promisc(hw, vsi->idx, - promisc_m, vid); + status = ice_vf_ena_vlan_promisc(vsi, &vlan); if (status) { v_ret = VIRTCHNL_STATUS_ERR_PARAM; dev_err(dev, "Enable Unicast/multicast promiscuous mode on VLAN ID:%d failed error-%d\n", @@ -4353,19 +4430,12 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) goto error_param; } - /* Disable VLAN pruning when only VLAN 0 is left */ - if (!ice_vsi_has_non_zero_vlans(vsi) && - ice_vsi_is_vlan_pruning_ena(vsi)) - status = vlan_ops->dis_rx_filtering(vsi); - - /* Disable Unicast/Multicast VLAN promiscuous mode */ - if (vlan_promisc) { - promisc_m = ICE_PROMISC_VLAN_TX | - 
ICE_PROMISC_VLAN_RX; + /* Disable VLAN filtering when only VLAN 0 is left */ + if (!ice_vsi_has_non_zero_vlans(vsi)) + vsi->inner_vlan_ops.dis_rx_filtering(vsi); - ice_clear_vsi_promisc(hw, vsi->idx, - promisc_m, vid); - } + if (vlan_promisc) + ice_vf_dis_vlan_promisc(vsi, &vlan); } } @@ -4472,11 +4542,8 @@ static int ice_vc_dis_vlan_stripping(struct ice_vf *vf) * ice_vf_init_vlan_stripping - enable/disable VLAN stripping on initialization * @vf: VF to enable/disable VLAN stripping for on initialization * - * If the VIRTCHNL_VF_OFFLOAD_VLAN flag is set enable VLAN stripping, else if - * the flag is cleared then we want to disable stripping. For example, the flag - * will be cleared when port VLANs are configured by the administrator before - * passing the VF to the guest or if the AVF driver doesn't support VLAN - * offloads. + * Set the default for VLAN stripping based on whether a port VLAN is configured + * and the current VLAN mode of the device. */ static int ice_vf_init_vlan_stripping(struct ice_vf *vf) { @@ -4485,8 +4552,10 @@ static int ice_vf_init_vlan_stripping(struct ice_vf *vf) if (!vsi) return -EINVAL; - /* don't modify stripping if port VLAN is configured */ - if (ice_vf_is_port_vlan_ena(vf)) + /* don't modify stripping if port VLAN is configured in SVM since the + * port VLAN is based on the inner/single VLAN in SVM + */ + if (ice_vf_is_port_vlan_ena(vf) && !ice_is_dvm_ena(&vsi->back->hw)) return 0; if (ice_vf_vlan_offload_ena(vf->driver_caps)) @@ -4495,6 +4564,955 @@ static int ice_vf_init_vlan_stripping(struct ice_vf *vf) return vsi->inner_vlan_ops.dis_stripping(vsi); } +static u16 ice_vc_get_max_vlan_fltrs(struct ice_vf *vf) +{ + if (vf->trusted) + return VLAN_N_VID; + else + return ICE_MAX_VLAN_PER_VF; +} + +/** + * ice_vf_outer_vlan_not_allowed - check outer VLAN can be used when the device is in DVM + * @vf: VF that being checked for + */ +static bool ice_vf_outer_vlan_not_allowed(struct ice_vf *vf) +{ + if (ice_vf_is_port_vlan_ena(vf)) + return true; + + return false; +} + +/** + * ice_vc_set_dvm_caps - set VLAN capabilities when the device is in DVM + * @vf: VF that capabilities are being set for + * @caps: VLAN capabilities to populate + * + * Determine VLAN capabilities support based on whether a port VLAN is + * configured. If a port VLAN is configured then the VF should use the inner + * filtering/offload capabilities since the port VLAN is using the outer VLAN + * capabilies. 
+ */ +static void +ice_vc_set_dvm_caps(struct ice_vf *vf, struct virtchnl_vlan_caps *caps) +{ + struct virtchnl_vlan_supported_caps *supported_caps; + + if (ice_vf_outer_vlan_not_allowed(vf)) { + /* until support for inner VLAN filtering is added when a port + * VLAN is configured, only support software offloaded inner + * VLANs when a port VLAN is confgured in DVM + */ + supported_caps = &caps->filtering.filtering_support; + supported_caps->inner = VIRTCHNL_VLAN_UNSUPPORTED; + + supported_caps = &caps->offloads.stripping_support; + supported_caps->inner = VIRTCHNL_VLAN_ETHERTYPE_8100 | + VIRTCHNL_VLAN_TOGGLE | + VIRTCHNL_VLAN_TAG_LOCATION_L2TAG1; + supported_caps->outer = VIRTCHNL_VLAN_UNSUPPORTED; + + supported_caps = &caps->offloads.insertion_support; + supported_caps->inner = VIRTCHNL_VLAN_ETHERTYPE_8100 | + VIRTCHNL_VLAN_TOGGLE | + VIRTCHNL_VLAN_TAG_LOCATION_L2TAG1; + supported_caps->outer = VIRTCHNL_VLAN_UNSUPPORTED; + + caps->offloads.ethertype_init = VIRTCHNL_VLAN_ETHERTYPE_8100; + caps->offloads.ethertype_match = + VIRTCHNL_ETHERTYPE_STRIPPING_MATCHES_INSERTION; + } else { + supported_caps = &caps->filtering.filtering_support; + supported_caps->inner = VIRTCHNL_VLAN_UNSUPPORTED; + supported_caps->outer = VIRTCHNL_VLAN_ETHERTYPE_8100 | + VIRTCHNL_VLAN_ETHERTYPE_88A8 | + VIRTCHNL_VLAN_ETHERTYPE_9100 | + VIRTCHNL_VLAN_ETHERTYPE_AND; + caps->filtering.ethertype_init = VIRTCHNL_VLAN_ETHERTYPE_8100 | + VIRTCHNL_VLAN_ETHERTYPE_88A8 | + VIRTCHNL_VLAN_ETHERTYPE_9100; + + supported_caps = &caps->offloads.stripping_support; + supported_caps->inner = VIRTCHNL_VLAN_TOGGLE | + VIRTCHNL_VLAN_ETHERTYPE_8100 | + VIRTCHNL_VLAN_TAG_LOCATION_L2TAG1; + supported_caps->outer = VIRTCHNL_VLAN_TOGGLE | + VIRTCHNL_VLAN_ETHERTYPE_8100 | + VIRTCHNL_VLAN_ETHERTYPE_88A8 | + VIRTCHNL_VLAN_ETHERTYPE_9100 | + VIRTCHNL_VLAN_ETHERTYPE_XOR | + VIRTCHNL_VLAN_TAG_LOCATION_L2TAG2_2; + + supported_caps = &caps->offloads.insertion_support; + supported_caps->inner = VIRTCHNL_VLAN_TOGGLE | + VIRTCHNL_VLAN_ETHERTYPE_8100 | + VIRTCHNL_VLAN_TAG_LOCATION_L2TAG1; + supported_caps->outer = VIRTCHNL_VLAN_TOGGLE | + VIRTCHNL_VLAN_ETHERTYPE_8100 | + VIRTCHNL_VLAN_ETHERTYPE_88A8 | + VIRTCHNL_VLAN_ETHERTYPE_9100 | + VIRTCHNL_VLAN_ETHERTYPE_XOR | + VIRTCHNL_VLAN_TAG_LOCATION_L2TAG2; + + caps->offloads.ethertype_init = VIRTCHNL_VLAN_ETHERTYPE_8100; + + caps->offloads.ethertype_match = + VIRTCHNL_ETHERTYPE_STRIPPING_MATCHES_INSERTION; + } + + caps->filtering.max_filters = ice_vc_get_max_vlan_fltrs(vf); +} + +/** + * ice_vc_set_svm_caps - set VLAN capabilities when the device is in SVM + * @vf: VF that capabilities are being set for + * @caps: VLAN capabilities to populate + * + * Determine VLAN capabilities support based on whether a port VLAN is + * configured. If a port VLAN is configured then the VF does not have any VLAN + * filtering or offload capabilities since the port VLAN is using the inner VLAN + * capabilities in single VLAN mode (SVM). Otherwise allow the VF to use inner + * VLAN fitlering and offload capabilities. 
+ */ +static void +ice_vc_set_svm_caps(struct ice_vf *vf, struct virtchnl_vlan_caps *caps) +{ + struct virtchnl_vlan_supported_caps *supported_caps; + + if (ice_vf_is_port_vlan_ena(vf)) { + supported_caps = &caps->filtering.filtering_support; + supported_caps->inner = VIRTCHNL_VLAN_UNSUPPORTED; + supported_caps->outer = VIRTCHNL_VLAN_UNSUPPORTED; + + supported_caps = &caps->offloads.stripping_support; + supported_caps->inner = VIRTCHNL_VLAN_UNSUPPORTED; + supported_caps->outer = VIRTCHNL_VLAN_UNSUPPORTED; + + supported_caps = &caps->offloads.insertion_support; + supported_caps->inner = VIRTCHNL_VLAN_UNSUPPORTED; + supported_caps->outer = VIRTCHNL_VLAN_UNSUPPORTED; + + caps->offloads.ethertype_init = VIRTCHNL_VLAN_UNSUPPORTED; + caps->offloads.ethertype_match = VIRTCHNL_VLAN_UNSUPPORTED; + caps->filtering.max_filters = 0; + } else { + supported_caps = &caps->filtering.filtering_support; + supported_caps->inner = VIRTCHNL_VLAN_ETHERTYPE_8100; + supported_caps->outer = VIRTCHNL_VLAN_UNSUPPORTED; + caps->filtering.ethertype_init = VIRTCHNL_VLAN_ETHERTYPE_8100; + + supported_caps = &caps->offloads.stripping_support; + supported_caps->inner = VIRTCHNL_VLAN_ETHERTYPE_8100 | + VIRTCHNL_VLAN_TOGGLE | + VIRTCHNL_VLAN_TAG_LOCATION_L2TAG1; + supported_caps->outer = VIRTCHNL_VLAN_UNSUPPORTED; + + supported_caps = &caps->offloads.insertion_support; + supported_caps->inner = VIRTCHNL_VLAN_ETHERTYPE_8100 | + VIRTCHNL_VLAN_TOGGLE | + VIRTCHNL_VLAN_TAG_LOCATION_L2TAG1; + supported_caps->outer = VIRTCHNL_VLAN_UNSUPPORTED; + + caps->offloads.ethertype_init = VIRTCHNL_VLAN_ETHERTYPE_8100; + caps->offloads.ethertype_match = + VIRTCHNL_ETHERTYPE_STRIPPING_MATCHES_INSERTION; + caps->filtering.max_filters = ice_vc_get_max_vlan_fltrs(vf); + } +} + +/** + * ice_vc_get_offload_vlan_v2_caps - determine VF's VLAN capabilities + * @vf: VF to determine VLAN capabilities for + * + * This will only be called if the VF and PF successfully negotiated + * VIRTCHNL_VF_OFFLOAD_VLAN_V2. + * + * Set VLAN capabilities based on the current VLAN mode and whether a port VLAN + * is configured or not. + */ +static int ice_vc_get_offload_vlan_v2_caps(struct ice_vf *vf) +{ + enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS; + struct virtchnl_vlan_caps *caps = NULL; + int err, len = 0; + + if (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + caps = kzalloc(sizeof(*caps), GFP_KERNEL); + if (!caps) { + v_ret = VIRTCHNL_STATUS_ERR_NO_MEMORY; + goto out; + } + len = sizeof(*caps); + + if (ice_is_dvm_ena(&vf->pf->hw)) + ice_vc_set_dvm_caps(vf, caps); + else + ice_vc_set_svm_caps(vf, caps); + + /* store negotiated caps to prevent invalid VF messages */ + memcpy(&vf->vlan_v2_caps, caps, sizeof(*caps)); + +out: + err = ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS, + v_ret, (u8 *)caps, len); + kfree(caps); + return err; +} + +/** + * ice_vc_validate_vlan_tpid - validate VLAN TPID + * @filtering_caps: negotiated/supported VLAN filtering capabilities + * @tpid: VLAN TPID used for validation + * + * Convert the VLAN TPID to a VIRTCHNL_VLAN_ETHERTYPE_* and then compare against + * the negotiated/supported filtering caps to see if the VLAN TPID is valid. 
+ */ +static bool ice_vc_validate_vlan_tpid(u16 filtering_caps, u16 tpid) +{ + enum virtchnl_vlan_support vlan_ethertype = VIRTCHNL_VLAN_UNSUPPORTED; + + switch (tpid) { + case ETH_P_8021Q: + vlan_ethertype = VIRTCHNL_VLAN_ETHERTYPE_8100; + break; + case ETH_P_8021AD: + vlan_ethertype = VIRTCHNL_VLAN_ETHERTYPE_88A8; + break; + case ETH_P_QINQ1: + vlan_ethertype = VIRTCHNL_VLAN_ETHERTYPE_9100; + break; + } + + if (!(filtering_caps & vlan_ethertype)) + return false; + + return true; +} + +/** + * ice_vc_is_valid_vlan - validate the virtchnl_vlan + * @vc_vlan: virtchnl_vlan to validate + * + * If the VLAN TCI and VLAN TPID are 0, then this filter is invalid, so return + * false. Otherwise return true. + */ +static bool ice_vc_is_valid_vlan(struct virtchnl_vlan *vc_vlan) +{ + if (!vc_vlan->tci || !vc_vlan->tpid) + return false; + + return true; +} + +/** + * ice_vc_validate_vlan_filter_list - validate the filter list from the VF + * @vfc: negotiated/supported VLAN filtering capabilities + * @vfl: VLAN filter list from VF to validate + * + * Validate all of the filters in the VLAN filter list from the VF. If any of + * the checks fail then return false. Otherwise return true. + */ +static bool +ice_vc_validate_vlan_filter_list(struct virtchnl_vlan_filtering_caps *vfc, + struct virtchnl_vlan_filter_list_v2 *vfl) +{ + u16 i; + + if (!vfl->num_elements) + return false; + + for (i = 0; i < vfl->num_elements; i++) { + struct virtchnl_vlan_supported_caps *filtering_support = + &vfc->filtering_support; + struct virtchnl_vlan_filter *vlan_fltr = &vfl->filters[i]; + struct virtchnl_vlan *outer = &vlan_fltr->outer; + struct virtchnl_vlan *inner = &vlan_fltr->inner; + + if ((ice_vc_is_valid_vlan(outer) && + filtering_support->outer == VIRTCHNL_VLAN_UNSUPPORTED) || + (ice_vc_is_valid_vlan(inner) && + filtering_support->inner == VIRTCHNL_VLAN_UNSUPPORTED)) + return false; + + if ((outer->tci_mask && + !(filtering_support->outer & VIRTCHNL_VLAN_FILTER_MASK)) || + (inner->tci_mask && + !(filtering_support->inner & VIRTCHNL_VLAN_FILTER_MASK))) + return false; + + if (((outer->tci & VLAN_PRIO_MASK) && + !(filtering_support->outer & VIRTCHNL_VLAN_PRIO)) || + ((inner->tci & VLAN_PRIO_MASK) && + !(filtering_support->inner & VIRTCHNL_VLAN_PRIO))) + return false; + + if ((ice_vc_is_valid_vlan(outer) && + !ice_vc_validate_vlan_tpid(filtering_support->outer, outer->tpid)) || + (ice_vc_is_valid_vlan(inner) && + !ice_vc_validate_vlan_tpid(filtering_support->inner, inner->tpid))) + return false; + } + + return true; +} + +/** + * ice_vc_to_vlan - transform from struct virtchnl_vlan to struct ice_vlan + * @vc_vlan: struct virtchnl_vlan to transform + */ +static struct ice_vlan ice_vc_to_vlan(struct virtchnl_vlan *vc_vlan) +{ + struct ice_vlan vlan = { 0 }; + + vlan.prio = (vc_vlan->tci & VLAN_PRIO_MASK) >> VLAN_PRIO_SHIFT; + vlan.vid = vc_vlan->tci & VLAN_VID_MASK; + vlan.tpid = vc_vlan->tpid; + + return vlan; +} + +/** + * ice_vc_vlan_action - action to perform on the virthcnl_vlan + * @vsi: VF's VSI used to perform the action + * @vlan_action: function to perform the action with (i.e. 
add/del) + * @vlan: VLAN filter to perform the action with + */ +static int +ice_vc_vlan_action(struct ice_vsi *vsi, + int (*vlan_action)(struct ice_vsi *, struct ice_vlan *), + struct ice_vlan *vlan) +{ + int err; + + err = vlan_action(vsi, vlan); + if (err) + return err; + + return 0; +} + +/** + * ice_vc_del_vlans - delete VLAN(s) from the virtchnl filter list + * @vf: VF used to delete the VLAN(s) + * @vsi: VF's VSI used to delete the VLAN(s) + * @vfl: virthchnl filter list used to delete the filters + */ +static int +ice_vc_del_vlans(struct ice_vf *vf, struct ice_vsi *vsi, + struct virtchnl_vlan_filter_list_v2 *vfl) +{ + bool vlan_promisc = ice_is_vlan_promisc_allowed(vf); + int err; + u16 i; + + for (i = 0; i < vfl->num_elements; i++) { + struct virtchnl_vlan_filter *vlan_fltr = &vfl->filters[i]; + struct virtchnl_vlan *vc_vlan; + + vc_vlan = &vlan_fltr->outer; + if (ice_vc_is_valid_vlan(vc_vlan)) { + struct ice_vlan vlan = ice_vc_to_vlan(vc_vlan); + + err = ice_vc_vlan_action(vsi, + vsi->outer_vlan_ops.del_vlan, + &vlan); + if (err) + return err; + + if (vlan_promisc) + ice_vf_dis_vlan_promisc(vsi, &vlan); + } + + vc_vlan = &vlan_fltr->inner; + if (ice_vc_is_valid_vlan(vc_vlan)) { + struct ice_vlan vlan = ice_vc_to_vlan(vc_vlan); + + err = ice_vc_vlan_action(vsi, + vsi->inner_vlan_ops.del_vlan, + &vlan); + if (err) + return err; + + /* no support for VLAN promiscuous on inner VLAN unless + * we are in Single VLAN Mode (SVM) + */ + if (!ice_is_dvm_ena(&vsi->back->hw) && vlan_promisc) + ice_vf_dis_vlan_promisc(vsi, &vlan); + } + } + + return 0; +} + +/** + * ice_vc_remove_vlan_v2_msg - virtchnl handler for VIRTCHNL_OP_DEL_VLAN_V2 + * @vf: VF the message was received from + * @msg: message received from the VF + */ +static int ice_vc_remove_vlan_v2_msg(struct ice_vf *vf, u8 *msg) +{ + struct virtchnl_vlan_filter_list_v2 *vfl = + (struct virtchnl_vlan_filter_list_v2 *)msg; + enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS; + struct ice_vsi *vsi; + + if (!ice_vc_validate_vlan_filter_list(&vf->vlan_v2_caps.filtering, + vfl)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + if (!ice_vc_isvalid_vsi_id(vf, vfl->vport_id)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + vsi = ice_get_vf_vsi(vf); + if (!vsi) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + if (ice_vc_del_vlans(vf, vsi, vfl)) + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + +out: + return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_DEL_VLAN_V2, v_ret, NULL, + 0); +} + +/** + * ice_vc_add_vlans - add VLAN(s) from the virtchnl filter list + * @vf: VF used to add the VLAN(s) + * @vsi: VF's VSI used to add the VLAN(s) + * @vfl: virthchnl filter list used to add the filters + */ +static int +ice_vc_add_vlans(struct ice_vf *vf, struct ice_vsi *vsi, + struct virtchnl_vlan_filter_list_v2 *vfl) +{ + bool vlan_promisc = ice_is_vlan_promisc_allowed(vf); + int err; + u16 i; + + for (i = 0; i < vfl->num_elements; i++) { + struct virtchnl_vlan_filter *vlan_fltr = &vfl->filters[i]; + struct virtchnl_vlan *vc_vlan; + + vc_vlan = &vlan_fltr->outer; + if (ice_vc_is_valid_vlan(vc_vlan)) { + struct ice_vlan vlan = ice_vc_to_vlan(vc_vlan); + + err = ice_vc_vlan_action(vsi, + vsi->outer_vlan_ops.add_vlan, + &vlan); + if (err) + return err; + + if (vlan_promisc) { + err = ice_vf_ena_vlan_promisc(vsi, &vlan); + if (err) + return err; + } + } + + vc_vlan = &vlan_fltr->inner; + if (ice_vc_is_valid_vlan(vc_vlan)) { + struct ice_vlan vlan = ice_vc_to_vlan(vc_vlan); + + err = ice_vc_vlan_action(vsi, + vsi->inner_vlan_ops.add_vlan, 
+ &vlan); + if (err) + return err; + + /* no support for VLAN promiscuous on inner VLAN unless + * we are in Single VLAN Mode (SVM) + */ + if (!ice_is_dvm_ena(&vsi->back->hw) && vlan_promisc) { + err = ice_vf_ena_vlan_promisc(vsi, &vlan); + if (err) + return err; + } + } + } + + return 0; +} + +/** + * ice_vc_validate_add_vlan_filter_list - validate add filter list from the VF + * @vsi: VF VSI used to get number of existing VLAN filters + * @vfc: negotiated/supported VLAN filtering capabilities + * @vfl: VLAN filter list from VF to validate + * + * Validate all of the filters in the VLAN filter list from the VF during the + * VIRTCHNL_OP_ADD_VLAN_V2 opcode. If any of the checks fail then return false. + * Otherwise return true. + */ +static bool +ice_vc_validate_add_vlan_filter_list(struct ice_vsi *vsi, + struct virtchnl_vlan_filtering_caps *vfc, + struct virtchnl_vlan_filter_list_v2 *vfl) +{ + u16 num_requested_filters = vsi->num_vlan + vfl->num_elements; + + if (num_requested_filters > vfc->max_filters) + return false; + + return ice_vc_validate_vlan_filter_list(vfc, vfl); +} + +/** + * ice_vc_add_vlan_v2_msg - virtchnl handler for VIRTCHNL_OP_ADD_VLAN_V2 + * @vf: VF the message was received from + * @msg: message received from the VF + */ +static int ice_vc_add_vlan_v2_msg(struct ice_vf *vf, u8 *msg) +{ + enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS; + struct virtchnl_vlan_filter_list_v2 *vfl = + (struct virtchnl_vlan_filter_list_v2 *)msg; + struct ice_vsi *vsi; + + if (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + if (!ice_vc_isvalid_vsi_id(vf, vfl->vport_id)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + vsi = ice_get_vf_vsi(vf); + if (!vsi) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + if (!ice_vc_validate_add_vlan_filter_list(vsi, + &vf->vlan_v2_caps.filtering, + vfl)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + if (ice_vc_add_vlans(vf, vsi, vfl)) + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + +out: + return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_ADD_VLAN_V2, v_ret, NULL, + 0); +} + +/** + * ice_vc_valid_vlan_setting - validate VLAN setting + * @negotiated_settings: negotiated VLAN settings during VF init + * @ethertype_setting: ethertype(s) requested for the VLAN setting + */ +static bool +ice_vc_valid_vlan_setting(u32 negotiated_settings, u32 ethertype_setting) +{ + if (ethertype_setting && !(negotiated_settings & ethertype_setting)) + return false; + + /* only allow a single VIRTCHNL_VLAN_ETHERTYPE if + * VIRTHCNL_VLAN_ETHERTYPE_AND is not negotiated/supported + */ + if (!(negotiated_settings & VIRTCHNL_VLAN_ETHERTYPE_AND) && + hweight32(ethertype_setting) > 1) + return false; + + /* ability to modify the VLAN setting was not negotiated */ + if (!(negotiated_settings & VIRTCHNL_VLAN_TOGGLE)) + return false; + + return true; +} + +/** + * ice_vc_valid_vlan_setting_msg - validate the VLAN setting message + * @caps: negotiated VLAN settings during VF init + * @msg: message to validate + * + * Used to validate any VLAN virtchnl message sent as a + * virtchnl_vlan_setting structure. Validates the message against the + * negotiated/supported caps during VF driver init. 
+ */ +static bool +ice_vc_valid_vlan_setting_msg(struct virtchnl_vlan_supported_caps *caps, + struct virtchnl_vlan_setting *msg) +{ + if ((!msg->outer_ethertype_setting && + !msg->inner_ethertype_setting) || + (!caps->outer && !caps->inner)) + return false; + + if (msg->outer_ethertype_setting && + !ice_vc_valid_vlan_setting(caps->outer, + msg->outer_ethertype_setting)) + return false; + + if (msg->inner_ethertype_setting && + !ice_vc_valid_vlan_setting(caps->inner, + msg->inner_ethertype_setting)) + return false; + + return true; +} + +/** + * ice_vc_get_tpid - transform from VIRTCHNL_VLAN_ETHERTYPE_* to VLAN TPID + * @ethertype_setting: VIRTCHNL_VLAN_ETHERTYPE_* used to get VLAN TPID + * @tpid: VLAN TPID to populate + */ +static int ice_vc_get_tpid(u32 ethertype_setting, u16 *tpid) +{ + switch (ethertype_setting) { + case VIRTCHNL_VLAN_ETHERTYPE_8100: + *tpid = ETH_P_8021Q; + break; + case VIRTCHNL_VLAN_ETHERTYPE_88A8: + *tpid = ETH_P_8021AD; + break; + case VIRTCHNL_VLAN_ETHERTYPE_9100: + *tpid = ETH_P_QINQ1; + break; + default: + *tpid = 0; + return -EINVAL; + } + + return 0; +} + +/** + * ice_vc_ena_vlan_offload - enable VLAN offload based on the ethertype_setting + * @vsi: VF's VSI used to enable the VLAN offload + * @ena_offload: function used to enable the VLAN offload + * @ethertype_setting: VIRTCHNL_VLAN_ETHERTYPE_* to enable offloads for + */ +static int +ice_vc_ena_vlan_offload(struct ice_vsi *vsi, + int (*ena_offload)(struct ice_vsi *vsi, u16 tpid), + u32 ethertype_setting) +{ + u16 tpid; + int err; + + err = ice_vc_get_tpid(ethertype_setting, &tpid); + if (err) + return err; + + err = ena_offload(vsi, tpid); + if (err) + return err; + + return 0; +} + +#define ICE_L2TSEL_QRX_CONTEXT_REG_IDX 3 +#define ICE_L2TSEL_BIT_OFFSET 23 +enum ice_l2tsel { + ICE_L2TSEL_EXTRACT_FIRST_TAG_L2TAG2_2ND, + ICE_L2TSEL_EXTRACT_FIRST_TAG_L2TAG1, +}; + +/** + * ice_vsi_update_l2tsel - update l2tsel field for all Rx rings on this VSI + * @vsi: VSI used to update l2tsel on + * @l2tsel: l2tsel setting requested + * + * Use the l2tsel setting to update all of the Rx queue context bits for l2tsel. + * This will modify which descriptor field the first offloaded VLAN will be + * stripped into. 
+ */ +static void ice_vsi_update_l2tsel(struct ice_vsi *vsi, enum ice_l2tsel l2tsel) +{ + struct ice_hw *hw = &vsi->back->hw; + u32 l2tsel_bit; + int i; + + if (l2tsel == ICE_L2TSEL_EXTRACT_FIRST_TAG_L2TAG2_2ND) + l2tsel_bit = 0; + else + l2tsel_bit = BIT(ICE_L2TSEL_BIT_OFFSET); + + for (i = 0; i < vsi->alloc_rxq; i++) { + u16 pfq = vsi->rxq_map[i]; + u32 qrx_context_offset; + u32 regval; + + qrx_context_offset = + QRX_CONTEXT(ICE_L2TSEL_QRX_CONTEXT_REG_IDX, pfq); + + regval = rd32(hw, qrx_context_offset); + regval &= ~BIT(ICE_L2TSEL_BIT_OFFSET); + regval |= l2tsel_bit; + wr32(hw, qrx_context_offset, regval); + } +} + +/** + * ice_vc_ena_vlan_stripping_v2_msg + * @vf: VF the message was received from + * @msg: message received from the VF + * + * virthcnl handler for VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 + */ +static int ice_vc_ena_vlan_stripping_v2_msg(struct ice_vf *vf, u8 *msg) +{ + enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS; + struct virtchnl_vlan_supported_caps *stripping_support; + struct virtchnl_vlan_setting *strip_msg = + (struct virtchnl_vlan_setting *)msg; + u32 ethertype_setting; + struct ice_vsi *vsi; + + if (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + if (!ice_vc_isvalid_vsi_id(vf, strip_msg->vport_id)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + vsi = ice_get_vf_vsi(vf); + if (!vsi) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + stripping_support = &vf->vlan_v2_caps.offloads.stripping_support; + if (!ice_vc_valid_vlan_setting_msg(stripping_support, strip_msg)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + ethertype_setting = strip_msg->outer_ethertype_setting; + if (ethertype_setting) { + if (ice_vc_ena_vlan_offload(vsi, + vsi->outer_vlan_ops.ena_stripping, + ethertype_setting)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } else { + enum ice_l2tsel l2tsel = + ICE_L2TSEL_EXTRACT_FIRST_TAG_L2TAG2_2ND; + + /* PF tells the VF that the outer VLAN tag is always + * extracted to VIRTCHNL_VLAN_TAG_LOCATION_L2TAG2_2 and + * inner is always extracted to + * VIRTCHNL_VLAN_TAG_LOCATION_L2TAG1. This is needed to + * support outer stripping so the first tag always ends + * up in L2TAG2_2ND and the second/inner tag, if + * enabled, is extracted in L2TAG1. 
+ */ + ice_vsi_update_l2tsel(vsi, l2tsel); + } + } + + ethertype_setting = strip_msg->inner_ethertype_setting; + if (ethertype_setting && + ice_vc_ena_vlan_offload(vsi, vsi->inner_vlan_ops.ena_stripping, + ethertype_setting)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + +out: + return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2, v_ret, NULL, 0); +} + +/** + * ice_vc_dis_vlan_stripping_v2_msg + * @vf: VF the message was received from + * @msg: message received from the VF + * + * virthcnl handler for VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2 + */ +static int ice_vc_dis_vlan_stripping_v2_msg(struct ice_vf *vf, u8 *msg) +{ + enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS; + struct virtchnl_vlan_supported_caps *stripping_support; + struct virtchnl_vlan_setting *strip_msg = + (struct virtchnl_vlan_setting *)msg; + u32 ethertype_setting; + struct ice_vsi *vsi; + + if (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + if (!ice_vc_isvalid_vsi_id(vf, strip_msg->vport_id)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + vsi = ice_get_vf_vsi(vf); + if (!vsi) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + stripping_support = &vf->vlan_v2_caps.offloads.stripping_support; + if (!ice_vc_valid_vlan_setting_msg(stripping_support, strip_msg)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + ethertype_setting = strip_msg->outer_ethertype_setting; + if (ethertype_setting) { + if (vsi->outer_vlan_ops.dis_stripping(vsi)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } else { + enum ice_l2tsel l2tsel = + ICE_L2TSEL_EXTRACT_FIRST_TAG_L2TAG1; + + /* PF tells the VF that the outer VLAN tag is always + * extracted to VIRTCHNL_VLAN_TAG_LOCATION_L2TAG2_2 and + * inner is always extracted to + * VIRTCHNL_VLAN_TAG_LOCATION_L2TAG1. This is needed to + * support inner stripping while outer stripping is + * disabled so that the first and only tag is extracted + * in L2TAG1. 
+ */ + ice_vsi_update_l2tsel(vsi, l2tsel); + } + } + + ethertype_setting = strip_msg->inner_ethertype_setting; + if (ethertype_setting && vsi->inner_vlan_ops.dis_stripping(vsi)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + +out: + return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2, v_ret, NULL, 0); +} + +/** + * ice_vc_ena_vlan_insertion_v2_msg + * @vf: VF the message was received from + * @msg: message received from the VF + * + * virthcnl handler for VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2 + */ +static int ice_vc_ena_vlan_insertion_v2_msg(struct ice_vf *vf, u8 *msg) +{ + enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS; + struct virtchnl_vlan_supported_caps *insertion_support; + struct virtchnl_vlan_setting *insertion_msg = + (struct virtchnl_vlan_setting *)msg; + u32 ethertype_setting; + struct ice_vsi *vsi; + + if (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + if (!ice_vc_isvalid_vsi_id(vf, insertion_msg->vport_id)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + vsi = ice_get_vf_vsi(vf); + if (!vsi) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + insertion_support = &vf->vlan_v2_caps.offloads.insertion_support; + if (!ice_vc_valid_vlan_setting_msg(insertion_support, insertion_msg)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + ethertype_setting = insertion_msg->outer_ethertype_setting; + if (ethertype_setting && + ice_vc_ena_vlan_offload(vsi, vsi->outer_vlan_ops.ena_insertion, + ethertype_setting)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + ethertype_setting = insertion_msg->inner_ethertype_setting; + if (ethertype_setting && + ice_vc_ena_vlan_offload(vsi, vsi->inner_vlan_ops.ena_insertion, + ethertype_setting)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + +out: + return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2, v_ret, NULL, 0); +} + +/** + * ice_vc_dis_vlan_insertion_v2_msg + * @vf: VF the message was received from + * @msg: message received from the VF + * + * virthcnl handler for VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2 + */ +static int ice_vc_dis_vlan_insertion_v2_msg(struct ice_vf *vf, u8 *msg) +{ + enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS; + struct virtchnl_vlan_supported_caps *insertion_support; + struct virtchnl_vlan_setting *insertion_msg = + (struct virtchnl_vlan_setting *)msg; + u32 ethertype_setting; + struct ice_vsi *vsi; + + if (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + if (!ice_vc_isvalid_vsi_id(vf, insertion_msg->vport_id)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + vsi = ice_get_vf_vsi(vf); + if (!vsi) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + insertion_support = &vf->vlan_v2_caps.offloads.insertion_support; + if (!ice_vc_valid_vlan_setting_msg(insertion_support, insertion_msg)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + ethertype_setting = insertion_msg->outer_ethertype_setting; + if (ethertype_setting && vsi->outer_vlan_ops.dis_insertion(vsi)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + + ethertype_setting = insertion_msg->inner_ethertype_setting; + if (ethertype_setting && vsi->inner_vlan_ops.dis_insertion(vsi)) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto out; + } + +out: + return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2, v_ret, NULL, 0); +} + static struct ice_vc_vf_ops ice_vc_vf_dflt_ops = { .get_ver_msg = 
ice_vc_get_ver_msg, .get_vf_res_msg = ice_vc_get_vf_res_msg, @@ -4517,6 +5535,13 @@ static struct ice_vc_vf_ops ice_vc_vf_dflt_ops = { .handle_rss_cfg_msg = ice_vc_handle_rss_cfg, .add_fdir_fltr_msg = ice_vc_add_fdir_fltr, .del_fdir_fltr_msg = ice_vc_del_fdir_fltr, + .get_offload_vlan_v2_caps = ice_vc_get_offload_vlan_v2_caps, + .add_vlan_v2_msg = ice_vc_add_vlan_v2_msg, + .remove_vlan_v2_msg = ice_vc_remove_vlan_v2_msg, + .ena_vlan_stripping_v2_msg = ice_vc_ena_vlan_stripping_v2_msg, + .dis_vlan_stripping_v2_msg = ice_vc_dis_vlan_stripping_v2_msg, + .ena_vlan_insertion_v2_msg = ice_vc_ena_vlan_insertion_v2_msg, + .dis_vlan_insertion_v2_msg = ice_vc_dis_vlan_insertion_v2_msg, }; void ice_vc_set_dflt_vf_ops(struct ice_vc_vf_ops *ops) @@ -4745,7 +5770,7 @@ void ice_vc_process_vf_msg(struct ice_pf *pf, struct ice_rq_event_info *event) case VIRTCHNL_OP_GET_VF_RESOURCES: err = ops->get_vf_res_msg(vf, msg); if (ice_vf_init_vlan_stripping(vf)) - dev_err(dev, "Failed to initialize VLAN stripping for VF %d\n", + dev_dbg(dev, "Failed to initialize VLAN stripping for VF %d\n", vf->vf_id); ice_vc_notify_vf_link_state(vf); break; @@ -4810,6 +5835,27 @@ void ice_vc_process_vf_msg(struct ice_pf *pf, struct ice_rq_event_info *event) case VIRTCHNL_OP_DEL_RSS_CFG: err = ops->handle_rss_cfg_msg(vf, msg, false); break; + case VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS: + err = ops->get_offload_vlan_v2_caps(vf); + break; + case VIRTCHNL_OP_ADD_VLAN_V2: + err = ops->add_vlan_v2_msg(vf, msg); + break; + case VIRTCHNL_OP_DEL_VLAN_V2: + err = ops->remove_vlan_v2_msg(vf, msg); + break; + case VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2: + err = ops->ena_vlan_stripping_v2_msg(vf, msg); + break; + case VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2: + err = ops->dis_vlan_stripping_v2_msg(vf, msg); + break; + case VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2: + err = ops->ena_vlan_insertion_v2_msg(vf, msg); + break; + case VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2: + err = ops->dis_vlan_insertion_v2_msg(vf, msg); + break; case VIRTCHNL_OP_UNKNOWN: default: dev_err(dev, "Unsupported opcode %d from VF %d\n", v_opcode, diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h index 4110847e0699..4f4961043638 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h @@ -95,6 +95,13 @@ struct ice_vc_vf_ops { int (*handle_rss_cfg_msg)(struct ice_vf *vf, u8 *msg, bool add); int (*add_fdir_fltr_msg)(struct ice_vf *vf, u8 *msg); int (*del_fdir_fltr_msg)(struct ice_vf *vf, u8 *msg); + int (*get_offload_vlan_v2_caps)(struct ice_vf *vf); + int (*add_vlan_v2_msg)(struct ice_vf *vf, u8 *msg); + int (*remove_vlan_v2_msg)(struct ice_vf *vf, u8 *msg); + int (*ena_vlan_stripping_v2_msg)(struct ice_vf *vf, u8 *msg); + int (*dis_vlan_stripping_v2_msg)(struct ice_vf *vf, u8 *msg); + int (*ena_vlan_insertion_v2_msg)(struct ice_vf *vf, u8 *msg); + int (*dis_vlan_insertion_v2_msg)(struct ice_vf *vf, u8 *msg); }; /* VF information structure */ @@ -121,6 +128,7 @@ struct ice_vf { DECLARE_BITMAP(txq_ena, ICE_MAX_RSS_QS_PER_VF); DECLARE_BITMAP(rxq_ena, ICE_MAX_RSS_QS_PER_VF); struct ice_vlan port_vlan_info; /* Port VLAN ID, QoS, and TPID */ + struct virtchnl_vlan_caps vlan_v2_caps; u8 pf_set_mac:1; /* VF MAC address set by VMM admin */ u8 trusted:1; u8 spoofchk:1; -- 2.20.1 From anthony.l.nguyen at intel.com Thu Dec 2 16:38:47 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Thu, 2 Dec 2021 08:38:47 -0800 Subject: [Intel-wired-lan] [PATCH net-next v3 
09/14] ice: Add hot path support for 802.1Q and 802.1ad VLAN offloads In-Reply-To: <20211202163852.36436-1-anthony.l.nguyen@intel.com> References: <20211202163852.36436-1-anthony.l.nguyen@intel.com> Message-ID: <20211202163852.36436-9-anthony.l.nguyen@intel.com> From: Brett Creeley Currently the driver only supports 802.1Q VLAN insertion and stripping. However, once Double VLAN Mode (DVM) is fully supported, then both 802.1Q and 802.1ad VLAN insertion and stripping will be supported. Unfortunately the VSI context parameters only allow for one VLAN ethertype at a time for VLAN offloads so only one or the other VLAN ethertype offload can be supported at once. To support this, multiple changes are needed. Rx path changes: [1] In DVM, the Rx queue context l2tagsel field needs to be cleared so the outermost tag shows up in the l2tag2_2nd field of the Rx flex descriptor. In Single VLAN Mode (SVM), the l2tagsel field should remain 1 to support SVM configurations. [2] Modify the ice_test_staterr() function to take a __le16 instead of the ice_32b_rx_flex_desc union pointer so this function can be used for both rx_desc->wb.status_error0 and rx_desc->wb.status_error1. [3] Add the new inline function ice_get_vlan_tag_from_rx_desc() that checks if there is a VLAN tag in l2tag1 or l2tag2_2nd. [4] In ice_receive_skb(), add a check to see if NETIF_F_HW_VLAN_STAG_RX is enabled in netdev->features. If it is, then this is the VLAN ethertype that needs to be added to the stripping VLAN tag. Since ice_fix_features() prevents CTAG_RX and STAG_RX from being enabled simultaneously, the VLAN ethertype will only ever be 802.1Q or 802.1ad. Tx path changes: [1] In DVM, the VLAN tag needs to be placed in the l2tag2 field of the Tx context descriptor. The new define ICE_TX_FLAGS_HW_OUTER_SINGLE_VLAN was added to the list of tx_flags to handle this case. [2] When the stack requests the VLAN tag to be offloaded on Tx, the driver needs to set either ICE_TX_FLAGS_HW_OUTER_SINGLE_VLAN or ICE_TX_FLAGS_HW_VLAN, so the tag is inserted in l2tag2 or l2tag1 respectively. To determine which location to use, set a bit in the Tx ring flags field during ring allocation that can be used to determine which field to use in the Tx descriptor. In DVM, always use l2tag2, and in SVM, always use l2tag1. Signed-off-by: Brett Creeley Signed-off-by: Tony Nguyen --- v2: Fix kdoc issue drivers/net/ethernet/intel/ice/ice_base.c | 18 +++++++++-- drivers/net/ethernet/intel/ice/ice_dcb_lib.c | 8 +++-- .../net/ethernet/intel/ice/ice_lan_tx_rx.h | 2 ++ drivers/net/ethernet/intel/ice/ice_lib.c | 5 ++++ drivers/net/ethernet/intel/ice/ice_txrx.c | 28 +++++++++++------ drivers/net/ethernet/intel/ice/ice_txrx.h | 3 ++ drivers/net/ethernet/intel/ice/ice_txrx_lib.c | 9 ++++-- drivers/net/ethernet/intel/ice/ice_txrx_lib.h | 30 +++++++++++++++++-- drivers/net/ethernet/intel/ice/ice_xsk.c | 6 ++-- 9 files changed, 87 insertions(+), 22 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c index 44bdd0ed1629..9ca0ae2bb1dc 100644 --- a/drivers/net/ethernet/intel/ice/ice_base.c +++ b/drivers/net/ethernet/intel/ice/ice_base.c @@ -406,8 +406,22 @@ static int ice_setup_rx_ctx(struct ice_rx_ring *ring) */ rlan_ctx.crcstrip = 1; - /* L2TSEL flag defines the reported L2 Tags in the receive descriptor */ - rlan_ctx.l2tsel = 1; + /* L2TSEL flag defines the reported L2 Tags in the receive descriptor + * and it needs to remain 1 for non-DVM capable configurations to not + * break backward compatibility for VF drivers. 
Setting this field to 0 + * will cause the single/outer VLAN tag to be stripped to the L2TAG2_2ND + * field in the Rx descriptor. Setting it to 1 allows the VLAN tag to + * be stripped in L2TAG1 of the Rx descriptor, which is where VFs will + * check for the tag + */ + if (ice_is_dvm_ena(hw)) + if (vsi->type == ICE_VSI_VF && + ice_vf_is_port_vlan_ena(&vsi->back->vf[vsi->vf_id])) + rlan_ctx.l2tsel = 1; + else + rlan_ctx.l2tsel = 0; + else + rlan_ctx.l2tsel = 1; rlan_ctx.dtype = ICE_RX_DTYPE_NO_SPLIT; rlan_ctx.hsplit_0 = ICE_RLAN_RX_HSPLIT_0_NO_SPLIT; diff --git a/drivers/net/ethernet/intel/ice/ice_dcb_lib.c b/drivers/net/ethernet/intel/ice/ice_dcb_lib.c index b94d8daeaa58..add90e75f05c 100644 --- a/drivers/net/ethernet/intel/ice/ice_dcb_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_dcb_lib.c @@ -916,7 +916,8 @@ ice_tx_prepare_vlan_flags_dcb(struct ice_tx_ring *tx_ring, return; /* Insert 802.1p priority into VLAN header */ - if ((first->tx_flags & ICE_TX_FLAGS_HW_VLAN) || + if ((first->tx_flags & ICE_TX_FLAGS_HW_VLAN || + first->tx_flags & ICE_TX_FLAGS_HW_OUTER_SINGLE_VLAN) || skb->priority != TC_PRIO_CONTROL) { first->tx_flags &= ~ICE_TX_FLAGS_VLAN_PR_M; /* Mask the lower 3 bits to set the 802.1p priority */ @@ -925,7 +926,10 @@ ice_tx_prepare_vlan_flags_dcb(struct ice_tx_ring *tx_ring, /* if this is not already set it means a VLAN 0 + priority needs * to be offloaded */ - first->tx_flags |= ICE_TX_FLAGS_HW_VLAN; + if (tx_ring->flags & ICE_TX_FLAGS_RING_VLAN_L2TAG2) + first->tx_flags |= ICE_TX_FLAGS_HW_OUTER_SINGLE_VLAN; + else + first->tx_flags |= ICE_TX_FLAGS_HW_VLAN; } } diff --git a/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h b/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h index d981dc6f2323..a1fc676a4665 100644 --- a/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h +++ b/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h @@ -424,6 +424,8 @@ enum ice_rx_flex_desc_status_error_0_bits { enum ice_rx_flex_desc_status_error_1_bits { /* Note: These are predefined bit offsets */ ICE_RX_FLEX_DESC_STATUS1_NAT_S = 4, + /* [10:5] reserved */ + ICE_RX_FLEX_DESC_STATUS1_L2TAG2P_S = 11, ICE_RX_FLEX_DESC_STATUS1_LAST /* this entry must be last!!! 
*/ }; diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index 6a7f107a43c5..36507f0dc04e 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -1370,6 +1370,7 @@ static void ice_vsi_clear_rings(struct ice_vsi *vsi) */ static int ice_vsi_alloc_rings(struct ice_vsi *vsi) { + bool dvm_ena = ice_is_dvm_ena(&vsi->back->hw); struct ice_pf *pf = vsi->back; struct device *dev; u16 i; @@ -1391,6 +1392,10 @@ static int ice_vsi_alloc_rings(struct ice_vsi *vsi) ring->tx_tstamps = &pf->ptp.port.tx; ring->dev = dev; ring->count = vsi->num_tx_desc; + if (dvm_ena) + ring->flags |= ICE_TX_FLAGS_RING_VLAN_L2TAG2; + else + ring->flags |= ICE_TX_FLAGS_RING_VLAN_L2TAG1; WRITE_ONCE(vsi->tx_rings[i], ring); } diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c index d21f1c946767..3461aa21641a 100644 --- a/drivers/net/ethernet/intel/ice/ice_txrx.c +++ b/drivers/net/ethernet/intel/ice/ice_txrx.c @@ -1073,7 +1073,7 @@ ice_is_non_eop(struct ice_rx_ring *rx_ring, union ice_32b_rx_flex_desc *rx_desc) { /* if we are the last buffer then there is nothing else to do */ #define ICE_RXD_EOF BIT(ICE_RX_FLEX_DESC_STATUS0_EOF_S) - if (likely(ice_test_staterr(rx_desc, ICE_RXD_EOF))) + if (likely(ice_test_staterr(rx_desc->wb.status_error0, ICE_RXD_EOF))) return false; rx_ring->rx_stats.non_eop_descs++; @@ -1135,7 +1135,7 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget) * hardware wrote DD then it will be non-zero */ stat_err_bits = BIT(ICE_RX_FLEX_DESC_STATUS0_DD_S); - if (!ice_test_staterr(rx_desc, stat_err_bits)) + if (!ice_test_staterr(rx_desc->wb.status_error0, stat_err_bits)) break; /* This memory barrier is needed to keep us from reading @@ -1221,14 +1221,13 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget) continue; stat_err_bits = BIT(ICE_RX_FLEX_DESC_STATUS0_RXE_S); - if (unlikely(ice_test_staterr(rx_desc, stat_err_bits))) { + if (unlikely(ice_test_staterr(rx_desc->wb.status_error0, + stat_err_bits))) { dev_kfree_skb_any(skb); continue; } - stat_err_bits = BIT(ICE_RX_FLEX_DESC_STATUS0_L2TAG1P_S); - if (ice_test_staterr(rx_desc, stat_err_bits)) - vlan_tag = le16_to_cpu(rx_desc->wb.l2tag1); + vlan_tag = ice_get_vlan_tag_from_rx_desc(rx_desc); /* pad the skb if needed, to make a valid ethernet frame */ if (eth_skb_pad(skb)) { @@ -1910,12 +1909,16 @@ ice_tx_prepare_vlan_flags(struct ice_tx_ring *tx_ring, struct ice_tx_buf *first) if (!skb_vlan_tag_present(skb) && eth_type_vlan(skb->protocol)) return; - /* currently, we always assume 802.1Q for VLAN insertion as VLAN - * insertion for 802.1AD is not supported + /* the VLAN ethertype/tpid is determined by VSI configuration and netdev + * feature flags, which the driver only allows either 802.1Q or 802.1ad + * VLAN offloads exclusively so we only care about the VLAN ID here */ if (skb_vlan_tag_present(skb)) { first->tx_flags |= skb_vlan_tag_get(skb) << ICE_TX_FLAGS_VLAN_S; - first->tx_flags |= ICE_TX_FLAGS_HW_VLAN; + if (tx_ring->flags & ICE_TX_FLAGS_RING_VLAN_L2TAG2) + first->tx_flags |= ICE_TX_FLAGS_HW_OUTER_SINGLE_VLAN; + else + first->tx_flags |= ICE_TX_FLAGS_HW_VLAN; } ice_tx_prepare_vlan_flags_dcb(tx_ring, first); @@ -2288,6 +2291,13 @@ ice_xmit_frame_ring(struct sk_buff *skb, struct ice_tx_ring *tx_ring) /* prepare the VLAN tagging flags for Tx */ ice_tx_prepare_vlan_flags(tx_ring, first); + if (first->tx_flags & ICE_TX_FLAGS_HW_OUTER_SINGLE_VLAN) { + offload.cd_qw1 |= (u64)(ICE_TX_DESC_DTYPE_CTX 
| + (ICE_TX_CTX_DESC_IL2TAG2 << + ICE_TXD_CTX_QW1_CMD_S)); + offload.cd_l2tag2 = (first->tx_flags & ICE_TX_FLAGS_VLAN_M) >> + ICE_TX_FLAGS_VLAN_S; + } /* set up TSO offload */ tso = ice_tso(first, &offload); diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.h b/drivers/net/ethernet/intel/ice/ice_txrx.h index c56dd1749903..03bbae035de8 100644 --- a/drivers/net/ethernet/intel/ice/ice_txrx.h +++ b/drivers/net/ethernet/intel/ice/ice_txrx.h @@ -123,6 +123,7 @@ static inline int ice_skb_pad(void) #define ICE_TX_FLAGS_IPV4 BIT(5) #define ICE_TX_FLAGS_IPV6 BIT(6) #define ICE_TX_FLAGS_TUNNEL BIT(7) +#define ICE_TX_FLAGS_HW_OUTER_SINGLE_VLAN BIT(8) #define ICE_TX_FLAGS_VLAN_M 0xffff0000 #define ICE_TX_FLAGS_VLAN_PR_M 0xe0000000 #define ICE_TX_FLAGS_VLAN_PR_S 29 @@ -334,6 +335,8 @@ struct ice_tx_ring { spinlock_t tx_lock; u32 txq_teid; /* Added Tx queue TEID */ #define ICE_TX_FLAGS_RING_XDP BIT(0) +#define ICE_TX_FLAGS_RING_VLAN_L2TAG1 BIT(1) +#define ICE_TX_FLAGS_RING_VLAN_L2TAG2 BIT(2) u8 flags; u8 dcb_tc; /* Traffic class of ring */ u8 ptp_tx; diff --git a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c index 84a6a3f9d624..9c37d827ed28 100644 --- a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c @@ -207,9 +207,14 @@ ice_process_skb_fields(struct ice_rx_ring *rx_ring, void ice_receive_skb(struct ice_rx_ring *rx_ring, struct sk_buff *skb, u16 vlan_tag) { - if ((rx_ring->netdev->features & NETIF_F_HW_VLAN_CTAG_RX) && - (vlan_tag & VLAN_VID_MASK)) + netdev_features_t features = rx_ring->netdev->features; + bool non_zero_vlan = !!(vlan_tag & VLAN_VID_MASK); + + if ((features & NETIF_F_HW_VLAN_CTAG_RX) && non_zero_vlan) __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), vlan_tag); + else if ((features & NETIF_F_HW_VLAN_STAG_RX) && non_zero_vlan) + __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021AD), vlan_tag); + napi_gro_receive(&rx_ring->q_vector->napi, skb); } diff --git a/drivers/net/ethernet/intel/ice/ice_txrx_lib.h b/drivers/net/ethernet/intel/ice/ice_txrx_lib.h index 11b6c1601986..c7d2954dc9ea 100644 --- a/drivers/net/ethernet/intel/ice/ice_txrx_lib.h +++ b/drivers/net/ethernet/intel/ice/ice_txrx_lib.h @@ -7,7 +7,7 @@ /** * ice_test_staterr - tests bits in Rx descriptor status and error fields - * @rx_desc: pointer to receive descriptor (in le64 format) + * @status_err_n: Rx descriptor status_error0 or status_error1 bits * @stat_err_bits: value to mask * * This function does some fast chicanery in order to return the @@ -16,9 +16,9 @@ * at offset zero. */ static inline bool -ice_test_staterr(union ice_32b_rx_flex_desc *rx_desc, const u16 stat_err_bits) +ice_test_staterr(__le16 status_err_n, const u16 stat_err_bits) { - return !!(rx_desc->wb.status_error0 & cpu_to_le16(stat_err_bits)); + return !!(status_err_n & cpu_to_le16(stat_err_bits)); } static inline __le64 @@ -31,6 +31,30 @@ ice_build_ctob(u64 td_cmd, u64 td_offset, unsigned int size, u64 td_tag) (td_tag << ICE_TXD_QW1_L2TAG1_S)); } +/** + * ice_get_vlan_tag_from_rx_desc - get VLAN from Rx flex descriptor + * @rx_desc: Rx 32b flex descriptor with RXDID=2 + * + * The OS and current PF implementation only support stripping a single VLAN tag + * at a time, so there should only ever be 0 or 1 tags in the l2tag* fields. If + * one is found return the tag, else return 0 to mean no VLAN tag was found. 
+ */ +static inline u16 +ice_get_vlan_tag_from_rx_desc(union ice_32b_rx_flex_desc *rx_desc) +{ + u16 stat_err_bits; + + stat_err_bits = BIT(ICE_RX_FLEX_DESC_STATUS0_L2TAG1P_S); + if (ice_test_staterr(rx_desc->wb.status_error0, stat_err_bits)) + return le16_to_cpu(rx_desc->wb.l2tag1); + + stat_err_bits = BIT(ICE_RX_FLEX_DESC_STATUS1_L2TAG2P_S); + if (ice_test_staterr(rx_desc->wb.status_error1, stat_err_bits)) + return le16_to_cpu(rx_desc->wb.l2tag2_2nd); + + return 0; +} + /** * ice_xdp_ring_update_tail - Updates the XDP Tx ring tail register * @xdp_ring: XDP Tx ring diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c index ff55cb415b11..5b5fa3df29d5 100644 --- a/drivers/net/ethernet/intel/ice/ice_xsk.c +++ b/drivers/net/ethernet/intel/ice/ice_xsk.c @@ -530,7 +530,7 @@ int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget) rx_desc = ICE_RX_DESC(rx_ring, rx_ring->next_to_clean); stat_err_bits = BIT(ICE_RX_FLEX_DESC_STATUS0_DD_S); - if (!ice_test_staterr(rx_desc, stat_err_bits)) + if (!ice_test_staterr(rx_desc->wb.status_error0, stat_err_bits)) break; /* This memory barrier is needed to keep us from reading @@ -582,9 +582,7 @@ int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget) total_rx_bytes += skb->len; total_rx_packets++; - stat_err_bits = BIT(ICE_RX_FLEX_DESC_STATUS0_L2TAG1P_S); - if (ice_test_staterr(rx_desc, stat_err_bits)) - vlan_tag = le16_to_cpu(rx_desc->wb.l2tag1); + vlan_tag = ice_get_vlan_tag_from_rx_desc(rx_desc); rx_ptype = le16_to_cpu(rx_desc->wb.ptype_flex_flags0) & ICE_RX_FLEX_DESC_PTYPE_M; -- 2.20.1 From anthony.l.nguyen at intel.com Thu Dec 2 16:38:41 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Thu, 2 Dec 2021 08:38:41 -0800 Subject: [Intel-wired-lan] [PATCH net-next v3 03/14] ice: Add new VSI VLAN ops In-Reply-To: <20211202163852.36436-1-anthony.l.nguyen@intel.com> References: <20211202163852.36436-1-anthony.l.nguyen@intel.com> Message-ID: <20211202163852.36436-3-anthony.l.nguyen@intel.com> From: Brett Creeley Incoming changes to support 802.1Q and/or 802.1ad VLAN filtering and offloads require more flexibility when configuring VLANs. The VSI VLAN interface will allow flexibility for configuring VLANs for all VSI types. Add new files to separate the VSI VLAN ops and move functions to make the code more organized. 
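As a rough illustration of the indirection being introduced (a standalone sketch with made-up names, not driver code; the real table is struct ice_vsi_vlan_ops further down and is filled in by ice_vsi_init_vlan_ops()), callers stop invoking one fixed helper and instead go through per-VSI function pointers, so different VSI types can later install different implementations:

    #include <stdio.h>

    struct vsi;                                /* stand-in for struct ice_vsi      */

    struct vsi_vlan_ops {                      /* models struct ice_vsi_vlan_ops   */
            int (*add_vlan)(struct vsi *vsi, unsigned int vid);
            int (*ena_stripping)(struct vsi *vsi);
    };

    struct vsi {
            const char *name;
            struct vsi_vlan_ops vlan_ops;      /* one table embedded in each VSI   */
    };

    /* default implementations, analogous to ice_vsi_add_vlan() and friends */
    static int dflt_add_vlan(struct vsi *vsi, unsigned int vid)
    {
            printf("%s: add VLAN %u\n", vsi->name, vid);
            return 0;
    }

    static int dflt_ena_stripping(struct vsi *vsi)
    {
            printf("%s: enable VLAN stripping\n", vsi->name);
            return 0;
    }

    /* analogous to ice_vsi_init_vlan_ops(), called from VSI setup/rebuild */
    static void vsi_init_vlan_ops(struct vsi *vsi)
    {
            vsi->vlan_ops.add_vlan = dflt_add_vlan;
            vsi->vlan_ops.ena_stripping = dflt_ena_stripping;
    }

    int main(void)
    {
            struct vsi pf_vsi = { .name = "pf0" };

            vsi_init_vlan_ops(&pf_vsi);
            /* generic code only ever uses the table, as ice_vsi_add_vlan_zero() does */
            pf_vsi.vlan_ops.add_vlan(&pf_vsi, 0);
            pf_vsi.vlan_ops.ena_stripping(&pf_vsi);
            return 0;
    }

Later patches in the series can then point these members at 802.1Q- or 802.1ad-specific handlers without having to touch any of the call sites.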
Signed-off-by: Brett Creeley Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/ice/Makefile | 2 + drivers/net/ethernet/intel/ice/ice.h | 2 + drivers/net/ethernet/intel/ice/ice_eswitch.c | 2 +- drivers/net/ethernet/intel/ice/ice_lib.c | 207 +---------- drivers/net/ethernet/intel/ice/ice_lib.h | 11 - drivers/net/ethernet/intel/ice/ice_main.c | 30 +- drivers/net/ethernet/intel/ice/ice_osdep.h | 1 + drivers/net/ethernet/intel/ice/ice_switch.h | 9 - drivers/net/ethernet/intel/ice/ice_type.h | 9 + .../net/ethernet/intel/ice/ice_virtchnl_pf.c | 111 +----- .../net/ethernet/intel/ice/ice_vsi_vlan_lib.c | 326 ++++++++++++++++++ .../net/ethernet/intel/ice/ice_vsi_vlan_lib.h | 27 ++ .../net/ethernet/intel/ice/ice_vsi_vlan_ops.c | 20 ++ .../net/ethernet/intel/ice/ice_vsi_vlan_ops.h | 28 ++ 14 files changed, 450 insertions(+), 335 deletions(-) create mode 100644 drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c create mode 100644 drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h create mode 100644 drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.c create mode 100644 drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h diff --git a/drivers/net/ethernet/intel/ice/Makefile b/drivers/net/ethernet/intel/ice/Makefile index c22434a3ec4d..c40b3aa1d195 100644 --- a/drivers/net/ethernet/intel/ice/Makefile +++ b/drivers/net/ethernet/intel/ice/Makefile @@ -18,6 +18,8 @@ ice-y := ice_main.o \ ice_txrx_lib.o \ ice_txrx.o \ ice_fltr.o \ + ice_vsi_vlan_ops.o \ + ice_vsi_vlan_lib.o \ ice_fdir.o \ ice_ethtool_fdir.o \ ice_flex_pipe.o \ diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h index 6fa06b00c268..efcc713ba287 100644 --- a/drivers/net/ethernet/intel/ice/ice.h +++ b/drivers/net/ethernet/intel/ice/ice.h @@ -73,6 +73,7 @@ #include "ice_eswitch.h" #include "ice_lag.h" #include "ice_gnss.h" +#include "ice_vsi_vlan_ops.h" #define ICE_BAR0 0 #define ICE_REQ_DESC_MULTIPLE 32 @@ -370,6 +371,7 @@ struct ice_vsi { u8 irqs_ready:1; u8 current_isup:1; /* Sync 'link up' logging */ u8 stat_offsets_loaded:1; + struct ice_vsi_vlan_ops vlan_ops; u16 num_vlan; /* queue information */ diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch.c b/drivers/net/ethernet/intel/ice/ice_eswitch.c index 291748553800..0ff1a375f2aa 100644 --- a/drivers/net/ethernet/intel/ice/ice_eswitch.c +++ b/drivers/net/ethernet/intel/ice/ice_eswitch.c @@ -118,7 +118,7 @@ static int ice_eswitch_setup_env(struct ice_pf *pf) struct ice_vsi *ctrl_vsi = pf->switchdev.control_vsi; bool rule_added = false; - ice_vsi_manage_vlan_stripping(ctrl_vsi, false); + ctrl_vsi->vlan_ops.dis_stripping(ctrl_vsi); ice_remove_vsi_fltr(&pf->hw, uplink_vsi->idx); diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index cc135792834e..b50509584b31 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -1694,62 +1694,6 @@ void ice_update_eth_stats(struct ice_vsi *vsi) vsi->stat_offsets_loaded = true; } -/** - * ice_vsi_add_vlan - Add VSI membership for given VLAN - * @vsi: the VSI being configured - * @vid: VLAN ID to be added - * @action: filter action to be performed on match - */ -int -ice_vsi_add_vlan(struct ice_vsi *vsi, u16 vid, enum ice_sw_fwd_act_type action) -{ - struct ice_pf *pf = vsi->back; - struct device *dev; - int err = 0; - - dev = ice_pf_to_dev(pf); - - if (!ice_fltr_add_vlan(vsi, vid, action)) { - vsi->num_vlan++; - } else { - err = -ENODEV; - dev_err(dev, "Failure Adding VLAN %d on VSI %i\n", vid, - vsi->vsi_num); - } - - return 
err; -} - -/** - * ice_vsi_kill_vlan - Remove VSI membership for a given VLAN - * @vsi: the VSI being configured - * @vid: VLAN ID to be removed - * - * Returns 0 on success and negative on failure - */ -int ice_vsi_kill_vlan(struct ice_vsi *vsi, u16 vid) -{ - struct ice_pf *pf = vsi->back; - struct device *dev; - int err; - - dev = ice_pf_to_dev(pf); - - err = ice_fltr_remove_vlan(vsi, vid, ICE_FWD_TO_VSI); - if (!err) { - vsi->num_vlan--; - } else if (err == -ENOENT) { - dev_dbg(dev, "Failed to remove VLAN %d on VSI %i, it does not exist, error: %d\n", - vid, vsi->vsi_num, err); - err = 0; - } else { - dev_err(dev, "Error removing VLAN %d on vsi %i error: %d\n", - vid, vsi->vsi_num, err); - } - - return err; -} - /** * ice_vsi_cfg_frame_size - setup max frame size and Rx buffer length * @vsi: VSI @@ -2077,96 +2021,6 @@ void ice_vsi_cfg_msix(struct ice_vsi *vsi) } } -/** - * ice_vsi_manage_vlan_insertion - Manage VLAN insertion for the VSI for Tx - * @vsi: the VSI being changed - */ -int ice_vsi_manage_vlan_insertion(struct ice_vsi *vsi) -{ - struct ice_hw *hw = &vsi->back->hw; - struct ice_vsi_ctx *ctxt; - int ret; - - ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL); - if (!ctxt) - return -ENOMEM; - - /* Here we are configuring the VSI to let the driver add VLAN tags by - * setting vlan_flags to ICE_AQ_VSI_VLAN_MODE_ALL. The actual VLAN tag - * insertion happens in the Tx hot path, in ice_tx_map. - */ - ctxt->info.vlan_flags = ICE_AQ_VSI_VLAN_MODE_ALL; - - /* Preserve existing VLAN strip setting */ - ctxt->info.vlan_flags |= (vsi->info.vlan_flags & - ICE_AQ_VSI_VLAN_EMOD_M); - - ctxt->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_VLAN_VALID); - - ret = ice_update_vsi(hw, vsi->idx, ctxt, NULL); - if (ret) { - dev_err(ice_pf_to_dev(vsi->back), "update VSI for VLAN insert failed, err %d aq_err %s\n", - ret, ice_aq_str(hw->adminq.sq_last_status)); - goto out; - } - - vsi->info.vlan_flags = ctxt->info.vlan_flags; -out: - kfree(ctxt); - return ret; -} - -/** - * ice_vsi_manage_vlan_stripping - Manage VLAN stripping for the VSI for Rx - * @vsi: the VSI being changed - * @ena: boolean value indicating if this is a enable or disable request - */ -int ice_vsi_manage_vlan_stripping(struct ice_vsi *vsi, bool ena) -{ - struct ice_hw *hw = &vsi->back->hw; - struct ice_vsi_ctx *ctxt; - int ret; - - /* do not allow modifying VLAN stripping when a port VLAN is configured - * on this VSI - */ - if (vsi->info.pvid) - return 0; - - ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL); - if (!ctxt) - return -ENOMEM; - - /* Here we are configuring what the VSI should do with the VLAN tag in - * the Rx packet. We can either leave the tag in the packet or put it in - * the Rx descriptor. - */ - if (ena) - /* Strip VLAN tag from Rx packet and put it in the desc */ - ctxt->info.vlan_flags = ICE_AQ_VSI_VLAN_EMOD_STR_BOTH; - else - /* Disable stripping. 
Leave tag in packet */ - ctxt->info.vlan_flags = ICE_AQ_VSI_VLAN_EMOD_NOTHING; - - /* Allow all packets untagged/tagged */ - ctxt->info.vlan_flags |= ICE_AQ_VSI_VLAN_MODE_ALL; - - ctxt->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_VLAN_VALID); - - ret = ice_update_vsi(hw, vsi->idx, ctxt, NULL); - if (ret) { - dev_err(ice_pf_to_dev(vsi->back), "update VSI for VLAN strip failed, ena = %d err %d aq_err %s\n", - ena, ret, ice_aq_str(hw->adminq.sq_last_status)); - ret = -EIO; - goto out; - } - - vsi->info.vlan_flags = ctxt->info.vlan_flags; -out: - kfree(ctxt); - return ret; -} - /** * ice_vsi_start_all_rx_rings - start/enable all of a VSI's Rx rings * @vsi: the VSI whose rings are to be enabled @@ -2260,61 +2114,6 @@ bool ice_vsi_is_vlan_pruning_ena(struct ice_vsi *vsi) return (vsi->info.sw_flags2 & ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA); } -/** - * ice_cfg_vlan_pruning - enable or disable VLAN pruning on the VSI - * @vsi: VSI to enable or disable VLAN pruning on - * @ena: set to true to enable VLAN pruning and false to disable it - * - * returns 0 if VSI is updated, negative otherwise - */ -int ice_cfg_vlan_pruning(struct ice_vsi *vsi, bool ena) -{ - struct ice_vsi_ctx *ctxt; - struct ice_pf *pf; - int status; - - if (!vsi) - return -EINVAL; - - /* Don't enable VLAN pruning if the netdev is currently in promiscuous - * mode. VLAN pruning will be enabled when the interface exits - * promiscuous mode if any VLAN filters are active. - */ - if (vsi->netdev && vsi->netdev->flags & IFF_PROMISC && ena) - return 0; - - pf = vsi->back; - ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL); - if (!ctxt) - return -ENOMEM; - - ctxt->info = vsi->info; - - if (ena) - ctxt->info.sw_flags2 |= ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA; - else - ctxt->info.sw_flags2 &= ~ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA; - - ctxt->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_SW_VALID); - - status = ice_update_vsi(&pf->hw, vsi->idx, ctxt, NULL); - if (status) { - netdev_err(vsi->netdev, "%sabling VLAN pruning on VSI handle: %d, VSI HW ID: %d failed, err = %d, aq_err = %s\n", - ena ? 
"En" : "Dis", vsi->idx, vsi->vsi_num, - status, ice_aq_str(pf->hw.adminq.sq_last_status)); - goto err_out; - } - - vsi->info.sw_flags2 = ctxt->info.sw_flags2; - - kfree(ctxt); - return 0; - -err_out: - kfree(ctxt); - return -EIO; -} - static void ice_vsi_set_tc_cfg(struct ice_vsi *vsi) { if (!test_bit(ICE_FLAG_DCB_ENA, vsi->back->flags)) { @@ -2594,6 +2393,8 @@ ice_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi, if (ret) goto unroll_get_qs; + ice_vsi_init_vlan_ops(vsi); + switch (vsi->type) { case ICE_VSI_CTRL: case ICE_VSI_SWITCHDEV_CTRL: @@ -3257,6 +3058,8 @@ int ice_vsi_rebuild(struct ice_vsi *vsi, bool init_vsi) if (vtype == ICE_VSI_VF) vf = &pf->vf[vsi->vf_id]; + ice_vsi_init_vlan_ops(vsi); + coalesce = kcalloc(vsi->num_q_vectors, sizeof(struct ice_coalesce_stored), GFP_KERNEL); if (!coalesce) @@ -4075,7 +3878,7 @@ int ice_set_link(struct ice_vsi *vsi, bool ena) */ int ice_vsi_add_vlan_zero(struct ice_vsi *vsi) { - return ice_vsi_add_vlan(vsi, 0, ICE_FWD_TO_VSI); + return vsi->vlan_ops.add_vlan(vsi, 0, ICE_FWD_TO_VSI); } /** diff --git a/drivers/net/ethernet/intel/ice/ice_lib.h b/drivers/net/ethernet/intel/ice/ice_lib.h index 28e0f1147c82..427e5e4e9f17 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.h +++ b/drivers/net/ethernet/intel/ice/ice_lib.h @@ -22,15 +22,6 @@ int ice_vsi_cfg_lan_txqs(struct ice_vsi *vsi); void ice_vsi_cfg_msix(struct ice_vsi *vsi); -int -ice_vsi_add_vlan(struct ice_vsi *vsi, u16 vid, enum ice_sw_fwd_act_type action); - -int ice_vsi_kill_vlan(struct ice_vsi *vsi, u16 vid); - -int ice_vsi_manage_vlan_insertion(struct ice_vsi *vsi); - -int ice_vsi_manage_vlan_stripping(struct ice_vsi *vsi, bool ena); - int ice_vsi_start_all_rx_rings(struct ice_vsi *vsi); int ice_vsi_stop_all_rx_rings(struct ice_vsi *vsi); @@ -45,8 +36,6 @@ int ice_vsi_stop_xdp_tx_rings(struct ice_vsi *vsi); bool ice_vsi_is_vlan_pruning_ena(struct ice_vsi *vsi); -int ice_cfg_vlan_pruning(struct ice_vsi *vsi, bool ena); - void ice_cfg_sw_lldp(struct ice_vsi *vsi, bool tx, bool create); int ice_set_link(struct ice_vsi *vsi, bool ena); diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c index 18ecb1eb85a6..904571527e27 100644 --- a/drivers/net/ethernet/intel/ice/ice_main.c +++ b/drivers/net/ethernet/intel/ice/ice_main.c @@ -401,7 +401,7 @@ static int ice_vsi_sync_fltr(struct ice_vsi *vsi) ~IFF_PROMISC; goto out_promisc; } - ice_cfg_vlan_pruning(vsi, false); + vsi->vlan_ops.dis_rx_filtering(vsi); } } else { /* Clear Rx filter to remove traffic from wire */ @@ -415,7 +415,7 @@ static int ice_vsi_sync_fltr(struct ice_vsi *vsi) goto out_promisc; } if (vsi->num_vlan > 1) - ice_cfg_vlan_pruning(vsi, true); + vsi->vlan_ops.ena_rx_filtering(vsi); } } } @@ -3429,7 +3429,7 @@ ice_vlan_rx_add_vid(struct net_device *netdev, __always_unused __be16 proto, /* Enable VLAN pruning when a VLAN other than 0 is added */ if (!ice_vsi_is_vlan_pruning_ena(vsi)) { - ret = ice_cfg_vlan_pruning(vsi, true); + ret = vsi->vlan_ops.ena_rx_filtering(vsi); if (ret) return ret; } @@ -3437,7 +3437,7 @@ ice_vlan_rx_add_vid(struct net_device *netdev, __always_unused __be16 proto, /* Add a switch rule for this VLAN ID so its corresponding VLAN tagged * packets aren't pruned by the device's internal switch on Rx */ - ret = ice_vsi_add_vlan(vsi, vid, ICE_FWD_TO_VSI); + ret = vsi->vlan_ops.add_vlan(vsi, vid, ICE_FWD_TO_VSI); if (!ret) set_bit(ICE_VSI_VLAN_FLTR_CHANGED, vsi->state); @@ -3464,16 +3464,16 @@ ice_vlan_rx_kill_vid(struct net_device *netdev, __always_unused __be16 proto, 
if (!vid) return 0; - /* Make sure ice_vsi_kill_vlan is successful before updating VLAN + /* Make sure VLAN delete is successful before updating VLAN * information */ - ret = ice_vsi_kill_vlan(vsi, vid); + ret = vsi->vlan_ops.del_vlan(vsi, vid); if (ret) return ret; /* Disable pruning when VLAN 0 is the only VLAN rule */ if (vsi->num_vlan == 1 && ice_vsi_is_vlan_pruning_ena(vsi)) - ret = ice_cfg_vlan_pruning(vsi, false); + vsi->vlan_ops.dis_rx_filtering(vsi); set_bit(ICE_VSI_VLAN_FLTR_CHANGED, vsi->state); return ret; @@ -5617,24 +5617,24 @@ ice_set_features(struct net_device *netdev, netdev_features_t features) if ((features & NETIF_F_HW_VLAN_CTAG_RX) && !(netdev->features & NETIF_F_HW_VLAN_CTAG_RX)) - ret = ice_vsi_manage_vlan_stripping(vsi, true); + ret = vsi->vlan_ops.ena_stripping(vsi); else if (!(features & NETIF_F_HW_VLAN_CTAG_RX) && (netdev->features & NETIF_F_HW_VLAN_CTAG_RX)) - ret = ice_vsi_manage_vlan_stripping(vsi, false); + ret = vsi->vlan_ops.dis_stripping(vsi); if ((features & NETIF_F_HW_VLAN_CTAG_TX) && !(netdev->features & NETIF_F_HW_VLAN_CTAG_TX)) - ret = ice_vsi_manage_vlan_insertion(vsi); + ret = vsi->vlan_ops.ena_insertion(vsi); else if (!(features & NETIF_F_HW_VLAN_CTAG_TX) && (netdev->features & NETIF_F_HW_VLAN_CTAG_TX)) - ret = ice_vsi_manage_vlan_insertion(vsi); + ret = vsi->vlan_ops.dis_insertion(vsi); if ((features & NETIF_F_HW_VLAN_CTAG_FILTER) && !(netdev->features & NETIF_F_HW_VLAN_CTAG_FILTER)) - ret = ice_cfg_vlan_pruning(vsi, true); + ret = vsi->vlan_ops.ena_rx_filtering(vsi); else if (!(features & NETIF_F_HW_VLAN_CTAG_FILTER) && (netdev->features & NETIF_F_HW_VLAN_CTAG_FILTER)) - ret = ice_cfg_vlan_pruning(vsi, false); + ret = vsi->vlan_ops.dis_rx_filtering(vsi); if ((features & NETIF_F_NTUPLE) && !(netdev->features & NETIF_F_NTUPLE)) { @@ -5670,9 +5670,9 @@ static int ice_vsi_vlan_setup(struct ice_vsi *vsi) int ret = 0; if (vsi->netdev->features & NETIF_F_HW_VLAN_CTAG_RX) - ret = ice_vsi_manage_vlan_stripping(vsi, true); + ret = vsi->vlan_ops.ena_stripping(vsi); if (vsi->netdev->features & NETIF_F_HW_VLAN_CTAG_TX) - ret = ice_vsi_manage_vlan_insertion(vsi); + ret = vsi->vlan_ops.ena_insertion(vsi); return ret; } diff --git a/drivers/net/ethernet/intel/ice/ice_osdep.h b/drivers/net/ethernet/intel/ice/ice_osdep.h index f57c414bc0a9..380e8ae94fc9 100644 --- a/drivers/net/ethernet/intel/ice/ice_osdep.h +++ b/drivers/net/ethernet/intel/ice/ice_osdep.h @@ -9,6 +9,7 @@ #ifndef CONFIG_64BIT #include #endif +#include #define wr32(a, reg, value) writel((value), ((a)->hw_addr + (reg))) #define rd32(a, reg) readl((a)->hw_addr + (reg)) diff --git a/drivers/net/ethernet/intel/ice/ice_switch.h b/drivers/net/ethernet/intel/ice/ice_switch.h index d8334beaaa8a..4fb1a7ae5dbb 100644 --- a/drivers/net/ethernet/intel/ice/ice_switch.h +++ b/drivers/net/ethernet/intel/ice/ice_switch.h @@ -33,15 +33,6 @@ struct ice_vsi_ctx { struct ice_q_ctx *rdma_q_ctx[ICE_MAX_TRAFFIC_CLASS]; }; -enum ice_sw_fwd_act_type { - ICE_FWD_TO_VSI = 0, - ICE_FWD_TO_VSI_LIST, /* Do not use this when adding filter */ - ICE_FWD_TO_Q, - ICE_FWD_TO_QGRP, - ICE_DROP_PACKET, - ICE_INVAL_ACT -}; - /* Switch recipe ID enum values are specific to hardware */ enum ice_sw_lkup_type { ICE_SW_LKUP_ETHERTYPE = 0, diff --git a/drivers/net/ethernet/intel/ice/ice_type.h b/drivers/net/ethernet/intel/ice/ice_type.h index caf0a02b25f5..ef2ef064a74c 100644 --- a/drivers/net/ethernet/intel/ice/ice_type.h +++ b/drivers/net/ethernet/intel/ice/ice_type.h @@ -1007,6 +1007,15 @@ struct ice_hw_port_stats { u64 fd_sb_match; }; 
+enum ice_sw_fwd_act_type { + ICE_FWD_TO_VSI = 0, + ICE_FWD_TO_VSI_LIST, /* Do not use this when adding filter */ + ICE_FWD_TO_Q, + ICE_FWD_TO_QGRP, + ICE_DROP_PACKET, + ICE_INVAL_ACT +}; + struct ice_aq_get_set_rss_lut_params { u16 vsi_handle; /* software VSI handle */ u16 lut_size; /* size of the LUT buffer */ diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c index ab03010c822d..6fa0968f0912 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c @@ -642,55 +642,6 @@ static void ice_trigger_vf_reset(struct ice_vf *vf, bool is_vflr, bool is_pfr) } } -/** - * ice_vsi_manage_pvid - Enable or disable port VLAN for VSI - * @vsi: the VSI to update - * @pvid_info: VLAN ID and QoS used to set the PVID VSI context field - * @enable: true for enable PVID false for disable - */ -static int ice_vsi_manage_pvid(struct ice_vsi *vsi, u16 pvid_info, bool enable) -{ - struct ice_hw *hw = &vsi->back->hw; - struct ice_aqc_vsi_props *info; - struct ice_vsi_ctx *ctxt; - int ret; - - ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL); - if (!ctxt) - return -ENOMEM; - - ctxt->info = vsi->info; - info = &ctxt->info; - if (enable) { - info->vlan_flags = ICE_AQ_VSI_VLAN_MODE_UNTAGGED | - ICE_AQ_VSI_PVLAN_INSERT_PVID | - ICE_AQ_VSI_VLAN_EMOD_STR; - info->sw_flags2 |= ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA; - } else { - info->vlan_flags = ICE_AQ_VSI_VLAN_EMOD_NOTHING | - ICE_AQ_VSI_VLAN_MODE_ALL; - info->sw_flags2 &= ~ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA; - } - - info->pvid = cpu_to_le16(pvid_info); - info->valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_VLAN_VALID | - ICE_AQ_VSI_PROP_SW_VALID); - - ret = ice_update_vsi(hw, vsi->idx, ctxt, NULL); - if (ret) { - dev_info(ice_hw_to_dev(hw), "update VSI for port VLAN failed, err %d aq_err %s\n", - ret, ice_aq_str(hw->adminq.sq_last_status)); - goto out; - } - - vsi->info.vlan_flags = info->vlan_flags; - vsi->info.sw_flags2 = info->sw_flags2; - vsi->info.pvid = info->pvid; -out: - kfree(ctxt); - return ret; -} - /** * ice_vf_get_port_info - Get the VF's port info structure * @vf: VF used to get the port info structure for @@ -815,7 +766,7 @@ static int ice_vf_rebuild_host_vlan_cfg(struct ice_vf *vf) int err; if (vf->port_vlan_info) { - err = ice_vsi_manage_pvid(vsi, vf->port_vlan_info, true); + err = vsi->vlan_ops.set_port_vlan(vsi, vf->port_vlan_info); if (err) { dev_err(dev, "failed to configure port VLAN via VSI parameters for VF %u, error %d\n", vf->vf_id, err); @@ -826,7 +777,7 @@ static int ice_vf_rebuild_host_vlan_cfg(struct ice_vf *vf) } /* vlan_id will either be 0 or the port VLAN number */ - err = ice_vsi_add_vlan(vsi, vlan_id, ICE_FWD_TO_VSI); + err = vsi->vlan_ops.add_vlan(vsi, vlan_id, ICE_FWD_TO_VSI); if (err) { dev_err(dev, "failed to add %s VLAN %u filter for VF %u, error %d\n", vf->port_vlan_info ? 
"port" : "", vlan_id, vf->vf_id, @@ -837,37 +788,6 @@ static int ice_vf_rebuild_host_vlan_cfg(struct ice_vf *vf) return 0; } -static int ice_cfg_vlan_antispoof(struct ice_vsi *vsi, bool enable) -{ - struct ice_vsi_ctx *ctx; - int err; - - ctx = kzalloc(sizeof(*ctx), GFP_KERNEL); - if (!ctx) - return -ENOMEM; - - ctx->info.sec_flags = vsi->info.sec_flags; - ctx->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_SECURITY_VALID); - - if (enable) - ctx->info.sec_flags |= ICE_AQ_VSI_SEC_TX_VLAN_PRUNE_ENA << - ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S; - else - ctx->info.sec_flags &= ~(ICE_AQ_VSI_SEC_TX_VLAN_PRUNE_ENA << - ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S); - - err = ice_update_vsi(&vsi->back->hw, vsi->idx, ctx, NULL); - if (err) - dev_err(ice_pf_to_dev(vsi->back), "Failed to configure Tx VLAN anti-spoof %s for VSI %d, error %d\n", - enable ? "ON" : "OFF", vsi->vsi_num, err); - else - vsi->info.sec_flags = ctx->info.sec_flags; - - kfree(ctx); - - return err; -} - static int ice_cfg_mac_antispoof(struct ice_vsi *vsi, bool enable) { struct ice_vsi_ctx *ctx; @@ -905,7 +825,7 @@ static int ice_vsi_ena_spoofchk(struct ice_vsi *vsi) { int err; - err = ice_cfg_vlan_antispoof(vsi, true); + err = vsi->vlan_ops.ena_tx_filtering(vsi); if (err) return err; @@ -920,7 +840,7 @@ static int ice_vsi_dis_spoofchk(struct ice_vsi *vsi) { int err; - err = ice_cfg_vlan_antispoof(vsi, false); + err = vsi->vlan_ops.dis_tx_filtering(vsi); if (err) return err; @@ -3131,9 +3051,9 @@ static int ice_vc_cfg_promiscuous_mode_msg(struct ice_vf *vf, u8 *msg) if (vsi->num_vlan || vf->port_vlan_info) { if (rm_promisc) - ret = ice_cfg_vlan_pruning(vsi, true); + ret = vsi->vlan_ops.ena_rx_filtering(vsi); else - ret = ice_cfg_vlan_pruning(vsi, false); + ret = vsi->vlan_ops.dis_rx_filtering(vsi); if (ret) { dev_err(dev, "Failed to configure VLAN pruning in promiscuous mode\n"); v_ret = VIRTCHNL_STATUS_ERR_PARAM; @@ -4330,7 +4250,7 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) if (!vid) continue; - status = ice_vsi_add_vlan(vsi, vid, ICE_FWD_TO_VSI); + status = vsi->vlan_ops.add_vlan(vsi, vid, ICE_FWD_TO_VSI); if (status) { v_ret = VIRTCHNL_STATUS_ERR_PARAM; goto error_param; @@ -4339,7 +4259,7 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) /* Enable VLAN pruning when non-zero VLAN is added */ if (!vlan_promisc && vid && !ice_vsi_is_vlan_pruning_ena(vsi)) { - status = ice_cfg_vlan_pruning(vsi, true); + status = vsi->vlan_ops.ena_rx_filtering(vsi); if (status) { v_ret = VIRTCHNL_STATUS_ERR_PARAM; dev_err(dev, "Enable VLAN pruning on VLAN ID: %d failed error-%d\n", @@ -4381,10 +4301,7 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) if (!vid) continue; - /* Make sure ice_vsi_kill_vlan is successful before - * updating VLAN information - */ - status = ice_vsi_kill_vlan(vsi, vid); + status = vsi->vlan_ops.del_vlan(vsi, vid); if (status) { v_ret = VIRTCHNL_STATUS_ERR_PARAM; goto error_param; @@ -4393,7 +4310,7 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) /* Disable VLAN pruning when only VLAN 0 is left */ if (vsi->num_vlan == 1 && ice_vsi_is_vlan_pruning_ena(vsi)) - ice_cfg_vlan_pruning(vsi, false); + status = vsi->vlan_ops.dis_rx_filtering(vsi); /* Disable Unicast/Multicast VLAN promiscuous mode */ if (vlan_promisc) { @@ -4462,7 +4379,7 @@ static int ice_vc_ena_vlan_stripping(struct ice_vf *vf) } vsi = ice_get_vf_vsi(vf); - if (ice_vsi_manage_vlan_stripping(vsi, true)) + if (vsi->vlan_ops.ena_stripping(vsi)) v_ret = 
VIRTCHNL_STATUS_ERR_PARAM; error_param: @@ -4497,7 +4414,7 @@ static int ice_vc_dis_vlan_stripping(struct ice_vf *vf) goto error_param; } - if (ice_vsi_manage_vlan_stripping(vsi, false)) + if (vsi->vlan_ops.dis_stripping(vsi)) v_ret = VIRTCHNL_STATUS_ERR_PARAM; error_param: @@ -4527,9 +4444,9 @@ static int ice_vf_init_vlan_stripping(struct ice_vf *vf) return 0; if (ice_vf_vlan_offload_ena(vf->driver_caps)) - return ice_vsi_manage_vlan_stripping(vsi, true); + return vsi->vlan_ops.ena_stripping(vsi); else - return ice_vsi_manage_vlan_stripping(vsi, false); + return vsi->vlan_ops.dis_stripping(vsi); } static struct ice_vc_vf_ops ice_vc_vf_dflt_ops = { diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c new file mode 100644 index 000000000000..6b0a4bf28305 --- /dev/null +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c @@ -0,0 +1,326 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (C) 2019-2021, Intel Corporation. */ + +#include "ice_vsi_vlan_lib.h" +#include "ice_lib.h" +#include "ice_fltr.h" +#include "ice.h" + +/** + * ice_vsi_add_vlan - default add VLAN implementation for all VSI types + * @vsi: VSI being configured + * @vid: VLAN ID to be added + * @action: filter action to be performed on match + */ +int +ice_vsi_add_vlan(struct ice_vsi *vsi, u16 vid, enum ice_sw_fwd_act_type action) +{ + int err = 0; + + if (!ice_fltr_add_vlan(vsi, vid, action)) { + vsi->num_vlan++; + } else { + err = -ENODEV; + dev_err(ice_pf_to_dev(vsi->back), "Failure Adding VLAN %d on VSI %i\n", + vid, vsi->vsi_num); + } + + return err; +} + +/** + * ice_vsi_del_vlan - default del VLAN implementation for all VSI types + * @vsi: VSI being configured + * @vid: VLAN ID to be removed + */ +int ice_vsi_del_vlan(struct ice_vsi *vsi, u16 vid) +{ + struct ice_pf *pf = vsi->back; + struct device *dev; + int err; + + dev = ice_pf_to_dev(pf); + + err = ice_fltr_remove_vlan(vsi, vid, ICE_FWD_TO_VSI); + if (!err) { + vsi->num_vlan--; + } else if (err == -ENOENT) { + dev_dbg(dev, "Failed to remove VLAN %d on VSI %i, it does not exist\n", + vid, vsi->vsi_num); + err = 0; + } else { + dev_err(dev, "Error removing VLAN %d on VSI %i error: %d\n", + vid, vsi->vsi_num, err); + } + + return err; +} + +/** + * ice_vsi_manage_vlan_insertion - Manage VLAN insertion for the VSI for Tx + * @vsi: the VSI being changed + */ +static int ice_vsi_manage_vlan_insertion(struct ice_vsi *vsi) +{ + struct ice_hw *hw = &vsi->back->hw; + struct ice_vsi_ctx *ctxt; + int err; + + ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL); + if (!ctxt) + return -ENOMEM; + + /* Here we are configuring the VSI to let the driver add VLAN tags by + * setting vlan_flags to ICE_AQ_VSI_VLAN_MODE_ALL. The actual VLAN tag + * insertion happens in the Tx hot path, in ice_tx_map. 
+ */ + ctxt->info.vlan_flags = ICE_AQ_VSI_VLAN_MODE_ALL; + + /* Preserve existing VLAN strip setting */ + ctxt->info.vlan_flags |= (vsi->info.vlan_flags & + ICE_AQ_VSI_VLAN_EMOD_M); + + ctxt->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_VLAN_VALID); + + err = ice_update_vsi(hw, vsi->idx, ctxt, NULL); + if (err) { + dev_err(ice_pf_to_dev(vsi->back), "update VSI for VLAN insert failed, err %d aq_err %s\n", + err, ice_aq_str(hw->adminq.sq_last_status)); + goto out; + } + + vsi->info.vlan_flags = ctxt->info.vlan_flags; +out: + kfree(ctxt); + return err; +} + +/** + * ice_vsi_manage_vlan_stripping - Manage VLAN stripping for the VSI for Rx + * @vsi: the VSI being changed + * @ena: boolean value indicating if this is a enable or disable request + */ +static int ice_vsi_manage_vlan_stripping(struct ice_vsi *vsi, bool ena) +{ + struct ice_hw *hw = &vsi->back->hw; + struct ice_vsi_ctx *ctxt; + int err; + + /* do not allow modifying VLAN stripping when a port VLAN is configured + * on this VSI + */ + if (vsi->info.pvid) + return 0; + + ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL); + if (!ctxt) + return -ENOMEM; + + /* Here we are configuring what the VSI should do with the VLAN tag in + * the Rx packet. We can either leave the tag in the packet or put it in + * the Rx descriptor. + */ + if (ena) + /* Strip VLAN tag from Rx packet and put it in the desc */ + ctxt->info.vlan_flags = ICE_AQ_VSI_VLAN_EMOD_STR_BOTH; + else + /* Disable stripping. Leave tag in packet */ + ctxt->info.vlan_flags = ICE_AQ_VSI_VLAN_EMOD_NOTHING; + + /* Allow all packets untagged/tagged */ + ctxt->info.vlan_flags |= ICE_AQ_VSI_VLAN_MODE_ALL; + + ctxt->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_VLAN_VALID); + + err = ice_update_vsi(hw, vsi->idx, ctxt, NULL); + if (err) { + dev_err(ice_pf_to_dev(vsi->back), "update VSI for VLAN strip failed, ena = %d err %d aq_err %s\n", + ena, err, ice_aq_str(hw->adminq.sq_last_status)); + goto out; + } + + vsi->info.vlan_flags = ctxt->info.vlan_flags; +out: + kfree(ctxt); + return err; +} + +int ice_vsi_ena_stripping(struct ice_vsi *vsi) +{ + return ice_vsi_manage_vlan_stripping(vsi, true); +} + +int ice_vsi_dis_stripping(struct ice_vsi *vsi) +{ + return ice_vsi_manage_vlan_stripping(vsi, false); +} + +int ice_vsi_ena_insertion(struct ice_vsi *vsi) +{ + return ice_vsi_manage_vlan_insertion(vsi); +} + +int ice_vsi_dis_insertion(struct ice_vsi *vsi) +{ + return ice_vsi_manage_vlan_insertion(vsi); +} + +/** + * ice_vsi_manage_pvid - Enable or disable port VLAN for VSI + * @vsi: the VSI to update + * @pvid_info: VLAN ID and QoS used to set the PVID VSI context field + * @enable: true for enable PVID false for disable + */ +static int ice_vsi_manage_pvid(struct ice_vsi *vsi, u16 pvid_info, bool enable) +{ + struct ice_hw *hw = &vsi->back->hw; + struct ice_aqc_vsi_props *info; + struct ice_vsi_ctx *ctxt; + int ret; + + ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL); + if (!ctxt) + return -ENOMEM; + + ctxt->info = vsi->info; + info = &ctxt->info; + if (enable) { + info->vlan_flags = ICE_AQ_VSI_VLAN_MODE_UNTAGGED | + ICE_AQ_VSI_PVLAN_INSERT_PVID | + ICE_AQ_VSI_VLAN_EMOD_STR; + info->sw_flags2 |= ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA; + } else { + info->vlan_flags = ICE_AQ_VSI_VLAN_EMOD_NOTHING | + ICE_AQ_VSI_VLAN_MODE_ALL; + info->sw_flags2 &= ~ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA; + } + + info->pvid = cpu_to_le16(pvid_info); + info->valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_VLAN_VALID | + ICE_AQ_VSI_PROP_SW_VALID); + + ret = ice_update_vsi(hw, vsi->idx, ctxt, NULL); + if (ret) { + 
dev_info(ice_hw_to_dev(hw), "update VSI for port VLAN failed, err %d aq_err %s\n", + ret, ice_aq_str(hw->adminq.sq_last_status)); + goto out; + } + + vsi->info.vlan_flags = info->vlan_flags; + vsi->info.sw_flags2 = info->sw_flags2; + vsi->info.pvid = info->pvid; +out: + kfree(ctxt); + return ret; +} + +int ice_vsi_set_port_vlan(struct ice_vsi *vsi, u16 pvid_info) +{ + return ice_vsi_manage_pvid(vsi, pvid_info, true); +} + +/** + * ice_cfg_vlan_pruning - enable or disable VLAN pruning on the VSI + * @vsi: VSI to enable or disable VLAN pruning on + * @ena: set to true to enable VLAN pruning and false to disable it + * + * returns 0 if VSI is updated, negative otherwise + */ +static int ice_cfg_vlan_pruning(struct ice_vsi *vsi, bool ena) +{ + struct ice_vsi_ctx *ctxt; + struct ice_pf *pf; + int status; + + if (!vsi) + return -EINVAL; + + /* Don't enable VLAN pruning if the netdev is currently in promiscuous + * mode. VLAN pruning will be enabled when the interface exits + * promiscuous mode if any VLAN filters are active. + */ + if (vsi->netdev && vsi->netdev->flags & IFF_PROMISC && ena) + return 0; + + pf = vsi->back; + ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL); + if (!ctxt) + return -ENOMEM; + + ctxt->info = vsi->info; + + if (ena) + ctxt->info.sw_flags2 |= ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA; + else + ctxt->info.sw_flags2 &= ~ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA; + + ctxt->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_SW_VALID); + + status = ice_update_vsi(&pf->hw, vsi->idx, ctxt, NULL); + if (status) { + netdev_err(vsi->netdev, "%sabling VLAN pruning on VSI handle: %d, VSI HW ID: %d failed, err = %d, aq_err = %s\n", + ena ? "En" : "Dis", vsi->idx, vsi->vsi_num, status, + ice_aq_str(pf->hw.adminq.sq_last_status)); + goto err_out; + } + + vsi->info.sw_flags2 = ctxt->info.sw_flags2; + + kfree(ctxt); + return 0; + +err_out: + kfree(ctxt); + return status; +} + +int ice_vsi_ena_rx_vlan_filtering(struct ice_vsi *vsi) +{ + return ice_cfg_vlan_pruning(vsi, true); +} + +int ice_vsi_dis_rx_vlan_filtering(struct ice_vsi *vsi) +{ + return ice_cfg_vlan_pruning(vsi, false); +} + +static int ice_cfg_vlan_antispoof(struct ice_vsi *vsi, bool enable) +{ + struct ice_vsi_ctx *ctx; + int err; + + ctx = kzalloc(sizeof(*ctx), GFP_KERNEL); + if (!ctx) + return -ENOMEM; + + ctx->info.sec_flags = vsi->info.sec_flags; + ctx->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_SECURITY_VALID); + + if (enable) + ctx->info.sec_flags |= ICE_AQ_VSI_SEC_TX_VLAN_PRUNE_ENA << + ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S; + else + ctx->info.sec_flags &= ~(ICE_AQ_VSI_SEC_TX_VLAN_PRUNE_ENA << + ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S); + + err = ice_update_vsi(&vsi->back->hw, vsi->idx, ctx, NULL); + if (err) + dev_err(ice_pf_to_dev(vsi->back), "Failed to configure Tx VLAN anti-spoof %s for VSI %d, error %d\n", + enable ? "ON" : "OFF", vsi->vsi_num, err); + else + vsi->info.sec_flags = ctx->info.sec_flags; + + kfree(ctx); + + return err; +} + +int ice_vsi_ena_tx_vlan_filtering(struct ice_vsi *vsi) +{ + return ice_cfg_vlan_antispoof(vsi, true); +} + +int ice_vsi_dis_tx_vlan_filtering(struct ice_vsi *vsi) +{ + return ice_cfg_vlan_antispoof(vsi, false); +} diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h new file mode 100644 index 000000000000..f9fe33026306 --- /dev/null +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h @@ -0,0 +1,27 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2019-2021, Intel Corporation. 
*/ + +#ifndef _ICE_VSI_VLAN_LIB_H_ +#define _ICE_VSI_VLAN_LIB_H_ + +#include +#include "ice_type.h" + +struct ice_vsi; + +int +ice_vsi_add_vlan(struct ice_vsi *vsi, u16 vid, enum ice_sw_fwd_act_type action); +int ice_vsi_del_vlan(struct ice_vsi *vsi, u16 vid); + +int ice_vsi_ena_stripping(struct ice_vsi *vsi); +int ice_vsi_dis_stripping(struct ice_vsi *vsi); +int ice_vsi_ena_insertion(struct ice_vsi *vsi); +int ice_vsi_dis_insertion(struct ice_vsi *vsi); +int ice_vsi_set_port_vlan(struct ice_vsi *vsi, u16 pvid_info); + +int ice_vsi_ena_rx_vlan_filtering(struct ice_vsi *vsi); +int ice_vsi_dis_rx_vlan_filtering(struct ice_vsi *vsi); +int ice_vsi_ena_tx_vlan_filtering(struct ice_vsi *vsi); +int ice_vsi_dis_tx_vlan_filtering(struct ice_vsi *vsi); + +#endif /* _ICE_VSI_VLAN_LIB_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.c b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.c new file mode 100644 index 000000000000..3bab6c025856 --- /dev/null +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.c @@ -0,0 +1,20 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (C) 2019-2021, Intel Corporation. */ + +#include "ice_vsi_vlan_ops.h" +#include "ice.h" + +void ice_vsi_init_vlan_ops(struct ice_vsi *vsi) +{ + vsi->vlan_ops.add_vlan = ice_vsi_add_vlan; + vsi->vlan_ops.del_vlan = ice_vsi_del_vlan; + vsi->vlan_ops.ena_stripping = ice_vsi_ena_stripping; + vsi->vlan_ops.dis_stripping = ice_vsi_dis_stripping; + vsi->vlan_ops.ena_insertion = ice_vsi_ena_insertion; + vsi->vlan_ops.dis_insertion = ice_vsi_dis_insertion; + vsi->vlan_ops.ena_rx_filtering = ice_vsi_ena_rx_vlan_filtering; + vsi->vlan_ops.dis_rx_filtering = ice_vsi_dis_rx_vlan_filtering; + vsi->vlan_ops.ena_tx_filtering = ice_vsi_ena_tx_vlan_filtering; + vsi->vlan_ops.dis_tx_filtering = ice_vsi_dis_tx_vlan_filtering; + vsi->vlan_ops.set_port_vlan = ice_vsi_set_port_vlan; +} diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h new file mode 100644 index 000000000000..522169742661 --- /dev/null +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h @@ -0,0 +1,28 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2019-2021, Intel Corporation. 
*/ + +#ifndef _ICE_VSI_VLAN_OPS_H_ +#define _ICE_VSI_VLAN_OPS_H_ + +#include "ice_type.h" +#include "ice_vsi_vlan_lib.h" + +struct ice_vsi; + +struct ice_vsi_vlan_ops { + int (*add_vlan)(struct ice_vsi *vsi, u16 vid, enum ice_sw_fwd_act_type action); + int (*del_vlan)(struct ice_vsi *vsi, u16 vid); + int (*ena_stripping)(struct ice_vsi *vsi); + int (*dis_stripping)(struct ice_vsi *vsi); + int (*ena_insertion)(struct ice_vsi *vsi); + int (*dis_insertion)(struct ice_vsi *vsi); + int (*ena_rx_filtering)(struct ice_vsi *vsi); + int (*dis_rx_filtering)(struct ice_vsi *vsi); + int (*ena_tx_filtering)(struct ice_vsi *vsi); + int (*dis_tx_filtering)(struct ice_vsi *vsi); + int (*set_port_vlan)(struct ice_vsi *vsi, u16 pvid_info); +}; + +void ice_vsi_init_vlan_ops(struct ice_vsi *vsi); + +#endif /* _ICE_VSI_VLAN_OPS_H_ */ -- 2.20.1 From anthony.l.nguyen at intel.com Thu Dec 2 16:38:49 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Thu, 2 Dec 2021 08:38:49 -0800 Subject: [Intel-wired-lan] [PATCH net-next v3 11/14] ice: Support configuring the device to Double VLAN Mode In-Reply-To: <20211202163852.36436-1-anthony.l.nguyen@intel.com> References: <20211202163852.36436-1-anthony.l.nguyen@intel.com> Message-ID: <20211202163852.36436-11-anthony.l.nguyen@intel.com> From: Brett Creeley In order to support configuring the device in Double VLAN Mode (DVM), the DDP and FW have to support DVM. If both support DVM, the PF that downloads the package needs to update the default recipes, set the VLAN mode, and update boost TCAM entries. To support updating the default recipes in DVM, add support for updating an existing switch recipe's lkup_idx and mask. This is done by first calling the get recipe AQ (0x0292) with the desired recipe ID. Then, if that is successful update one of the lookup indices (lkup_idx) and its associated mask if the mask is valid otherwise the already existing mask will be used. The VLAN mode of the device has to be configured while the global configuration lock is held while downloading the DDP, specifically after the DDP has been downloaded. If supported, the device will default to DVM. 
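To make the locking/ordering constraint concrete, here is a standalone sketch (stub functions with illustrative names, not the driver's real ice_* helpers) of the sequence the patch enforces in ice_dwnld_cfg_bufs(): the VLAN mode is selected and written while the Global Configuration Lock is still held, immediately after the last package buffer has been downloaded, and only then is the lock released:

    #include <stdio.h>

    /* stand-ins for the firmware interactions; names are illustrative only */
    static int acquire_global_cfg_lock(void) { puts("acquire global cfg lock"); return 0; }
    static void release_global_cfg_lock(void) { puts("release global cfg lock"); }
    static int download_pkg_buffers(void)    { puts("download DDP buffers"); return 0; }
    static int fw_and_ddp_support_dvm(void)  { return 1; /* pretend both do */ }

    static int set_vlan_mode(void)
    {
            /* models ice_set_vlan_mode(): default to DVM when both the
             * firmware and the DDP support it, otherwise stay in SVM
             */
            if (fw_and_ddp_support_dvm())
                    puts("set VLAN mode: DVM");
            else
                    puts("set VLAN mode: SVM");
            return 0;
    }

    int main(void)
    {
            int err = acquire_global_cfg_lock();

            if (err)
                    return err;

            err = download_pkg_buffers();
            if (!err)
                    err = set_vlan_mode();   /* must happen before the lock drops */

            release_global_cfg_lock();
            return err;
    }

The boost TCAM entries and default recipes are then adjusted once the download has completed (ice_post_pkg_dwnld_vlan_mode_cfg() and ice_set_dvm_boost_entries() in the diff below).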
Co-developed-by: Dan Nowlin Signed-off-by: Dan Nowlin Signed-off-by: Brett Creeley Signed-off-by: Tony Nguyen --- v3: Add ICE_DBQ_AQ define drivers/net/ethernet/intel/ice/Makefile | 1 + .../net/ethernet/intel/ice/ice_adminq_cmd.h | 64 ++- drivers/net/ethernet/intel/ice/ice_common.c | 49 +- drivers/net/ethernet/intel/ice/ice_common.h | 3 + .../net/ethernet/intel/ice/ice_flex_pipe.c | 290 ++++++++++-- .../net/ethernet/intel/ice/ice_flex_pipe.h | 13 + .../net/ethernet/intel/ice/ice_flex_type.h | 40 ++ drivers/net/ethernet/intel/ice/ice_main.c | 12 + .../ethernet/intel/ice/ice_pf_vsi_vlan_ops.c | 1 + drivers/net/ethernet/intel/ice/ice_switch.c | 75 +++ drivers/net/ethernet/intel/ice/ice_switch.h | 13 + drivers/net/ethernet/intel/ice/ice_type.h | 10 + .../ethernet/intel/ice/ice_vf_vsi_vlan_ops.c | 1 + .../net/ethernet/intel/ice/ice_vlan_mode.c | 439 ++++++++++++++++++ .../net/ethernet/intel/ice/ice_vlan_mode.h | 13 + .../net/ethernet/intel/ice/ice_vsi_vlan_lib.c | 25 +- .../net/ethernet/intel/ice/ice_vsi_vlan_ops.h | 5 - 17 files changed, 995 insertions(+), 59 deletions(-) create mode 100644 drivers/net/ethernet/intel/ice/ice_vlan_mode.c create mode 100644 drivers/net/ethernet/intel/ice/ice_vlan_mode.h diff --git a/drivers/net/ethernet/intel/ice/Makefile b/drivers/net/ethernet/intel/ice/Makefile index 3ece1df919f8..606ff3522bd4 100644 --- a/drivers/net/ethernet/intel/ice/Makefile +++ b/drivers/net/ethernet/intel/ice/Makefile @@ -23,6 +23,7 @@ ice-y := ice_main.o \ ice_vsi_vlan_lib.o \ ice_fdir.o \ ice_ethtool_fdir.o \ + ice_vlan_mode.o \ ice_flex_pipe.o \ ice_flow.o \ ice_idc.o \ diff --git a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h index b638f9e9ecd9..a23a9ea10751 100644 --- a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h +++ b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h @@ -226,6 +226,15 @@ struct ice_aqc_get_sw_cfg_resp_elem { #define ICE_AQC_GET_SW_CONF_RESP_IS_VF BIT(15) }; +/* Set Port parameters, (direct, 0x0203) */ +struct ice_aqc_set_port_params { + __le16 cmd_flags; +#define ICE_AQC_SET_P_PARAMS_DOUBLE_VLAN_ENA BIT(2) + __le16 bad_frame_vsi; + __le16 swid; + u8 reserved[10]; +}; + /* These resource type defines are used for all switch resource * commands where a resource type is required, such as: * Get Resource Allocation command (indirect 0x0204) @@ -283,6 +292,40 @@ struct ice_aqc_alloc_free_res_elem { struct ice_aqc_res_elem elem[]; }; +/* Request buffer for Set VLAN Mode AQ command (indirect 0x020C) */ +struct ice_aqc_set_vlan_mode { + u8 reserved; + u8 l2tag_prio_tagging; +#define ICE_AQ_VLAN_PRIO_TAG_S 0 +#define ICE_AQ_VLAN_PRIO_TAG_M (0x7 << ICE_AQ_VLAN_PRIO_TAG_S) +#define ICE_AQ_VLAN_PRIO_TAG_NOT_SUPPORTED 0x0 +#define ICE_AQ_VLAN_PRIO_TAG_STAG 0x1 +#define ICE_AQ_VLAN_PRIO_TAG_OUTER_CTAG 0x2 +#define ICE_AQ_VLAN_PRIO_TAG_OUTER_VLAN 0x3 +#define ICE_AQ_VLAN_PRIO_TAG_INNER_CTAG 0x4 +#define ICE_AQ_VLAN_PRIO_TAG_MAX 0x4 +#define ICE_AQ_VLAN_PRIO_TAG_ERROR 0x7 + u8 l2tag_reserved[64]; + u8 rdma_packet; +#define ICE_AQ_VLAN_RDMA_TAG_S 0 +#define ICE_AQ_VLAN_RDMA_TAG_M (0x3F << ICE_AQ_VLAN_RDMA_TAG_S) +#define ICE_AQ_SVM_VLAN_RDMA_PKT_FLAG_SETTING 0x10 +#define ICE_AQ_DVM_VLAN_RDMA_PKT_FLAG_SETTING 0x1A + u8 rdma_reserved[2]; + u8 mng_vlan_prot_id; +#define ICE_AQ_VLAN_MNG_PROTOCOL_ID_OUTER 0x10 +#define ICE_AQ_VLAN_MNG_PROTOCOL_ID_INNER 0x11 + u8 prot_id_reserved[30]; +}; + +/* Response buffer for Get VLAN Mode AQ command (indirect 0x020D) */ +struct ice_aqc_get_vlan_mode { + u8 vlan_mode; +#define 
ICE_AQ_VLAN_MODE_DVM_ENA BIT(0) + u8 l2tag_prio_tagging; + u8 reserved[98]; +}; + /* Add VSI (indirect 0x0210) * Update VSI (indirect 0x0211) * Get VSI (indirect 0x0212) @@ -494,9 +537,13 @@ struct ice_aqc_add_get_recipe { struct ice_aqc_recipe_content { u8 rid; +#define ICE_AQ_RECIPE_ID_S 0 +#define ICE_AQ_RECIPE_ID_M (0x3F << ICE_AQ_RECIPE_ID_S) #define ICE_AQ_RECIPE_ID_IS_ROOT BIT(7) #define ICE_AQ_SW_ID_LKUP_IDX 0 u8 lkup_indx[5]; +#define ICE_AQ_RECIPE_LKUP_DATA_S 0 +#define ICE_AQ_RECIPE_LKUP_DATA_M (0x3F << ICE_AQ_RECIPE_LKUP_DATA_S) #define ICE_AQ_RECIPE_LKUP_IGNORE BIT(7) #define ICE_AQ_SW_ID_LKUP_MASK 0x00FF __le16 mask[5]; @@ -507,15 +554,25 @@ struct ice_aqc_recipe_content { u8 rsvd0[3]; u8 act_ctrl_join_priority; u8 act_ctrl_fwd_priority; +#define ICE_AQ_RECIPE_FWD_PRIORITY_S 0 +#define ICE_AQ_RECIPE_FWD_PRIORITY_M (0xF << ICE_AQ_RECIPE_FWD_PRIORITY_S) u8 act_ctrl; +#define ICE_AQ_RECIPE_ACT_NEED_PASS_L2 BIT(0) +#define ICE_AQ_RECIPE_ACT_ALLOW_PASS_L2 BIT(1) #define ICE_AQ_RECIPE_ACT_INV_ACT BIT(2) +#define ICE_AQ_RECIPE_ACT_PRUNE_INDX_S 4 +#define ICE_AQ_RECIPE_ACT_PRUNE_INDX_M (0x3 << ICE_AQ_RECIPE_ACT_PRUNE_INDX_S) u8 rsvd1; __le32 dflt_act; +#define ICE_AQ_RECIPE_DFLT_ACT_S 0 +#define ICE_AQ_RECIPE_DFLT_ACT_M (0x7FFFF << ICE_AQ_RECIPE_DFLT_ACT_S) +#define ICE_AQ_RECIPE_DFLT_ACT_VALID BIT(31) }; struct ice_aqc_recipe_data_elem { u8 recipe_indx; u8 resp_bits; +#define ICE_AQ_RECIPE_WAS_UPDATED BIT(0) u8 rsvd0[2]; u8 recipe_bitmap[8]; u8 rsvd1[4]; @@ -1906,7 +1963,7 @@ struct ice_aqc_get_clear_fw_log { }; /* Download Package (indirect 0x0C40) */ -/* Also used for Update Package (indirect 0x0C42) */ +/* Also used for Update Package (indirect 0x0C41 and 0x0C42) */ struct ice_aqc_download_pkg { u8 flags; #define ICE_AQC_DOWNLOAD_PKG_LAST_BUF 0x01 @@ -2032,6 +2089,7 @@ struct ice_aq_desc { struct ice_aqc_sff_eeprom read_write_sff_param; struct ice_aqc_set_port_id_led set_port_id_led; struct ice_aqc_get_sw_cfg get_sw_conf; + struct ice_aqc_set_port_params set_port_params; struct ice_aqc_sw_rules sw_rules; struct ice_aqc_add_get_recipe add_get_recipe; struct ice_aqc_recipe_to_profile recipe_to_profile; @@ -2135,10 +2193,13 @@ enum ice_adminq_opc { /* internal switch commands */ ice_aqc_opc_get_sw_cfg = 0x0200, + ice_aqc_opc_set_port_params = 0x0203, /* Alloc/Free/Get Resources */ ice_aqc_opc_alloc_res = 0x0208, ice_aqc_opc_free_res = 0x0209, + ice_aqc_opc_set_vlan_mode_parameters = 0x020C, + ice_aqc_opc_get_vlan_mode_parameters = 0x020D, /* VSI commands */ ice_aqc_opc_add_vsi = 0x0210, @@ -2230,6 +2291,7 @@ enum ice_adminq_opc { /* package commands */ ice_aqc_opc_download_pkg = 0x0C40, + ice_aqc_opc_upload_section = 0x0C41, ice_aqc_opc_update_pkg = 0x0C42, ice_aqc_opc_get_pkg_info_list = 0x0C43, diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c index 44ed1c9161dc..ede131189a8f 100644 --- a/drivers/net/ethernet/intel/ice/ice_common.c +++ b/drivers/net/ethernet/intel/ice/ice_common.c @@ -1518,16 +1518,27 @@ ice_aq_send_cmd(struct ice_hw *hw, struct ice_aq_desc *desc, void *buf, /* When a package download is in process (i.e. when the firmware's * Global Configuration Lock resource is held), only the Download - * Package, Get Version, Get Package Info List and Release Resource - * (with resource ID set to Global Config Lock) AdminQ commands are - * allowed; all others must block until the package download completes - * and the Global Config Lock is released. See also - * ice_acquire_global_cfg_lock(). 
+ * Package, Get Version, Get Package Info List, Upload Section, + * Update Package, Set Port Parameters, Get/Set VLAN Mode Parameters, + * Add Recipe, Set Recipes to Profile Association, Get Recipe, and Get + * Recipes to Profile Association, and Release Resource (with resource + * ID set to Global Config Lock) AdminQ commands are allowed; all others + * must block until the package download completes and the Global Config + * Lock is released. See also ice_acquire_global_cfg_lock(). */ switch (le16_to_cpu(desc->opcode)) { case ice_aqc_opc_download_pkg: case ice_aqc_opc_get_pkg_info_list: case ice_aqc_opc_get_ver: + case ice_aqc_opc_upload_section: + case ice_aqc_opc_update_pkg: + case ice_aqc_opc_set_port_params: + case ice_aqc_opc_get_vlan_mode_parameters: + case ice_aqc_opc_set_vlan_mode_parameters: + case ice_aqc_opc_add_recipe: + case ice_aqc_opc_recipe_to_profile: + case ice_aqc_opc_get_recipe: + case ice_aqc_opc_get_recipe_to_profile: break; case ice_aqc_opc_release_res: if (le16_to_cpu(cmd->res_id) == ICE_AQC_RES_ID_GLBL_LOCK) @@ -2737,6 +2748,34 @@ void ice_clear_pxe_mode(struct ice_hw *hw) ice_aq_clear_pxe_mode(hw); } +/** + * ice_aq_set_port_params - set physical port parameters. + * @pi: pointer to the port info struct + * @double_vlan: if set double VLAN is enabled + * @cd: pointer to command details structure or NULL + * + * Set Physical port parameters (0x0203) + */ +int +ice_aq_set_port_params(struct ice_port_info *pi, bool double_vlan, + struct ice_sq_cd *cd) + +{ + struct ice_aqc_set_port_params *cmd; + struct ice_hw *hw = pi->hw; + struct ice_aq_desc desc; + u16 cmd_flags = 0; + + cmd = &desc.params.set_port_params; + + ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_port_params); + if (double_vlan) + cmd_flags |= ICE_AQC_SET_P_PARAMS_DOUBLE_VLAN_ENA; + cmd->cmd_flags = cpu_to_le16(cmd_flags); + + return ice_aq_send_cmd(hw, &desc, NULL, 0, cd); +} + /** * ice_get_link_speed_based_on_phy_type - returns link speed * @phy_type_low: lower part of phy_type diff --git a/drivers/net/ethernet/intel/ice/ice_common.h b/drivers/net/ethernet/intel/ice/ice_common.h index 209a3cc113d4..893333b8b738 100644 --- a/drivers/net/ethernet/intel/ice/ice_common.h +++ b/drivers/net/ethernet/intel/ice/ice_common.h @@ -85,6 +85,9 @@ int ice_aq_send_driver_ver(struct ice_hw *hw, struct ice_driver_ver *dv, struct ice_sq_cd *cd); int +ice_aq_set_port_params(struct ice_port_info *pi, bool double_vlan, + struct ice_sq_cd *cd); +int ice_aq_get_phy_caps(struct ice_port_info *pi, bool qual_mods, u8 report_mode, struct ice_aqc_get_phy_caps_data *caps, struct ice_sq_cd *cd); diff --git a/drivers/net/ethernet/intel/ice/ice_flex_pipe.c b/drivers/net/ethernet/intel/ice/ice_flex_pipe.c index b197d3a72014..434169351052 100644 --- a/drivers/net/ethernet/intel/ice/ice_flex_pipe.c +++ b/drivers/net/ethernet/intel/ice/ice_flex_pipe.c @@ -5,9 +5,17 @@ #include "ice_flex_pipe.h" #include "ice_flow.h" +/* For supporting double VLAN mode, it is necessary to enable or disable certain + * boost tcam entries. The metadata labels names that match the following + * prefixes will be saved to allow enabling double VLAN mode. + */ +#define ICE_DVM_PRE "BOOST_MAC_VLAN_DVM" /* enable these entries */ +#define ICE_SVM_PRE "BOOST_MAC_VLAN_SVM" /* disable these entries */ + /* To support tunneling entries by PF, the package will append the PF number to * the label; for example TNL_VXLAN_PF0, TNL_VXLAN_PF1, TNL_VXLAN_PF2, etc. 
*/ +#define ICE_TNL_PRE "TNL_" static const struct ice_tunnel_type_scan tnls[] = { { TNL_VXLAN, "TNL_VXLAN_PF" }, { TNL_GENEVE, "TNL_GENEVE_PF" }, @@ -525,6 +533,55 @@ ice_enum_labels(struct ice_seg *ice_seg, u32 type, struct ice_pkg_enum *state, return label->name; } +/** + * ice_add_tunnel_hint + * @hw: pointer to the HW structure + * @label_name: label text + * @val: value of the tunnel port boost entry + */ +static void ice_add_tunnel_hint(struct ice_hw *hw, char *label_name, u16 val) +{ + if (hw->tnl.count < ICE_TUNNEL_MAX_ENTRIES) { + u16 i; + + for (i = 0; tnls[i].type != TNL_LAST; i++) { + size_t len = strlen(tnls[i].label_prefix); + + /* Look for matching label start, before continuing */ + if (strncmp(label_name, tnls[i].label_prefix, len)) + continue; + + /* Make sure this label matches our PF. Note that the PF + * character ('0' - '7') will be located where our + * prefix string's null terminator is located. + */ + if ((label_name[len] - '0') == hw->pf_id) { + hw->tnl.tbl[hw->tnl.count].type = tnls[i].type; + hw->tnl.tbl[hw->tnl.count].valid = false; + hw->tnl.tbl[hw->tnl.count].boost_addr = val; + hw->tnl.tbl[hw->tnl.count].port = 0; + hw->tnl.count++; + break; + } + } + } +} + +/** + * ice_add_dvm_hint + * @hw: pointer to the HW structure + * @val: value of the boost entry + * @enable: true if entry needs to be enabled, or false if needs to be disabled + */ +static void ice_add_dvm_hint(struct ice_hw *hw, u16 val, bool enable) +{ + if (hw->dvm_upd.count < ICE_DVM_MAX_ENTRIES) { + hw->dvm_upd.tbl[hw->dvm_upd.count].boost_addr = val; + hw->dvm_upd.tbl[hw->dvm_upd.count].enable = enable; + hw->dvm_upd.count++; + } +} + /** * ice_init_pkg_hints * @hw: pointer to the HW structure @@ -551,32 +608,23 @@ static void ice_init_pkg_hints(struct ice_hw *hw, struct ice_seg *ice_seg) label_name = ice_enum_labels(ice_seg, ICE_SID_LBL_RXPARSER_TMEM, &state, &val); - while (label_name && hw->tnl.count < ICE_TUNNEL_MAX_ENTRIES) { - for (i = 0; tnls[i].type != TNL_LAST; i++) { - size_t len = strlen(tnls[i].label_prefix); + while (label_name) { + if (!strncmp(label_name, ICE_TNL_PRE, strlen(ICE_TNL_PRE))) + /* check for a tunnel entry */ + ice_add_tunnel_hint(hw, label_name, val); - /* Look for matching label start, before continuing */ - if (strncmp(label_name, tnls[i].label_prefix, len)) - continue; + /* check for a dvm mode entry */ + else if (!strncmp(label_name, ICE_DVM_PRE, strlen(ICE_DVM_PRE))) + ice_add_dvm_hint(hw, val, true); - /* Make sure this label matches our PF. Note that the PF - * character ('0' - '7') will be located where our - * prefix string's null terminator is located. 
- */ - if ((label_name[len] - '0') == hw->pf_id) { - hw->tnl.tbl[hw->tnl.count].type = tnls[i].type; - hw->tnl.tbl[hw->tnl.count].valid = false; - hw->tnl.tbl[hw->tnl.count].boost_addr = val; - hw->tnl.tbl[hw->tnl.count].port = 0; - hw->tnl.count++; - break; - } - } + /* check for a svm mode entry */ + else if (!strncmp(label_name, ICE_SVM_PRE, strlen(ICE_SVM_PRE))) + ice_add_dvm_hint(hw, val, false); label_name = ice_enum_labels(NULL, 0, &state, &val); } - /* Cache the appropriate boost TCAM entry pointers */ + /* Cache the appropriate boost TCAM entry pointers for tunnels */ for (i = 0; i < hw->tnl.count; i++) { ice_find_boost_entry(ice_seg, hw->tnl.tbl[i].boost_addr, &hw->tnl.tbl[i].boost_entry); @@ -586,6 +634,11 @@ static void ice_init_pkg_hints(struct ice_hw *hw, struct ice_seg *ice_seg) hw->tnl.valid_count[hw->tnl.tbl[i].type]++; } } + + /* Cache the appropriate boost TCAM entry pointers for DVM and SVM */ + for (i = 0; i < hw->dvm_upd.count; i++) + ice_find_boost_entry(ice_seg, hw->dvm_upd.tbl[i].boost_addr, + &hw->dvm_upd.tbl[i].boost_entry); } /* Key creation */ @@ -876,6 +929,27 @@ ice_aq_download_pkg(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf, return status; } +/** + * ice_aq_upload_section + * @hw: pointer to the hardware structure + * @pkg_buf: the package buffer which will receive the section + * @buf_size: the size of the package buffer + * @cd: pointer to command details structure or NULL + * + * Upload Section (0x0C41) + */ +int +ice_aq_upload_section(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf, + u16 buf_size, struct ice_sq_cd *cd) +{ + struct ice_aq_desc desc; + + ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_upload_section); + desc.flags |= cpu_to_le16(ICE_AQ_FLAG_RD); + + return ice_aq_send_cmd(hw, &desc, pkg_buf, buf_size, cd); +} + /** * ice_aq_update_pkg * @hw: pointer to the hardware structure @@ -960,26 +1034,21 @@ ice_find_seg_in_pkg(struct ice_hw *hw, u32 seg_type, } /** - * ice_update_pkg + * ice_update_pkg_no_lock * @hw: pointer to the hardware structure * @bufs: pointer to an array of buffers * @count: the number of buffers in the array - * - * Obtains change lock and updates package. */ static int -ice_update_pkg(struct ice_hw *hw, struct ice_buf *bufs, u32 count) +ice_update_pkg_no_lock(struct ice_hw *hw, struct ice_buf *bufs, u32 count) { - u32 offset, info, i; - int status; - - status = ice_acquire_change_lock(hw, ICE_RES_WRITE); - if (status) - return status; + int status = 0; + u32 i; for (i = 0; i < count; i++) { struct ice_buf_hdr *bh = (struct ice_buf_hdr *)(bufs + i); bool last = ((i + 1) == count); + u32 offset, info; status = ice_aq_update_pkg(hw, bh, le16_to_cpu(bh->data_end), last, &offset, &info, NULL); @@ -991,6 +1060,27 @@ ice_update_pkg(struct ice_hw *hw, struct ice_buf *bufs, u32 count) } } + return status; +} + +/** + * ice_update_pkg + * @hw: pointer to the hardware structure + * @bufs: pointer to an array of buffers + * @count: the number of buffers in the array + * + * Obtains change lock and updates package. 
+ */ +static int ice_update_pkg(struct ice_hw *hw, struct ice_buf *bufs, u32 count) +{ + int status; + + status = ice_acquire_change_lock(hw, ICE_RES_WRITE); + if (status) + return status; + + status = ice_update_pkg_no_lock(hw, bufs, count); + ice_release_change_lock(hw); return status; @@ -1085,6 +1175,13 @@ ice_dwnld_cfg_bufs(struct ice_hw *hw, struct ice_buf *bufs, u32 count) break; } + if (!status) { + status = ice_set_vlan_mode(hw); + if (status) + ice_debug(hw, ICE_DBG_PKG, "Failed to set VLAN mode: err %d\n", + status); + } + ice_release_global_cfg_lock(hw); return state; @@ -1122,6 +1219,7 @@ static enum ice_ddp_state ice_download_pkg(struct ice_hw *hw, struct ice_seg *ice_seg) { struct ice_buf_table *ice_buf_tbl; + int status; ice_debug(hw, ICE_DBG_PKG, "Segment format version: %d.%d.%d.%d\n", ice_seg->hdr.seg_format_ver.major, @@ -1138,8 +1236,12 @@ ice_download_pkg(struct ice_hw *hw, struct ice_seg *ice_seg) ice_debug(hw, ICE_DBG_PKG, "Seg buf count: %d\n", le32_to_cpu(ice_buf_tbl->buf_count)); - return ice_dwnld_cfg_bufs(hw, ice_buf_tbl->buf_array, - le32_to_cpu(ice_buf_tbl->buf_count)); + status = ice_dwnld_cfg_bufs(hw, ice_buf_tbl->buf_array, + le32_to_cpu(ice_buf_tbl->buf_count)); + + ice_post_pkg_dwnld_vlan_mode_cfg(hw); + + return status; } /** @@ -1902,7 +2004,7 @@ void ice_init_prof_result_bm(struct ice_hw *hw) * * Frees a package buffer */ -static void ice_pkg_buf_free(struct ice_hw *hw, struct ice_buf_build *bld) +void ice_pkg_buf_free(struct ice_hw *hw, struct ice_buf_build *bld) { devm_kfree(ice_hw_to_dev(hw), bld); } @@ -2001,6 +2103,43 @@ ice_pkg_buf_alloc_section(struct ice_buf_build *bld, u32 type, u16 size) return NULL; } +/** + * ice_pkg_buf_alloc_single_section + * @hw: pointer to the HW structure + * @type: the section type value + * @size: the size of the section to reserve (in bytes) + * @section: returns pointer to the section + * + * Allocates a package buffer with a single section. + * Note: all package contents must be in Little Endian form. 
+ */ +struct ice_buf_build * +ice_pkg_buf_alloc_single_section(struct ice_hw *hw, u32 type, u16 size, + void **section) +{ + struct ice_buf_build *buf; + + if (!section) + return NULL; + + buf = ice_pkg_buf_alloc(hw); + if (!buf) + return NULL; + + if (ice_pkg_buf_reserve_section(buf, 1)) + goto ice_pkg_buf_alloc_single_section_err; + + *section = ice_pkg_buf_alloc_section(buf, type, size); + if (!*section) + goto ice_pkg_buf_alloc_single_section_err; + + return buf; + +ice_pkg_buf_alloc_single_section_err: + ice_pkg_buf_free(hw, buf); + return NULL; +} + /** * ice_pkg_buf_get_active_sections * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc()) @@ -2028,7 +2167,7 @@ static u16 ice_pkg_buf_get_active_sections(struct ice_buf_build *bld) * * Return a pointer to the buffer's header */ -static struct ice_buf *ice_pkg_buf(struct ice_buf_build *bld) +struct ice_buf *ice_pkg_buf(struct ice_buf_build *bld) { if (!bld) return NULL; @@ -2064,6 +2203,89 @@ ice_get_open_tunnel_port(struct ice_hw *hw, u16 *port, return res; } +/** + * ice_upd_dvm_boost_entry + * @hw: pointer to the HW structure + * @entry: pointer to double vlan boost entry info + */ +static int +ice_upd_dvm_boost_entry(struct ice_hw *hw, struct ice_dvm_entry *entry) +{ + struct ice_boost_tcam_section *sect_rx, *sect_tx; + int status = -ENOSPC; + struct ice_buf_build *bld; + u8 val, dc, nm; + + bld = ice_pkg_buf_alloc(hw); + if (!bld) + return -ENOMEM; + + /* allocate 2 sections, one for Rx parser, one for Tx parser */ + if (ice_pkg_buf_reserve_section(bld, 2)) + goto ice_upd_dvm_boost_entry_err; + + sect_rx = ice_pkg_buf_alloc_section(bld, ICE_SID_RXPARSER_BOOST_TCAM, + struct_size(sect_rx, tcam, 1)); + if (!sect_rx) + goto ice_upd_dvm_boost_entry_err; + sect_rx->count = cpu_to_le16(1); + + sect_tx = ice_pkg_buf_alloc_section(bld, ICE_SID_TXPARSER_BOOST_TCAM, + struct_size(sect_tx, tcam, 1)); + if (!sect_tx) + goto ice_upd_dvm_boost_entry_err; + sect_tx->count = cpu_to_le16(1); + + /* copy original boost entry to update package buffer */ + memcpy(sect_rx->tcam, entry->boost_entry, sizeof(*sect_rx->tcam)); + + /* re-write the don't care and never match bits accordingly */ + if (entry->enable) { + /* all bits are don't care */ + val = 0x00; + dc = 0xFF; + nm = 0x00; + } else { + /* disable, one never match bit, the rest are don't care */ + val = 0x00; + dc = 0xF7; + nm = 0x08; + } + + ice_set_key((u8 *)§_rx->tcam[0].key, sizeof(sect_rx->tcam[0].key), + &val, NULL, &dc, &nm, 0, sizeof(u8)); + + /* exact copy of entry to Tx section entry */ + memcpy(sect_tx->tcam, sect_rx->tcam, sizeof(*sect_tx->tcam)); + + status = ice_update_pkg_no_lock(hw, ice_pkg_buf(bld), 1); + +ice_upd_dvm_boost_entry_err: + ice_pkg_buf_free(hw, bld); + + return status; +} + +/** + * ice_set_dvm_boost_entries + * @hw: pointer to the HW structure + * + * Enable double vlan by updating the appropriate boost tcam entries. 
+ */ +int ice_set_dvm_boost_entries(struct ice_hw *hw) +{ + int status; + u16 i; + + for (i = 0; i < hw->dvm_upd.count; i++) { + status = ice_upd_dvm_boost_entry(hw, &hw->dvm_upd.tbl[i]); + if (status) + return status; + } + + return 0; +} + /** * ice_tunnel_idx_to_entry - convert linear index to the sparse one * @hw: pointer to the HW structure diff --git a/drivers/net/ethernet/intel/ice/ice_flex_pipe.h b/drivers/net/ethernet/intel/ice/ice_flex_pipe.h index dd602285c78e..4f0b151e9e9c 100644 --- a/drivers/net/ethernet/intel/ice/ice_flex_pipe.h +++ b/drivers/net/ethernet/intel/ice/ice_flex_pipe.h @@ -89,6 +89,12 @@ ice_init_prof_result_bm(struct ice_hw *hw); int ice_get_sw_fv_list(struct ice_hw *hw, u8 *prot_ids, u16 ids_cnt, unsigned long *bm, struct list_head *fv_list); +int +ice_pkg_buf_unreserve_section(struct ice_buf_build *bld, u16 count); +u16 ice_pkg_buf_get_free_space(struct ice_buf_build *bld); +int +ice_aq_upload_section(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf, + u16 buf_size, struct ice_sq_cd *cd); bool ice_get_open_tunnel_port(struct ice_hw *hw, u16 *port, enum ice_tunnel_type type); @@ -96,6 +102,7 @@ int ice_udp_tunnel_set_port(struct net_device *netdev, unsigned int table, unsigned int idx, struct udp_tunnel_info *ti); int ice_udp_tunnel_unset_port(struct net_device *netdev, unsigned int table, unsigned int idx, struct udp_tunnel_info *ti); +int ice_set_dvm_boost_entries(struct ice_hw *hw); /* Rx parser PTYPE functions */ bool ice_hw_ptype_ena(struct ice_hw *hw, u16 ptype); @@ -120,4 +127,10 @@ void ice_clear_hw_tbls(struct ice_hw *hw); void ice_free_hw_tbls(struct ice_hw *hw); int ice_rem_prof(struct ice_hw *hw, enum ice_block blk, u64 id); +struct ice_buf_build * +ice_pkg_buf_alloc_single_section(struct ice_hw *hw, u32 type, u16 size, + void **section); +struct ice_buf *ice_pkg_buf(struct ice_buf_build *bld); +void ice_pkg_buf_free(struct ice_hw *hw, struct ice_buf_build *bld); + #endif /* _ICE_FLEX_PIPE_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_flex_type.h b/drivers/net/ethernet/intel/ice/ice_flex_type.h index fc087e0b5292..5735e9542a49 100644 --- a/drivers/net/ethernet/intel/ice/ice_flex_type.h +++ b/drivers/net/ethernet/intel/ice/ice_flex_type.h @@ -162,6 +162,7 @@ struct ice_meta_sect { #define ICE_SID_RXPARSER_MARKER_PTYPE 55 #define ICE_SID_RXPARSER_BOOST_TCAM 56 +#define ICE_SID_RXPARSER_METADATA_INIT 58 #define ICE_SID_TXPARSER_BOOST_TCAM 66 #define ICE_SID_XLT0_PE 80 @@ -442,6 +443,19 @@ struct ice_tunnel_table { u16 valid_count[__TNL_TYPE_CNT]; }; +struct ice_dvm_entry { + u16 boost_addr; + u16 enable; + struct ice_boost_tcam_entry *boost_entry; +}; + +#define ICE_DVM_MAX_ENTRIES 48 + +struct ice_dvm_table { + struct ice_dvm_entry tbl[ICE_DVM_MAX_ENTRIES]; + u16 count; +}; + struct ice_pkg_es { __le16 count; __le16 offset; @@ -662,4 +676,30 @@ enum ice_prof_type { ICE_PROF_TUN_ALL = 0x6, ICE_PROF_ALL = 0xFF, }; + +/* Number of bits/bytes contained in meta init entry. Note, this should be a + * multiple of 32 bits. 
+ */ +#define ICE_META_INIT_BITS 192 +#define ICE_META_INIT_DW_CNT (ICE_META_INIT_BITS / (sizeof(__le32) * \ + BITS_PER_BYTE)) + +/* The meta init Flag field starts at this bit */ +#define ICE_META_FLAGS_ST 123 + +/* The entry and bit to check for Double VLAN Mode (DVM) support */ +#define ICE_META_VLAN_MODE_ENTRY 0 +#define ICE_META_FLAG_VLAN_MODE 60 +#define ICE_META_VLAN_MODE_BIT (ICE_META_FLAGS_ST + \ + ICE_META_FLAG_VLAN_MODE) + +struct ice_meta_init_entry { + __le32 bm[ICE_META_INIT_DW_CNT]; +}; + +struct ice_meta_init_section { + __le16 count; + __le16 offset; + struct ice_meta_init_entry entry; +}; #endif /* _ICE_FLEX_TYPE_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c index ff2b721e0e45..563b597b0a85 100644 --- a/drivers/net/ethernet/intel/ice/ice_main.c +++ b/drivers/net/ethernet/intel/ice/ice_main.c @@ -3555,12 +3555,17 @@ static int ice_tc_indir_block_register(struct ice_vsi *vsi) static int ice_setup_pf_sw(struct ice_pf *pf) { struct device *dev = ice_pf_to_dev(pf); + bool dvm = ice_is_dvm_ena(&pf->hw); struct ice_vsi *vsi; int status; if (ice_is_reset_in_progress(pf->state)) return -EBUSY; + status = ice_aq_set_port_params(pf->hw.port_info, dvm, NULL); + if (status) + return -EIO; + vsi = ice_pf_vsi_setup(pf, pf->hw.port_info); if (!vsi) return -ENOMEM; @@ -6649,6 +6654,7 @@ static void ice_rebuild(struct ice_pf *pf, enum ice_reset_req reset_type) { struct device *dev = ice_pf_to_dev(pf); struct ice_hw *hw = &pf->hw; + bool dvm; int err; if (test_bit(ICE_DOWN, pf->state)) @@ -6712,6 +6718,12 @@ static void ice_rebuild(struct ice_pf *pf, enum ice_reset_req reset_type) goto err_init_ctrlq; } + dvm = ice_is_dvm_ena(hw); + + err = ice_aq_set_port_params(pf->hw.port_info, dvm, NULL); + if (err) + goto err_init_ctrlq; + err = ice_sched_init_port(hw->port_info); if (err) goto err_sched_init_port; diff --git a/drivers/net/ethernet/intel/ice/ice_pf_vsi_vlan_ops.c b/drivers/net/ethernet/intel/ice/ice_pf_vsi_vlan_ops.c index b00360ca6e92..976a03d3bdd5 100644 --- a/drivers/net/ethernet/intel/ice/ice_pf_vsi_vlan_ops.c +++ b/drivers/net/ethernet/intel/ice/ice_pf_vsi_vlan_ops.c @@ -3,6 +3,7 @@ #include "ice_vsi_vlan_ops.h" #include "ice_vsi_vlan_lib.h" +#include "ice_vlan_mode.h" #include "ice.h" #include "ice_pf_vsi_vlan_ops.h" diff --git a/drivers/net/ethernet/intel/ice/ice_switch.c b/drivers/net/ethernet/intel/ice/ice_switch.c index f851a81a7240..04308e5fa224 100644 --- a/drivers/net/ethernet/intel/ice/ice_switch.c +++ b/drivers/net/ethernet/intel/ice/ice_switch.c @@ -1096,6 +1096,64 @@ ice_aq_get_recipe(struct ice_hw *hw, return status; } +/** + * ice_update_recipe_lkup_idx - update a default recipe based on the lkup_idx + * @hw: pointer to the HW struct + * @params: parameters used to update the default recipe + * + * This function only supports updating default recipes and it only supports + * updating a single recipe based on the lkup_idx at a time. + * + * This is done as a read-modify-write operation. First, get the current recipe + * contents based on the recipe's ID. Then modify the field vector index and + * mask if it's valid at the lkup_idx. Finally, use the add recipe AQ to update + * the pre-existing recipe with the modifications. 
+ */ +int +ice_update_recipe_lkup_idx(struct ice_hw *hw, + struct ice_update_recipe_lkup_idx_params *params) +{ + struct ice_aqc_recipe_data_elem *rcp_list; + u16 num_recps = ICE_MAX_NUM_RECIPES; + int status; + + rcp_list = kcalloc(num_recps, sizeof(*rcp_list), GFP_KERNEL); + if (!rcp_list) + return -ENOMEM; + + /* read current recipe list from firmware */ + rcp_list->recipe_indx = params->rid; + status = ice_aq_get_recipe(hw, rcp_list, &num_recps, params->rid, NULL); + if (status) { + ice_debug(hw, ICE_DBG_SW, "Failed to get recipe %d, status %d\n", + params->rid, status); + goto error_out; + } + + /* only modify existing recipe's lkup_idx and mask if valid, while + * leaving all other fields the same, then update the recipe firmware + */ + rcp_list->content.lkup_indx[params->lkup_idx] = params->fv_idx; + if (params->mask_valid) + rcp_list->content.mask[params->lkup_idx] = + cpu_to_le16(params->mask); + + if (params->ignore_valid) + rcp_list->content.lkup_indx[params->lkup_idx] |= + ICE_AQ_RECIPE_LKUP_IGNORE; + + status = ice_aq_add_recipe(hw, &rcp_list[0], 1, NULL); + if (status) + ice_debug(hw, ICE_DBG_SW, "Failed to update recipe %d lkup_idx %d fv_idx %d mask %d mask_valid %s, status %d\n", + params->rid, params->lkup_idx, params->fv_idx, + params->mask, params->mask_valid ? "true" : "false", + status); + +error_out: + kfree(rcp_list); + return status; +} + /** * ice_aq_map_recipe_to_profile - Map recipe to packet profile * @hw: pointer to the HW struct @@ -3873,6 +3931,23 @@ ice_find_recp(struct ice_hw *hw, struct ice_prot_lkup_ext *lkup_exts, return ICE_MAX_NUM_RECIPES; } +/** + * ice_change_proto_id_to_dvm - change proto id in prot_id_tbl + * + * As protocol id for outer vlan is different in dvm and svm, if dvm is + * supported protocol array record for outer vlan has to be modified to + * reflect the value proper for DVM. 
+ */ +void ice_change_proto_id_to_dvm(void) +{ + u8 i; + + for (i = 0; i < ARRAY_SIZE(ice_prot_id_tbl); i++) + if (ice_prot_id_tbl[i].type == ICE_VLAN_OFOS && + ice_prot_id_tbl[i].protocol_id != ICE_VLAN_OF_HW) + ice_prot_id_tbl[i].protocol_id = ICE_VLAN_OF_HW; +} + /** * ice_prot_type_to_id - get protocol ID from protocol type * @type: protocol type diff --git a/drivers/net/ethernet/intel/ice/ice_switch.h b/drivers/net/ethernet/intel/ice/ice_switch.h index 5000cc8276cd..7b42c51a3eb0 100644 --- a/drivers/net/ethernet/intel/ice/ice_switch.h +++ b/drivers/net/ethernet/intel/ice/ice_switch.h @@ -118,6 +118,15 @@ struct ice_fltr_info { u8 lan_en; /* Indicate if packet can be forwarded to the uplink */ }; +struct ice_update_recipe_lkup_idx_params { + u16 rid; + u16 fv_idx; + bool ignore_valid; + u16 mask; + bool mask_valid; + u8 lkup_idx; +}; + struct ice_adv_lkup_elem { enum ice_protocol_type type; union ice_prot_hdr h_u; /* Header values */ @@ -360,4 +369,8 @@ void ice_rm_all_sw_replay_rule_info(struct ice_hw *hw); int ice_aq_sw_rules(struct ice_hw *hw, void *rule_list, u16 rule_list_sz, u8 num_rules, enum ice_adminq_opc opc, struct ice_sq_cd *cd); +int +ice_update_recipe_lkup_idx(struct ice_hw *hw, + struct ice_update_recipe_lkup_idx_params *params); +void ice_change_proto_id_to_dvm(void); #endif /* _ICE_SWITCH_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_type.h b/drivers/net/ethernet/intel/ice/ice_type.h index ef2ef064a74c..bb492e0eaf1b 100644 --- a/drivers/net/ethernet/intel/ice/ice_type.h +++ b/drivers/net/ethernet/intel/ice/ice_type.h @@ -14,6 +14,7 @@ #include "ice_flex_type.h" #include "ice_protocol_type.h" #include "ice_sbq_cmd.h" +#include "ice_vlan_mode.h" static inline bool ice_is_tc_ena(unsigned long bitmap, u8 tc) { @@ -53,6 +54,11 @@ static inline u32 ice_round_to_num(u32 N, u32 R) #define ICE_DBG_AQ_DESC BIT_ULL(25) #define ICE_DBG_AQ_DESC_BUF BIT_ULL(26) #define ICE_DBG_AQ_CMD BIT_ULL(27) +#define ICE_DBG_AQ (ICE_DBG_AQ_MSG | \ + ICE_DBG_AQ_DESC | \ + ICE_DBG_AQ_DESC_BUF | \ + ICE_DBG_AQ_CMD) + #define ICE_DBG_USER BIT_ULL(31) enum ice_aq_res_ids { @@ -919,6 +925,9 @@ struct ice_hw { struct udp_tunnel_nic_shared udp_tunnel_shared; struct udp_tunnel_nic_info udp_tunnel_nic; + /* dvm boost update information */ + struct ice_dvm_table dvm_upd; + /* HW block tables */ struct ice_blk_info blk[ICE_BLK_COUNT]; struct mutex fl_profs_locks[ICE_BLK_COUNT]; /* lock fltr profiles */ @@ -942,6 +951,7 @@ struct ice_hw { struct list_head rss_list_head; struct ice_mbx_snapshot mbx_snapshot; DECLARE_BITMAP(hw_ptype, ICE_FLOW_PTYPE_MAX); + u8 dvm_ena; u16 io_expander_handle; }; diff --git a/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c b/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c index d89577843d68..4be29f97365c 100644 --- a/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c +++ b/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c @@ -3,6 +3,7 @@ #include "ice_vsi_vlan_ops.h" #include "ice_vsi_vlan_lib.h" +#include "ice_vlan_mode.h" #include "ice.h" #include "ice_vf_vsi_vlan_ops.h" #include "ice_virtchnl_pf.h" diff --git a/drivers/net/ethernet/intel/ice/ice_vlan_mode.c b/drivers/net/ethernet/intel/ice/ice_vlan_mode.c new file mode 100644 index 000000000000..1b618de592b7 --- /dev/null +++ b/drivers/net/ethernet/intel/ice/ice_vlan_mode.c @@ -0,0 +1,439 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (C) 2019-2021, Intel Corporation. 
*/ + +#include "ice_common.h" + +/** + * ice_pkg_get_supported_vlan_mode - determine if DDP supports Double VLAN mode + * @hw: pointer to the HW struct + * @dvm: output variable to determine if DDP supports DVM(true) or SVM(false) + */ +static int +ice_pkg_get_supported_vlan_mode(struct ice_hw *hw, bool *dvm) +{ + u16 meta_init_size = sizeof(struct ice_meta_init_section); + struct ice_meta_init_section *sect; + struct ice_buf_build *bld; + int status; + + /* if anything fails, we assume there is no DVM support */ + *dvm = false; + + bld = ice_pkg_buf_alloc_single_section(hw, + ICE_SID_RXPARSER_METADATA_INIT, + meta_init_size, (void **)§); + if (!bld) + return -ENOMEM; + + /* only need to read a single section */ + sect->count = cpu_to_le16(1); + sect->offset = cpu_to_le16(ICE_META_VLAN_MODE_ENTRY); + + status = ice_aq_upload_section(hw, + (struct ice_buf_hdr *)ice_pkg_buf(bld), + ICE_PKG_BUF_SIZE, NULL); + if (!status) { + DECLARE_BITMAP(entry, ICE_META_INIT_BITS); + u32 arr[ICE_META_INIT_DW_CNT]; + u16 i; + + /* convert to host bitmap format */ + for (i = 0; i < ICE_META_INIT_DW_CNT; i++) + arr[i] = le32_to_cpu(sect->entry.bm[i]); + + bitmap_from_arr32(entry, arr, (u16)ICE_META_INIT_BITS); + + /* check if DVM is supported */ + *dvm = test_bit(ICE_META_VLAN_MODE_BIT, entry); + } + + ice_pkg_buf_free(hw, bld); + + return status; +} + +/** + * ice_aq_get_vlan_mode - get the VLAN mode of the device + * @hw: pointer to the HW structure + * @get_params: structure FW fills in based on the current VLAN mode config + * + * Get VLAN Mode Parameters (0x020D) + */ +static int +ice_aq_get_vlan_mode(struct ice_hw *hw, + struct ice_aqc_get_vlan_mode *get_params) +{ + struct ice_aq_desc desc; + + if (!get_params) + return -EINVAL; + + ice_fill_dflt_direct_cmd_desc(&desc, + ice_aqc_opc_get_vlan_mode_parameters); + + return ice_aq_send_cmd(hw, &desc, get_params, sizeof(*get_params), + NULL); +} + +/** + * ice_aq_is_dvm_ena - query FW to check if double VLAN mode is enabled + * @hw: pointer to the HW structure + * + * Returns true if the hardware/firmware is configured in double VLAN mode, + * else return false signaling that the hardware/firmware is configured in + * single VLAN mode. + * + * Also, return false if this call fails for any reason (i.e. firmware doesn't + * support this AQ call). + */ +static bool ice_aq_is_dvm_ena(struct ice_hw *hw) +{ + struct ice_aqc_get_vlan_mode get_params = { 0 }; + int status; + + status = ice_aq_get_vlan_mode(hw, &get_params); + if (status) { + ice_debug(hw, ICE_DBG_AQ, "Failed to get VLAN mode, status %d\n", + status); + return false; + } + + return (get_params.vlan_mode & ICE_AQ_VLAN_MODE_DVM_ENA); +} + +/** + * ice_is_dvm_ena - check if double VLAN mode is enabled + * @hw: pointer to the HW structure + * + * The device is configured in single or double VLAN mode on initialization and + * this cannot be dynamically changed during runtime. Based on this there is no + * need to make an AQ call every time the driver needs to know the VLAN mode. + * Instead, use the cached VLAN mode. + */ +bool ice_is_dvm_ena(struct ice_hw *hw) +{ + return hw->dvm_ena; +} + +/** + * ice_cache_vlan_mode - cache VLAN mode after DDP is downloaded + * @hw: pointer to the HW structure + * + * This is only called after downloading the DDP and after the global + * configuration lock has been released because all ports on a device need to + * cache the VLAN mode. + */ +static void ice_cache_vlan_mode(struct ice_hw *hw) +{ + hw->dvm_ena = ice_aq_is_dvm_ena(hw) ? 
true : false; +} + +/** + * ice_pkg_supports_dvm - find out if DDP supports DVM + * @hw: pointer to the HW structure + */ +static bool ice_pkg_supports_dvm(struct ice_hw *hw) +{ + bool pkg_supports_dvm; + int status; + + status = ice_pkg_get_supported_vlan_mode(hw, &pkg_supports_dvm); + if (status) { + ice_debug(hw, ICE_DBG_PKG, "Failed to get supported VLAN mode, status %d\n", + status); + return false; + } + + return pkg_supports_dvm; +} + +/** + * ice_fw_supports_dvm - find out if FW supports DVM + * @hw: pointer to the HW structure + */ +static bool ice_fw_supports_dvm(struct ice_hw *hw) +{ + struct ice_aqc_get_vlan_mode get_vlan_mode = { 0 }; + int status; + + /* If firmware returns success, then it supports DVM, else it only + * supports SVM + */ + status = ice_aq_get_vlan_mode(hw, &get_vlan_mode); + if (status) { + ice_debug(hw, ICE_DBG_NVM, "Failed to get VLAN mode, status %d\n", + status); + return false; + } + + return true; +} + +/** + * ice_is_dvm_supported - check if Double VLAN Mode is supported + * @hw: pointer to the hardware structure + * + * Returns true if Double VLAN Mode (DVM) is supported and false if only Single + * VLAN Mode (SVM) is supported. In order for DVM to be supported the DDP and + * firmware must support it, otherwise only SVM is supported. This function + * should only be called while the global config lock is held and after the + * package has been successfully downloaded. + */ +static bool ice_is_dvm_supported(struct ice_hw *hw) +{ + if (!ice_pkg_supports_dvm(hw)) { + ice_debug(hw, ICE_DBG_PKG, "DDP doesn't support DVM\n"); + return false; + } + + if (!ice_fw_supports_dvm(hw)) { + ice_debug(hw, ICE_DBG_PKG, "FW doesn't support DVM\n"); + return false; + } + + return true; +} + +#define ICE_EXTERNAL_VLAN_ID_FV_IDX 11 +#define ICE_SW_LKUP_VLAN_LOC_LKUP_IDX 1 +#define ICE_SW_LKUP_VLAN_PKT_FLAGS_LKUP_IDX 2 +#define ICE_SW_LKUP_PROMISC_VLAN_LOC_LKUP_IDX 2 +#define ICE_PKT_FLAGS_0_TO_15_FV_IDX 1 +#define ICE_PKT_FLAGS_0_TO_15_VLAN_FLAGS_MASK 0xD000 +static struct ice_update_recipe_lkup_idx_params ice_dvm_dflt_recipes[] = { + { + /* Update recipe ICE_SW_LKUP_VLAN to filter based on the + * outer/single VLAN in DVM + */ + .rid = ICE_SW_LKUP_VLAN, + .fv_idx = ICE_EXTERNAL_VLAN_ID_FV_IDX, + .ignore_valid = true, + .mask = 0, + .mask_valid = false, /* use pre-existing mask */ + .lkup_idx = ICE_SW_LKUP_VLAN_LOC_LKUP_IDX, + }, + { + /* Update recipe ICE_SW_LKUP_VLAN to filter based on the VLAN + * packet flags to support VLAN filtering on multiple VLAN + * ethertypes (i.e. 
0x8100 and 0x88a8) in DVM + */ + .rid = ICE_SW_LKUP_VLAN, + .fv_idx = ICE_PKT_FLAGS_0_TO_15_FV_IDX, + .ignore_valid = false, + .mask = ICE_PKT_FLAGS_0_TO_15_VLAN_FLAGS_MASK, + .mask_valid = true, + .lkup_idx = ICE_SW_LKUP_VLAN_PKT_FLAGS_LKUP_IDX, + }, + { + /* Update recipe ICE_SW_LKUP_PROMISC_VLAN to filter based on the + * outer/single VLAN in DVM + */ + .rid = ICE_SW_LKUP_PROMISC_VLAN, + .fv_idx = ICE_EXTERNAL_VLAN_ID_FV_IDX, + .ignore_valid = true, + .mask = 0, + .mask_valid = false, /* use pre-existing mask */ + .lkup_idx = ICE_SW_LKUP_PROMISC_VLAN_LOC_LKUP_IDX, + }, +}; + +/** + * ice_dvm_update_dflt_recipes - update default switch recipes in DVM + * @hw: hardware structure used to update the recipes + */ +static int ice_dvm_update_dflt_recipes(struct ice_hw *hw) +{ + unsigned long i; + + for (i = 0; i < ARRAY_SIZE(ice_dvm_dflt_recipes); i++) { + struct ice_update_recipe_lkup_idx_params *params; + int status; + + params = &ice_dvm_dflt_recipes[i]; + + status = ice_update_recipe_lkup_idx(hw, params); + if (status) { + ice_debug(hw, ICE_DBG_INIT, "Failed to update RID %d lkup_idx %d fv_idx %d mask_valid %s mask 0x%04x\n", + params->rid, params->lkup_idx, params->fv_idx, + params->mask_valid ? "true" : "false", + params->mask); + return status; + } + } + + return 0; +} + +/** + * ice_aq_set_vlan_mode - set the VLAN mode of the device + * @hw: pointer to the HW structure + * @set_params: requested VLAN mode configuration + * + * Set VLAN Mode Parameters (0x020C) + */ +static int +ice_aq_set_vlan_mode(struct ice_hw *hw, + struct ice_aqc_set_vlan_mode *set_params) +{ + u8 rdma_packet, mng_vlan_prot_id; + struct ice_aq_desc desc; + + if (!set_params) + return -EINVAL; + + if (set_params->l2tag_prio_tagging > ICE_AQ_VLAN_PRIO_TAG_MAX) + return -EINVAL; + + rdma_packet = set_params->rdma_packet; + if (rdma_packet != ICE_AQ_SVM_VLAN_RDMA_PKT_FLAG_SETTING && + rdma_packet != ICE_AQ_DVM_VLAN_RDMA_PKT_FLAG_SETTING) + return -EINVAL; + + mng_vlan_prot_id = set_params->mng_vlan_prot_id; + if (mng_vlan_prot_id != ICE_AQ_VLAN_MNG_PROTOCOL_ID_OUTER && + mng_vlan_prot_id != ICE_AQ_VLAN_MNG_PROTOCOL_ID_INNER) + return -EINVAL; + + ice_fill_dflt_direct_cmd_desc(&desc, + ice_aqc_opc_set_vlan_mode_parameters); + desc.flags |= cpu_to_le16(ICE_AQ_FLAG_RD); + + return ice_aq_send_cmd(hw, &desc, set_params, sizeof(*set_params), + NULL); +} + +/** + * ice_set_dvm - sets up software and hardware for double VLAN mode + * @hw: pointer to the hardware structure + */ +static int ice_set_dvm(struct ice_hw *hw) +{ + struct ice_aqc_set_vlan_mode params = { 0 }; + int status; + + params.l2tag_prio_tagging = ICE_AQ_VLAN_PRIO_TAG_OUTER_CTAG; + params.rdma_packet = ICE_AQ_DVM_VLAN_RDMA_PKT_FLAG_SETTING; + params.mng_vlan_prot_id = ICE_AQ_VLAN_MNG_PROTOCOL_ID_OUTER; + + status = ice_aq_set_vlan_mode(hw, ¶ms); + if (status) { + ice_debug(hw, ICE_DBG_INIT, "Failed to set double VLAN mode parameters, status %d\n", + status); + return status; + } + + status = ice_dvm_update_dflt_recipes(hw); + if (status) { + ice_debug(hw, ICE_DBG_INIT, "Failed to update default recipes for double VLAN mode, status %d\n", + status); + return status; + } + + status = ice_aq_set_port_params(hw->port_info, true, NULL); + if (status) { + ice_debug(hw, ICE_DBG_INIT, "Failed to set port in double VLAN mode, status %d\n", + status); + return status; + } + + status = ice_set_dvm_boost_entries(hw); + if (status) { + ice_debug(hw, ICE_DBG_INIT, "Failed to set boost TCAM entries for double VLAN mode, status %d\n", + status); + return status; + } + + 
return 0; +} + +/** + * ice_set_svm - set single VLAN mode + * @hw: pointer to the HW structure + */ +static int ice_set_svm(struct ice_hw *hw) +{ + struct ice_aqc_set_vlan_mode *set_params; + int status; + + status = ice_aq_set_port_params(hw->port_info, false, NULL); + if (status) { + ice_debug(hw, ICE_DBG_INIT, "Failed to set port parameters for single VLAN mode\n"); + return status; + } + + set_params = devm_kzalloc(ice_hw_to_dev(hw), sizeof(*set_params), + GFP_KERNEL); + if (!set_params) + return -ENOMEM; + + /* default configuration for SVM configurations */ + set_params->l2tag_prio_tagging = ICE_AQ_VLAN_PRIO_TAG_INNER_CTAG; + set_params->rdma_packet = ICE_AQ_SVM_VLAN_RDMA_PKT_FLAG_SETTING; + set_params->mng_vlan_prot_id = ICE_AQ_VLAN_MNG_PROTOCOL_ID_INNER; + + status = ice_aq_set_vlan_mode(hw, set_params); + if (status) + ice_debug(hw, ICE_DBG_INIT, "Failed to configure port in single VLAN mode\n"); + + devm_kfree(ice_hw_to_dev(hw), set_params); + return status; +} + +/** + * ice_set_vlan_mode + * @hw: pointer to the HW structure + */ +int ice_set_vlan_mode(struct ice_hw *hw) +{ + if (!ice_is_dvm_supported(hw)) + return 0; + + if (!ice_set_dvm(hw)) + return 0; + + return ice_set_svm(hw); +} + +/** + * ice_print_dvm_not_supported - print if DDP and/or FW doesn't support DVM + * @hw: pointer to the HW structure + * + * The purpose of this function is to print that QinQ is not supported due to + * incompatibilty from the DDP and/or FW. This will give a hint to the user to + * update one and/or both components if they expect QinQ functionality. + */ +static void ice_print_dvm_not_supported(struct ice_hw *hw) +{ + bool pkg_supports_dvm = ice_pkg_supports_dvm(hw); + bool fw_supports_dvm = ice_fw_supports_dvm(hw); + + if (!fw_supports_dvm && !pkg_supports_dvm) + dev_info(ice_hw_to_dev(hw), "QinQ functionality cannot be enabled on this device. Update your DDP package and NVM to versions that support QinQ.\n"); + else if (!pkg_supports_dvm) + dev_info(ice_hw_to_dev(hw), "QinQ functionality cannot be enabled on this device. Update your DDP package to a version that supports QinQ.\n"); + else if (!fw_supports_dvm) + dev_info(ice_hw_to_dev(hw), "QinQ functionality cannot be enabled on this device. Update your NVM to a version that supports QinQ.\n"); +} + +/** + * ice_post_pkg_dwnld_vlan_mode_cfg - configure VLAN mode after DDP download + * @hw: pointer to the HW structure + * + * This function is meant to configure any VLAN mode specific functionality + * after the global configuration lock has been released and the DDP has been + * downloaded. + * + * Since only one PF downloads the DDP and configures the VLAN mode there needs + * to be a way to configure the other PFs after the DDP has been downloaded and + * the global configuration lock has been released. All such code should go in + * this function. + */ +void ice_post_pkg_dwnld_vlan_mode_cfg(struct ice_hw *hw) +{ + ice_cache_vlan_mode(hw); + + if (ice_is_dvm_ena(hw)) + ice_change_proto_id_to_dvm(); + else + ice_print_dvm_not_supported(hw); +} diff --git a/drivers/net/ethernet/intel/ice/ice_vlan_mode.h b/drivers/net/ethernet/intel/ice/ice_vlan_mode.h new file mode 100644 index 000000000000..a0fb743d08e2 --- /dev/null +++ b/drivers/net/ethernet/intel/ice/ice_vlan_mode.h @@ -0,0 +1,13 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2019-2021, Intel Corporation. 
*/ + +#ifndef _ICE_VLAN_MODE_H_ +#define _ICE_VLAN_MODE_H_ + +struct ice_hw; + +bool ice_is_dvm_ena(struct ice_hw *hw); +int ice_set_vlan_mode(struct ice_hw *hw); +void ice_post_pkg_dwnld_vlan_mode_cfg(struct ice_hw *hw); + +#endif /* _ICE_VLAN_MODE_H */ diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c index 62a2630d6fab..5b4a0abb4607 100644 --- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c @@ -39,20 +39,20 @@ static bool validate_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) */ int ice_vsi_add_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) { - int err = 0; + int err; if (!validate_vlan(vsi, vlan)) return -EINVAL; - if (!ice_fltr_add_vlan(vsi, vlan)) { - vsi->num_vlan++; - } else { - err = -ENODEV; - dev_err(ice_pf_to_dev(vsi->back), "Failure Adding VLAN %d on VSI %i\n", - vlan->vid, vsi->vsi_num); + err = ice_fltr_add_vlan(vsi, vlan); + if (err && err != -EEXIST) { + dev_err(ice_pf_to_dev(vsi->back), "Failure Adding VLAN %d on VSI %i, status %d\n", + vlan->vid, vsi->vsi_num, err); + return err; } - return err; + vsi->num_vlan++; + return 0; } /** @@ -72,16 +72,13 @@ int ice_vsi_del_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) dev = ice_pf_to_dev(pf); err = ice_fltr_remove_vlan(vsi, vlan); - if (!err) { + if (!err) vsi->num_vlan--; - } else if (err == -ENOENT) { - dev_dbg(dev, "Failed to remove VLAN %d on VSI %i, it does not exist\n", - vlan->vid, vsi->vsi_num); + else if (err == -ENOENT || err == -EBUSY) err = 0; - } else { + else dev_err(dev, "Error removing VLAN %d on VSI %i error: %d\n", vlan->vid, vsi->vsi_num, err); - } return err; } diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h index 30d02d2b8e5f..5b47568f6256 100644 --- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h @@ -23,11 +23,6 @@ struct ice_vsi_vlan_ops { int (*set_port_vlan)(struct ice_vsi *vsi, struct ice_vlan *vlan); }; -static inline bool ice_is_dvm_ena(struct ice_hw __always_unused *hw) -{ - return false; -} - void ice_vsi_init_vlan_ops(struct ice_vsi *vsi); struct ice_vsi_vlan_ops *ice_get_compat_vsi_vlan_ops(struct ice_vsi *vsi); -- 2.20.1 From anthony.l.nguyen at intel.com Thu Dec 2 16:38:43 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Thu, 2 Dec 2021 08:38:43 -0800 Subject: [Intel-wired-lan] [PATCH net-next v3 05/14] ice: Refactor vf->port_vlan_info to use ice_vlan In-Reply-To: <20211202163852.36436-1-anthony.l.nguyen@intel.com> References: <20211202163852.36436-1-anthony.l.nguyen@intel.com> Message-ID: <20211202163852.36436-5-anthony.l.nguyen@intel.com> From: Brett Creeley The current vf->port_vlan_info variable is a packed u16 that contains the port VLAN ID and QoS/prio value. This is fine, but changes are incoming that allow for an 802.1ad port VLAN. Add flexibility by changing the vf->port_vlan_info member to be an ice_vlan structure. 
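
To illustrate the shape of this refactor, a minimal user-space sketch follows. The demo_vlan structure and DEMO_VLAN() macro are stand-ins assumed purely for illustration; the driver's real ice_vlan structure and ICE_VLAN() macro are defined in the ice headers and are not reproduced here.

/*
 * Sketch only: contrasts the old packed-u16 "VLAN ID + QoS" encoding with
 * an explicit structure in the style of ice_vlan. Builds with any C99
 * compiler; nothing here depends on kernel headers.
 */
#include <stdint.h>
#include <stdio.h>

#define VLAN_PRIO_SHIFT 13
#define VLAN_PRIO_MASK  0xe000
#define VLAN_VID_MASK   0x0fff

/* old style: VLAN ID and QoS/priority packed into a single u16 */
static uint16_t pack_port_vlan(uint16_t vid, uint8_t qos)
{
	return (uint16_t)((vid & VLAN_VID_MASK) | (qos << VLAN_PRIO_SHIFT));
}

/* new style: explicit fields, with room to grow (e.g. a TPID for 802.1ad) */
struct demo_vlan {
	uint16_t vid;
	uint8_t prio;
};

#define DEMO_VLAN(vid, prio) ((struct demo_vlan){ (vid), (prio) })

int main(void)
{
	uint16_t packed = pack_port_vlan(100, 5);
	struct demo_vlan v = DEMO_VLAN(100, 5);

	/* both encodings carry the same information for a 802.1Q port VLAN */
	printf("packed: vid=%u qos=%u\n",
	       (unsigned)(packed & VLAN_VID_MASK),
	       (unsigned)((packed & VLAN_PRIO_MASK) >> VLAN_PRIO_SHIFT));
	printf("struct: vid=%u qos=%u\n", (unsigned)v.vid, (unsigned)v.prio);
	return 0;
}

The structure form is what lets a later patch in this series carry a TPID alongside the VLAN ID and priority without re-packing bit fields; see ice_vf_get_port_vlan_tpid() and the three-argument ICE_VLAN() usage in patch 13/14 below.
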
Signed-off-by: Brett Creeley Signed-off-by: Tony Nguyen --- .../net/ethernet/intel/ice/ice_virtchnl_pf.c | 76 ++++++++++--------- .../net/ethernet/intel/ice/ice_virtchnl_pf.h | 3 +- 2 files changed, 44 insertions(+), 35 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c index d580120dbb93..4971e547432c 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c @@ -751,6 +751,21 @@ static int ice_vf_rebuild_host_tx_rate_cfg(struct ice_vf *vf) return 0; } +static u16 ice_vf_get_port_vlan_id(struct ice_vf *vf) +{ + return vf->port_vlan_info.vid; +} + +static u8 ice_vf_get_port_vlan_prio(struct ice_vf *vf) +{ + return vf->port_vlan_info.prio; +} + +static bool ice_vf_is_port_vlan_ena(struct ice_vf *vf) +{ + return (ice_vf_get_port_vlan_id(vf) || ice_vf_get_port_vlan_prio(vf)); +} + /** * ice_vf_rebuild_host_vlan_cfg - add VLAN 0 filter or rebuild the Port VLAN * @vf: VF to add MAC filters for @@ -760,16 +775,12 @@ static int ice_vf_rebuild_host_tx_rate_cfg(struct ice_vf *vf) */ static int ice_vf_rebuild_host_vlan_cfg(struct ice_vf *vf) { - u8 vlan_prio = (vf->port_vlan_info & VLAN_PRIO_MASK) >> VLAN_PRIO_SHIFT; - u16 vlan_id = vf->port_vlan_info & VLAN_VID_MASK; struct device *dev = ice_pf_to_dev(vf->pf); struct ice_vsi *vsi = ice_get_vf_vsi(vf); - struct ice_vlan vlan; int err; - vlan = ICE_VLAN(vlan_id, vlan_prio); - if (vf->port_vlan_info) { - err = vsi->vlan_ops.set_port_vlan(vsi, &vlan); + if (ice_vf_is_port_vlan_ena(vf)) { + err = vsi->vlan_ops.set_port_vlan(vsi, &vf->port_vlan_info); if (err) { dev_err(dev, "failed to configure port VLAN via VSI parameters for VF %u, error %d\n", vf->vf_id, err); @@ -777,12 +788,11 @@ static int ice_vf_rebuild_host_vlan_cfg(struct ice_vf *vf) } } - /* vlan_id will either be 0 or the port VLAN number */ - err = vsi->vlan_ops.add_vlan(vsi, &vlan); + err = vsi->vlan_ops.add_vlan(vsi, &vf->port_vlan_info); if (err) { - dev_err(dev, "failed to add %s VLAN %u filter for VF %u, error %d\n", - vf->port_vlan_info ? "port" : "", vlan_id, vf->vf_id, - err); + dev_err(dev, "failed to add VLAN %u filter for VF %u during VF rebuild, error %d\n", + ice_vf_is_port_vlan_ena(vf) ? 
+ ice_vf_get_port_vlan_id(vf) : 0, vf->vf_id, err); return err; } @@ -1255,9 +1265,9 @@ static int ice_vf_set_vsi_promisc(struct ice_vf *vf, struct ice_vsi *vsi, u8 pro struct ice_hw *hw = &vsi->back->hw; int status; - if (vf->port_vlan_info) + if (ice_vf_is_port_vlan_ena(vf)) status = ice_fltr_set_vsi_promisc(hw, vsi->idx, promisc_m, - vf->port_vlan_info & VLAN_VID_MASK); + ice_vf_get_port_vlan_id(vf)); else if (vsi->num_vlan > 1) status = ice_fltr_set_vlan_vsi_promisc(hw, vsi, promisc_m); else @@ -1277,9 +1287,9 @@ static int ice_vf_clear_vsi_promisc(struct ice_vf *vf, struct ice_vsi *vsi, u8 p struct ice_hw *hw = &vsi->back->hw; int status; - if (vf->port_vlan_info) + if (ice_vf_is_port_vlan_ena(vf)) status = ice_fltr_clear_vsi_promisc(hw, vsi->idx, promisc_m, - vf->port_vlan_info & VLAN_VID_MASK); + ice_vf_get_port_vlan_id(vf)); else if (vsi->num_vlan > 1) status = ice_fltr_clear_vlan_vsi_promisc(hw, vsi, promisc_m); else @@ -1654,7 +1664,7 @@ bool ice_reset_vf(struct ice_vf *vf, bool is_vflr) */ if (test_bit(ICE_VF_STATE_UC_PROMISC, vf->vf_states) || test_bit(ICE_VF_STATE_MC_PROMISC, vf->vf_states)) { - if (vf->port_vlan_info || vsi->num_vlan) + if (ice_vf_is_port_vlan_ena(vf) || vsi->num_vlan) promisc_m = ICE_UCAST_VLAN_PROMISC_BITS; else promisc_m = ICE_UCAST_PROMISC_BITS; @@ -2277,7 +2287,7 @@ static u16 ice_vc_get_max_frame_size(struct ice_vf *vf) max_frame_size = pi->phy.link_info.max_frame_size; - if (vf->port_vlan_info) + if (ice_vf_is_port_vlan_ena(vf)) max_frame_size -= VLAN_HLEN; return max_frame_size; @@ -2326,7 +2336,7 @@ static int ice_vc_get_vf_res_msg(struct ice_vf *vf, u8 *msg) goto err; } - if (!vsi->info.pvid) + if (!ice_vf_is_port_vlan_ena(vf)) vfres->vf_cap_flags |= VIRTCHNL_VF_OFFLOAD_VLAN; if (vf->driver_caps & VIRTCHNL_VF_OFFLOAD_RSS_PF) { @@ -3050,7 +3060,7 @@ static int ice_vc_cfg_promiscuous_mode_msg(struct ice_vf *vf, u8 *msg) rm_promisc = !allmulti && !alluni; - if (vsi->num_vlan || vf->port_vlan_info) { + if (vsi->num_vlan || ice_vf_is_port_vlan_ena(vf)) { if (rm_promisc) ret = vsi->vlan_ops.ena_rx_filtering(vsi); else @@ -3086,7 +3096,7 @@ static int ice_vc_cfg_promiscuous_mode_msg(struct ice_vf *vf, u8 *msg) } else { u8 mcast_m, ucast_m; - if (vf->port_vlan_info || vsi->num_vlan > 1) { + if (ice_vf_is_port_vlan_ena(vf) || vsi->num_vlan > 1) { mcast_m = ICE_MCAST_VLAN_PROMISC_BITS; ucast_m = ICE_UCAST_VLAN_PROMISC_BITS; } else { @@ -3669,7 +3679,7 @@ static int ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg) /* add space for the port VLAN since the VF driver is not * expected to account for it in the MTU calculation */ - if (vf->port_vlan_info) + if (ice_vf_is_port_vlan_ena(vf)) vsi->max_frame += VLAN_HLEN; if (ice_vsi_cfg_single_rxq(vsi, q_idx)) { @@ -4097,7 +4107,6 @@ ice_set_vf_port_vlan(struct net_device *netdev, int vf_id, u16 vlan_id, u8 qos, struct ice_pf *pf = ice_netdev_to_pf(netdev); struct device *dev; struct ice_vf *vf; - u16 vlanprio; int ret; dev = ice_pf_to_dev(pf); @@ -4120,20 +4129,19 @@ ice_set_vf_port_vlan(struct net_device *netdev, int vf_id, u16 vlan_id, u8 qos, if (ret) return ret; - vlanprio = vlan_id | (qos << VLAN_PRIO_SHIFT); - - if (vf->port_vlan_info == vlanprio) { + if (ice_vf_get_port_vlan_prio(vf) == qos && + ice_vf_get_port_vlan_id(vf) == vlan_id) { /* duplicate request, so just return success */ - dev_dbg(dev, "Duplicate pvid %d request\n", vlanprio); + dev_dbg(dev, "Duplicate port VLAN %u, QoS %u request\n", + vlan_id, qos); return 0; } mutex_lock(&vf->cfg_lock); - vf->port_vlan_info = vlanprio; - - if (vf->port_vlan_info) 
- dev_info(dev, "Setting VLAN %d, QoS 0x%x on VF %d\n", + vf->port_vlan_info = ICE_VLAN(vlan_id, qos); + if (ice_vf_is_port_vlan_ena(vf)) + dev_info(dev, "Setting VLAN %u, QoS %u on VF %d\n", vlan_id, qos, vf_id); else dev_info(dev, "Clearing port VLAN on VF %d\n", vf_id); @@ -4219,7 +4227,7 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) goto error_param; } - if (vsi->info.pvid) { + if (ice_vf_is_port_vlan_ena(vf)) { v_ret = VIRTCHNL_STATUS_ERR_PARAM; goto error_param; } @@ -4445,7 +4453,7 @@ static int ice_vf_init_vlan_stripping(struct ice_vf *vf) return -EINVAL; /* don't modify stripping if port VLAN is configured */ - if (vsi->info.pvid) + if (ice_vf_is_port_vlan_ena(vf)) return 0; if (ice_vf_vlan_offload_ena(vf->driver_caps)) @@ -4815,8 +4823,8 @@ ice_get_vf_cfg(struct net_device *netdev, int vf_id, struct ifla_vf_info *ivi) ether_addr_copy(ivi->mac, vf->hw_lan_addr.addr); /* VF configuration for VLAN and applicable QoS */ - ivi->vlan = vf->port_vlan_info & VLAN_VID_MASK; - ivi->qos = (vf->port_vlan_info & VLAN_PRIO_MASK) >> VLAN_PRIO_SHIFT; + ivi->vlan = ice_vf_get_port_vlan_id(vf); + ivi->qos = ice_vf_get_port_vlan_prio(vf); ivi->trusted = vf->trusted; ivi->spoofchk = vf->spoofchk; diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h index 752487a1bdd6..5079a3b72698 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h @@ -5,6 +5,7 @@ #define _ICE_VIRTCHNL_PF_H_ #include "ice.h" #include "ice_virtchnl_fdir.h" +#include "ice_vsi_vlan_ops.h" /* Restrict number of MAC Addr and VLAN that non-trusted VF can programmed */ #define ICE_MAX_VLAN_PER_VF 8 @@ -119,7 +120,7 @@ struct ice_vf { struct ice_time_mac legacy_last_added_umac; DECLARE_BITMAP(txq_ena, ICE_MAX_RSS_QS_PER_VF); DECLARE_BITMAP(rxq_ena, ICE_MAX_RSS_QS_PER_VF); - u16 port_vlan_info; /* Port VLAN ID and QoS */ + struct ice_vlan port_vlan_info; /* Port VLAN ID and QoS */ u8 pf_set_mac:1; /* VF MAC address set by VMM admin */ u8 trusted:1; u8 spoofchk:1; -- 2.20.1 From anthony.l.nguyen at intel.com Thu Dec 2 16:38:50 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Thu, 2 Dec 2021 08:38:50 -0800 Subject: [Intel-wired-lan] [PATCH net-next v3 12/14] ice: Advertise 802.1ad VLAN filtering and offloads for PF netdev In-Reply-To: <20211202163852.36436-1-anthony.l.nguyen@intel.com> References: <20211202163852.36436-1-anthony.l.nguyen@intel.com> Message-ID: <20211202163852.36436-12-anthony.l.nguyen@intel.com> From: Brett Creeley In order for the driver to support 802.1ad VLAN filtering and offloads, it needs to advertise those VLAN features and also support modifying those VLAN features, so make the necessary changes to ice_set_netdev_features(). By default, enable CTAG insertion/stripping and CTAG filtering for both Single and Double VLAN Modes (SVM/DVM). Also, in DVM, enable STAG filtering by default. This is done by setting the feature bits in netdev->features. Also, in DVM, support toggling of STAG insertion/stripping, but don't enable them by default. This is done by setting the feature bits in netdev->hw_features. Since 802.1ad VLAN filtering and offloads are only supported in DVM, make sure they are not enabled by default and that they cannot be enabled during runtime, when the device is in SVM. Add an implementation for the ndo_fix_features() callback. 
This is needed since the hardware cannot support multiple VLAN ethertypes for VLAN insertion/stripping simultaneously and all supported VLAN filtering must either be enabled or disabled together. Disable inner VLAN stripping by default when DVM is enabled. If a VSI supports stripping the inner VLAN in DVM, then it will have to configure that during runtime. For example if a VF is configured in a port VLAN while DVM is enabled it will be allowed to offload inner VLANs. Signed-off-by: Brett Creeley Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/ice/ice_lib.c | 27 ++- drivers/net/ethernet/intel/ice/ice_main.c | 260 ++++++++++++++++++---- 2 files changed, 238 insertions(+), 49 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index 36507f0dc04e..de37928c2870 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -796,11 +796,12 @@ static void ice_vsi_set_rss_params(struct ice_vsi *vsi) /** * ice_set_dflt_vsi_ctx - Set default VSI context before adding a VSI + * @hw: HW structure used to determine the VLAN mode of the device * @ctxt: the VSI context being set * * This initializes a default VSI context for all sections except the Queues. */ -static void ice_set_dflt_vsi_ctx(struct ice_vsi_ctx *ctxt) +static void ice_set_dflt_vsi_ctx(struct ice_hw *hw, struct ice_vsi_ctx *ctxt) { u32 table = 0; @@ -811,13 +812,27 @@ static void ice_set_dflt_vsi_ctx(struct ice_vsi_ctx *ctxt) ctxt->info.sw_flags = ICE_AQ_VSI_SW_FLAG_SRC_PRUNE; /* Traffic from VSI can be sent to LAN */ ctxt->info.sw_flags2 = ICE_AQ_VSI_SW_FLAG_LAN_ENA; - /* By default bits 3 and 4 in inner_vlan_flags are 0's which results in legacy - * behavior (show VLAN, DEI, and UP) in descriptor. Also, allow all - * packets untagged/tagged. - */ + /* allow all untagged/tagged packets by default on Tx */ ctxt->info.inner_vlan_flags = ((ICE_AQ_VSI_INNER_VLAN_TX_MODE_ALL & ICE_AQ_VSI_INNER_VLAN_TX_MODE_M) >> ICE_AQ_VSI_INNER_VLAN_TX_MODE_S); + /* SVM - by default bits 3 and 4 in inner_vlan_flags are 0's which + * results in legacy behavior (show VLAN, DEI, and UP) in descriptor. 
+ * + * DVM - leave inner VLAN in packet by default + */ + if (ice_is_dvm_ena(hw)) { + ctxt->info.inner_vlan_flags |= + ICE_AQ_VSI_INNER_VLAN_EMODE_NOTHING; + ctxt->info.outer_vlan_flags = + (ICE_AQ_VSI_OUTER_VLAN_TX_MODE_ALL << + ICE_AQ_VSI_OUTER_VLAN_TX_MODE_S) & + ICE_AQ_VSI_OUTER_VLAN_TX_MODE_M; + ctxt->info.outer_vlan_flags |= + (ICE_AQ_VSI_OUTER_TAG_VLAN_8100 << + ICE_AQ_VSI_OUTER_TAG_TYPE_S) & + ICE_AQ_VSI_OUTER_TAG_TYPE_M; + } /* Have 1:1 UP mapping for both ingress/egress tables */ table |= ICE_UP_TABLE_TRANSLATE(0, 0); table |= ICE_UP_TABLE_TRANSLATE(1, 1); @@ -1094,7 +1109,7 @@ static int ice_vsi_init(struct ice_vsi *vsi, bool init_vsi) ~ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA; } - ice_set_dflt_vsi_ctx(ctxt); + ice_set_dflt_vsi_ctx(hw, ctxt); if (test_bit(ICE_FLAG_FD_ENA, pf->flags)) ice_set_fd_vsi_ctx(ctxt, vsi); /* if the switch is in VEB mode, allow VSI loopback */ diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c index 563b597b0a85..851dbd70d809 100644 --- a/drivers/net/ethernet/intel/ice/ice_main.c +++ b/drivers/net/ethernet/intel/ice/ice_main.c @@ -416,7 +416,8 @@ static int ice_vsi_sync_fltr(struct ice_vsi *vsi) IFF_PROMISC; goto out_promisc; } - if (vsi->num_vlan > 1) + if (vsi->current_netdev_flags & + NETIF_F_HW_VLAN_CTAG_FILTER) vlan_ops->ena_rx_filtering(vsi); } } @@ -3240,6 +3241,7 @@ static void ice_set_ops(struct net_device *netdev) static void ice_set_netdev_features(struct net_device *netdev) { struct ice_pf *pf = ice_netdev_to_pf(netdev); + bool is_dvm_ena = ice_is_dvm_ena(&pf->hw); netdev_features_t csumo_features; netdev_features_t vlano_features; netdev_features_t dflt_features; @@ -3266,6 +3268,10 @@ static void ice_set_netdev_features(struct net_device *netdev) NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_CTAG_RX; + /* Enable CTAG/STAG filtering by default in Double VLAN Mode (DVM) */ + if (is_dvm_ena) + vlano_features |= NETIF_F_HW_VLAN_STAG_FILTER; + tso_features = NETIF_F_TSO | NETIF_F_TSO_ECN | NETIF_F_TSO6 | @@ -3297,6 +3303,15 @@ static void ice_set_netdev_features(struct net_device *netdev) tso_features; netdev->vlan_features |= dflt_features | csumo_features | tso_features; + + /* advertise support but don't enable by default since only one type of + * VLAN offload can be enabled at a time (i.e. CTAG or STAG). When one + * type turns on the other has to be turned off. This is enforced by the + * ice_fix_features() ndo callback. 
+ */ + if (is_dvm_ena) + netdev->hw_features |= NETIF_F_HW_VLAN_STAG_RX | + NETIF_F_HW_VLAN_STAG_TX; } /** @@ -3432,13 +3447,6 @@ ice_vlan_rx_add_vid(struct net_device *netdev, __be16 proto, u16 vid) vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); - /* Enable VLAN pruning when a VLAN other than 0 is added */ - if (!ice_vsi_is_vlan_pruning_ena(vsi)) { - ret = vlan_ops->ena_rx_filtering(vsi); - if (ret) - return ret; - } - /* Add a switch rule for this VLAN ID so its corresponding VLAN tagged * packets aren't pruned by the device's internal switch on Rx */ @@ -3481,12 +3489,8 @@ ice_vlan_rx_kill_vid(struct net_device *netdev, __be16 proto, u16 vid) if (ret) return ret; - /* Disable pruning when VLAN 0 is the only VLAN rule */ - if (vsi->num_vlan == 1 && ice_vsi_is_vlan_pruning_ena(vsi)) - vlan_ops->dis_rx_filtering(vsi); - set_bit(ICE_VSI_VLAN_FLTR_CHANGED, vsi->state); - return ret; + return 0; } /** @@ -5596,6 +5600,194 @@ ice_fdb_del(struct ndmsg *ndm, __always_unused struct nlattr *tb[], return err; } +#define NETIF_VLAN_OFFLOAD_FEATURES (NETIF_F_HW_VLAN_CTAG_RX | \ + NETIF_F_HW_VLAN_CTAG_TX | \ + NETIF_F_HW_VLAN_STAG_RX | \ + NETIF_F_HW_VLAN_STAG_TX) + +#define NETIF_VLAN_FILTERING_FEATURES (NETIF_F_HW_VLAN_CTAG_FILTER | \ + NETIF_F_HW_VLAN_STAG_FILTER) + +/** + * ice_fix_features - fix the netdev features flags based on device limitations + * @netdev: ptr to the netdev that flags are being fixed on + * @features: features that need to be checked and possibly fixed + * + * Make sure any fixups are made to features in this callback. This enables the + * driver to not have to check unsupported configurations throughout the driver + * because that's the responsiblity of this callback. + * + * Single VLAN Mode (SVM) Supported Features: + * NETIF_F_HW_VLAN_CTAG_FILTER + * NETIF_F_HW_VLAN_CTAG_RX + * NETIF_F_HW_VLAN_CTAG_TX + * + * Double VLAN Mode (DVM) Supported Features: + * NETIF_F_HW_VLAN_CTAG_FILTER + * NETIF_F_HW_VLAN_CTAG_RX + * NETIF_F_HW_VLAN_CTAG_TX + * + * NETIF_F_HW_VLAN_STAG_FILTER + * NETIF_HW_VLAN_STAG_RX + * NETIF_HW_VLAN_STAG_TX + * + * Features that need fixing: + * Cannot simultaneously enable CTAG and STAG stripping and/or insertion. + * These are mutually exlusive as the VSI context cannot support multiple + * VLAN ethertypes simultaneously for stripping and/or insertion. If this + * is not done, then default to clearing the requested STAG offload + * settings. + * + * All supported filtering has to be enabled or disabled together. For + * example, in DVM, CTAG and STAG filtering have to be enabled and disabled + * together. If this is not done, then default to VLAN filtering disabled. + * These are mutually exclusive as there is currently no way to + * enable/disable VLAN filtering based on VLAN ethertype when using VLAN + * prune rules. 
+ */ +static netdev_features_t +ice_fix_features(struct net_device *netdev, netdev_features_t features) +{ + struct ice_netdev_priv *np = netdev_priv(netdev); + netdev_features_t supported_vlan_filtering; + netdev_features_t requested_vlan_filtering; + struct ice_vsi *vsi = np->vsi; + + requested_vlan_filtering = features & NETIF_VLAN_FILTERING_FEATURES; + + /* make sure supported_vlan_filtering works for both SVM and DVM */ + supported_vlan_filtering = NETIF_F_HW_VLAN_CTAG_FILTER; + if (ice_is_dvm_ena(&vsi->back->hw)) + supported_vlan_filtering |= NETIF_F_HW_VLAN_STAG_FILTER; + + if (requested_vlan_filtering && + requested_vlan_filtering != supported_vlan_filtering) { + if (requested_vlan_filtering & NETIF_F_HW_VLAN_CTAG_FILTER) { + netdev_warn(netdev, "cannot support requested VLAN filtering settings, enabling all supported VLAN filtering settings\n"); + features |= supported_vlan_filtering; + } else { + netdev_warn(netdev, "cannot support requested VLAN filtering settings, clearing all supported VLAN filtering settings\n"); + features &= ~supported_vlan_filtering; + } + } + + if ((features & (NETIF_F_HW_VLAN_CTAG_RX | NETIF_F_HW_VLAN_CTAG_TX)) && + (features & (NETIF_F_HW_VLAN_STAG_RX | NETIF_F_HW_VLAN_STAG_TX))) { + netdev_warn(netdev, "cannot support CTAG and STAG VLAN stripping and/or insertion simultaneously since CTAG and STAG offloads are mutually exclusive, clearing STAG offload settings\n"); + features &= ~(NETIF_F_HW_VLAN_STAG_RX | + NETIF_F_HW_VLAN_STAG_TX); + } + + return features; +} + +/** + * ice_set_vlan_offload_features - set VLAN offload features for the PF VSI + * @vsi: PF's VSI + * @features: features used to determine VLAN offload settings + * + * First, determine the vlan_ethertype based on the VLAN offload bits in + * features. Then determine if stripping and insertion should be enabled or + * disabled. Finally enable or disable VLAN stripping and insertion. + */ +static int +ice_set_vlan_offload_features(struct ice_vsi *vsi, netdev_features_t features) +{ + bool enable_stripping = true, enable_insertion = true; + struct ice_vsi_vlan_ops *vlan_ops; + int strip_err = 0, insert_err = 0; + u16 vlan_ethertype = 0; + + vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); + + if (features & (NETIF_F_HW_VLAN_STAG_RX | NETIF_F_HW_VLAN_STAG_TX)) + vlan_ethertype = ETH_P_8021AD; + else if (features & (NETIF_F_HW_VLAN_CTAG_RX | NETIF_F_HW_VLAN_CTAG_TX)) + vlan_ethertype = ETH_P_8021Q; + + if (!(features & (NETIF_F_HW_VLAN_STAG_RX | NETIF_F_HW_VLAN_CTAG_RX))) + enable_stripping = false; + if (!(features & (NETIF_F_HW_VLAN_STAG_TX | NETIF_F_HW_VLAN_CTAG_TX))) + enable_insertion = false; + + if (enable_stripping) + strip_err = vlan_ops->ena_stripping(vsi, vlan_ethertype); + else + strip_err = vlan_ops->dis_stripping(vsi); + + if (enable_insertion) + insert_err = vlan_ops->ena_insertion(vsi, vlan_ethertype); + else + insert_err = vlan_ops->dis_insertion(vsi); + + if (strip_err || insert_err) + return -EIO; + + return 0; +} + +/** + * ice_set_vlan_filtering_features - set VLAN filtering features for the PF VSI + * @vsi: PF's VSI + * @features: features used to determine VLAN filtering settings + * + * Enable or disable Rx VLAN filtering based on the VLAN filtering bits in the + * features. 
+ */ +static int +ice_set_vlan_filtering_features(struct ice_vsi *vsi, netdev_features_t features) +{ + struct ice_vsi_vlan_ops *vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); + int err = 0; + + /* support Single VLAN Mode (SVM) and Double VLAN Mode (DVM) by checking + * if either bit is set + */ + if (features & + (NETIF_F_HW_VLAN_CTAG_FILTER | NETIF_F_HW_VLAN_STAG_FILTER)) + err = vlan_ops->ena_rx_filtering(vsi); + else + err = vlan_ops->dis_rx_filtering(vsi); + + return err; +} + +/** + * ice_set_vlan_features - set VLAN settings based on suggested feature set + * @netdev: ptr to the netdev being adjusted + * @features: the feature set that the stack is suggesting + * + * Only update VLAN settings if the requested_vlan_features are different than + * the current_vlan_features. + */ +static int +ice_set_vlan_features(struct net_device *netdev, netdev_features_t features) +{ + netdev_features_t current_vlan_features, requested_vlan_features; + struct ice_netdev_priv *np = netdev_priv(netdev); + struct ice_vsi *vsi = np->vsi; + int err; + + current_vlan_features = netdev->features & NETIF_VLAN_OFFLOAD_FEATURES; + requested_vlan_features = features & NETIF_VLAN_OFFLOAD_FEATURES; + if (current_vlan_features ^ requested_vlan_features) { + err = ice_set_vlan_offload_features(vsi, features); + if (err) + return err; + } + + current_vlan_features = netdev->features & + NETIF_VLAN_FILTERING_FEATURES; + requested_vlan_features = features & NETIF_VLAN_FILTERING_FEATURES; + if (current_vlan_features ^ requested_vlan_features) { + err = ice_set_vlan_filtering_features(vsi, features); + if (err) + return err; + } + + return 0; +} + /** * ice_set_features - set the netdev feature flags * @netdev: ptr to the netdev being adjusted @@ -5605,7 +5797,6 @@ static int ice_set_features(struct net_device *netdev, netdev_features_t features) { struct ice_netdev_priv *np = netdev_priv(netdev); - struct ice_vsi_vlan_ops *vlan_ops; struct ice_vsi *vsi = np->vsi; struct ice_pf *pf = vsi->back; int ret = 0; @@ -5622,8 +5813,6 @@ ice_set_features(struct net_device *netdev, netdev_features_t features) return -EBUSY; } - vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); - /* Multiple features can be changed in one call so keep features in * separate if/else statements to guarantee each feature is checked */ @@ -5633,26 +5822,9 @@ ice_set_features(struct net_device *netdev, netdev_features_t features) netdev->features & NETIF_F_RXHASH) ice_vsi_manage_rss_lut(vsi, false); - if ((features & NETIF_F_HW_VLAN_CTAG_RX) && - !(netdev->features & NETIF_F_HW_VLAN_CTAG_RX)) - ret = vlan_ops->ena_stripping(vsi, ETH_P_8021Q); - else if (!(features & NETIF_F_HW_VLAN_CTAG_RX) && - (netdev->features & NETIF_F_HW_VLAN_CTAG_RX)) - ret = vlan_ops->dis_stripping(vsi); - - if ((features & NETIF_F_HW_VLAN_CTAG_TX) && - !(netdev->features & NETIF_F_HW_VLAN_CTAG_TX)) - ret = vlan_ops->ena_insertion(vsi, ETH_P_8021Q); - else if (!(features & NETIF_F_HW_VLAN_CTAG_TX) && - (netdev->features & NETIF_F_HW_VLAN_CTAG_TX)) - ret = vlan_ops->dis_insertion(vsi); - - if ((features & NETIF_F_HW_VLAN_CTAG_FILTER) && - !(netdev->features & NETIF_F_HW_VLAN_CTAG_FILTER)) - ret = vlan_ops->ena_rx_filtering(vsi); - else if (!(features & NETIF_F_HW_VLAN_CTAG_FILTER) && - (netdev->features & NETIF_F_HW_VLAN_CTAG_FILTER)) - ret = vlan_ops->dis_rx_filtering(vsi); + ret = ice_set_vlan_features(netdev, features); + if (ret) + return ret; if ((features & NETIF_F_NTUPLE) && !(netdev->features & NETIF_F_NTUPLE)) { @@ -5676,7 +5848,7 @@ ice_set_features(struct net_device 
*netdev, netdev_features_t features) else clear_bit(ICE_FLAG_CLS_FLOWER, pf->flags); - return ret; + return 0; } /** @@ -5685,14 +5857,15 @@ ice_set_features(struct net_device *netdev, netdev_features_t features) */ static int ice_vsi_vlan_setup(struct ice_vsi *vsi) { - struct ice_vsi_vlan_ops *vlan_ops; + int err; - vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); + err = ice_set_vlan_offload_features(vsi, vsi->netdev->features); + if (err) + return err; - if (vsi->netdev->features & NETIF_F_HW_VLAN_CTAG_RX) - vlan_ops->ena_stripping(vsi, ETH_P_8021Q); - if (vsi->netdev->features & NETIF_F_HW_VLAN_CTAG_TX) - vlan_ops->ena_insertion(vsi, ETH_P_8021Q); + err = ice_set_vlan_filtering_features(vsi, vsi->netdev->features); + if (err) + return err; return ice_vsi_add_vlan_zero(vsi); } @@ -8549,6 +8722,7 @@ static const struct net_device_ops ice_netdev_ops = { .ndo_start_xmit = ice_start_xmit, .ndo_select_queue = ice_select_queue, .ndo_features_check = ice_features_check, + .ndo_fix_features = ice_fix_features, .ndo_set_rx_mode = ice_set_rx_mode, .ndo_set_mac_address = ice_set_mac_address, .ndo_validate_addr = eth_validate_addr, -- 2.20.1 From anthony.l.nguyen at intel.com Thu Dec 2 16:38:51 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Thu, 2 Dec 2021 08:38:51 -0800 Subject: [Intel-wired-lan] [PATCH net-next v3 13/14] ice: Add support for 802.1ad port VLANs VF In-Reply-To: <20211202163852.36436-1-anthony.l.nguyen@intel.com> References: <20211202163852.36436-1-anthony.l.nguyen@intel.com> Message-ID: <20211202163852.36436-13-anthony.l.nguyen@intel.com> From: Brett Creeley Currently there is only support for 802.1Q port VLANs on SR-IOV VFs. Add support to also allow 802.1ad port VLANs when double VLAN mode is enabled. Signed-off-by: Brett Creeley Signed-off-by: Tony Nguyen --- .../net/ethernet/intel/ice/ice_virtchnl_pf.c | 51 ++++++++++++++++--- 1 file changed, 44 insertions(+), 7 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c index de74a2b4f846..f1802de98b82 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c @@ -768,6 +768,11 @@ bool ice_vf_is_port_vlan_ena(struct ice_vf *vf) return (ice_vf_get_port_vlan_id(vf) || ice_vf_get_port_vlan_prio(vf)); } +static u16 ice_vf_get_port_vlan_tpid(struct ice_vf *vf) +{ + return vf->port_vlan_info.tpid; +} + /** * ice_vf_rebuild_host_vlan_cfg - add VLAN 0 filter or rebuild the Port VLAN * @vf: VF to add MAC filters for @@ -4129,6 +4134,33 @@ static int ice_vc_request_qs_msg(struct ice_vf *vf, u8 *msg) v_ret, (u8 *)vfres, sizeof(*vfres)); } +/** + * ice_is_supported_port_vlan_proto - make sure the vlan_proto is supported + * @hw: hardware structure used to check the VLAN mode + * @vlan_proto: VLAN TPID being checked + * + * If the device is configured in Double VLAN Mode (DVM), then both ETH_P_8021Q + * and ETH_P_8021AD are supported. If the device is configured in Single VLAN + * Mode (SVM), then only ETH_P_8021Q is supported. 
+ */ +static bool +ice_is_supported_port_vlan_proto(struct ice_hw *hw, u16 vlan_proto) +{ + bool is_supported = false; + + switch (vlan_proto) { + case ETH_P_8021Q: + is_supported = true; + break; + case ETH_P_8021AD: + if (ice_is_dvm_ena(hw)) + is_supported = true; + break; + } + + return is_supported; +} + /** * ice_set_vf_port_vlan * @netdev: network interface device structure @@ -4144,6 +4176,7 @@ ice_set_vf_port_vlan(struct net_device *netdev, int vf_id, u16 vlan_id, u8 qos, __be16 vlan_proto) { struct ice_pf *pf = ice_netdev_to_pf(netdev); + u16 local_vlan_proto = ntohs(vlan_proto); struct device *dev; struct ice_vf *vf; int ret; @@ -4158,8 +4191,9 @@ ice_set_vf_port_vlan(struct net_device *netdev, int vf_id, u16 vlan_id, u8 qos, return -EINVAL; } - if (vlan_proto != htons(ETH_P_8021Q)) { - dev_err(dev, "VF VLAN protocol is not supported\n"); + if (!ice_is_supported_port_vlan_proto(&pf->hw, local_vlan_proto)) { + dev_err(dev, "VF VLAN protocol 0x%04x is not supported\n", + local_vlan_proto); return -EPROTONOSUPPORT; } @@ -4169,19 +4203,20 @@ ice_set_vf_port_vlan(struct net_device *netdev, int vf_id, u16 vlan_id, u8 qos, return ret; if (ice_vf_get_port_vlan_prio(vf) == qos && + ice_vf_get_port_vlan_tpid(vf) == local_vlan_proto && ice_vf_get_port_vlan_id(vf) == vlan_id) { /* duplicate request, so just return success */ - dev_dbg(dev, "Duplicate port VLAN %u, QoS %u request\n", - vlan_id, qos); + dev_dbg(dev, "Duplicate port VLAN %u, QoS %u, TPID 0x%04x request\n", + vlan_id, qos, local_vlan_proto); return 0; } mutex_lock(&vf->cfg_lock); - vf->port_vlan_info = ICE_VLAN(ETH_P_8021Q, vlan_id, qos); + vf->port_vlan_info = ICE_VLAN(local_vlan_proto, vlan_id, qos); if (ice_vf_is_port_vlan_ena(vf)) - dev_info(dev, "Setting VLAN %u, QoS %u on VF %d\n", - vlan_id, qos, vf_id); + dev_info(dev, "Setting VLAN %u, QoS %u, TPID 0x%04x on VF %d\n", + vlan_id, qos, local_vlan_proto, vf_id); else dev_info(dev, "Clearing port VLAN on VF %d\n", vf_id); @@ -5904,6 +5939,8 @@ ice_get_vf_cfg(struct net_device *netdev, int vf_id, struct ifla_vf_info *ivi) /* VF configuration for VLAN and applicable QoS */ ivi->vlan = ice_vf_get_port_vlan_id(vf); ivi->qos = ice_vf_get_port_vlan_prio(vf); + if (ice_vf_is_port_vlan_ena(vf)) + ivi->vlan_proto = cpu_to_be16(ice_vf_get_port_vlan_tpid(vf)); ivi->trusted = vf->trusted; ivi->spoofchk = vf->spoofchk; -- 2.20.1 From anthony.l.nguyen at intel.com Thu Dec 2 16:38:46 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Thu, 2 Dec 2021 08:38:46 -0800 Subject: [Intel-wired-lan] [PATCH net-next v3 08/14] ice: Add outer_vlan_ops and VSI specific VLAN ops implementations In-Reply-To: <20211202163852.36436-1-anthony.l.nguyen@intel.com> References: <20211202163852.36436-1-anthony.l.nguyen@intel.com> Message-ID: <20211202163852.36436-8-anthony.l.nguyen@intel.com> From: Brett Creeley Add a new outer_vlan_ops member to the ice_vsi structure as outer VLAN ops are only available when the device is in Double VLAN Mode (DVM). Depending on the VSI type, the requirements for what operations to use/allow differ. By default all VSI's have unsupported inner and outer VSI VLAN ops. This implementation was chosen to prevent unexpected crashes due to null pointer dereferences. Instead, if a VSI calls an unsupported op, it will just return -EOPNOTSUPP. Add implementations to support modifying outer VLAN fields for VSI context. This includes the ability to modify VLAN stripping, insertion, and the port VLAN based on the outer VLAN handling fields of the VSI context. 
These functions should only ever be used if DVM is enabled because that means the firmware supports the outer VLAN fields in the VSI context. If the device is in DVM, then always use the outer_vlan_ops, else use the vlan_ops since the device is in Single VLAN Mode (SVM). Also, move adding the untagged VLAN 0 filter from ice_vsi_setup() to ice_vsi_vlan_setup() as the latter function is specific to the PF and all other VSI types that need an untagged VLAN 0 filter already do this in their specific flows. Without this change, Flow Director is failing to initialize because it does not implement any VSI VLAN ops. Signed-off-by: Brett Creeley Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/ice/Makefile | 3 +- drivers/net/ethernet/intel/ice/ice.h | 3 +- drivers/net/ethernet/intel/ice/ice_eswitch.c | 5 +- drivers/net/ethernet/intel/ice/ice_lib.c | 111 +++++- drivers/net/ethernet/intel/ice/ice_lib.h | 3 + drivers/net/ethernet/intel/ice/ice_main.c | 60 +-- .../ethernet/intel/ice/ice_pf_vsi_vlan_ops.c | 37 ++ .../ethernet/intel/ice/ice_pf_vsi_vlan_ops.h | 13 + .../ethernet/intel/ice/ice_vf_vsi_vlan_ops.c | 72 ++++ .../ethernet/intel/ice/ice_vf_vsi_vlan_ops.h | 16 + .../net/ethernet/intel/ice/ice_virtchnl_pf.c | 101 +++-- .../net/ethernet/intel/ice/ice_virtchnl_pf.h | 6 + .../net/ethernet/intel/ice/ice_vsi_vlan_lib.c | 344 +++++++++++++++++- .../net/ethernet/intel/ice/ice_vsi_vlan_lib.h | 6 + .../net/ethernet/intel/ice/ice_vsi_vlan_ops.c | 107 +++++- .../net/ethernet/intel/ice/ice_vsi_vlan_ops.h | 6 + 16 files changed, 808 insertions(+), 85 deletions(-) create mode 100644 drivers/net/ethernet/intel/ice/ice_pf_vsi_vlan_ops.c create mode 100644 drivers/net/ethernet/intel/ice/ice_pf_vsi_vlan_ops.h create mode 100644 drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c create mode 100644 drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.h diff --git a/drivers/net/ethernet/intel/ice/Makefile b/drivers/net/ethernet/intel/ice/Makefile index c40b3aa1d195..3ece1df919f8 100644 --- a/drivers/net/ethernet/intel/ice/Makefile +++ b/drivers/net/ethernet/intel/ice/Makefile @@ -18,6 +18,7 @@ ice-y := ice_main.o \ ice_txrx_lib.o \ ice_txrx.o \ ice_fltr.o \ + ice_pf_vsi_vlan_ops.o \ ice_vsi_vlan_ops.o \ ice_vsi_vlan_lib.o \ ice_fdir.o \ @@ -32,7 +33,7 @@ ice-y := ice_main.o \ ice_repr.o \ ice_tc_lib.o ice-$(CONFIG_PCI_IOV) += ice_virtchnl_allowlist.o -ice-$(CONFIG_PCI_IOV) += ice_virtchnl_pf.o ice_sriov.o ice_virtchnl_fdir.o +ice-$(CONFIG_PCI_IOV) += ice_virtchnl_pf.o ice_sriov.o ice_virtchnl_fdir.o ice_vf_vsi_vlan_ops.o ice-$(CONFIG_PTP_1588_CLOCK) += ice_ptp.o ice_ptp_hw.o ice_gnss.o ice-$(CONFIG_DCB) += ice_dcb.o ice_dcb_nl.o ice_dcb_lib.o ice-$(CONFIG_RFS_ACCEL) += ice_arfs.o diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h index efcc713ba287..14aaca8dbbb7 100644 --- a/drivers/net/ethernet/intel/ice/ice.h +++ b/drivers/net/ethernet/intel/ice/ice.h @@ -371,7 +371,8 @@ struct ice_vsi { u8 irqs_ready:1; u8 current_isup:1; /* Sync 'link up' logging */ u8 stat_offsets_loaded:1; - struct ice_vsi_vlan_ops vlan_ops; + struct ice_vsi_vlan_ops inner_vlan_ops; + struct ice_vsi_vlan_ops outer_vlan_ops; u16 num_vlan; /* queue information */ diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch.c b/drivers/net/ethernet/intel/ice/ice_eswitch.c index 0ff1a375f2aa..30a00fe59c52 100644 --- a/drivers/net/ethernet/intel/ice/ice_eswitch.c +++ b/drivers/net/ethernet/intel/ice/ice_eswitch.c @@ -116,9 +116,12 @@ static int ice_eswitch_setup_env(struct ice_pf *pf) struct ice_vsi 
*uplink_vsi = pf->switchdev.uplink_vsi; struct net_device *uplink_netdev = uplink_vsi->netdev; struct ice_vsi *ctrl_vsi = pf->switchdev.control_vsi; + struct ice_vsi_vlan_ops *vlan_ops; bool rule_added = false; - ctrl_vsi->vlan_ops.dis_stripping(ctrl_vsi); + vlan_ops = ice_get_compat_vsi_vlan_ops(ctrl_vsi); + if (vlan_ops->dis_stripping(ctrl_vsi)) + return -ENODEV; ice_remove_vsi_fltr(&pf->hw, uplink_vsi->idx); diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index c8991711b754..6a7f107a43c5 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -8,6 +8,7 @@ #include "ice_fltr.h" #include "ice_dcb_lib.h" #include "ice_devlink.h" +#include "ice_vsi_vlan_ops.h" /** * ice_vsi_type_str - maps VSI type enum to string equivalents @@ -2415,17 +2416,6 @@ ice_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi, if (ret) goto unroll_vector_base; - /* Always add VLAN ID 0 switch rule by default. This is needed - * in order to allow all untagged and 0 tagged priority traffic - * if Rx VLAN pruning is enabled. Also there are cases where we - * don't get the call to add VLAN 0 via ice_vlan_rx_add_vid() - * so this handles those cases (i.e. adding the PF to a bridge - * without the 8021q module loaded). - */ - ret = ice_vsi_add_vlan_zero(vsi); - if (ret) - goto unroll_clear_rings; - ice_vsi_map_rings_to_vectors(vsi); /* ICE_VSI_CTRL does not need RSS so skip RSS processing */ @@ -3875,13 +3865,110 @@ int ice_set_link(struct ice_vsi *vsi, bool ena) /** * ice_vsi_add_vlan_zero - add VLAN 0 filter(s) for this VSI * @vsi: VSI used to add VLAN filters + * + * In Single VLAN Mode (SVM), single VLAN filters via ICE_SW_LKUP_VLAN are based + * on the inner VLAN ID, so the VLAN TPID (i.e. 0x8100 or 0x888a8) doesn't + * matter. In Double VLAN Mode (DVM), outer/single VLAN filters via + * ICE_SW_LKUP_VLAN are based on the outer/single VLAN ID + VLAN TPID. + * + * For both modes add a VLAN 0 + no VLAN TPID filter to handle untagged traffic + * when VLAN pruning is enabled. Also, this handles VLAN 0 priority tagged + * traffic in SVM, since the VLAN TPID isn't part of filtering. + * + * If DVM is enabled then an explicit VLAN 0 + VLAN TPID filter needs to be + * added to allow VLAN 0 priority tagged traffic in DVM, since the VLAN TPID is + * part of filtering. */ int ice_vsi_add_vlan_zero(struct ice_vsi *vsi) { + struct ice_vsi_vlan_ops *vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); + struct ice_vlan vlan; + int err; + + vlan = ICE_VLAN(0, 0, 0); + err = vlan_ops->add_vlan(vsi, &vlan); + if (err && err != -EEXIST) + return err; + + /* in SVM both VLAN 0 filters are identical */ + if (!ice_is_dvm_ena(&vsi->back->hw)) + return 0; + + vlan = ICE_VLAN(ETH_P_8021Q, 0, 0); + err = vlan_ops->add_vlan(vsi, &vlan); + if (err && err != -EEXIST) + return err; + + return 0; +} + +/** + * ice_vsi_del_vlan_zero - delete VLAN 0 filter(s) for this VSI + * @vsi: VSI used to add VLAN filters + * + * Delete the VLAN 0 filters in the same manner that they were added in + * ice_vsi_add_vlan_zero. 
+ */ +int ice_vsi_del_vlan_zero(struct ice_vsi *vsi) +{ + struct ice_vsi_vlan_ops *vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); struct ice_vlan vlan; + int err; vlan = ICE_VLAN(0, 0, 0); - return vsi->vlan_ops.add_vlan(vsi, &vlan); + err = vlan_ops->del_vlan(vsi, &vlan); + if (err && err != -EEXIST) + return err; + + /* in SVM both VLAN 0 filters are identical */ + if (!ice_is_dvm_ena(&vsi->back->hw)) + return 0; + + vlan = ICE_VLAN(ETH_P_8021Q, 0, 0); + err = vlan_ops->del_vlan(vsi, &vlan); + if (err && err != -EEXIST) + return err; + + return 0; +} + +/** + * ice_vsi_num_zero_vlans - get number of VLAN 0 filters based on VLAN mode + * @vsi: VSI used to get the VLAN mode + * + * If DVM is enabled then 2 VLAN 0 filters are added, else if SVM is enabled + * then 1 VLAN 0 filter is added. See ice_vsi_add_vlan_zero for more details. + */ +static u16 ice_vsi_num_zero_vlans(struct ice_vsi *vsi) +{ +#define ICE_DVM_NUM_ZERO_VLAN_FLTRS 2 +#define ICE_SVM_NUM_ZERO_VLAN_FLTRS 1 + /* no VLAN 0 filter is created when a port VLAN is active */ + if (vsi->type == ICE_VSI_VF && + ice_vf_is_port_vlan_ena(&vsi->back->vf[vsi->vf_id])) + return 0; + if (ice_is_dvm_ena(&vsi->back->hw)) + return ICE_DVM_NUM_ZERO_VLAN_FLTRS; + else + return ICE_SVM_NUM_ZERO_VLAN_FLTRS; +} + +/** + * ice_vsi_has_non_zero_vlans - check if VSI has any non-zero VLANs + * @vsi: VSI used to determine if any non-zero VLANs have been added + */ +bool ice_vsi_has_non_zero_vlans(struct ice_vsi *vsi) +{ + return (vsi->num_vlan > ice_vsi_num_zero_vlans(vsi)); +} + +/** + * ice_vsi_num_non_zero_vlans - get the number of non-zero VLANs for this VSI + * @vsi: VSI used to get the number of non-zero VLANs added + */ +u16 ice_vsi_num_non_zero_vlans(struct ice_vsi *vsi) +{ + return (vsi->num_vlan - ice_vsi_num_zero_vlans(vsi)); } /** diff --git a/drivers/net/ethernet/intel/ice/ice_lib.h b/drivers/net/ethernet/intel/ice/ice_lib.h index 8f42a3f3a949..0d61f1772ae3 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.h +++ b/drivers/net/ethernet/intel/ice/ice_lib.h @@ -124,6 +124,9 @@ void ice_vsi_ctx_set_allow_override(struct ice_vsi_ctx *ctx); void ice_vsi_ctx_clear_allow_override(struct ice_vsi_ctx *ctx); int ice_vsi_add_vlan_zero(struct ice_vsi *vsi); +int ice_vsi_del_vlan_zero(struct ice_vsi *vsi); +bool ice_vsi_has_non_zero_vlans(struct ice_vsi *vsi); +u16 ice_vsi_num_non_zero_vlans(struct ice_vsi *vsi); bool ice_is_feature_supported(struct ice_pf *pf, enum ice_feature f); void ice_clear_feature_support(struct ice_pf *pf, enum ice_feature f); void ice_init_feature_support(struct ice_pf *pf); diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c index 6843b8e87441..ff2b721e0e45 100644 --- a/drivers/net/ethernet/intel/ice/ice_main.c +++ b/drivers/net/ethernet/intel/ice/ice_main.c @@ -21,6 +21,7 @@ #include "ice_trace.h" #include "ice_eswitch.h" #include "ice_tc_lib.h" +#include "ice_vsi_vlan_ops.h" #define DRV_SUMMARY "Intel(R) Ethernet Connection E800 Series Linux Driver" static const char ice_driver_string[] = DRV_SUMMARY; @@ -249,7 +250,7 @@ static int ice_set_promisc(struct ice_vsi *vsi, u8 promisc_m) if (vsi->type != ICE_VSI_PF) return 0; - if (vsi->num_vlan > 1) + if (ice_vsi_has_non_zero_vlans(vsi)) status = ice_fltr_set_vlan_vsi_promisc(&vsi->back->hw, vsi, promisc_m); else status = ice_fltr_set_vsi_promisc(&vsi->back->hw, vsi->idx, promisc_m, 0); @@ -270,7 +271,7 @@ static int ice_clear_promisc(struct ice_vsi *vsi, u8 promisc_m) if (vsi->type != ICE_VSI_PF) return 0; - if (vsi->num_vlan > 1) + 
if (ice_vsi_has_non_zero_vlans(vsi)) status = ice_fltr_clear_vlan_vsi_promisc(&vsi->back->hw, vsi, promisc_m); else status = ice_fltr_clear_vsi_promisc(&vsi->back->hw, vsi->idx, promisc_m, 0); @@ -286,6 +287,7 @@ static int ice_clear_promisc(struct ice_vsi *vsi, u8 promisc_m) */ static int ice_vsi_sync_fltr(struct ice_vsi *vsi) { + struct ice_vsi_vlan_ops *vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); struct device *dev = ice_pf_to_dev(vsi->back); struct net_device *netdev = vsi->netdev; bool promisc_forced_on = false; @@ -358,7 +360,7 @@ static int ice_vsi_sync_fltr(struct ice_vsi *vsi) /* check for changes in promiscuous modes */ if (changed_flags & IFF_ALLMULTI) { if (vsi->current_netdev_flags & IFF_ALLMULTI) { - if (vsi->num_vlan > 1) + if (ice_vsi_has_non_zero_vlans(vsi)) promisc_m = ICE_MCAST_VLAN_PROMISC_BITS; else promisc_m = ICE_MCAST_PROMISC_BITS; @@ -372,7 +374,7 @@ static int ice_vsi_sync_fltr(struct ice_vsi *vsi) } } else { /* !(vsi->current_netdev_flags & IFF_ALLMULTI) */ - if (vsi->num_vlan > 1) + if (ice_vsi_has_non_zero_vlans(vsi)) promisc_m = ICE_MCAST_VLAN_PROMISC_BITS; else promisc_m = ICE_MCAST_PROMISC_BITS; @@ -401,7 +403,7 @@ static int ice_vsi_sync_fltr(struct ice_vsi *vsi) ~IFF_PROMISC; goto out_promisc; } - vsi->vlan_ops.dis_rx_filtering(vsi); + vlan_ops->dis_rx_filtering(vsi); } } else { /* Clear Rx filter to remove traffic from wire */ @@ -415,7 +417,7 @@ static int ice_vsi_sync_fltr(struct ice_vsi *vsi) goto out_promisc; } if (vsi->num_vlan > 1) - vsi->vlan_ops.ena_rx_filtering(vsi); + vlan_ops->ena_rx_filtering(vsi); } } } @@ -3419,6 +3421,7 @@ static int ice_vlan_rx_add_vid(struct net_device *netdev, __be16 proto, u16 vid) { struct ice_netdev_priv *np = netdev_priv(netdev); + struct ice_vsi_vlan_ops *vlan_ops; struct ice_vsi *vsi = np->vsi; struct ice_vlan vlan; int ret; @@ -3427,9 +3430,11 @@ ice_vlan_rx_add_vid(struct net_device *netdev, __be16 proto, u16 vid) if (!vid) return 0; + vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); + /* Enable VLAN pruning when a VLAN other than 0 is added */ if (!ice_vsi_is_vlan_pruning_ena(vsi)) { - ret = vsi->vlan_ops.ena_rx_filtering(vsi); + ret = vlan_ops->ena_rx_filtering(vsi); if (ret) return ret; } @@ -3438,7 +3443,7 @@ ice_vlan_rx_add_vid(struct net_device *netdev, __be16 proto, u16 vid) * packets aren't pruned by the device's internal switch on Rx */ vlan = ICE_VLAN(be16_to_cpu(proto), vid, 0); - ret = vsi->vlan_ops.add_vlan(vsi, &vlan); + ret = vlan_ops->add_vlan(vsi, &vlan); if (!ret) set_bit(ICE_VSI_VLAN_FLTR_CHANGED, vsi->state); @@ -3457,6 +3462,7 @@ static int ice_vlan_rx_kill_vid(struct net_device *netdev, __be16 proto, u16 vid) { struct ice_netdev_priv *np = netdev_priv(netdev); + struct ice_vsi_vlan_ops *vlan_ops; struct ice_vsi *vsi = np->vsi; struct ice_vlan vlan; int ret; @@ -3465,17 +3471,19 @@ ice_vlan_rx_kill_vid(struct net_device *netdev, __be16 proto, u16 vid) if (!vid) return 0; + vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); + /* Make sure VLAN delete is successful before updating VLAN * information */ vlan = ICE_VLAN(be16_to_cpu(proto), vid, 0); - ret = vsi->vlan_ops.del_vlan(vsi, &vlan); + ret = vlan_ops->del_vlan(vsi, &vlan); if (ret) return ret; /* Disable pruning when VLAN 0 is the only VLAN rule */ if (vsi->num_vlan == 1 && ice_vsi_is_vlan_pruning_ena(vsi)) - vsi->vlan_ops.dis_rx_filtering(vsi); + vlan_ops->dis_rx_filtering(vsi); set_bit(ICE_VSI_VLAN_FLTR_CHANGED, vsi->state); return ret; @@ -5592,6 +5600,7 @@ static int ice_set_features(struct net_device *netdev, netdev_features_t features) { 
struct ice_netdev_priv *np = netdev_priv(netdev); + struct ice_vsi_vlan_ops *vlan_ops; struct ice_vsi *vsi = np->vsi; struct ice_pf *pf = vsi->back; int ret = 0; @@ -5608,6 +5617,8 @@ ice_set_features(struct net_device *netdev, netdev_features_t features) return -EBUSY; } + vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); + /* Multiple features can be changed in one call so keep features in * separate if/else statements to guarantee each feature is checked */ @@ -5619,24 +5630,24 @@ ice_set_features(struct net_device *netdev, netdev_features_t features) if ((features & NETIF_F_HW_VLAN_CTAG_RX) && !(netdev->features & NETIF_F_HW_VLAN_CTAG_RX)) - ret = vsi->vlan_ops.ena_stripping(vsi, ETH_P_8021Q); + ret = vlan_ops->ena_stripping(vsi, ETH_P_8021Q); else if (!(features & NETIF_F_HW_VLAN_CTAG_RX) && (netdev->features & NETIF_F_HW_VLAN_CTAG_RX)) - ret = vsi->vlan_ops.dis_stripping(vsi); + ret = vlan_ops->dis_stripping(vsi); if ((features & NETIF_F_HW_VLAN_CTAG_TX) && !(netdev->features & NETIF_F_HW_VLAN_CTAG_TX)) - ret = vsi->vlan_ops.ena_insertion(vsi, ETH_P_8021Q); + ret = vlan_ops->ena_insertion(vsi, ETH_P_8021Q); else if (!(features & NETIF_F_HW_VLAN_CTAG_TX) && (netdev->features & NETIF_F_HW_VLAN_CTAG_TX)) - ret = vsi->vlan_ops.dis_insertion(vsi); + ret = vlan_ops->dis_insertion(vsi); if ((features & NETIF_F_HW_VLAN_CTAG_FILTER) && !(netdev->features & NETIF_F_HW_VLAN_CTAG_FILTER)) - ret = vsi->vlan_ops.ena_rx_filtering(vsi); + ret = vlan_ops->ena_rx_filtering(vsi); else if (!(features & NETIF_F_HW_VLAN_CTAG_FILTER) && (netdev->features & NETIF_F_HW_VLAN_CTAG_FILTER)) - ret = vsi->vlan_ops.dis_rx_filtering(vsi); + ret = vlan_ops->dis_rx_filtering(vsi); if ((features & NETIF_F_NTUPLE) && !(netdev->features & NETIF_F_NTUPLE)) { @@ -5664,19 +5675,21 @@ ice_set_features(struct net_device *netdev, netdev_features_t features) } /** - * ice_vsi_vlan_setup - Setup VLAN offload properties on a VSI + * ice_vsi_vlan_setup - Setup VLAN offload properties on a PF VSI * @vsi: VSI to setup VLAN properties for */ static int ice_vsi_vlan_setup(struct ice_vsi *vsi) { - int ret = 0; + struct ice_vsi_vlan_ops *vlan_ops; + + vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); if (vsi->netdev->features & NETIF_F_HW_VLAN_CTAG_RX) - ret = vsi->vlan_ops.ena_stripping(vsi, ETH_P_8021Q); + vlan_ops->ena_stripping(vsi, ETH_P_8021Q); if (vsi->netdev->features & NETIF_F_HW_VLAN_CTAG_TX) - ret = vsi->vlan_ops.ena_insertion(vsi, ETH_P_8021Q); + vlan_ops->ena_insertion(vsi, ETH_P_8021Q); - return ret; + return ice_vsi_add_vlan_zero(vsi); } /** @@ -6279,11 +6292,12 @@ static void ice_napi_disable_all(struct ice_vsi *vsi) */ int ice_down(struct ice_vsi *vsi) { - int i, tx_err, rx_err, link_err = 0; + int i, tx_err, rx_err, link_err = 0, vlan_err = 0; WARN_ON(!test_bit(ICE_VSI_DOWN, vsi->state)); if (vsi->netdev && vsi->type == ICE_VSI_PF) { + vlan_err = ice_vsi_del_vlan_zero(vsi); if (!ice_is_e810(&vsi->back->hw)) ice_ptp_link_change(vsi->back, vsi->back->hw.pf_id, false); netif_carrier_off(vsi->netdev); @@ -6325,7 +6339,7 @@ int ice_down(struct ice_vsi *vsi) ice_for_each_rxq(vsi, i) ice_clean_rx_ring(vsi->rx_rings[i]); - if (tx_err || rx_err || link_err) { + if (tx_err || rx_err || link_err || vlan_err) { netdev_err(vsi->netdev, "Failed to close VSI 0x%04X on switch 0x%04X\n", vsi->vsi_num, vsi->vsw->sw_id); return -EIO; diff --git a/drivers/net/ethernet/intel/ice/ice_pf_vsi_vlan_ops.c b/drivers/net/ethernet/intel/ice/ice_pf_vsi_vlan_ops.c new file mode 100644 index 000000000000..b00360ca6e92 --- /dev/null +++ 
b/drivers/net/ethernet/intel/ice/ice_pf_vsi_vlan_ops.c @@ -0,0 +1,37 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (C) 2019-2021, Intel Corporation. */ + +#include "ice_vsi_vlan_ops.h" +#include "ice_vsi_vlan_lib.h" +#include "ice.h" +#include "ice_pf_vsi_vlan_ops.h" + +void ice_pf_vsi_init_vlan_ops(struct ice_vsi *vsi) +{ + struct ice_vsi_vlan_ops *vlan_ops; + + if (ice_is_dvm_ena(&vsi->back->hw)) { + vlan_ops = &vsi->outer_vlan_ops; + + vlan_ops->add_vlan = ice_vsi_add_vlan; + vlan_ops->del_vlan = ice_vsi_del_vlan; + vlan_ops->ena_stripping = ice_vsi_ena_outer_stripping; + vlan_ops->dis_stripping = ice_vsi_dis_outer_stripping; + vlan_ops->ena_insertion = ice_vsi_ena_outer_insertion; + vlan_ops->dis_insertion = ice_vsi_dis_outer_insertion; + vlan_ops->ena_rx_filtering = ice_vsi_ena_rx_vlan_filtering; + vlan_ops->dis_rx_filtering = ice_vsi_dis_rx_vlan_filtering; + } else { + vlan_ops = &vsi->inner_vlan_ops; + + vlan_ops->add_vlan = ice_vsi_add_vlan; + vlan_ops->del_vlan = ice_vsi_del_vlan; + vlan_ops->ena_stripping = ice_vsi_ena_inner_stripping; + vlan_ops->dis_stripping = ice_vsi_dis_inner_stripping; + vlan_ops->ena_insertion = ice_vsi_ena_inner_insertion; + vlan_ops->dis_insertion = ice_vsi_dis_inner_insertion; + vlan_ops->ena_rx_filtering = ice_vsi_ena_rx_vlan_filtering; + vlan_ops->dis_rx_filtering = ice_vsi_dis_rx_vlan_filtering; + } +} + diff --git a/drivers/net/ethernet/intel/ice/ice_pf_vsi_vlan_ops.h b/drivers/net/ethernet/intel/ice/ice_pf_vsi_vlan_ops.h new file mode 100644 index 000000000000..6741ec8c5f6b --- /dev/null +++ b/drivers/net/ethernet/intel/ice/ice_pf_vsi_vlan_ops.h @@ -0,0 +1,13 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2019-2021, Intel Corporation. */ + +#ifndef _ICE_PF_VSI_VLAN_OPS_H_ +#define _ICE_PF_VSI_VLAN_OPS_H_ + +#include "ice_vsi_vlan_ops.h" + +struct ice_vsi; + +void ice_pf_vsi_init_vlan_ops(struct ice_vsi *vsi); + +#endif /* _ICE_PF_VSI_VLAN_OPS_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c b/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c new file mode 100644 index 000000000000..741b041606a2 --- /dev/null +++ b/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c @@ -0,0 +1,72 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (C) 2019-2021, Intel Corporation. 
*/ + +#include "ice_vsi_vlan_ops.h" +#include "ice_vsi_vlan_lib.h" +#include "ice.h" +#include "ice_vf_vsi_vlan_ops.h" +#include "ice_virtchnl_pf.h" + +static int +noop_vlan_arg(struct ice_vsi __always_unused *vsi, + struct ice_vlan __always_unused *vlan) +{ + return 0; +} + +/** + * ice_vf_vsi_init_vlan_ops - Initialize default VSI VLAN ops for VF VSI + * @vsi: VF's VSI being configured + */ +void ice_vf_vsi_init_vlan_ops(struct ice_vsi *vsi) +{ + struct ice_vsi_vlan_ops *vlan_ops; + struct ice_pf *pf = vsi->back; + struct ice_vf *vf; + + vf = &pf->vf[vsi->vf_id]; + + if (ice_is_dvm_ena(&pf->hw)) { + vlan_ops = &vsi->outer_vlan_ops; + + /* outer VLAN ops regardless of port VLAN config */ + vlan_ops->add_vlan = ice_vsi_add_vlan; + vlan_ops->ena_rx_filtering = ice_vsi_ena_rx_vlan_filtering; + vlan_ops->dis_rx_filtering = ice_vsi_dis_rx_vlan_filtering; + vlan_ops->ena_tx_filtering = ice_vsi_ena_tx_vlan_filtering; + vlan_ops->dis_tx_filtering = ice_vsi_dis_tx_vlan_filtering; + + if (ice_vf_is_port_vlan_ena(vf)) { + /* setup outer VLAN ops */ + vlan_ops->set_port_vlan = ice_vsi_set_outer_port_vlan; + + /* setup inner VLAN ops */ + vlan_ops = &vsi->inner_vlan_ops; + vlan_ops->add_vlan = noop_vlan_arg; + vlan_ops->del_vlan = noop_vlan_arg; + vlan_ops->ena_stripping = ice_vsi_ena_inner_stripping; + vlan_ops->dis_stripping = ice_vsi_dis_inner_stripping; + vlan_ops->ena_insertion = ice_vsi_ena_inner_insertion; + vlan_ops->dis_insertion = ice_vsi_dis_inner_insertion; + } + } else { + vlan_ops = &vsi->inner_vlan_ops; + + /* inner VLAN ops regardless of port VLAN config */ + vlan_ops->add_vlan = ice_vsi_add_vlan; + vlan_ops->ena_rx_filtering = ice_vsi_ena_rx_vlan_filtering; + vlan_ops->dis_rx_filtering = ice_vsi_dis_rx_vlan_filtering; + vlan_ops->ena_tx_filtering = ice_vsi_ena_tx_vlan_filtering; + vlan_ops->dis_tx_filtering = ice_vsi_dis_tx_vlan_filtering; + + if (ice_vf_is_port_vlan_ena(vf)) { + vlan_ops->set_port_vlan = ice_vsi_set_inner_port_vlan; + } else { + vlan_ops->del_vlan = ice_vsi_del_vlan; + vlan_ops->ena_stripping = ice_vsi_ena_inner_stripping; + vlan_ops->dis_stripping = ice_vsi_dis_inner_stripping; + vlan_ops->ena_insertion = ice_vsi_ena_inner_insertion; + vlan_ops->dis_insertion = ice_vsi_dis_inner_insertion; + } + } +} diff --git a/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.h b/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.h new file mode 100644 index 000000000000..8ea13628a5e1 --- /dev/null +++ b/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.h @@ -0,0 +1,16 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2019-2021, Intel Corporation. 
*/ + +#ifndef _ICE_VF_VSI_VLAN_OPS_H_ +#define _ICE_VF_VSI_VLAN_OPS_H_ + +#include "ice_vsi_vlan_ops.h" + +struct ice_vsi; + +#ifdef CONFIG_PCI_IOV +void ice_vf_vsi_init_vlan_ops(struct ice_vsi *vsi); +#else +static inline void ice_vf_vsi_init_vlan_ops(struct ice_vsi *vsi) { } +#endif /* CONFIG_PCI_IOV */ +#endif /* _ICE_PF_VSI_VLAN_OPS_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c index e576cd201a48..100c86c8ad9a 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c @@ -10,6 +10,7 @@ #include "ice_eswitch.h" #include "ice_virtchnl_allowlist.h" #include "ice_flex_pipe.h" +#include "ice_vf_vsi_vlan_ops.h" #define FIELD_SELECTOR(proto_hdr_field) \ BIT((proto_hdr_field) & PROTO_HDR_FIELD_MASK) @@ -761,7 +762,7 @@ static u8 ice_vf_get_port_vlan_prio(struct ice_vf *vf) return vf->port_vlan_info.prio; } -static bool ice_vf_is_port_vlan_ena(struct ice_vf *vf) +bool ice_vf_is_port_vlan_ena(struct ice_vf *vf) { return (ice_vf_get_port_vlan_id(vf) || ice_vf_get_port_vlan_prio(vf)); } @@ -769,26 +770,30 @@ static bool ice_vf_is_port_vlan_ena(struct ice_vf *vf) /** * ice_vf_rebuild_host_vlan_cfg - add VLAN 0 filter or rebuild the Port VLAN * @vf: VF to add MAC filters for + * @vsi: Pointer to VSI * * Called after a VF VSI has been re-added/rebuilt during reset. The PF driver * always re-adds either a VLAN 0 or port VLAN based filter after reset. */ -static int ice_vf_rebuild_host_vlan_cfg(struct ice_vf *vf) +static int ice_vf_rebuild_host_vlan_cfg(struct ice_vf *vf, struct ice_vsi *vsi) { + struct ice_vsi_vlan_ops *vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); struct device *dev = ice_pf_to_dev(vf->pf); - struct ice_vsi *vsi = ice_get_vf_vsi(vf); int err; if (ice_vf_is_port_vlan_ena(vf)) { - err = vsi->vlan_ops.set_port_vlan(vsi, &vf->port_vlan_info); + err = vlan_ops->set_port_vlan(vsi, &vf->port_vlan_info); if (err) { dev_err(dev, "failed to configure port VLAN via VSI parameters for VF %u, error %d\n", vf->vf_id, err); return err; } + + err = vlan_ops->add_vlan(vsi, &vf->port_vlan_info); + } else { + err = ice_vsi_add_vlan_zero(vsi); } - err = vsi->vlan_ops.add_vlan(vsi, &vf->port_vlan_info); if (err) { dev_err(dev, "failed to add VLAN %u filter for VF %u during VF rebuild, error %d\n", ice_vf_is_port_vlan_ena(vf) ? 
@@ -834,9 +839,12 @@ static int ice_cfg_mac_antispoof(struct ice_vsi *vsi, bool enable) */ static int ice_vsi_ena_spoofchk(struct ice_vsi *vsi) { + struct ice_vsi_vlan_ops *vlan_ops; int err; - err = vsi->vlan_ops.ena_tx_filtering(vsi); + vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); + + err = vlan_ops->ena_tx_filtering(vsi); if (err) return err; @@ -849,9 +857,12 @@ static int ice_vsi_ena_spoofchk(struct ice_vsi *vsi) */ static int ice_vsi_dis_spoofchk(struct ice_vsi *vsi) { + struct ice_vsi_vlan_ops *vlan_ops; int err; - err = vsi->vlan_ops.dis_tx_filtering(vsi); + vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); + + err = vlan_ops->dis_tx_filtering(vsi); if (err) return err; @@ -1268,7 +1279,7 @@ static int ice_vf_set_vsi_promisc(struct ice_vf *vf, struct ice_vsi *vsi, u8 pro if (ice_vf_is_port_vlan_ena(vf)) status = ice_fltr_set_vsi_promisc(hw, vsi->idx, promisc_m, ice_vf_get_port_vlan_id(vf)); - else if (vsi->num_vlan > 1) + else if (ice_vsi_has_non_zero_vlans(vsi)) status = ice_fltr_set_vlan_vsi_promisc(hw, vsi, promisc_m); else status = ice_fltr_set_vsi_promisc(hw, vsi->idx, promisc_m, 0); @@ -1290,7 +1301,7 @@ static int ice_vf_clear_vsi_promisc(struct ice_vf *vf, struct ice_vsi *vsi, u8 p if (ice_vf_is_port_vlan_ena(vf)) status = ice_fltr_clear_vsi_promisc(hw, vsi->idx, promisc_m, ice_vf_get_port_vlan_id(vf)); - else if (vsi->num_vlan > 1) + else if (ice_vsi_has_non_zero_vlans(vsi)) status = ice_fltr_clear_vlan_vsi_promisc(hw, vsi, promisc_m); else status = ice_fltr_clear_vsi_promisc(hw, vsi->idx, promisc_m, 0); @@ -1375,7 +1386,7 @@ static void ice_vf_rebuild_host_cfg(struct ice_vf *vf) dev_err(dev, "failed to rebuild default MAC configuration for VF %d\n", vf->vf_id); - if (ice_vf_rebuild_host_vlan_cfg(vf)) + if (ice_vf_rebuild_host_vlan_cfg(vf, vsi)) dev_err(dev, "failed to rebuild VLAN configuration for VF %u\n", vf->vf_id); @@ -3022,6 +3033,7 @@ static int ice_vc_cfg_promiscuous_mode_msg(struct ice_vf *vf, u8 *msg) bool rm_promisc, alluni = false, allmulti = false; struct virtchnl_promisc_info *info = (struct virtchnl_promisc_info *)msg; + struct ice_vsi_vlan_ops *vlan_ops; int mcast_err = 0, ucast_err = 0; struct ice_pf *pf = vf->pf; struct ice_vsi *vsi; @@ -3060,16 +3072,15 @@ static int ice_vc_cfg_promiscuous_mode_msg(struct ice_vf *vf, u8 *msg) rm_promisc = !allmulti && !alluni; - if (vsi->num_vlan || ice_vf_is_port_vlan_ena(vf)) { - if (rm_promisc) - ret = vsi->vlan_ops.ena_rx_filtering(vsi); - else - ret = vsi->vlan_ops.dis_rx_filtering(vsi); - if (ret) { - dev_err(dev, "Failed to configure VLAN pruning in promiscuous mode\n"); - v_ret = VIRTCHNL_STATUS_ERR_PARAM; - goto error_param; - } + vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); + if (rm_promisc) + ret = vlan_ops->ena_rx_filtering(vsi); + else + ret = vlan_ops->dis_rx_filtering(vsi); + if (ret) { + dev_err(dev, "Failed to configure VLAN pruning in promiscuous mode\n"); + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto error_param; } if (!test_bit(ICE_FLAG_VF_TRUE_PROMISC_ENA, pf->flags)) { @@ -3096,7 +3107,8 @@ static int ice_vc_cfg_promiscuous_mode_msg(struct ice_vf *vf, u8 *msg) } else { u8 mcast_m, ucast_m; - if (ice_vf_is_port_vlan_ena(vf) || vsi->num_vlan > 1) { + if (ice_vf_is_port_vlan_ena(vf) || + ice_vsi_has_non_zero_vlans(vsi)) { mcast_m = ICE_MCAST_VLAN_PROMISC_BITS; ucast_m = ICE_UCAST_VLAN_PROMISC_BITS; } else { @@ -4163,6 +4175,27 @@ static bool ice_vf_vlan_offload_ena(u32 caps) return !!(caps & VIRTCHNL_VF_OFFLOAD_VLAN); } +/** + * ice_vf_has_max_vlans - check if VF already has the max allowed VLAN filters + * 
@vf: VF to check against + * @vsi: VF's VSI + * + * If the VF is trusted then the VF is allowed to add as many VLANs as it + * wants to, so return false. + * + * When the VF is untrusted compare the number of non-zero VLANs + 1 to the max + * allowed VLANs for an untrusted VF. Return the result of this comparison. + */ +static bool ice_vf_has_max_vlans(struct ice_vf *vf, struct ice_vsi *vsi) +{ + if (ice_is_vf_trusted(vf)) + return false; + +#define ICE_VF_ADDED_VLAN_ZERO_FLTRS 1 + return ((ice_vsi_num_non_zero_vlans(vsi) + + ICE_VF_ADDED_VLAN_ZERO_FLTRS) >= ICE_MAX_VLAN_PER_VF); +} + /** * ice_vc_process_vlan_msg * @vf: pointer to the VF info @@ -4176,6 +4209,7 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS; struct virtchnl_vlan_filter_list *vfl = (struct virtchnl_vlan_filter_list *)msg; + struct ice_vsi_vlan_ops *vlan_ops; struct ice_pf *pf = vf->pf; bool vlan_promisc = false; struct ice_vsi *vsi; @@ -4217,8 +4251,7 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) goto error_param; } - if (add_v && !ice_is_vf_trusted(vf) && - vsi->num_vlan >= ICE_MAX_VLAN_PER_VF) { + if (add_v && ice_vf_has_max_vlans(vf, vsi)) { dev_info(dev, "VF-%d is not trusted, switch the VF to trusted mode, in order to add more VLAN addresses\n", vf->vf_id); /* There is no need to let VF know about being not trusted, @@ -4237,13 +4270,13 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) test_bit(ICE_FLAG_VF_TRUE_PROMISC_ENA, pf->flags)) vlan_promisc = true; + vlan_ops = ice_get_compat_vsi_vlan_ops(vsi); if (add_v) { for (i = 0; i < vfl->num_elements; i++) { u16 vid = vfl->vlan_id[i]; struct ice_vlan vlan; - if (!ice_is_vf_trusted(vf) && - vsi->num_vlan >= ICE_MAX_VLAN_PER_VF) { + if (ice_vf_has_max_vlans(vf, vsi)) { dev_info(dev, "VF-%d is not trusted, switch the VF to trusted mode, in order to add more VLAN addresses\n", vf->vf_id); /* There is no need to let VF know about being @@ -4261,7 +4294,7 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) continue; vlan = ICE_VLAN(ETH_P_8021Q, vid, 0); - status = vsi->vlan_ops.add_vlan(vsi, &vlan); + status = vsi->inner_vlan_ops.add_vlan(vsi, &vlan); if (status) { v_ret = VIRTCHNL_STATUS_ERR_PARAM; goto error_param; @@ -4270,7 +4303,7 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) /* Enable VLAN pruning when non-zero VLAN is added */ if (!vlan_promisc && vid && !ice_vsi_is_vlan_pruning_ena(vsi)) { - status = vsi->vlan_ops.ena_rx_filtering(vsi); + status = vlan_ops->ena_rx_filtering(vsi); if (status) { v_ret = VIRTCHNL_STATUS_ERR_PARAM; dev_err(dev, "Enable VLAN pruning on VLAN ID: %d failed error-%d\n", @@ -4314,16 +4347,16 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) continue; vlan = ICE_VLAN(ETH_P_8021Q, vid, 0); - status = vsi->vlan_ops.del_vlan(vsi, &vlan); + status = vsi->inner_vlan_ops.del_vlan(vsi, &vlan); if (status) { v_ret = VIRTCHNL_STATUS_ERR_PARAM; goto error_param; } /* Disable VLAN pruning when only VLAN 0 is left */ - if (vsi->num_vlan == 1 && + if (!ice_vsi_has_non_zero_vlans(vsi) && ice_vsi_is_vlan_pruning_ena(vsi)) - status = vsi->vlan_ops.dis_rx_filtering(vsi); + status = vlan_ops->dis_rx_filtering(vsi); /* Disable Unicast/Multicast VLAN promiscuous mode */ if (vlan_promisc) { @@ -4392,7 +4425,7 @@ static int ice_vc_ena_vlan_stripping(struct ice_vf *vf) } vsi = ice_get_vf_vsi(vf); - if (vsi->vlan_ops.ena_stripping(vsi, 
ETH_P_8021Q)) + if (vsi->inner_vlan_ops.ena_stripping(vsi, ETH_P_8021Q)) v_ret = VIRTCHNL_STATUS_ERR_PARAM; error_param: @@ -4427,7 +4460,7 @@ static int ice_vc_dis_vlan_stripping(struct ice_vf *vf) goto error_param; } - if (vsi->vlan_ops.dis_stripping(vsi)) + if (vsi->inner_vlan_ops.dis_stripping(vsi)) v_ret = VIRTCHNL_STATUS_ERR_PARAM; error_param: @@ -4457,9 +4490,9 @@ static int ice_vf_init_vlan_stripping(struct ice_vf *vf) return 0; if (ice_vf_vlan_offload_ena(vf->driver_caps)) - return vsi->vlan_ops.ena_stripping(vsi, ETH_P_8021Q); + return vsi->inner_vlan_ops.ena_stripping(vsi, ETH_P_8021Q); else - return vsi->vlan_ops.dis_stripping(vsi); + return vsi->inner_vlan_ops.dis_stripping(vsi); } static struct ice_vc_vf_ops ice_vc_vf_dflt_ops = { diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h index b06ca1f97833..4110847e0699 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h @@ -211,6 +211,7 @@ int ice_vc_send_msg_to_vf(struct ice_vf *vf, u32 v_opcode, enum virtchnl_status_code v_retval, u8 *msg, u16 msglen); bool ice_vc_isvalid_vsi_id(struct ice_vf *vf, u16 vsi_id); +bool ice_vf_is_port_vlan_ena(struct ice_vf *vf); #else /* CONFIG_PCI_IOV */ static inline void ice_process_vflr_event(struct ice_pf *pf) { } static inline void ice_free_vfs(struct ice_pf *pf) { } @@ -343,5 +344,10 @@ static inline bool ice_is_any_vf_in_promisc(struct ice_pf __always_unused *pf) { return false; } + +static inline bool ice_vf_is_port_vlan_ena(struct ice_vf __always_unused *vf) +{ + return false; +} #endif /* CONFIG_PCI_IOV */ #endif /* _ICE_VIRTCHNL_PF_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c index 0b130505b68a..62a2630d6fab 100644 --- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c @@ -23,7 +23,8 @@ static void print_invalid_tpid(struct ice_vsi *vsi, u16 tpid) */ static bool validate_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) { - if (vlan->tpid != ETH_P_8021Q && (vlan->tpid || vlan->vid)) { + if (vlan->tpid != ETH_P_8021Q && vlan->tpid != ETH_P_8021AD && + vlan->tpid != ETH_P_QINQ1 && (vlan->tpid || vlan->vid)) { print_invalid_tpid(vsi, vlan->tpid); return false; } @@ -366,3 +367,344 @@ int ice_vsi_dis_tx_vlan_filtering(struct ice_vsi *vsi) { return ice_cfg_vlan_antispoof(vsi, false); } + +/** + * tpid_to_vsi_outer_vlan_type - convert from TPID to VSI context based tag_type + * @tpid: tpid used to translate into VSI context based tag_type + * @tag_type: output variable to hold the VSI context based tag type + */ +static int tpid_to_vsi_outer_vlan_type(u16 tpid, u8 *tag_type) +{ + switch (tpid) { + case ETH_P_8021Q: + *tag_type = ICE_AQ_VSI_OUTER_TAG_VLAN_8100; + break; + case ETH_P_8021AD: + *tag_type = ICE_AQ_VSI_OUTER_TAG_STAG; + break; + case ETH_P_QINQ1: + *tag_type = ICE_AQ_VSI_OUTER_TAG_VLAN_9100; + break; + default: + *tag_type = 0; + return -EINVAL; + } + + return 0; +} + +/** + * ice_vsi_ena_outer_stripping - enable outer VLAN stripping + * @vsi: VSI to configure + * @tpid: TPID to enable outer VLAN stripping for + * + * Enable outer VLAN stripping via VSI context. This function should only be + * used if DVM is supported. Also, this function should never be called directly + * as it should be part of ice_vsi_vlan_ops if it's needed. 
+ * + * Since the VSI context only supports a single TPID for insertion and + * stripping, setting the TPID for stripping will affect the TPID for insertion. + * Callers need to be aware of this limitation. + * + * Only modify outer VLAN stripping settings and the VLAN TPID. Outer VLAN + * insertion settings are unmodified. + * + * This enables hardware to strip a VLAN tag with the specified TPID to be + * stripped from the packet and placed in the receive descriptor. + */ +int ice_vsi_ena_outer_stripping(struct ice_vsi *vsi, u16 tpid) +{ + struct ice_hw *hw = &vsi->back->hw; + struct ice_vsi_ctx *ctxt; + u8 tag_type; + int err; + + /* do not allow modifying VLAN stripping when a port VLAN is configured + * on this VSI + */ + if (vsi->info.port_based_outer_vlan) + return 0; + + if (tpid_to_vsi_outer_vlan_type(tpid, &tag_type)) + return -EINVAL; + + ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL); + if (!ctxt) + return -ENOMEM; + + ctxt->info.valid_sections = + cpu_to_le16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID); + /* clear current outer VLAN strip settings */ + ctxt->info.outer_vlan_flags = vsi->info.outer_vlan_flags & + ~(ICE_AQ_VSI_OUTER_VLAN_EMODE_M | ICE_AQ_VSI_OUTER_TAG_TYPE_M); + ctxt->info.outer_vlan_flags |= + ((ICE_AQ_VSI_OUTER_VLAN_EMODE_SHOW_BOTH << + ICE_AQ_VSI_OUTER_VLAN_EMODE_S) | + ((tag_type << ICE_AQ_VSI_OUTER_TAG_TYPE_S) & + ICE_AQ_VSI_OUTER_TAG_TYPE_M)); + + err = ice_update_vsi(hw, vsi->idx, ctxt, NULL); + if (err) + dev_err(ice_pf_to_dev(vsi->back), "update VSI for enabling outer VLAN stripping failed, err %d aq_err %s\n", + err, ice_aq_str(hw->adminq.sq_last_status)); + else + vsi->info.outer_vlan_flags = ctxt->info.outer_vlan_flags; + + kfree(ctxt); + return err; +} + +/** + * ice_vsi_dis_outer_stripping - disable outer VLAN stripping + * @vsi: VSI to configure + * + * Disable outer VLAN stripping via VSI context. This function should only be + * used if DVM is supported. Also, this function should never be called directly + * as it should be part of ice_vsi_vlan_ops if it's needed. + * + * Only modify the outer VLAN stripping settings. The VLAN TPID and outer VLAN + * insertion settings are unmodified. + * + * This tells the hardware to not strip any VLAN tagged packets, thus leaving + * them in the packet. This enables software offloaded VLAN stripping and + * disables hardware offloaded VLAN stripping. + */ +int ice_vsi_dis_outer_stripping(struct ice_vsi *vsi) +{ + struct ice_hw *hw = &vsi->back->hw; + struct ice_vsi_ctx *ctxt; + int err; + + if (vsi->info.port_based_outer_vlan) + return 0; + + ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL); + if (!ctxt) + return -ENOMEM; + + ctxt->info.valid_sections = + cpu_to_le16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID); + /* clear current outer VLAN strip settings */ + ctxt->info.outer_vlan_flags = vsi->info.outer_vlan_flags & + ~ICE_AQ_VSI_OUTER_VLAN_EMODE_M; + ctxt->info.outer_vlan_flags |= ICE_AQ_VSI_OUTER_VLAN_EMODE_NOTHING << + ICE_AQ_VSI_OUTER_VLAN_EMODE_S; + + err = ice_update_vsi(hw, vsi->idx, ctxt, NULL); + if (err) + dev_err(ice_pf_to_dev(vsi->back), "update VSI for disabling outer VLAN stripping failed, err %d aq_err %s\n", + err, ice_aq_str(hw->adminq.sq_last_status)); + else + vsi->info.outer_vlan_flags = ctxt->info.outer_vlan_flags; + + kfree(ctxt); + return err; +} + +/** + * ice_vsi_ena_outer_insertion - enable outer VLAN insertion + * @vsi: VSI to configure + * @tpid: TPID to enable outer VLAN insertion for + * + * Enable outer VLAN insertion via VSI context. This function should only be + * used if DVM is supported. 
Also, this function should never be called directly + * as it should be part of ice_vsi_vlan_ops if it's needed. + * + * Since the VSI context only supports a single TPID for insertion and + * stripping, setting the TPID for insertion will affect the TPID for stripping. + * Callers need to be aware of this limitation. + * + * Only modify outer VLAN insertion settings and the VLAN TPID. Outer VLAN + * stripping settings are unmodified. + * + * This allows a VLAN tag with the specified TPID to be inserted in the transmit + * descriptor. + */ +int ice_vsi_ena_outer_insertion(struct ice_vsi *vsi, u16 tpid) +{ + struct ice_hw *hw = &vsi->back->hw; + struct ice_vsi_ctx *ctxt; + u8 tag_type; + int err; + + if (vsi->info.port_based_outer_vlan) + return 0; + + if (tpid_to_vsi_outer_vlan_type(tpid, &tag_type)) + return -EINVAL; + + ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL); + if (!ctxt) + return -ENOMEM; + + ctxt->info.valid_sections = + cpu_to_le16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID); + /* clear current outer VLAN insertion settings */ + ctxt->info.outer_vlan_flags = vsi->info.outer_vlan_flags & + ~(ICE_AQ_VSI_OUTER_VLAN_PORT_BASED_INSERT | + ICE_AQ_VSI_OUTER_VLAN_BLOCK_TX_DESC | + ICE_AQ_VSI_OUTER_VLAN_TX_MODE_M | + ICE_AQ_VSI_OUTER_TAG_TYPE_M); + ctxt->info.outer_vlan_flags |= + ((ICE_AQ_VSI_OUTER_VLAN_TX_MODE_ALL << + ICE_AQ_VSI_OUTER_VLAN_TX_MODE_S) & + ICE_AQ_VSI_OUTER_VLAN_TX_MODE_M) | + ((tag_type << ICE_AQ_VSI_OUTER_TAG_TYPE_S) & + ICE_AQ_VSI_OUTER_TAG_TYPE_M); + + err = ice_update_vsi(hw, vsi->idx, ctxt, NULL); + if (err) + dev_err(ice_pf_to_dev(vsi->back), "update VSI for enabling outer VLAN insertion failed, err %d aq_err %s\n", + err, ice_aq_str(hw->adminq.sq_last_status)); + else + vsi->info.outer_vlan_flags = ctxt->info.outer_vlan_flags; + + kfree(ctxt); + return err; +} + +/** + * ice_vsi_dis_outer_insertion - disable outer VLAN insertion + * @vsi: VSI to configure + * + * Disable outer VLAN insertion via VSI context. This function should only be + * used if DVM is supported. Also, this function should never be called directly + * as it should be part of ice_vsi_vlan_ops if it's needed. + * + * Only modify the outer VLAN insertion settings. The VLAN TPID and outer VLAN + * settings are unmodified. + * + * This tells the hardware to not allow any VLAN tagged packets in the transmit + * descriptor. This enables software offloaded VLAN insertion and disables + * hardware offloaded VLAN insertion. 
+ */ +int ice_vsi_dis_outer_insertion(struct ice_vsi *vsi) +{ + struct ice_hw *hw = &vsi->back->hw; + struct ice_vsi_ctx *ctxt; + int err; + + if (vsi->info.port_based_outer_vlan) + return 0; + + ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL); + if (!ctxt) + return -ENOMEM; + + ctxt->info.valid_sections = + cpu_to_le16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID); + /* clear current outer VLAN insertion settings */ + ctxt->info.outer_vlan_flags = vsi->info.outer_vlan_flags & + ~(ICE_AQ_VSI_OUTER_VLAN_PORT_BASED_INSERT | + ICE_AQ_VSI_OUTER_VLAN_TX_MODE_M); + ctxt->info.outer_vlan_flags |= + ICE_AQ_VSI_OUTER_VLAN_BLOCK_TX_DESC | + ((ICE_AQ_VSI_OUTER_VLAN_TX_MODE_ALL << + ICE_AQ_VSI_OUTER_VLAN_TX_MODE_S) & + ICE_AQ_VSI_OUTER_VLAN_TX_MODE_M); + + err = ice_update_vsi(hw, vsi->idx, ctxt, NULL); + if (err) + dev_err(ice_pf_to_dev(vsi->back), "update VSI for disabling outer VLAN insertion failed, err %d aq_err %s\n", + err, ice_aq_str(hw->adminq.sq_last_status)); + else + vsi->info.outer_vlan_flags = ctxt->info.outer_vlan_flags; + + kfree(ctxt); + return err; +} + +/** + * __ice_vsi_set_outer_port_vlan - set the outer port VLAN and related settings + * @vsi: VSI to configure + * @vlan_info: packed u16 that contains the VLAN prio and ID + * @tpid: TPID of the port VLAN + * + * Set the port VLAN prio, ID, and TPID. + * + * Enable VLAN pruning so the VSI doesn't receive any traffic that doesn't match + * a VLAN prune rule. The caller should take care to add a VLAN prune rule that + * matches the port VLAN ID and TPID. + * + * Tell hardware to strip outer VLAN tagged packets on receive and don't put + * them in the receive descriptor. VSI(s) in port VLANs should not be aware of + * the port VLAN ID or TPID they are assigned to. + * + * Tell hardware to prevent outer VLAN tag insertion on transmit and only allow + * untagged outer packets from the transmit descriptor. + * + * Also, tell the hardware to insert the port VLAN on transmit. 
+ */ +static int +__ice_vsi_set_outer_port_vlan(struct ice_vsi *vsi, u16 vlan_info, u16 tpid) +{ + struct ice_hw *hw = &vsi->back->hw; + struct ice_vsi_ctx *ctxt; + u8 tag_type; + int err; + + if (tpid_to_vsi_outer_vlan_type(tpid, &tag_type)) + return -EINVAL; + + ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL); + if (!ctxt) + return -ENOMEM; + + ctxt->info = vsi->info; + + ctxt->info.sw_flags2 |= ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA; + + ctxt->info.port_based_outer_vlan = cpu_to_le16(vlan_info); + ctxt->info.outer_vlan_flags = + (ICE_AQ_VSI_OUTER_VLAN_EMODE_SHOW << + ICE_AQ_VSI_OUTER_VLAN_EMODE_S) | + ((tag_type << ICE_AQ_VSI_OUTER_TAG_TYPE_S) & + ICE_AQ_VSI_OUTER_TAG_TYPE_M) | + ICE_AQ_VSI_OUTER_VLAN_BLOCK_TX_DESC | + (ICE_AQ_VSI_OUTER_VLAN_TX_MODE_ACCEPTUNTAGGED << + ICE_AQ_VSI_OUTER_VLAN_TX_MODE_S) | + ICE_AQ_VSI_OUTER_VLAN_PORT_BASED_INSERT; + + ctxt->info.valid_sections = + cpu_to_le16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID | + ICE_AQ_VSI_PROP_SW_VALID); + + err = ice_update_vsi(hw, vsi->idx, ctxt, NULL); + if (err) { + dev_err(ice_pf_to_dev(vsi->back), "update VSI for setting outer port based VLAN failed, err %d aq_err %s\n", + err, ice_aq_str(hw->adminq.sq_last_status)); + } else { + vsi->info.port_based_outer_vlan = ctxt->info.port_based_outer_vlan; + vsi->info.outer_vlan_flags = ctxt->info.outer_vlan_flags; + vsi->info.sw_flags2 = ctxt->info.sw_flags2; + } + + kfree(ctxt); + return err; +} + +/** + * ice_vsi_set_outer_port_vlan - public version of __ice_vsi_set_outer_port_vlan + * @vsi: VSI to configure + * @vlan: ice_vlan structure used to set the port VLAN + * + * Set the outer port VLAN via VSI context. This function should only be + * used if DVM is supported. Also, this function should never be called directly + * as it should be part of ice_vsi_vlan_ops if it's needed. + * + * This function does not support clearing the port VLAN as there is currently + * no use case for this. + * + * Use the ice_vlan structure passed in to set this VSI in a port VLAN. + */ +int ice_vsi_set_outer_port_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) +{ + u16 port_vlan_info; + + if (vlan->prio > (VLAN_PRIO_MASK >> VLAN_PRIO_SHIFT)) + return -EINVAL; + + port_vlan_info = vlan->vid | (vlan->prio << VLAN_PRIO_SHIFT); + + return __ice_vsi_set_outer_port_vlan(vsi, port_vlan_info, vlan->tpid); +} diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h index a10671133e36..f459909490ec 100644 --- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h @@ -23,4 +23,10 @@ int ice_vsi_dis_rx_vlan_filtering(struct ice_vsi *vsi); int ice_vsi_ena_tx_vlan_filtering(struct ice_vsi *vsi); int ice_vsi_dis_tx_vlan_filtering(struct ice_vsi *vsi); +int ice_vsi_ena_outer_stripping(struct ice_vsi *vsi, u16 tpid); +int ice_vsi_dis_outer_stripping(struct ice_vsi *vsi); +int ice_vsi_ena_outer_insertion(struct ice_vsi *vsi, u16 tpid); +int ice_vsi_dis_outer_insertion(struct ice_vsi *vsi); +int ice_vsi_set_outer_port_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan); + #endif /* _ICE_VSI_VLAN_LIB_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.c b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.c index 6a6b49581c70..4a6c850d83ac 100644 --- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.c +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.c @@ -1,20 +1,103 @@ // SPDX-License-Identifier: GPL-2.0 /* Copyright (C) 2019-2021, Intel Corporation. 
*/ -#include "ice_vsi_vlan_ops.h" +#include "ice_pf_vsi_vlan_ops.h" +#include "ice_vf_vsi_vlan_ops.h" +#include "ice_lib.h" #include "ice.h" +static int +op_unsupported_vlan_arg(struct ice_vsi * __always_unused vsi, + struct ice_vlan * __always_unused vlan) +{ + return -EOPNOTSUPP; +} + +static int +op_unsupported_tpid_arg(struct ice_vsi *__always_unused vsi, + u16 __always_unused tpid) +{ + return -EOPNOTSUPP; +} + +static int op_unsupported(struct ice_vsi *__always_unused vsi) +{ + return -EOPNOTSUPP; +} + +/* If any new ops are added to the VSI VLAN ops interface then an unsupported + * implementation should be set here. + */ +static struct ice_vsi_vlan_ops ops_unsupported = { + .add_vlan = op_unsupported_vlan_arg, + .del_vlan = op_unsupported_vlan_arg, + .ena_stripping = op_unsupported_tpid_arg, + .dis_stripping = op_unsupported, + .ena_insertion = op_unsupported_tpid_arg, + .dis_insertion = op_unsupported, + .ena_rx_filtering = op_unsupported, + .dis_rx_filtering = op_unsupported, + .ena_tx_filtering = op_unsupported, + .dis_tx_filtering = op_unsupported, + .set_port_vlan = op_unsupported_vlan_arg, +}; + +/** + * ice_vsi_init_unsupported_vlan_ops - init all VSI VLAN ops to unsupported + * @vsi: VSI to initialize VSI VLAN ops to unsupported for + * + * By default all inner and outer VSI VLAN ops return -EOPNOTSUPP. This was done + * as oppsed to leaving the ops null to prevent unexpected crashes. Instead if + * an unsupported VSI VLAN op is called it will just return -EOPNOTSUPP. + * + */ +static void ice_vsi_init_unsupported_vlan_ops(struct ice_vsi *vsi) +{ + vsi->outer_vlan_ops = ops_unsupported; + vsi->inner_vlan_ops = ops_unsupported; +} + +/** + * ice_vsi_init_vlan_ops - initialize type specific VSI VLAN ops + * @vsi: VSI to initialize ops for + * + * If any VSI types are added and/or require different ops than the PF or VF VSI + * then they will have to add a case here to handle that. Also, VSI type + * specific files should be added in the same manner that was done for PF VSI. + */ void ice_vsi_init_vlan_ops(struct ice_vsi *vsi) { - vsi->vlan_ops.add_vlan = ice_vsi_add_vlan; - vsi->vlan_ops.del_vlan = ice_vsi_del_vlan; - vsi->vlan_ops.ena_stripping = ice_vsi_ena_inner_stripping; - vsi->vlan_ops.dis_stripping = ice_vsi_dis_inner_stripping; - vsi->vlan_ops.ena_insertion = ice_vsi_ena_inner_insertion; - vsi->vlan_ops.dis_insertion = ice_vsi_dis_inner_insertion; - vsi->vlan_ops.ena_rx_filtering = ice_vsi_ena_rx_vlan_filtering; - vsi->vlan_ops.dis_rx_filtering = ice_vsi_dis_rx_vlan_filtering; - vsi->vlan_ops.ena_tx_filtering = ice_vsi_ena_tx_vlan_filtering; - vsi->vlan_ops.dis_tx_filtering = ice_vsi_dis_tx_vlan_filtering; - vsi->vlan_ops.set_port_vlan = ice_vsi_set_inner_port_vlan; + /* Initialize all VSI types to have unsupported VSI VLAN ops */ + ice_vsi_init_unsupported_vlan_ops(vsi); + + switch (vsi->type) { + case ICE_VSI_PF: + case ICE_VSI_SWITCHDEV_CTRL: + ice_pf_vsi_init_vlan_ops(vsi); + break; + case ICE_VSI_VF: + ice_vf_vsi_init_vlan_ops(vsi); + break; + default: + dev_dbg(ice_pf_to_dev(vsi->back), "%s does not support VLAN operations\n", + ice_vsi_type_str(vsi->type)); + break; + } +} + +/** + * ice_get_compat_vsi_vlan_ops - Get VSI VLAN ops based on VLAN mode + * @vsi: VSI used to get the VSI VLAN ops + * + * This function is meant to be used when the caller doesn't know which VLAN ops + * to use (i.e. inner or outer). 
This allows backward compatibility for VLANs + * since most of the Outer VSI VLAN functins are not supported when + * the device is configured in Single VLAN Mode (SVM). + */ +struct ice_vsi_vlan_ops *ice_get_compat_vsi_vlan_ops(struct ice_vsi *vsi) +{ + if (ice_is_dvm_ena(&vsi->back->hw)) + return &vsi->outer_vlan_ops; + else + return &vsi->inner_vlan_ops; } diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h index 76e55b259bc8..30d02d2b8e5f 100644 --- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h @@ -23,6 +23,12 @@ struct ice_vsi_vlan_ops { int (*set_port_vlan)(struct ice_vsi *vsi, struct ice_vlan *vlan); }; +static inline bool ice_is_dvm_ena(struct ice_hw __always_unused *hw) +{ + return false; +} + void ice_vsi_init_vlan_ops(struct ice_vsi *vsi); +struct ice_vsi_vlan_ops *ice_get_compat_vsi_vlan_ops(struct ice_vsi *vsi); #endif /* _ICE_VSI_VLAN_OPS_H_ */ -- 2.20.1 From anthony.l.nguyen at intel.com Thu Dec 2 16:38:42 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Thu, 2 Dec 2021 08:38:42 -0800 Subject: [Intel-wired-lan] [PATCH net-next v3 04/14] ice: Introduce ice_vlan struct In-Reply-To: <20211202163852.36436-1-anthony.l.nguyen@intel.com> References: <20211202163852.36436-1-anthony.l.nguyen@intel.com> Message-ID: <20211202163852.36436-4-anthony.l.nguyen@intel.com> From: Brett Creeley Add a new struct for VLAN related information. Currently this holds VLAN ID and priority values, but will be expanded to hold TPID value. This reduces the changes necessary if any other values are added in future. Remove the action argument from these calls as it's always ICE_FWD_VSI. Signed-off-by: Brett Creeley Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/ice/ice_fltr.c | 35 +++++++------------ drivers/net/ethernet/intel/ice/ice_fltr.h | 10 +++--- drivers/net/ethernet/intel/ice/ice_lib.c | 5 ++- drivers/net/ethernet/intel/ice/ice_lib.h | 1 + drivers/net/ethernet/intel/ice/ice_main.c | 8 +++-- .../net/ethernet/intel/ice/ice_virtchnl_pf.c | 19 ++++++---- drivers/net/ethernet/intel/ice/ice_vlan.h | 17 +++++++++ .../net/ethernet/intel/ice/ice_vsi_vlan_lib.c | 31 +++++++++------- .../net/ethernet/intel/ice/ice_vsi_vlan_lib.h | 9 +++-- .../net/ethernet/intel/ice/ice_vsi_vlan_ops.h | 6 ++-- 10 files changed, 82 insertions(+), 59 deletions(-) create mode 100644 drivers/net/ethernet/intel/ice/ice_vlan.h diff --git a/drivers/net/ethernet/intel/ice/ice_fltr.c b/drivers/net/ethernet/intel/ice/ice_fltr.c index cf07eef39e9d..8f543851e39f 100644 --- a/drivers/net/ethernet/intel/ice/ice_fltr.c +++ b/drivers/net/ethernet/intel/ice/ice_fltr.c @@ -206,21 +206,20 @@ ice_fltr_add_mac_to_list(struct ice_vsi *vsi, struct list_head *list, * ice_fltr_add_vlan_to_list - add VLAN filter info to exsisting list * @vsi: pointer to VSI struct * @list: list to add filter info to - * @vlan_id: VLAN ID to add - * @action: filter action + * @vlan: VLAN filter details */ static int ice_fltr_add_vlan_to_list(struct ice_vsi *vsi, struct list_head *list, - u16 vlan_id, enum ice_sw_fwd_act_type action) + struct ice_vlan *vlan) { struct ice_fltr_info info = { 0 }; info.flag = ICE_FLTR_TX; info.src_id = ICE_SRC_ID_VSI; info.lkup_type = ICE_SW_LKUP_VLAN; - info.fltr_act = action; + info.fltr_act = ICE_FWD_TO_VSI; info.vsi_handle = vsi->idx; - info.l_data.vlan.vlan_id = vlan_id; + info.l_data.vlan.vlan_id = vlan->vid; return ice_fltr_add_entry_to_list(ice_pf_to_dev(vsi->back), &info, list); @@ 
-313,19 +312,17 @@ ice_fltr_prepare_mac_and_broadcast(struct ice_vsi *vsi, const u8 *mac, /** * ice_fltr_prepare_vlan - add or remove VLAN filter * @vsi: pointer to VSI struct - * @vlan_id: VLAN ID to add - * @action: action to be performed on filter match + * @vlan: VLAN filter details * @vlan_action: pointer to add or remove VLAN function */ static int -ice_fltr_prepare_vlan(struct ice_vsi *vsi, u16 vlan_id, - enum ice_sw_fwd_act_type action, +ice_fltr_prepare_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan, int (*vlan_action)(struct ice_vsi *, struct list_head *)) { LIST_HEAD(tmp_list); int result; - if (ice_fltr_add_vlan_to_list(vsi, &tmp_list, vlan_id, action)) + if (ice_fltr_add_vlan_to_list(vsi, &tmp_list, vlan)) return -ENOMEM; result = vlan_action(vsi, &tmp_list); @@ -398,27 +395,21 @@ int ice_fltr_remove_mac(struct ice_vsi *vsi, const u8 *mac, /** * ice_fltr_add_vlan - add single VLAN filter * @vsi: pointer to VSI struct - * @vlan_id: VLAN ID to add - * @action: action to be performed on filter match + * @vlan: VLAN filter details */ -int ice_fltr_add_vlan(struct ice_vsi *vsi, u16 vlan_id, - enum ice_sw_fwd_act_type action) +int ice_fltr_add_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) { - return ice_fltr_prepare_vlan(vsi, vlan_id, action, - ice_fltr_add_vlan_list); + return ice_fltr_prepare_vlan(vsi, vlan, ice_fltr_add_vlan_list); } /** * ice_fltr_remove_vlan - remove VLAN filter * @vsi: pointer to VSI struct - * @vlan_id: filter VLAN to remove - * @action: action to remove + * @vlan: VLAN filter details */ -int ice_fltr_remove_vlan(struct ice_vsi *vsi, u16 vlan_id, - enum ice_sw_fwd_act_type action) +int ice_fltr_remove_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) { - return ice_fltr_prepare_vlan(vsi, vlan_id, action, - ice_fltr_remove_vlan_list); + return ice_fltr_prepare_vlan(vsi, vlan, ice_fltr_remove_vlan_list); } /** diff --git a/drivers/net/ethernet/intel/ice/ice_fltr.h b/drivers/net/ethernet/intel/ice/ice_fltr.h index d271f61e0d34..4f7fe09d10e9 100644 --- a/drivers/net/ethernet/intel/ice/ice_fltr.h +++ b/drivers/net/ethernet/intel/ice/ice_fltr.h @@ -4,6 +4,8 @@ #ifndef _ICE_FLTR_H_ #define _ICE_FLTR_H_ +#include "ice_vlan.h" + void ice_fltr_free_list(struct device *dev, struct list_head *h); int ice_fltr_set_vlan_vsi_promisc(struct ice_hw *hw, struct ice_vsi *vsi, u8 promisc_mask); @@ -32,12 +34,8 @@ ice_fltr_remove_mac(struct ice_vsi *vsi, const u8 *mac, int ice_fltr_remove_mac_list(struct ice_vsi *vsi, struct list_head *list); -int -ice_fltr_add_vlan(struct ice_vsi *vsi, u16 vid, - enum ice_sw_fwd_act_type action); -int -ice_fltr_remove_vlan(struct ice_vsi *vsi, u16 vid, - enum ice_sw_fwd_act_type action); +int ice_fltr_add_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan); +int ice_fltr_remove_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan); int ice_fltr_add_eth(struct ice_vsi *vsi, u16 ethertype, u16 flag, diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index b50509584b31..55a2aef54922 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -3878,7 +3878,10 @@ int ice_set_link(struct ice_vsi *vsi, bool ena) */ int ice_vsi_add_vlan_zero(struct ice_vsi *vsi) { - return vsi->vlan_ops.add_vlan(vsi, 0, ICE_FWD_TO_VSI); + struct ice_vlan vlan; + + vlan = ICE_VLAN(0, 0); + return vsi->vlan_ops.add_vlan(vsi, &vlan); } /** diff --git a/drivers/net/ethernet/intel/ice/ice_lib.h b/drivers/net/ethernet/intel/ice/ice_lib.h index 427e5e4e9f17..8f42a3f3a949 100644 --- 
a/drivers/net/ethernet/intel/ice/ice_lib.h +++ b/drivers/net/ethernet/intel/ice/ice_lib.h @@ -5,6 +5,7 @@ #define _ICE_LIB_H_ #include "ice.h" +#include "ice_vlan.h" const char *ice_vsi_type_str(enum ice_vsi_type vsi_type); diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c index 904571527e27..8669858d104c 100644 --- a/drivers/net/ethernet/intel/ice/ice_main.c +++ b/drivers/net/ethernet/intel/ice/ice_main.c @@ -3421,6 +3421,7 @@ ice_vlan_rx_add_vid(struct net_device *netdev, __always_unused __be16 proto, { struct ice_netdev_priv *np = netdev_priv(netdev); struct ice_vsi *vsi = np->vsi; + struct ice_vlan vlan; int ret; /* VLAN 0 is added by default during load/reset */ @@ -3437,7 +3438,8 @@ ice_vlan_rx_add_vid(struct net_device *netdev, __always_unused __be16 proto, /* Add a switch rule for this VLAN ID so its corresponding VLAN tagged * packets aren't pruned by the device's internal switch on Rx */ - ret = vsi->vlan_ops.add_vlan(vsi, vid, ICE_FWD_TO_VSI); + vlan = ICE_VLAN(vid, 0); + ret = vsi->vlan_ops.add_vlan(vsi, &vlan); if (!ret) set_bit(ICE_VSI_VLAN_FLTR_CHANGED, vsi->state); @@ -3458,6 +3460,7 @@ ice_vlan_rx_kill_vid(struct net_device *netdev, __always_unused __be16 proto, { struct ice_netdev_priv *np = netdev_priv(netdev); struct ice_vsi *vsi = np->vsi; + struct ice_vlan vlan; int ret; /* don't allow removal of VLAN 0 */ @@ -3467,7 +3470,8 @@ ice_vlan_rx_kill_vid(struct net_device *netdev, __always_unused __be16 proto, /* Make sure VLAN delete is successful before updating VLAN * information */ - ret = vsi->vlan_ops.del_vlan(vsi, vid); + vlan = ICE_VLAN(vid, 0); + ret = vsi->vlan_ops.del_vlan(vsi, &vlan); if (ret) return ret; diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c index 6fa0968f0912..d580120dbb93 100644 --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c @@ -760,24 +760,25 @@ static int ice_vf_rebuild_host_tx_rate_cfg(struct ice_vf *vf) */ static int ice_vf_rebuild_host_vlan_cfg(struct ice_vf *vf) { + u8 vlan_prio = (vf->port_vlan_info & VLAN_PRIO_MASK) >> VLAN_PRIO_SHIFT; + u16 vlan_id = vf->port_vlan_info & VLAN_VID_MASK; struct device *dev = ice_pf_to_dev(vf->pf); struct ice_vsi *vsi = ice_get_vf_vsi(vf); - u16 vlan_id = 0; + struct ice_vlan vlan; int err; + vlan = ICE_VLAN(vlan_id, vlan_prio); if (vf->port_vlan_info) { - err = vsi->vlan_ops.set_port_vlan(vsi, vf->port_vlan_info); + err = vsi->vlan_ops.set_port_vlan(vsi, &vlan); if (err) { dev_err(dev, "failed to configure port VLAN via VSI parameters for VF %u, error %d\n", vf->vf_id, err); return err; } - - vlan_id = vf->port_vlan_info & VLAN_VID_MASK; } /* vlan_id will either be 0 or the port VLAN number */ - err = vsi->vlan_ops.add_vlan(vsi, vlan_id, ICE_FWD_TO_VSI); + err = vsi->vlan_ops.add_vlan(vsi, &vlan); if (err) { dev_err(dev, "failed to add %s VLAN %u filter for VF %u, error %d\n", vf->port_vlan_info ? 
"port" : "", vlan_id, vf->vf_id, @@ -4231,6 +4232,7 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) if (add_v) { for (i = 0; i < vfl->num_elements; i++) { u16 vid = vfl->vlan_id[i]; + struct ice_vlan vlan; if (!ice_is_vf_trusted(vf) && vsi->num_vlan >= ICE_MAX_VLAN_PER_VF) { @@ -4250,7 +4252,8 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) if (!vid) continue; - status = vsi->vlan_ops.add_vlan(vsi, vid, ICE_FWD_TO_VSI); + vlan = ICE_VLAN(vid, 0); + status = vsi->vlan_ops.add_vlan(vsi, &vlan); if (status) { v_ret = VIRTCHNL_STATUS_ERR_PARAM; goto error_param; @@ -4293,6 +4296,7 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) num_vf_vlan = vsi->num_vlan; for (i = 0; i < vfl->num_elements && i < num_vf_vlan; i++) { u16 vid = vfl->vlan_id[i]; + struct ice_vlan vlan; /* we add VLAN 0 by default for each VF so we can enable * Tx VLAN anti-spoof without triggering MDD events so @@ -4301,7 +4305,8 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v) if (!vid) continue; - status = vsi->vlan_ops.del_vlan(vsi, vid); + vlan = ICE_VLAN(vid, 0); + status = vsi->vlan_ops.del_vlan(vsi, &vlan); if (status) { v_ret = VIRTCHNL_STATUS_ERR_PARAM; goto error_param; diff --git a/drivers/net/ethernet/intel/ice/ice_vlan.h b/drivers/net/ethernet/intel/ice/ice_vlan.h new file mode 100644 index 000000000000..3fad0cba2da6 --- /dev/null +++ b/drivers/net/ethernet/intel/ice/ice_vlan.h @@ -0,0 +1,17 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2019-2021, Intel Corporation. */ + +#ifndef _ICE_VLAN_H_ +#define _ICE_VLAN_H_ + +#include +#include "ice_type.h" + +struct ice_vlan { + u16 vid; + u8 prio; +}; + +#define ICE_VLAN(vid, prio) ((struct ice_vlan){ vid, prio }) + +#endif /* _ICE_VLAN_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c index 6b0a4bf28305..74b6dec0744b 100644 --- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c @@ -9,20 +9,18 @@ /** * ice_vsi_add_vlan - default add VLAN implementation for all VSI types * @vsi: VSI being configured - * @vid: VLAN ID to be added - * @action: filter action to be performed on match + * @vlan: VLAN filter to add */ -int -ice_vsi_add_vlan(struct ice_vsi *vsi, u16 vid, enum ice_sw_fwd_act_type action) +int ice_vsi_add_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) { int err = 0; - if (!ice_fltr_add_vlan(vsi, vid, action)) { + if (!ice_fltr_add_vlan(vsi, vlan)) { vsi->num_vlan++; } else { err = -ENODEV; dev_err(ice_pf_to_dev(vsi->back), "Failure Adding VLAN %d on VSI %i\n", - vid, vsi->vsi_num); + vlan->vid, vsi->vsi_num); } return err; @@ -31,9 +29,9 @@ ice_vsi_add_vlan(struct ice_vsi *vsi, u16 vid, enum ice_sw_fwd_act_type action) /** * ice_vsi_del_vlan - default del VLAN implementation for all VSI types * @vsi: VSI being configured - * @vid: VLAN ID to be removed + * @vlan: VLAN filter to delete */ -int ice_vsi_del_vlan(struct ice_vsi *vsi, u16 vid) +int ice_vsi_del_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) { struct ice_pf *pf = vsi->back; struct device *dev; @@ -41,16 +39,16 @@ int ice_vsi_del_vlan(struct ice_vsi *vsi, u16 vid) dev = ice_pf_to_dev(pf); - err = ice_fltr_remove_vlan(vsi, vid, ICE_FWD_TO_VSI); + err = ice_fltr_remove_vlan(vsi, vlan); if (!err) { vsi->num_vlan--; } else if (err == -ENOENT) { dev_dbg(dev, "Failed to remove VLAN %d on VSI %i, it does not exist\n", - vid, vsi->vsi_num); + vlan->vid, 
vsi->vsi_num); err = 0; } else { dev_err(dev, "Error removing VLAN %d on VSI %i error: %d\n", - vid, vsi->vsi_num, err); + vlan->vid, vsi->vsi_num, err); } return err; @@ -214,9 +212,16 @@ static int ice_vsi_manage_pvid(struct ice_vsi *vsi, u16 pvid_info, bool enable) return ret; } -int ice_vsi_set_port_vlan(struct ice_vsi *vsi, u16 pvid_info) +int ice_vsi_set_port_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) { - return ice_vsi_manage_pvid(vsi, pvid_info, true); + u16 port_vlan_info; + + if (vlan->prio > 7) + return -EINVAL; + + port_vlan_info = vlan->vid | (vlan->prio << VLAN_PRIO_SHIFT); + + return ice_vsi_manage_pvid(vsi, port_vlan_info, true); } /** diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h index f9fe33026306..a0305007896c 100644 --- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h @@ -5,19 +5,18 @@ #define _ICE_VSI_VLAN_LIB_H_ #include -#include "ice_type.h" +#include "ice_vlan.h" struct ice_vsi; -int -ice_vsi_add_vlan(struct ice_vsi *vsi, u16 vid, enum ice_sw_fwd_act_type action); -int ice_vsi_del_vlan(struct ice_vsi *vsi, u16 vid); +int ice_vsi_add_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan); +int ice_vsi_del_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan); int ice_vsi_ena_stripping(struct ice_vsi *vsi); int ice_vsi_dis_stripping(struct ice_vsi *vsi); int ice_vsi_ena_insertion(struct ice_vsi *vsi); int ice_vsi_dis_insertion(struct ice_vsi *vsi); -int ice_vsi_set_port_vlan(struct ice_vsi *vsi, u16 pvid_info); +int ice_vsi_set_port_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan); int ice_vsi_ena_rx_vlan_filtering(struct ice_vsi *vsi); int ice_vsi_dis_rx_vlan_filtering(struct ice_vsi *vsi); diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h index 522169742661..c944f04acd3c 100644 --- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.h @@ -10,8 +10,8 @@ struct ice_vsi; struct ice_vsi_vlan_ops { - int (*add_vlan)(struct ice_vsi *vsi, u16 vid, enum ice_sw_fwd_act_type action); - int (*del_vlan)(struct ice_vsi *vsi, u16 vid); + int (*add_vlan)(struct ice_vsi *vsi, struct ice_vlan *vlan); + int (*del_vlan)(struct ice_vsi *vsi, struct ice_vlan *vlan); int (*ena_stripping)(struct ice_vsi *vsi); int (*dis_stripping)(struct ice_vsi *vsi); int (*ena_insertion)(struct ice_vsi *vsi); @@ -20,7 +20,7 @@ struct ice_vsi_vlan_ops { int (*dis_rx_filtering)(struct ice_vsi *vsi); int (*ena_tx_filtering)(struct ice_vsi *vsi); int (*dis_tx_filtering)(struct ice_vsi *vsi); - int (*set_port_vlan)(struct ice_vsi *vsi, u16 pvid_info); + int (*set_port_vlan)(struct ice_vsi *vsi, struct ice_vlan *vlan); }; void ice_vsi_init_vlan_ops(struct ice_vsi *vsi); -- 2.20.1 From anthony.l.nguyen at intel.com Thu Dec 2 16:38:45 2021 From: anthony.l.nguyen at intel.com (Tony Nguyen) Date: Thu, 2 Dec 2021 08:38:45 -0800 Subject: [Intel-wired-lan] [PATCH net-next v3 07/14] ice: Adjust naming for inner VLAN operations In-Reply-To: <20211202163852.36436-1-anthony.l.nguyen@intel.com> References: <20211202163852.36436-1-anthony.l.nguyen@intel.com> Message-ID: <20211202163852.36436-7-anthony.l.nguyen@intel.com> From: Brett Creeley Current operations act on inner VLAN fields. To support double VLAN, outer VLAN operations and functions will be implemented. Add the "inner" naming to existing VLAN operations to distinguish them from the upcoming outer values and functions. 
Some spacing adjustments are made to align values. Note that the inner is not talking about a tunneled VLAN, but the second VLAN in the packet. For SVM the driver uses inner or single VLAN filtering and offloads and in Double VLAN Mode the driver uses the inner filtering and offloads for SR-IOV VFs in port VLANs in order to support offloading the guest VLAN while a port VLAN is configured. Signed-off-by: Brett Creeley Signed-off-by: Tony Nguyen --- .../net/ethernet/intel/ice/ice_adminq_cmd.h | 191 +++++++++--------- drivers/net/ethernet/intel/ice/ice_lib.c | 8 +- drivers/net/ethernet/intel/ice/ice_main.c | 6 +- .../net/ethernet/intel/ice/ice_vsi_vlan_lib.c | 57 +++--- .../net/ethernet/intel/ice/ice_vsi_vlan_lib.h | 10 +- .../net/ethernet/intel/ice/ice_vsi_vlan_ops.c | 10 +- 6 files changed, 140 insertions(+), 142 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h index f3afbba4a66d..b638f9e9ecd9 100644 --- a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h +++ b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h @@ -343,108 +343,113 @@ struct ice_aqc_vsi_props { #define ICE_AQ_VSI_SW_FLAG_SRC_PRUNE BIT(7) u8 sw_flags2; #define ICE_AQ_VSI_SW_FLAG_RX_PRUNE_EN_S 0 -#define ICE_AQ_VSI_SW_FLAG_RX_PRUNE_EN_M \ - (0xF << ICE_AQ_VSI_SW_FLAG_RX_PRUNE_EN_S) +#define ICE_AQ_VSI_SW_FLAG_RX_PRUNE_EN_M (0xF << ICE_AQ_VSI_SW_FLAG_RX_PRUNE_EN_S) #define ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA BIT(0) #define ICE_AQ_VSI_SW_FLAG_LAN_ENA BIT(4) u8 veb_stat_id; #define ICE_AQ_VSI_SW_VEB_STAT_ID_S 0 -#define ICE_AQ_VSI_SW_VEB_STAT_ID_M (0x1F << ICE_AQ_VSI_SW_VEB_STAT_ID_S) +#define ICE_AQ_VSI_SW_VEB_STAT_ID_M (0x1F << ICE_AQ_VSI_SW_VEB_STAT_ID_S) #define ICE_AQ_VSI_SW_VEB_STAT_ID_VALID BIT(5) /* security section */ u8 sec_flags; #define ICE_AQ_VSI_SEC_FLAG_ALLOW_DEST_OVRD BIT(0) #define ICE_AQ_VSI_SEC_FLAG_ENA_MAC_ANTI_SPOOF BIT(2) -#define ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S 4 -#define ICE_AQ_VSI_SEC_TX_PRUNE_ENA_M (0xF << ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S) +#define ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S 4 +#define ICE_AQ_VSI_SEC_TX_PRUNE_ENA_M (0xF << ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S) #define ICE_AQ_VSI_SEC_TX_VLAN_PRUNE_ENA BIT(0) u8 sec_reserved; /* VLAN section */ - __le16 pvid; /* VLANS include priority bits */ - u8 pvlan_reserved[2]; - u8 vlan_flags; -#define ICE_AQ_VSI_VLAN_MODE_S 0 -#define ICE_AQ_VSI_VLAN_MODE_M (0x3 << ICE_AQ_VSI_VLAN_MODE_S) -#define ICE_AQ_VSI_VLAN_MODE_UNTAGGED 0x1 -#define ICE_AQ_VSI_VLAN_MODE_TAGGED 0x2 -#define ICE_AQ_VSI_VLAN_MODE_ALL 0x3 -#define ICE_AQ_VSI_PVLAN_INSERT_PVID BIT(2) -#define ICE_AQ_VSI_VLAN_EMOD_S 3 -#define ICE_AQ_VSI_VLAN_EMOD_M (0x3 << ICE_AQ_VSI_VLAN_EMOD_S) -#define ICE_AQ_VSI_VLAN_EMOD_STR_BOTH (0x0 << ICE_AQ_VSI_VLAN_EMOD_S) -#define ICE_AQ_VSI_VLAN_EMOD_STR_UP (0x1 << ICE_AQ_VSI_VLAN_EMOD_S) -#define ICE_AQ_VSI_VLAN_EMOD_STR (0x2 << ICE_AQ_VSI_VLAN_EMOD_S) -#define ICE_AQ_VSI_VLAN_EMOD_NOTHING (0x3 << ICE_AQ_VSI_VLAN_EMOD_S) - u8 pvlan_reserved2[3]; + __le16 port_based_inner_vlan; /* VLANS include priority bits */ + u8 inner_vlan_reserved[2]; + u8 inner_vlan_flags; +#define ICE_AQ_VSI_INNER_VLAN_TX_MODE_S 0 +#define ICE_AQ_VSI_INNER_VLAN_TX_MODE_M (0x3 << ICE_AQ_VSI_INNER_VLAN_TX_MODE_S) +#define ICE_AQ_VSI_INNER_VLAN_TX_MODE_ACCEPTUNTAGGED 0x1 +#define ICE_AQ_VSI_INNER_VLAN_TX_MODE_ACCEPTTAGGED 0x2 +#define ICE_AQ_VSI_INNER_VLAN_TX_MODE_ALL 0x3 +#define ICE_AQ_VSI_INNER_VLAN_INSERT_PVID BIT(2) +#define ICE_AQ_VSI_INNER_VLAN_EMODE_S 3 +#define ICE_AQ_VSI_INNER_VLAN_EMODE_M (0x3 << 
ICE_AQ_VSI_INNER_VLAN_EMODE_S) +#define ICE_AQ_VSI_INNER_VLAN_EMODE_STR_BOTH (0x0 << ICE_AQ_VSI_INNER_VLAN_EMODE_S) +#define ICE_AQ_VSI_INNER_VLAN_EMODE_STR_UP (0x1 << ICE_AQ_VSI_INNER_VLAN_EMODE_S) +#define ICE_AQ_VSI_INNER_VLAN_EMODE_STR (0x2 << ICE_AQ_VSI_INNER_VLAN_EMODE_S) +#define ICE_AQ_VSI_INNER_VLAN_EMODE_NOTHING (0x3 << ICE_AQ_VSI_INNER_VLAN_EMODE_S) + u8 inner_vlan_reserved2[3]; /* ingress egress up sections */ __le32 ingress_table; /* bitmap, 3 bits per up */ -#define ICE_AQ_VSI_UP_TABLE_UP0_S 0 -#define ICE_AQ_VSI_UP_TABLE_UP0_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP0_S) -#define ICE_AQ_VSI_UP_TABLE_UP1_S 3 -#define ICE_AQ_VSI_UP_TABLE_UP1_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP1_S) -#define ICE_AQ_VSI_UP_TABLE_UP2_S 6 -#define ICE_AQ_VSI_UP_TABLE_UP2_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP2_S) -#define ICE_AQ_VSI_UP_TABLE_UP3_S 9 -#define ICE_AQ_VSI_UP_TABLE_UP3_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP3_S) -#define ICE_AQ_VSI_UP_TABLE_UP4_S 12 -#define ICE_AQ_VSI_UP_TABLE_UP4_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP4_S) -#define ICE_AQ_VSI_UP_TABLE_UP5_S 15 -#define ICE_AQ_VSI_UP_TABLE_UP5_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP5_S) -#define ICE_AQ_VSI_UP_TABLE_UP6_S 18 -#define ICE_AQ_VSI_UP_TABLE_UP6_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP6_S) -#define ICE_AQ_VSI_UP_TABLE_UP7_S 21 -#define ICE_AQ_VSI_UP_TABLE_UP7_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP7_S) +#define ICE_AQ_VSI_UP_TABLE_UP0_S 0 +#define ICE_AQ_VSI_UP_TABLE_UP0_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP0_S) +#define ICE_AQ_VSI_UP_TABLE_UP1_S 3 +#define ICE_AQ_VSI_UP_TABLE_UP1_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP1_S) +#define ICE_AQ_VSI_UP_TABLE_UP2_S 6 +#define ICE_AQ_VSI_UP_TABLE_UP2_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP2_S) +#define ICE_AQ_VSI_UP_TABLE_UP3_S 9 +#define ICE_AQ_VSI_UP_TABLE_UP3_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP3_S) +#define ICE_AQ_VSI_UP_TABLE_UP4_S 12 +#define ICE_AQ_VSI_UP_TABLE_UP4_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP4_S) +#define ICE_AQ_VSI_UP_TABLE_UP5_S 15 +#define ICE_AQ_VSI_UP_TABLE_UP5_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP5_S) +#define ICE_AQ_VSI_UP_TABLE_UP6_S 18 +#define ICE_AQ_VSI_UP_TABLE_UP6_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP6_S) +#define ICE_AQ_VSI_UP_TABLE_UP7_S 21 +#define ICE_AQ_VSI_UP_TABLE_UP7_M (0x7 << ICE_AQ_VSI_UP_TABLE_UP7_S) __le32 egress_table; /* same defines as for ingress table */ /* outer tags section */ - __le16 outer_tag; - u8 outer_tag_flags; -#define ICE_AQ_VSI_OUTER_TAG_MODE_S 0 -#define ICE_AQ_VSI_OUTER_TAG_MODE_M (0x3 << ICE_AQ_VSI_OUTER_TAG_MODE_S) -#define ICE_AQ_VSI_OUTER_TAG_NOTHING 0x0 -#define ICE_AQ_VSI_OUTER_TAG_REMOVE 0x1 -#define ICE_AQ_VSI_OUTER_TAG_COPY 0x2 -#define ICE_AQ_VSI_OUTER_TAG_TYPE_S 2 -#define ICE_AQ_VSI_OUTER_TAG_TYPE_M (0x3 << ICE_AQ_VSI_OUTER_TAG_TYPE_S) -#define ICE_AQ_VSI_OUTER_TAG_NONE 0x0 -#define ICE_AQ_VSI_OUTER_TAG_STAG 0x1 -#define ICE_AQ_VSI_OUTER_TAG_VLAN_8100 0x2 -#define ICE_AQ_VSI_OUTER_TAG_VLAN_9100 0x3 -#define ICE_AQ_VSI_OUTER_TAG_INSERT BIT(4) -#define ICE_AQ_VSI_OUTER_TAG_ACCEPT_HOST BIT(6) - u8 outer_tag_reserved; + __le16 port_based_outer_vlan; + u8 outer_vlan_flags; +#define ICE_AQ_VSI_OUTER_VLAN_EMODE_S 0 +#define ICE_AQ_VSI_OUTER_VLAN_EMODE_M (0x3 << ICE_AQ_VSI_OUTER_VLAN_EMODE_S) +#define ICE_AQ_VSI_OUTER_VLAN_EMODE_SHOW_BOTH 0x0 +#define ICE_AQ_VSI_OUTER_VLAN_EMODE_SHOW_UP 0x1 +#define ICE_AQ_VSI_OUTER_VLAN_EMODE_SHOW 0x2 +#define ICE_AQ_VSI_OUTER_VLAN_EMODE_NOTHING 0x3 +#define ICE_AQ_VSI_OUTER_TAG_TYPE_S 2 +#define ICE_AQ_VSI_OUTER_TAG_TYPE_M (0x3 << ICE_AQ_VSI_OUTER_TAG_TYPE_S) +#define ICE_AQ_VSI_OUTER_TAG_NONE 0x0 +#define ICE_AQ_VSI_OUTER_TAG_STAG 0x1 +#define 
ICE_AQ_VSI_OUTER_TAG_VLAN_8100 0x2 +#define ICE_AQ_VSI_OUTER_TAG_VLAN_9100 0x3 +#define ICE_AQ_VSI_OUTER_VLAN_PORT_BASED_INSERT BIT(4) +#define ICE_AQ_VSI_OUTER_VLAN_TX_MODE_S 5 +#define ICE_AQ_VSI_OUTER_VLAN_TX_MODE_M (0x3 << ICE_AQ_VSI_OUTER_VLAN_TX_MODE_S) +#define ICE_AQ_VSI_OUTER_VLAN_TX_MODE_ACCEPTUNTAGGED 0x1 +#define ICE_AQ_VSI_OUTER_VLAN_TX_MODE_ACCEPTTAGGED 0x2 +#define ICE_AQ_VSI_OUTER_VLAN_TX_MODE_ALL 0x3 +#define ICE_AQ_VSI_OUTER_VLAN_BLOCK_TX_DESC BIT(7) + u8 outer_vlan_reserved; /* queue mapping section */ __le16 mapping_flags; -#define ICE_AQ_VSI_Q_MAP_CONTIG 0x0 -#define ICE_AQ_VSI_Q_MAP_NONCONTIG BIT(0) +#define ICE_AQ_VSI_Q_MAP_CONTIG 0x0 +#define ICE_AQ_VSI_Q_MAP_NONCONTIG BIT(0) __le16 q_mapping[16]; -#define ICE_AQ_VSI_Q_S 0 -#define ICE_AQ_VSI_Q_M (0x7FF << ICE_AQ_VSI_Q_S) +#define ICE_AQ_VSI_Q_S 0 +#define ICE_AQ_VSI_Q_M (0x7FF << ICE_AQ_VSI_Q_S) __le16 tc_mapping[8]; -#define ICE_AQ_VSI_TC_Q_OFFSET_S 0 -#define ICE_AQ_VSI_TC_Q_OFFSET_M (0x7FF << ICE_AQ_VSI_TC_Q_OFFSET_S) -#define ICE_AQ_VSI_TC_Q_NUM_S 11 -#define ICE_AQ_VSI_TC_Q_NUM_M (0xF << ICE_AQ_VSI_TC_Q_NUM_S) +#define ICE_AQ_VSI_TC_Q_OFFSET_S 0 +#define ICE_AQ_VSI_TC_Q_OFFSET_M (0x7FF << ICE_AQ_VSI_TC_Q_OFFSET_S) +#define ICE_AQ_VSI_TC_Q_NUM_S 11 +#define ICE_AQ_VSI_TC_Q_NUM_M (0xF << ICE_AQ_VSI_TC_Q_NUM_S) /* queueing option section */ u8 q_opt_rss; -#define ICE_AQ_VSI_Q_OPT_RSS_LUT_S 0 -#define ICE_AQ_VSI_Q_OPT_RSS_LUT_M (0x3 << ICE_AQ_VSI_Q_OPT_RSS_LUT_S) -#define ICE_AQ_VSI_Q_OPT_RSS_LUT_VSI 0x0 -#define ICE_AQ_VSI_Q_OPT_RSS_LUT_PF 0x2 -#define ICE_AQ_VSI_Q_OPT_RSS_LUT_GBL 0x3 -#define ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_S 2 -#define ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_M (0xF << ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_S) -#define ICE_AQ_VSI_Q_OPT_RSS_HASH_S 6 -#define ICE_AQ_VSI_Q_OPT_RSS_HASH_M (0x3 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S) -#define ICE_AQ_VSI_Q_OPT_RSS_TPLZ (0x0 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S) -#define ICE_AQ_VSI_Q_OPT_RSS_SYM_TPLZ (0x1 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S) -#define ICE_AQ_VSI_Q_OPT_RSS_XOR (0x2 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S) -#define ICE_AQ_VSI_Q_OPT_RSS_JHASH (0x3 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S) +#define ICE_AQ_VSI_Q_OPT_RSS_LUT_S 0 +#define ICE_AQ_VSI_Q_OPT_RSS_LUT_M (0x3 << ICE_AQ_VSI_Q_OPT_RSS_LUT_S) +#define ICE_AQ_VSI_Q_OPT_RSS_LUT_VSI 0x0 +#define ICE_AQ_VSI_Q_OPT_RSS_LUT_PF 0x2 +#define ICE_AQ_VSI_Q_OPT_RSS_LUT_GBL 0x3 +#define ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_S 2 +#define ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_M (0xF << ICE_AQ_VSI_Q_OPT_RSS_GBL_LUT_S) +#define ICE_AQ_VSI_Q_OPT_RSS_HASH_S 6 +#define ICE_AQ_VSI_Q_OPT_RSS_HASH_M (0x3 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S) +#define ICE_AQ_VSI_Q_OPT_RSS_TPLZ (0x0 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S) +#define ICE_AQ_VSI_Q_OPT_RSS_SYM_TPLZ (0x1 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S) +#define ICE_AQ_VSI_Q_OPT_RSS_XOR (0x2 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S) +#define ICE_AQ_VSI_Q_OPT_RSS_JHASH (0x3 << ICE_AQ_VSI_Q_OPT_RSS_HASH_S) u8 q_opt_tc; -#define ICE_AQ_VSI_Q_OPT_TC_OVR_S 0 -#define ICE_AQ_VSI_Q_OPT_TC_OVR_M (0x1F << ICE_AQ_VSI_Q_OPT_TC_OVR_S) -#define ICE_AQ_VSI_Q_OPT_PROF_TC_OVR BIT(7) +#define ICE_AQ_VSI_Q_OPT_TC_OVR_S 0 +#define ICE_AQ_VSI_Q_OPT_TC_OVR_M (0x1F << ICE_AQ_VSI_Q_OPT_TC_OVR_S) +#define ICE_AQ_VSI_Q_OPT_PROF_TC_OVR BIT(7) u8 q_opt_flags; -#define ICE_AQ_VSI_Q_OPT_PE_FLTR_EN BIT(0) +#define ICE_AQ_VSI_Q_OPT_PE_FLTR_EN BIT(0) u8 q_opt_reserved[3]; /* outer up section */ __le32 outer_up_table; /* same structure and defines as ingress tbl */ @@ -452,27 +457,27 @@ struct ice_aqc_vsi_props { __le16 sect_10_reserved; /* flow director section */ __le16 fd_options; -#define 
ICE_AQ_VSI_FD_ENABLE BIT(0) -#define ICE_AQ_VSI_FD_TX_AUTO_ENABLE BIT(1) -#define ICE_AQ_VSI_FD_PROG_ENABLE BIT(3) +#define ICE_AQ_VSI_FD_ENABLE BIT(0) +#define ICE_AQ_VSI_FD_TX_AUTO_ENABLE BIT(1) +#define ICE_AQ_VSI_FD_PROG_ENABLE BIT(3) __le16 max_fd_fltr_dedicated; __le16 max_fd_fltr_shared; __le16 fd_def_q; -#define ICE_AQ_VSI_FD_DEF_Q_S 0 -#define ICE_AQ_VSI_FD_DEF_Q_M (0x7FF << ICE_AQ_VSI_FD_DEF_Q_S) -#define ICE_AQ_VSI_FD_DEF_GRP_S 12 -#define ICE_AQ_VSI_FD_DEF_GRP_M (0x7 << ICE_AQ_VSI_FD_DEF_GRP_S) +#define ICE_AQ_VSI_FD_DEF_Q_S 0 +#define ICE_AQ_VSI_FD_DEF_Q_M (0x7FF << ICE_AQ_VSI_FD_DEF_Q_S) +#define ICE_AQ_VSI_FD_DEF_GRP_S 12 +#define ICE_AQ_VSI_FD_DEF_GRP_M (0x7 << ICE_AQ_VSI_FD_DEF_GRP_S) __le16 fd_report_opt; -#define ICE_AQ_VSI_FD_REPORT_Q_S 0 -#define ICE_AQ_VSI_FD_REPORT_Q_M (0x7FF << ICE_AQ_VSI_FD_REPORT_Q_S) -#define ICE_AQ_VSI_FD_DEF_PRIORITY_S 12 -#define ICE_AQ_VSI_FD_DEF_PRIORITY_M (0x7 << ICE_AQ_VSI_FD_DEF_PRIORITY_S) -#define ICE_AQ_VSI_FD_DEF_DROP BIT(15) +#define ICE_AQ_VSI_FD_REPORT_Q_S 0 +#define ICE_AQ_VSI_FD_REPORT_Q_M (0x7FF << ICE_AQ_VSI_FD_REPORT_Q_S) +#define ICE_AQ_VSI_FD_DEF_PRIORITY_S 12 +#define ICE_AQ_VSI_FD_DEF_PRIORITY_M (0x7 << ICE_AQ_VSI_FD_DEF_PRIORITY_S) +#define ICE_AQ_VSI_FD_DEF_DROP BIT(15) /* PASID section */ __le32 pasid_id; -#define ICE_AQ_VSI_PASID_ID_S 0 -#define ICE_AQ_VSI_PASID_ID_M (0xFFFFF << ICE_AQ_VSI_PASID_ID_S) -#define ICE_AQ_VSI_PASID_ID_VALID BIT(31) +#define ICE_AQ_VSI_PASID_ID_S 0 +#define ICE_AQ_VSI_PASID_ID_M (0xFFFFF << ICE_AQ_VSI_PASID_ID_S) +#define ICE_AQ_VSI_PASID_ID_VALID BIT(31) u8 reserved[24]; }; diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index 0fff5ec897c9..c8991711b754 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -810,13 +810,13 @@ static void ice_set_dflt_vsi_ctx(struct ice_vsi_ctx *ctxt) ctxt->info.sw_flags = ICE_AQ_VSI_SW_FLAG_SRC_PRUNE; /* Traffic from VSI can be sent to LAN */ ctxt->info.sw_flags2 = ICE_AQ_VSI_SW_FLAG_LAN_ENA; - /* By default bits 3 and 4 in vlan_flags are 0's which results in legacy + /* By default bits 3 and 4 in inner_vlan_flags are 0's which results in legacy * behavior (show VLAN, DEI, and UP) in descriptor. Also, allow all * packets untagged/tagged. 
*/ - ctxt->info.vlan_flags = ((ICE_AQ_VSI_VLAN_MODE_ALL & - ICE_AQ_VSI_VLAN_MODE_M) >> - ICE_AQ_VSI_VLAN_MODE_S); + ctxt->info.inner_vlan_flags = ((ICE_AQ_VSI_INNER_VLAN_TX_MODE_ALL & + ICE_AQ_VSI_INNER_VLAN_TX_MODE_M) >> + ICE_AQ_VSI_INNER_VLAN_TX_MODE_S); /* Have 1:1 UP mapping for both ingress/egress tables */ table |= ICE_UP_TABLE_TRANSLATE(0, 0); table |= ICE_UP_TABLE_TRANSLATE(1, 1); diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c index 8a0684c0ebd0..6843b8e87441 100644 --- a/drivers/net/ethernet/intel/ice/ice_main.c +++ b/drivers/net/ethernet/intel/ice/ice_main.c @@ -4071,8 +4071,8 @@ static void ice_set_safe_mode_vlan_cfg(struct ice_pf *pf) ctxt->info.sw_flags2 &= ~ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA; /* allow all VLANs on Tx and don't strip on Rx */ - ctxt->info.vlan_flags = ICE_AQ_VSI_VLAN_MODE_ALL | - ICE_AQ_VSI_VLAN_EMOD_NOTHING; + ctxt->info.inner_vlan_flags = ICE_AQ_VSI_INNER_VLAN_TX_MODE_ALL | + ICE_AQ_VSI_INNER_VLAN_EMODE_NOTHING; status = ice_update_vsi(hw, vsi->idx, ctxt, NULL); if (status) { @@ -4081,7 +4081,7 @@ static void ice_set_safe_mode_vlan_cfg(struct ice_pf *pf) } else { vsi->info.sec_flags = ctxt->info.sec_flags; vsi->info.sw_flags2 = ctxt->info.sw_flags2; - vsi->info.vlan_flags = ctxt->info.vlan_flags; + vsi->info.inner_vlan_flags = ctxt->info.inner_vlan_flags; } kfree(ctxt); diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c index 6b7feab0b2a1..0b130505b68a 100644 --- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.c @@ -100,14 +100,14 @@ static int ice_vsi_manage_vlan_insertion(struct ice_vsi *vsi) return -ENOMEM; /* Here we are configuring the VSI to let the driver add VLAN tags by - * setting vlan_flags to ICE_AQ_VSI_VLAN_MODE_ALL. The actual VLAN tag + * setting inner_vlan_flags to ICE_AQ_VSI_INNER_VLAN_TX_MODE_ALL. The actual VLAN tag * insertion happens in the Tx hot path, in ice_tx_map. */ - ctxt->info.vlan_flags = ICE_AQ_VSI_VLAN_MODE_ALL; + ctxt->info.inner_vlan_flags = ICE_AQ_VSI_INNER_VLAN_TX_MODE_ALL; /* Preserve existing VLAN strip setting */ - ctxt->info.vlan_flags |= (vsi->info.vlan_flags & - ICE_AQ_VSI_VLAN_EMOD_M); + ctxt->info.inner_vlan_flags |= (vsi->info.inner_vlan_flags & + ICE_AQ_VSI_INNER_VLAN_EMODE_M); ctxt->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_VLAN_VALID); @@ -118,7 +118,7 @@ static int ice_vsi_manage_vlan_insertion(struct ice_vsi *vsi) goto out; } - vsi->info.vlan_flags = ctxt->info.vlan_flags; + vsi->info.inner_vlan_flags = ctxt->info.inner_vlan_flags; out: kfree(ctxt); return err; @@ -138,7 +138,7 @@ static int ice_vsi_manage_vlan_stripping(struct ice_vsi *vsi, bool ena) /* do not allow modifying VLAN stripping when a port VLAN is configured * on this VSI */ - if (vsi->info.pvid) + if (vsi->info.port_based_inner_vlan) return 0; ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL); @@ -151,13 +151,13 @@ static int ice_vsi_manage_vlan_stripping(struct ice_vsi *vsi, bool ena) */ if (ena) /* Strip VLAN tag from Rx packet and put it in the desc */ - ctxt->info.vlan_flags = ICE_AQ_VSI_VLAN_EMOD_STR_BOTH; + ctxt->info.inner_vlan_flags = ICE_AQ_VSI_INNER_VLAN_EMODE_STR_BOTH; else /* Disable stripping. 
Leave tag in packet */ - ctxt->info.vlan_flags = ICE_AQ_VSI_VLAN_EMOD_NOTHING; + ctxt->info.inner_vlan_flags = ICE_AQ_VSI_INNER_VLAN_EMODE_NOTHING; /* Allow all packets untagged/tagged */ - ctxt->info.vlan_flags |= ICE_AQ_VSI_VLAN_MODE_ALL; + ctxt->info.inner_vlan_flags |= ICE_AQ_VSI_INNER_VLAN_TX_MODE_ALL; ctxt->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_VLAN_VALID); @@ -168,13 +168,13 @@ static int ice_vsi_manage_vlan_stripping(struct ice_vsi *vsi, bool ena) goto out; } - vsi->info.vlan_flags = ctxt->info.vlan_flags; + vsi->info.inner_vlan_flags = ctxt->info.inner_vlan_flags; out: kfree(ctxt); return err; } -int ice_vsi_ena_stripping(struct ice_vsi *vsi, const u16 tpid) +int ice_vsi_ena_inner_stripping(struct ice_vsi *vsi, const u16 tpid) { if (tpid != ETH_P_8021Q) { print_invalid_tpid(vsi, tpid); @@ -184,12 +184,12 @@ int ice_vsi_ena_stripping(struct ice_vsi *vsi, const u16 tpid) return ice_vsi_manage_vlan_stripping(vsi, true); } -int ice_vsi_dis_stripping(struct ice_vsi *vsi) +int ice_vsi_dis_inner_stripping(struct ice_vsi *vsi) { return ice_vsi_manage_vlan_stripping(vsi, false); } -int ice_vsi_ena_insertion(struct ice_vsi *vsi, const u16 tpid) +int ice_vsi_ena_inner_insertion(struct ice_vsi *vsi, const u16 tpid) { if (tpid != ETH_P_8021Q) { print_invalid_tpid(vsi, tpid); @@ -199,18 +199,17 @@ int ice_vsi_ena_insertion(struct ice_vsi *vsi, const u16 tpid) return ice_vsi_manage_vlan_insertion(vsi); } -int ice_vsi_dis_insertion(struct ice_vsi *vsi) +int ice_vsi_dis_inner_insertion(struct ice_vsi *vsi) { return ice_vsi_manage_vlan_insertion(vsi); } /** - * ice_vsi_manage_pvid - Enable or disable port VLAN for VSI + * __ice_vsi_set_inner_port_vlan - set port VLAN VSI context settings to enable a port VLAN * @vsi: the VSI to update * @pvid_info: VLAN ID and QoS used to set the PVID VSI context field - * @enable: true for enable PVID false for disable */ -static int ice_vsi_manage_pvid(struct ice_vsi *vsi, u16 pvid_info, bool enable) +static int __ice_vsi_set_inner_port_vlan(struct ice_vsi *vsi, u16 pvid_info) { struct ice_hw *hw = &vsi->back->hw; struct ice_aqc_vsi_props *info; @@ -223,18 +222,12 @@ static int ice_vsi_manage_pvid(struct ice_vsi *vsi, u16 pvid_info, bool enable) ctxt->info = vsi->info; info = &ctxt->info; - if (enable) { - info->vlan_flags = ICE_AQ_VSI_VLAN_MODE_UNTAGGED | - ICE_AQ_VSI_PVLAN_INSERT_PVID | - ICE_AQ_VSI_VLAN_EMOD_STR; - info->sw_flags2 |= ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA; - } else { - info->vlan_flags = ICE_AQ_VSI_VLAN_EMOD_NOTHING | - ICE_AQ_VSI_VLAN_MODE_ALL; - info->sw_flags2 &= ~ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA; - } + info->inner_vlan_flags = ICE_AQ_VSI_INNER_VLAN_TX_MODE_ACCEPTUNTAGGED | + ICE_AQ_VSI_INNER_VLAN_INSERT_PVID | + ICE_AQ_VSI_INNER_VLAN_EMODE_STR; + info->sw_flags2 |= ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA; - info->pvid = cpu_to_le16(pvid_info); + info->port_based_inner_vlan = cpu_to_le16(pvid_info); info->valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_VLAN_VALID | ICE_AQ_VSI_PROP_SW_VALID); @@ -245,15 +238,15 @@ static int ice_vsi_manage_pvid(struct ice_vsi *vsi, u16 pvid_info, bool enable) goto out; } - vsi->info.vlan_flags = info->vlan_flags; + vsi->info.inner_vlan_flags = info->inner_vlan_flags; vsi->info.sw_flags2 = info->sw_flags2; - vsi->info.pvid = info->pvid; + vsi->info.port_based_inner_vlan = info->port_based_inner_vlan; out: kfree(ctxt); return ret; } -int ice_vsi_set_port_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) +int ice_vsi_set_inner_port_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) { u16 port_vlan_info; @@ 
-265,7 +258,7 @@ int ice_vsi_set_port_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan) port_vlan_info = vlan->vid | (vlan->prio << VLAN_PRIO_SHIFT); - return ice_vsi_manage_pvid(vsi, port_vlan_info, true); + return __ice_vsi_set_inner_port_vlan(vsi, port_vlan_info); } /** diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h index 1bdbf585db7d..a10671133e36 100644 --- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_lib.h @@ -12,11 +12,11 @@ struct ice_vsi; int ice_vsi_add_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan); int ice_vsi_del_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan); -int ice_vsi_ena_stripping(struct ice_vsi *vsi, u16 tpid); -int ice_vsi_dis_stripping(struct ice_vsi *vsi); -int ice_vsi_ena_insertion(struct ice_vsi *vsi, u16 tpid); -int ice_vsi_dis_insertion(struct ice_vsi *vsi); -int ice_vsi_set_port_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan); +int ice_vsi_ena_inner_stripping(struct ice_vsi *vsi, u16 tpid); +int ice_vsi_dis_inner_stripping(struct ice_vsi *vsi); +int ice_vsi_ena_inner_insertion(struct ice_vsi *vsi, u16 tpid); +int ice_vsi_dis_inner_insertion(struct ice_vsi *vsi); +int ice_vsi_set_inner_port_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan); int ice_vsi_ena_rx_vlan_filtering(struct ice_vsi *vsi); int ice_vsi_dis_rx_vlan_filtering(struct ice_vsi *vsi); diff --git a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.c b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.c index 3bab6c025856..6a6b49581c70 100644 --- a/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.c +++ b/drivers/net/ethernet/intel/ice/ice_vsi_vlan_ops.c @@ -8,13 +8,13 @@ void ice_vsi_init_vlan_ops(struct ice_vsi *vsi) { vsi->vlan_ops.add_vlan = ice_vsi_add_vlan; vsi->vlan_ops.del_vlan = ice_vsi_del_vlan; - vsi->vlan_ops.ena_stripping = ice_vsi_ena_stripping; - vsi->vlan_ops.dis_stripping = ice_vsi_dis_stripping; - vsi->vlan_ops.ena_insertion = ice_vsi_ena_insertion; - vsi->vlan_ops.dis_insertion = ice_vsi_dis_insertion; + vsi->vlan_ops.ena_stripping = ice_vsi_ena_inner_stripping; + vsi->vlan_ops.dis_stripping = ice_vsi_dis_inner_stripping; + vsi->vlan_ops.ena_insertion = ice_vsi_ena_inner_insertion; + vsi->vlan_ops.dis_insertion = ice_vsi_dis_inner_insertion; vsi->vlan_ops.ena_rx_filtering = ice_vsi_ena_rx_vlan_filtering; vsi->vlan_ops.dis_rx_filtering = ice_vsi_dis_rx_vlan_filtering; vsi->vlan_ops.ena_tx_filtering = ice_vsi_ena_tx_vlan_filtering; vsi->vlan_ops.dis_tx_filtering = ice_vsi_dis_tx_vlan_filtering; - vsi->vlan_ops.set_port_vlan = ice_vsi_set_port_vlan; + vsi->vlan_ops.set_port_vlan = ice_vsi_set_inner_port_vlan; } -- 2.20.1 From maciej.machnikowski at intel.com Thu Dec 2 17:20:24 2021 From: maciej.machnikowski at intel.com (Machnikowski, Maciej) Date: Thu, 2 Dec 2021 17:20:24 +0000 Subject: [Intel-wired-lan] [PATCH v4 net-next 2/4] ethtool: Add ability to configure recovered clock for SyncE feature In-Reply-To: References: <20211201180208.640179-1-maciej.machnikowski@intel.com> <20211201180208.640179-3-maciej.machnikowski@intel.com> Message-ID: > -----Original Message----- > From: Ido Schimmel > Sent: Thursday, December 2, 2021 5:36 PM > To: Machnikowski, Maciej > Subject: Re: [PATCH v4 net-next 2/4] ethtool: Add ability to configure > recovered clock for SyncE feature > > On Thu, Dec 02, 2021 at 03:17:06PM +0000, Machnikowski, Maciej wrote: > > > -----Original Message----- > > > From: Ido Schimmel > > > Sent: Thursday, December 2, 2021 1:44 PM > > > To: 
Machnikowski, Maciej > > > Subject: Re: [PATCH v4 net-next 2/4] ethtool: Add ability to configure > > > recovered clock for SyncE feature > > > > > > On Wed, Dec 01, 2021 at 07:02:06PM +0100, Maciej Machnikowski wrote: > > > Looking at the diagram from the previous submission [1]: > > > > > > ??????????????????????? > > > ? RX ? TX ? > > > 1 ? ports ? ports ? 1 > > > ??????????? ? ??????? > > > 2 ? ? ? ? 2 > > > ????????? ? ? ??????? > > > 3 ? ? ? ? ? 3 > > > ??????? ? ? ? ??????? > > > ? ? ? ? ? ? > > > ? ?????? ? ? > > > ? \____/ ? ? > > > ??????????????????????? > > > 1? 2? ? > > > RCLK out? ? ? TX CLK in > > > ? ? ? > > > ??????????????????? > > > ? ? > > > ? SEC ? > > > ? ? > > > ??????????????????? > > > > > > Given a netdev (1, 2 or 3 in the diagram), the RCLK_SET message allows > > > me to redirect the frequency recovered from this netdev to the EEC via > > > either pin 1, pin 2 or both. > > > > > > Given a netdev, the RCLK_GET message allows me to query the range of > > > pins (RCLK out 1-2 in the diagram) through which the frequency can be > > > fed into the EEC. > > > > > > Questions: > > > > > > 1. The query for all the above netdevs will return the same range of > > > pins. How does user space know that these are the same pins? That is, > > > how does user space know that RCLK_SET message to redirect the > frequency > > > recovered from netdev 1 to pin 1 will be overridden by the same > message > > > but for netdev 2? > > > > We don't have a way to do so right now. When we have EEC subsystem in > place > > the right thing to do will be to add EEC input index and EEC index as > additional > > arguments > > > > > 2. How does user space know the mapping between a netdev and an > EEC? > > > That is, how does user space know that RCLK_SET message for netdev 1 > > > will cause the Tx frequency of netdev 2 to change according to the > > > frequency recovered from netdev 1? > > > > Ditto - currently we don't have any entity to link the pins to ATM, > > but we can address that in userspace just like PTP pins are used now > > > > > 3. If user space sends two RCLK_SET messages to redirect the frequency > > > recovered from netdev 1 to RCLK out 1 and from netdev 2 to RCLK out 2, > > > how does it know which recovered frequency is actually used an input to > > > the EEC? > > User space doesn't know this as well? In current model it can come from the config file. Once we implement DPLL subsystem we can implement connection between pins and DPLLs if they are known. > > > > > > 4. Why these pins are represented as attributes of a netdev and not as > > > attributes of the EEC? That is, why are they represented as output pins > > > of the PHY as opposed to input pins of the EEC? > > > > They are 2 separate beings. Recovered clock outputs are controlled > > separately from EEC inputs. > > Separate how? What does it mean that they are controlled separately? In > which sense? That redirection of recovered frequency to pin is > controlled via PHY registers whereas priority setting between EEC inputs > is controlled via EEC registers? If so, this is an implementation detail > of a specific design. It is not of any importance to user space. They belong to different devices. EEC registers are physically in the DPLL hanging over I2C and recovered clocks are in the PHY/integrated PHY in the MAC. Depending on system architecture you may have control over one piece only > > If we mix them it'll be hard to control everything especially that a > > single EEC can support multiple devices. > > Hard how? 
Please provide concrete examples. From the EEC perspective it's one to many relation - one EEC input pin will serve even 4,16,48 netdevs. I don't see easy way of starting from EEC input of EEC device and figuring out which netdevs are connected to it to talk to the right one. In current model it's as simple as: - I received QL-PRC on netdev ens4f0 - I send back enable recovered clock on pin 0 of the ens4f0 - go to EEC that will be linked to it - see the state of it - if its locked - report QL-EEC downsteam How would you this control look in the EEC/DPLL implementation? Maybe I missed something. > What do you mean by "multiple devices"? A multi-port adapter with a > single EEC or something else? Multiple MACs that use a single EEC clock. > > Also if we make those pins attributes of the EEC it'll become extremally > hard > > to map them to netdevs and control them from the userspace app that will > > receive the ESMC message with a given QL level on netdev X. > > Hard how? What is the problem with something like: > > # eec set source 1 type netdev dev swp1 > > The EEC object should be registered by the same entity that registers > the netdevs whose Tx frequency is controlled by the EEC, the MAC driver. But the EEC object may not be controlled by the MAC - in which case this model won't work. > > > > > 5. What is the problem with the following model? > > > > > > - The EEC is a separate object with following attributes: > > > * State: Invalid / Freerun / Locked / etc > > > * Sources: Netdev / external / etc > > > * Potentially more > > > > > > - Notifications are emitted to user space when the state of the EEC > > > changes. Drivers will either poll the state from the device or get > > > interrupts > > > > > > - The mapping from netdev to EEC is queried via ethtool > > > > Yep - that will be part of the EEC (DPLL) subsystem > > This model avoids all the problems I pointed out in the current > proposal. That's the go-to model, but first we need control over the source as well :) Regards Maciek > > > > > [1] https://lore.kernel.org/netdev/20211110114448.2792314-1- > > > maciej.machnikowski at intel.com/ From vinicius.gomes at intel.com Thu Dec 2 22:34:21 2021 From: vinicius.gomes at intel.com (Vinicius Costa Gomes) Date: Thu, 02 Dec 2021 14:34:21 -0800 Subject: [Intel-wired-lan] [PATCH] igc: Avoid possible deadlock during suspend/resume In-Reply-To: <5a4b31d43d9bf32e518188f3ef84c433df3a18b1.camel@gmx.de> References: <87r1awtdx3.fsf@intel.com> <20211201185731.236130-1-vinicius.gomes@intel.com> <5a4b31d43d9bf32e518188f3ef84c433df3a18b1.camel@gmx.de> Message-ID: <87o85yljpu.fsf@intel.com> Hi Stefan, Stefan Dietrich writes: > Hi Vinicius, > > thanks for the patch - unfortunately it did not solve the issue and I > am still getting reboots/lockups. > Thanks for the test. We learned something, not a lot, but something: the problem you are facing is PTM related and it's not the same bug as that PM deadlock. I am still trying to understand what's going on. Are you able to send me the 'dmesg' output for the two kernel configs (CONFIG_PCIE_PTM enabled and disabled)? (no need to bring the network interface up or down). Your kernel .config would be useful as well. > > Cheers, > Stefan > > On Wed, 2021-12-01 at 10:57 -0800, Vinicius Costa Gomes wrote: >> Inspired by: >> https://bugzilla.kernel.org/show_bug.cgi?id=215129 >> >> Signed-off-by: Vinicius Costa Gomes >> --- >> Just to see if it's indeed the same problem as the bug report above. 
>> >> drivers/net/ethernet/intel/igc/igc_main.c | 19 +++++++++++++------ >> 1 file changed, 13 insertions(+), 6 deletions(-) >> >> diff --git a/drivers/net/ethernet/intel/igc/igc_main.c >> b/drivers/net/ethernet/intel/igc/igc_main.c >> index 0e19b4d02e62..c58bf557a2a1 100644 >> --- a/drivers/net/ethernet/intel/igc/igc_main.c >> +++ b/drivers/net/ethernet/intel/igc/igc_main.c >> @@ -6619,7 +6619,7 @@ static void igc_deliver_wake_packet(struct >> net_device *netdev) >> netif_rx(skb); >> } >> >> -static int __maybe_unused igc_resume(struct device *dev) >> +static int __maybe_unused __igc_resume(struct device *dev, bool rpm) >> { >> struct pci_dev *pdev = to_pci_dev(dev); >> struct net_device *netdev = pci_get_drvdata(pdev); >> @@ -6661,20 +6661,27 @@ static int __maybe_unused igc_resume(struct >> device *dev) >> >> wr32(IGC_WUS, ~0); >> >> - rtnl_lock(); >> + if (!rpm) >> + rtnl_lock(); >> if (!err && netif_running(netdev)) >> err = __igc_open(netdev, true); >> >> if (!err) >> netif_device_attach(netdev); >> - rtnl_unlock(); >> + if (!rpm) >> + rtnl_unlock(); >> >> return err; >> } >> >> static int __maybe_unused igc_runtime_resume(struct device *dev) >> { >> - return igc_resume(dev); >> + return __igc_resume(dev, true); >> +} >> + >> +static int __maybe_unused igc_resume(struct device *dev) >> +{ >> + return __igc_resume(dev, false); >> } >> >> static int __maybe_unused igc_suspend(struct device *dev) >> @@ -6738,7 +6745,7 @@ static pci_ers_result_t >> igc_io_error_detected(struct pci_dev *pdev, >> * @pdev: Pointer to PCI device >> * >> * Restart the card from scratch, as if from a cold-boot. >> Implementation >> - * resembles the first-half of the igc_resume routine. >> + * resembles the first-half of the __igc_resume routine. >> **/ >> static pci_ers_result_t igc_io_slot_reset(struct pci_dev *pdev) >> { >> @@ -6777,7 +6784,7 @@ static pci_ers_result_t >> igc_io_slot_reset(struct pci_dev *pdev) >> * >> * This callback is called when the error recovery driver tells us >> that >> * its OK to resume normal operation. Implementation resembles the >> - * second-half of the igc_resume routine. >> + * second-half of the __igc_resume routine. >> */ >> static void igc_io_resume(struct pci_dev *pdev) >> { > Cheers, -- Vinicius From petrm at nvidia.com Fri Dec 3 14:26:41 2021 From: petrm at nvidia.com (Petr Machata) Date: Fri, 3 Dec 2021 15:26:41 +0100 Subject: [Intel-wired-lan] [PATCH v4 net-next 2/4] ethtool: Add ability to configure recovered clock for SyncE feature In-Reply-To: References: <20211201180208.640179-1-maciej.machnikowski@intel.com> <20211201180208.640179-3-maciej.machnikowski@intel.com> Message-ID: <87pmqdojby.fsf@nvidia.com> Machnikowski, Maciej writes: >> -----Original Message----- >> From: Ido Schimmel >> >> On Thu, Dec 02, 2021 at 03:17:06PM +0000, Machnikowski, Maciej wrote: >> > > -----Original Message----- >> > > From: Ido Schimmel >> > > >> > > On Wed, Dec 01, 2021 at 07:02:06PM +0100, Maciej Machnikowski wrote: >> > > Looking at the diagram from the previous submission [1]: >> > > >> > > ??????????????????????? >> > > ? RX ? TX ? >> > > 1 ? ports ? ports ? 1 >> > > ??????????? ? ??????? >> > > 2 ? ? ? ? 2 >> > > ????????? ? ? ??????? >> > > 3 ? ? ? ? ? 3 >> > > ??????? ? ? ? ??????? >> > > ? ? ? ? ? ? >> > > ? ?????? ? ? >> > > ? \____/ ? ? >> > > ??????????????????????? >> > > 1? 2? ? >> > > RCLK out? ? ? TX CLK in >> > > ? ? ? >> > > ??????????????????? >> > > ? ? >> > > ? SEC ? >> > > ? ? >> > > ??????????????????? 
>> > > >> > > Given a netdev (1, 2 or 3 in the diagram), the RCLK_SET message allows >> > > me to redirect the frequency recovered from this netdev to the EEC via >> > > either pin 1, pin 2 or both. >> > > >> > > Given a netdev, the RCLK_GET message allows me to query the range of >> > > pins (RCLK out 1-2 in the diagram) through which the frequency can be >> > > fed into the EEC. >> > > >> > > Questions: >> > > >> > > 1. The query for all the above netdevs will return the same range >> > > of pins. How does user space know that these are the same pins? >> > > That is, how does user space know that RCLK_SET message to >> > > redirect the frequency recovered from netdev 1 to pin 1 will be >> > > overridden by the same message but for netdev 2? >> > >> > We don't have a way to do so right now. When we have EEC subsystem >> > in place the right thing to do will be to add EEC input index and >> > EEC index as additional arguments >> > >> > > 2. How does user space know the mapping between a netdev and an >> > > EEC? That is, how does user space know that RCLK_SET message for >> > > netdev 1 will cause the Tx frequency of netdev 2 to change >> > > according to the frequency recovered from netdev 1? >> > >> > Ditto - currently we don't have any entity to link the pins to ATM, >> > but we can address that in userspace just like PTP pins are used >> > now >> > >> > > 3. If user space sends two RCLK_SET messages to redirect the >> > > frequency recovered from netdev 1 to RCLK out 1 and from netdev 2 >> > > to RCLK out 2, how does it know which recovered frequency is >> > > actually used an input to the EEC? >> >> User space doesn't know this as well? > > In current model it can come from the config file. Once we implement DPLL > subsystem we can implement connection between pins and DPLLs if they are > known. > >> > > >> > > 4. Why these pins are represented as attributes of a netdev and not as >> > > attributes of the EEC? That is, why are they represented as output pins >> > > of the PHY as opposed to input pins of the EEC? >> > >> > They are 2 separate beings. Recovered clock outputs are controlled >> > separately from EEC inputs. >> >> Separate how? What does it mean that they are controlled separately? In >> which sense? That redirection of recovered frequency to pin is >> controlled via PHY registers whereas priority setting between EEC inputs >> is controlled via EEC registers? If so, this is an implementation detail >> of a specific design. It is not of any importance to user space. > > They belong to different devices. EEC registers are physically in the DPLL > hanging over I2C and recovered clocks are in the PHY/integrated PHY in > the MAC. Depending on system architecture you may have control over > one piece only What does ETHTOOL_MSG_RCLK_SET actually configure, physically? Say I have this message: ETHTOOL_MSG_RCLK_SET dev = eth0 - ETHTOOL_A_RCLK_OUT_PIN_IDX = n - ETHTOOL_A_RCLK_PIN_FLAGS |= ETHTOOL_RCLK_PIN_FLAGS_ENA Eventually this lands in ops->set_rclk_out(dev, out_idx, new_state). What does the MAC driver do next? >> > If we mix them it'll be hard to control everything especially that a >> > single EEC can support multiple devices. >> >> Hard how? Please provide concrete examples. > > From the EEC perspective it's one to many relation - one EEC input pin will serve > even 4,16,48 netdevs. I don't see easy way of starting from EEC input of EEC device > and figuring out which netdevs are connected to it to talk to the right one. 
> In current model it's as simple as: > - I received QL-PRC on netdev ens4f0 > - I send back enable recovered clock on pin 0 of the ens4f0 How do I know it's pin 0 though? Config file? > - go to EEC that will be linked to it > - see the state of it - if its locked - report QL-EEC downsteam > > How would you this control look in the EEC/DPLL implementation? Maybe > I missed something. In the EEC-centric model this is what happens: - QL-PRC packet is received on ens4f0 - Userspace consults a UAPI to figure out what EEC and pin ID this netdevice corresponds to - Userspace instructs through a UAPI the indicated EEC to use the indicated pin as a source - Userspace then monitors the indicated EEC through a UAPI. When the EEC locks, QL-EEC is reported downstream >> What do you mean by "multiple devices"? A multi-port adapter with a >> single EEC or something else? > > Multiple MACs that use a single EEC clock. > >> > Also if we make those pins attributes of the EEC it'll become extremally hard >> > to map them to netdevs and control them from the userspace app that will >> > receive the ESMC message with a given QL level on netdev X. >> >> Hard how? What is the problem with something like: >> >> # eec set source 1 type netdev dev swp1 >> >> The EEC object should be registered by the same entity that registers >> the netdevs whose Tx frequency is controlled by the EEC, the MAC driver. > > But the EEC object may not be controlled by the MAC - in which case > this model won't work. In that case the driver for the device that controls EEC would instantiates the object. It doesn't have to be a MAC driver. But if it is controlled by the MAC, the MAC driver instantiates it. And can set up the connection between the MAC and the EEC, so that in the shell snippet above "eec" knows how to get the EEC handle from the netdevice. >> > >> > > 5. What is the problem with the following model? >> > > >> > > - The EEC is a separate object with following attributes: >> > > * State: Invalid / Freerun / Locked / etc >> > > * Sources: Netdev / external / etc >> > > * Potentially more >> > > >> > > - Notifications are emitted to user space when the state of the EEC >> > > changes. Drivers will either poll the state from the device or get >> > > interrupts >> > > >> > > - The mapping from netdev to EEC is queried via ethtool >> > >> > Yep - that will be part of the EEC (DPLL) subsystem >> >> This model avoids all the problems I pointed out in the current >> proposal. > > That's the go-to model, but first we need control over the source as > well :) Why is that? Can you illustrate a case that breaks with the above model? From gurucharanx.g at intel.com Fri Dec 3 14:44:48 2021 From: gurucharanx.g at intel.com (G, GurucharanX) Date: Fri, 3 Dec 2021 14:44:48 +0000 Subject: [Intel-wired-lan] [PATCH net-next 3/9] i40e: switch to napi_build_skb() In-Reply-To: <20211123171840.157471-4-alexandr.lobakin@intel.com> References: <20211123171840.157471-1-alexandr.lobakin@intel.com> <20211123171840.157471-4-alexandr.lobakin@intel.com> Message-ID: > -----Original Message----- > From: Intel-wired-lan On Behalf Of > Alexander Lobakin > Sent: Tuesday, November 23, 2021 10:49 PM > To: intel-wired-lan at lists.osuosl.org > Cc: netdev at vger.kernel.org; linux-kernel at vger.kernel.org; Jakub Kicinski > ; David S. 
Miller > Subject: [Intel-wired-lan] [PATCH net-next 3/9] i40e: switch to napi_build_skb() > > napi_build_skb() reuses per-cpu NAPI skbuff_head cache in order to save some > cycles on freeing/allocating skbuff_heads on every new Rx or completed Tx. > i40e driver runs Tx completion polling cycle right before the Rx one and uses > napi_consume_skb() to feed the cache with skbuff_heads of completed entries, > so it's never empty and always warm at that moment. Switch to the > napi_build_skb() to relax mm pressure on heavy Rx. > > Signed-off-by: Alexander Lobakin > Reviewed-by: Michal Swiatkowski > --- > drivers/net/ethernet/intel/i40e/i40e_txrx.c | 2 +- > 1 file changed, 1 insertion(+), 1 deletion(-) > Tested-by: Gurucharan G (A Contingent worker at Intel) From gurucharanx.g at intel.com Fri Dec 3 14:45:52 2021 From: gurucharanx.g at intel.com (G, GurucharanX) Date: Fri, 3 Dec 2021 14:45:52 +0000 Subject: [Intel-wired-lan] [PATCH net-next 8/9] ixgbe: switch to napi_build_skb() In-Reply-To: <20211123171840.157471-9-alexandr.lobakin@intel.com> References: <20211123171840.157471-1-alexandr.lobakin@intel.com> <20211123171840.157471-9-alexandr.lobakin@intel.com> Message-ID: > -----Original Message----- > From: Intel-wired-lan On Behalf Of > Alexander Lobakin > Sent: Tuesday, November 23, 2021 10:49 PM > To: intel-wired-lan at lists.osuosl.org > Cc: netdev at vger.kernel.org; linux-kernel at vger.kernel.org; Jakub Kicinski > ; David S. Miller > Subject: [Intel-wired-lan] [PATCH net-next 8/9] ixgbe: switch to napi_build_skb() > > napi_build_skb() reuses per-cpu NAPI skbuff_head cache in order to save some > cycles on freeing/allocating skbuff_heads on every new Rx or completed Tx. > ixgbe driver runs Tx completion polling cycle right before the Rx one and uses > napi_consume_skb() to feed the cache with skbuff_heads of completed entries, > so it's never empty and always warm at that moment. Switch to the > napi_build_skb() to relax mm pressure on heavy Rx. > > Signed-off-by: Alexander Lobakin > Reviewed-by: Michal Swiatkowski > --- > drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 2 +- > 1 file changed, 1 insertion(+), 1 deletion(-) > Tested-by: Gurucharan G (A Contingent worker at Intel) From maciej.machnikowski at intel.com Fri Dec 3 14:55:05 2021 From: maciej.machnikowski at intel.com (Machnikowski, Maciej) Date: Fri, 3 Dec 2021 14:55:05 +0000 Subject: [Intel-wired-lan] [PATCH v4 net-next 2/4] ethtool: Add ability to configure recovered clock for SyncE feature In-Reply-To: <87pmqdojby.fsf@nvidia.com> References: <20211201180208.640179-1-maciej.machnikowski@intel.com> <20211201180208.640179-3-maciej.machnikowski@intel.com> <87pmqdojby.fsf@nvidia.com> Message-ID: > -----Original Message----- > From: Petr Machata > Sent: Friday, December 3, 2021 3:27 PM > To: Machnikowski, Maciej > Subject: Re: [PATCH v4 net-next 2/4] ethtool: Add ability to configure > recovered clock for SyncE feature > > > Machnikowski, Maciej writes: > > >> -----Original Message----- > >> From: Ido Schimmel > >> > >> On Thu, Dec 02, 2021 at 03:17:06PM +0000, Machnikowski, Maciej wrote: > >> > > -----Original Message----- > >> > > From: Ido Schimmel > >> > > > >> > > On Wed, Dec 01, 2021 at 07:02:06PM +0100, Maciej Machnikowski > wrote: > >> > > Looking at the diagram from the previous submission [1]: > >> > > > >> > > ??????????????????????? > >> > > ? RX ? TX ? > >> > > 1 ? ports ? ports ? 1 > >> > > ??????????? ? ??????? > >> > > 2 ? ? ? ? 2 > >> > > ????????? ? ? ??????? > >> > > 3 ? ? ? ? ? 3 > >> > > ??????? ? ? ? ??????? 
> >> > > ? ? ? ? ? ? > >> > > ? ?????? ? ? > >> > > ? \____/ ? ? > >> > > ??????????????????????? > >> > > 1? 2? ? > >> > > RCLK out? ? ? TX CLK in > >> > > ? ? ? > >> > > ??????????????????? > >> > > ? ? > >> > > ? SEC ? > >> > > ? ? > >> > > ??????????????????? > >> > > > >> > > Given a netdev (1, 2 or 3 in the diagram), the RCLK_SET message > allows > >> > > me to redirect the frequency recovered from this netdev to the EEC > via > >> > > either pin 1, pin 2 or both. > >> > > > >> > > Given a netdev, the RCLK_GET message allows me to query the range > of > >> > > pins (RCLK out 1-2 in the diagram) through which the frequency can be > >> > > fed into the EEC. > >> > > > >> > > Questions: > >> > > > >> > > 1. The query for all the above netdevs will return the same range > >> > > of pins. How does user space know that these are the same pins? > >> > > That is, how does user space know that RCLK_SET message to > >> > > redirect the frequency recovered from netdev 1 to pin 1 will be > >> > > overridden by the same message but for netdev 2? > >> > > >> > We don't have a way to do so right now. When we have EEC subsystem > >> > in place the right thing to do will be to add EEC input index and > >> > EEC index as additional arguments > >> > > >> > > 2. How does user space know the mapping between a netdev and an > >> > > EEC? That is, how does user space know that RCLK_SET message for > >> > > netdev 1 will cause the Tx frequency of netdev 2 to change > >> > > according to the frequency recovered from netdev 1? > >> > > >> > Ditto - currently we don't have any entity to link the pins to ATM, > >> > but we can address that in userspace just like PTP pins are used > >> > now > >> > > >> > > 3. If user space sends two RCLK_SET messages to redirect the > >> > > frequency recovered from netdev 1 to RCLK out 1 and from netdev 2 > >> > > to RCLK out 2, how does it know which recovered frequency is > >> > > actually used an input to the EEC? > >> > >> User space doesn't know this as well? > > > > In current model it can come from the config file. Once we implement DPLL > > subsystem we can implement connection between pins and DPLLs if they > are > > known. > > > >> > > > >> > > 4. Why these pins are represented as attributes of a netdev and not as > >> > > attributes of the EEC? That is, why are they represented as output > pins > >> > > of the PHY as opposed to input pins of the EEC? > >> > > >> > They are 2 separate beings. Recovered clock outputs are controlled > >> > separately from EEC inputs. > >> > >> Separate how? What does it mean that they are controlled separately? In > >> which sense? That redirection of recovered frequency to pin is > >> controlled via PHY registers whereas priority setting between EEC inputs > >> is controlled via EEC registers? If so, this is an implementation detail > >> of a specific design. It is not of any importance to user space. > > > > They belong to different devices. EEC registers are physically in the DPLL > > hanging over I2C and recovered clocks are in the PHY/integrated PHY in > > the MAC. Depending on system architecture you may have control over > > one piece only > > What does ETHTOOL_MSG_RCLK_SET actually configure, physically? Say I > have this message: > > ETHTOOL_MSG_RCLK_SET dev = eth0 > - ETHTOOL_A_RCLK_OUT_PIN_IDX = n > - ETHTOOL_A_RCLK_PIN_FLAGS |= ETHTOOL_RCLK_PIN_FLAGS_ENA > > Eventually this lands in ops->set_rclk_out(dev, out_idx, new_state). > What does the MAC driver do next? 
It goes to the PTY layer, enables the clock recovery from a given physical lane, optionally configure the clock divider and pin output muxes. This will be HW-specific though, but the general concept will look like that. > >> > If we mix them it'll be hard to control everything especially that a > >> > single EEC can support multiple devices. > >> > >> Hard how? Please provide concrete examples. > > > > From the EEC perspective it's one to many relation - one EEC input pin will > serve > > even 4,16,48 netdevs. I don't see easy way of starting from EEC input of EEC > device > > and figuring out which netdevs are connected to it to talk to the right one. > > In current model it's as simple as: > > - I received QL-PRC on netdev ens4f0 > > - I send back enable recovered clock on pin 0 of the ens4f0 > > How do I know it's pin 0 though? Config file? You can find that by sending the ETHTOOL_MSG_RCLK_GET without any pin index to get the acceptable/supported range. > > - go to EEC that will be linked to it > > - see the state of it - if its locked - report QL-EEC downsteam > > > > How would you this control look in the EEC/DPLL implementation? Maybe > > I missed something. > > In the EEC-centric model this is what happens: > > - QL-PRC packet is received on ens4f0 > - Userspace consults a UAPI to figure out what EEC and pin ID this > netdevice corresponds to > - Userspace instructs through a UAPI the indicated EEC to use the > indicated pin as a source > - Userspace then monitors the indicated EEC through a UAPI. When the EEC > locks, QL-EEC is reported downstream This is still missing the port/lane->pin mapping. This is what will happen in the EEC/DPLL subsystem. > >> What do you mean by "multiple devices"? A multi-port adapter with a > >> single EEC or something else? > > > > Multiple MACs that use a single EEC clock. > > > >> > Also if we make those pins attributes of the EEC it'll become extremally > hard > >> > to map them to netdevs and control them from the userspace app that > will > >> > receive the ESMC message with a given QL level on netdev X. > >> > >> Hard how? What is the problem with something like: > >> > >> # eec set source 1 type netdev dev swp1 > >> > >> The EEC object should be registered by the same entity that registers > >> the netdevs whose Tx frequency is controlled by the EEC, the MAC driver. > > > > But the EEC object may not be controlled by the MAC - in which case > > this model won't work. > > In that case the driver for the device that controls EEC would > instantiates the object. It doesn't have to be a MAC driver. > > But if it is controlled by the MAC, the MAC driver instantiates it. And > can set up the connection between the MAC and the EEC, so that in the > shell snippet above "eec" knows how to get the EEC handle from the > netdevice. But it still needs to talk to MAC driver somehow to enable the clock recovery on a given pin - that's where the API defined here is needed. > >> > > >> > > 5. What is the problem with the following model? > >> > > > >> > > - The EEC is a separate object with following attributes: > >> > > * State: Invalid / Freerun / Locked / etc > >> > > * Sources: Netdev / external / etc > >> > > * Potentially more > >> > > > >> > > - Notifications are emitted to user space when the state of the EEC > >> > > changes. 
Drivers will either poll the state from the device or get > >> > > interrupts > >> > > > >> > > - The mapping from netdev to EEC is queried via ethtool > >> > > >> > Yep - that will be part of the EEC (DPLL) subsystem > >> > >> This model avoids all the problems I pointed out in the current > >> proposal. > > > > That's the go-to model, but first we need control over the source as > > well :) > > Why is that? Can you illustrate a case that breaks with the above model? If you have 32 port switch chip with 2 recovered clock outputs how will you tell the chip to get the 18th port to pin 0 and from port 20 to pin 1? That's the part those patches addresses. The further side of "which clock should the EEC use" belongs to the DPLL subsystem and I agree with that. Or to put it into different words: This API will configure given quality level frequency reference outputs on chip's Dedicated outputs. On a board you will connect those to the EEC's reference inputs. The EEC's job is to validate the inputs and lock to them following certain rules, The PHY/MAC (and this API) job is to deliver reference signals to the EEC. From kuba at kernel.org Fri Dec 3 15:04:10 2021 From: kuba at kernel.org (Jakub Kicinski) Date: Fri, 3 Dec 2021 07:04:10 -0800 Subject: [Intel-wired-lan] [RFC PATCH 0/4] r8169: support dash In-Reply-To: References: <20211129101315.16372-381-nic_swsd@realtek.com> <20211129095947.547a765f@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com> <918d75ea873a453ab2ba588a35d66ab6@realtek.com> <20211130190926.7c1d735d@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com> Message-ID: <20211203070410.1b4abc4d@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com> On Fri, 3 Dec 2021 07:57:08 +0000 Hayes Wang wrote: > Jakub Kicinski > > I'm not sure how relevant it will be to you but this is the > > documentation we have: > > > > https://www.kernel.org/doc/html/latest/networking/devlink/index.html > > https://www.kernel.org/doc/html/latest/networking/devlink/devlink-params.ht > > ml > > > > You'll need to add a generic parameter (define + a short description) > > like 325e0d0aa683 ("devlink: Add 'enable_iwarp' generic device param") > > > > In terms of driver changes I think the most relevant example to you > > will be: > > > > drivers/net/ethernet/ti/cpsw_new.c > > > > You need to call devlink_alloc(), devlink_register and > > devlink_params_register() (and the inverse functions). > > I have studied the devlink briefly. > > However, I find some problems. First, our > settings are dependent on the design of > both the hardware and firmware. That is, > I don't think the others need to do the > settings as the same as us. The devlink > seems to let everyone could use the same > command to do the same setting. However, > most of our settings are useless for the > other devices. > > Second, according to the design of our > CMAC, the application has to read and > write data with variable length from/to > the firmware. Each custom has his own > requests. Therefore, our customs would > get different firmware with different > behavior. Only the application and the > firmware know how to communicate with > each other. The driver only passes the > data between them. Like the Ethernet > driver, it doesn't need to know the > contend of the packet. I could implement > the CMAC through sysfs, but I don't > know how to do by devlink. > > In brief, CMAC is our major method to > configure the firmware and get response > from the firmware. 
Except for certain information, > the other settings are not standard and useless > for the other vendors. > > Is the devlink the only method I could use? > Actually, we use IOCTL now. We wish to > convert it to sysfs for upstream driver. Ah, I've only spotted the enable/disable knob in the patch. If you're exchanging arbitrary binary data with the FW we can't help you. It's not going to fly upstream. From idosch at idosch.org Fri Dec 3 15:45:51 2021 From: idosch at idosch.org (Ido Schimmel) Date: Fri, 3 Dec 2021 17:45:51 +0200 Subject: [Intel-wired-lan] [PATCH v4 net-next 2/4] ethtool: Add ability to configure recovered clock for SyncE feature In-Reply-To: References: <20211201180208.640179-1-maciej.machnikowski@intel.com> <20211201180208.640179-3-maciej.machnikowski@intel.com> Message-ID: On Thu, Dec 02, 2021 at 05:20:24PM +0000, Machnikowski, Maciej wrote: > > -----Original Message----- > > From: Ido Schimmel > > Sent: Thursday, December 2, 2021 5:36 PM > > To: Machnikowski, Maciej > > Subject: Re: [PATCH v4 net-next 2/4] ethtool: Add ability to configure > > recovered clock for SyncE feature > > > > On Thu, Dec 02, 2021 at 03:17:06PM +0000, Machnikowski, Maciej wrote: > > > > -----Original Message----- > > > > From: Ido Schimmel > > > > Sent: Thursday, December 2, 2021 1:44 PM > > > > To: Machnikowski, Maciej > > > > Subject: Re: [PATCH v4 net-next 2/4] ethtool: Add ability to configure > > > > recovered clock for SyncE feature > > > > > > > > On Wed, Dec 01, 2021 at 07:02:06PM +0100, Maciej Machnikowski wrote: > > > > Looking at the diagram from the previous submission [1]: > > > > > > > > ??????????????????????? > > > > ? RX ? TX ? > > > > 1 ? ports ? ports ? 1 > > > > ??????????? ? ??????? > > > > 2 ? ? ? ? 2 > > > > ????????? ? ? ??????? > > > > 3 ? ? ? ? ? 3 > > > > ??????? ? ? ? ??????? > > > > ? ? ? ? ? ? > > > > ? ?????? ? ? > > > > ? \____/ ? ? > > > > ??????????????????????? > > > > 1? 2? ? > > > > RCLK out? ? ? TX CLK in > > > > ? ? ? > > > > ??????????????????? > > > > ? ? > > > > ? SEC ? > > > > ? ? > > > > ??????????????????? > > > > > > > > Given a netdev (1, 2 or 3 in the diagram), the RCLK_SET message allows > > > > me to redirect the frequency recovered from this netdev to the EEC via > > > > either pin 1, pin 2 or both. > > > > > > > > Given a netdev, the RCLK_GET message allows me to query the range of > > > > pins (RCLK out 1-2 in the diagram) through which the frequency can be > > > > fed into the EEC. > > > > > > > > Questions: > > > > > > > > 1. The query for all the above netdevs will return the same range of > > > > pins. How does user space know that these are the same pins? That is, > > > > how does user space know that RCLK_SET message to redirect the > > frequency > > > > recovered from netdev 1 to pin 1 will be overridden by the same > > message > > > > but for netdev 2? > > > > > > We don't have a way to do so right now. When we have EEC subsystem in > > place > > > the right thing to do will be to add EEC input index and EEC index as > > additional > > > arguments > > > > > > > 2. How does user space know the mapping between a netdev and an > > EEC? > > > > That is, how does user space know that RCLK_SET message for netdev 1 > > > > will cause the Tx frequency of netdev 2 to change according to the > > > > frequency recovered from netdev 1? > > > > > > Ditto - currently we don't have any entity to link the pins to ATM, > > > but we can address that in userspace just like PTP pins are used now > > > > > > > 3. 
If user space sends two RCLK_SET messages to redirect the frequency > > > > recovered from netdev 1 to RCLK out 1 and from netdev 2 to RCLK out 2, > > > > how does it know which recovered frequency is actually used an input to > > > > the EEC? > > > > User space doesn't know this as well? > > In current model it can come from the config file. Once we implement DPLL > subsystem we can implement connection between pins and DPLLs if they are > known. To be clear, no SyncE patches should be accepted before we have a DPLL subsystem or however the subsystem that will model the EEC is going to be called. You are asking us to buy into a new uAPI that can never be removed. We pointed out numerous problems with this uAPI and suggested a model that solves them. When asked why it can't work we are answered with vague arguments about this model being "hard". In addition, without a representation of the EEC, these patches have no value for user space. They basically allow user space to redirect the recovered frequency from a netdev to an object that does not exist. User space doesn't know if the object is successfully tracking the frequency (the EEC state) and does not know which other components are utilizing this recovered frequency as input (e.g., other netdevs, PHC). BTW, what is the use case for enabling two EEC inputs simultaneously? Some seamless failover? > > > > > > > > > 4. Why these pins are represented as attributes of a netdev and not as > > > > attributes of the EEC? That is, why are they represented as output pins > > > > of the PHY as opposed to input pins of the EEC? > > > > > > They are 2 separate beings. Recovered clock outputs are controlled > > > separately from EEC inputs. > > > > Separate how? What does it mean that they are controlled separately? In > > which sense? That redirection of recovered frequency to pin is > > controlled via PHY registers whereas priority setting between EEC inputs > > is controlled via EEC registers? If so, this is an implementation detail > > of a specific design. It is not of any importance to user space. > > They belong to different devices. EEC registers are physically in the DPLL > hanging over I2C and recovered clocks are in the PHY/integrated PHY in > the MAC. Depending on system architecture you may have control over > one piece only These are implementation details of a specific design and should not influence the design of the uAPI. The uAPI should be influenced by the logical task that it is trying to achieve. > > > > If we mix them it'll be hard to control everything especially that a > > > single EEC can support multiple devices. > > > > Hard how? Please provide concrete examples. > > From the EEC perspective it's one to many relation - one EEC input pin will serve > even 4,16,48 netdevs. I don't see easy way of starting from EEC input of EEC device > and figuring out which netdevs are connected to it to talk to the right one. > In current model it's as simple as: > - I received QL-PRC on netdev ens4f0 > - I send back enable recovered clock on pin 0 of the ens4f0 > - go to EEC that will be linked to it > - see the state of it - if its locked - report QL-EEC downsteam > > How would you this control look in the EEC/DPLL implementation? Maybe > I missed something. Petr already replied. > > > What do you mean by "multiple devices"? A multi-port adapter with a > > single EEC or something else? > > Multiple MACs that use a single EEC clock. 
> > > > Also if we make those pins attributes of the EEC it'll become extremally > > hard > > > to map them to netdevs and control them from the userspace app that will > > > receive the ESMC message with a given QL level on netdev X. > > > > Hard how? What is the problem with something like: > > > > # eec set source 1 type netdev dev swp1 > > > > The EEC object should be registered by the same entity that registers > > the netdevs whose Tx frequency is controlled by the EEC, the MAC driver. > > But the EEC object may not be controlled by the MAC - in which case > this model won't work. Why wouldn't it work? Leave individual kernel modules alone and look at the kernel. It is registering all the necessary logical objects such netdevs, PHCs and EECs. There is no way user space knows better than the kernel how these objects fit together as the purpose of the kernel is to abstract the hardware to user space. User space's request to use the Rx frequency recovered from netdev X as an input to EEC Y will be processed by the DPLL subsystem. In turn, this subsystem will invoke whichever kernel modules it needs to fulfill the request. > > > > > > > > 5. What is the problem with the following model? > > > > > > > > - The EEC is a separate object with following attributes: > > > > * State: Invalid / Freerun / Locked / etc > > > > * Sources: Netdev / external / etc > > > > * Potentially more > > > > > > > > - Notifications are emitted to user space when the state of the EEC > > > > changes. Drivers will either poll the state from the device or get > > > > interrupts > > > > > > > > - The mapping from netdev to EEC is queried via ethtool > > > > > > Yep - that will be part of the EEC (DPLL) subsystem > > > > This model avoids all the problems I pointed out in the current > > proposal. > > That's the go-to model, but first we need control over the source as well :) The point that we are trying to make is that like the EEC state, the source is also an EEC attribute and not a netdev attribute. From idosch at idosch.org Fri Dec 3 15:58:20 2021 From: idosch at idosch.org (Ido Schimmel) Date: Fri, 3 Dec 2021 17:58:20 +0200 Subject: [Intel-wired-lan] [PATCH v4 net-next 2/4] ethtool: Add ability to configure recovered clock for SyncE feature In-Reply-To: References: <20211201180208.640179-1-maciej.machnikowski@intel.com> <20211201180208.640179-3-maciej.machnikowski@intel.com> <87pmqdojby.fsf@nvidia.com> Message-ID: On Fri, Dec 03, 2021 at 02:55:05PM +0000, Machnikowski, Maciej wrote: > If you have 32 port switch chip with 2 recovered clock outputs how will you > tell the chip to get the 18th port to pin 0 and from port 20 to pin 1? That's > the part those patches addresses. The further side of "which clock should the > EEC use" belongs to the DPLL subsystem and I agree with that. > > Or to put it into different words: > This API will configure given quality level frequency reference outputs on chip's > Dedicated outputs. On a board you will connect those to the EEC's reference inputs. So these outputs are hardwired into the EEC's inputs and are therefore only meaningful as EEC inputs? If so, why these outputs are not configured via the EEC object? > > The EEC's job is to validate the inputs and lock to them following certain rules, > The PHY/MAC (and this API) job is to deliver reference signals to the EEC. 
> From maciej.machnikowski at intel.com Fri Dec 3 16:18:18 2021 From: maciej.machnikowski at intel.com (Machnikowski, Maciej) Date: Fri, 3 Dec 2021 16:18:18 +0000 Subject: [Intel-wired-lan] [PATCH v4 net-next 2/4] ethtool: Add ability to configure recovered clock for SyncE feature In-Reply-To: References: <20211201180208.640179-1-maciej.machnikowski@intel.com> <20211201180208.640179-3-maciej.machnikowski@intel.com> Message-ID: > -----Original Message----- > From: Ido Schimmel > Sent: Friday, December 3, 2021 4:46 PM > To: Machnikowski, Maciej > Subject: Re: [PATCH v4 net-next 2/4] ethtool: Add ability to configure > recovered clock for SyncE feature > > On Thu, Dec 02, 2021 at 05:20:24PM +0000, Machnikowski, Maciej wrote: > > > -----Original Message----- > > > From: Ido Schimmel > > > Sent: Thursday, December 2, 2021 5:36 PM > > > To: Machnikowski, Maciej > > > Subject: Re: [PATCH v4 net-next 2/4] ethtool: Add ability to configure > > > recovered clock for SyncE feature > > > > > > On Thu, Dec 02, 2021 at 03:17:06PM +0000, Machnikowski, Maciej wrote: > > > > > -----Original Message----- > > > > > From: Ido Schimmel > > > > > Sent: Thursday, December 2, 2021 1:44 PM > > > > > To: Machnikowski, Maciej > > > > > Subject: Re: [PATCH v4 net-next 2/4] ethtool: Add ability to configure > > > > > recovered clock for SyncE feature > > > > > > > > > > On Wed, Dec 01, 2021 at 07:02:06PM +0100, Maciej Machnikowski > wrote: > > > > > Looking at the diagram from the previous submission [1]: > > > > > > > > > > ??????????????????????? > > > > > ? RX ? TX ? > > > > > 1 ? ports ? ports ? 1 > > > > > ??????????? ? ??????? > > > > > 2 ? ? ? ? 2 > > > > > ????????? ? ? ??????? > > > > > 3 ? ? ? ? ? 3 > > > > > ??????? ? ? ? ??????? > > > > > ? ? ? ? ? ? > > > > > ? ?????? ? ? > > > > > ? \____/ ? ? > > > > > ??????????????????????? > > > > > 1? 2? ? > > > > > RCLK out? ? ? TX CLK in > > > > > ? ? ? > > > > > ??????????????????? > > > > > ? ? > > > > > ? SEC ? > > > > > ? ? > > > > > ??????????????????? > > > > > > > > > > Given a netdev (1, 2 or 3 in the diagram), the RCLK_SET message > allows > > > > > me to redirect the frequency recovered from this netdev to the EEC > via > > > > > either pin 1, pin 2 or both. > > > > > > > > > > Given a netdev, the RCLK_GET message allows me to query the range > of > > > > > pins (RCLK out 1-2 in the diagram) through which the frequency can > be > > > > > fed into the EEC. > > > > > > > > > > Questions: > > > > > > > > > > 1. The query for all the above netdevs will return the same range of > > > > > pins. How does user space know that these are the same pins? That > is, > > > > > how does user space know that RCLK_SET message to redirect the > > > frequency > > > > > recovered from netdev 1 to pin 1 will be overridden by the same > > > message > > > > > but for netdev 2? > > > > > > > > We don't have a way to do so right now. When we have EEC subsystem > in > > > place > > > > the right thing to do will be to add EEC input index and EEC index as > > > additional > > > > arguments > > > > > > > > > 2. How does user space know the mapping between a netdev and an > > > EEC? > > > > > That is, how does user space know that RCLK_SET message for netdev > 1 > > > > > will cause the Tx frequency of netdev 2 to change according to the > > > > > frequency recovered from netdev 1? > > > > > > > > Ditto - currently we don't have any entity to link the pins to ATM, > > > > but we can address that in userspace just like PTP pins are used now > > > > > > > > > 3. 
If user space sends two RCLK_SET messages to redirect the > frequency > > > > > recovered from netdev 1 to RCLK out 1 and from netdev 2 to RCLK out > 2, > > > > > how does it know which recovered frequency is actually used an input > to > > > > > the EEC? > > > > > > User space doesn't know this as well? > > > > In current model it can come from the config file. Once we implement DPLL > > subsystem we can implement connection between pins and DPLLs if they > are > > known. > > To be clear, no SyncE patches should be accepted before we have a DPLL > subsystem or however the subsystem that will model the EEC is going to > be called. > > You are asking us to buy into a new uAPI that can never be removed. We > pointed out numerous problems with this uAPI and suggested a model that > solves them. When asked why it can't work we are answered with vague > arguments about this model being "hard". My argument was never "it's hard" - the answer is we need both APIs. > In addition, without a representation of the EEC, these patches have no > value for user space. They basically allow user space to redirect the > recovered frequency from a netdev to an object that does not exist. > User space doesn't know if the object is successfully tracking the > frequency (the EEC state) and does not know which other components are > utilizing this recovered frequency as input (e.g., other netdevs, PHC). That's also not true - the proposed uAPI lets you enable recovered frequency output pins and redirect the right clock to them. In some implementations you may not have anything else. > BTW, what is the use case for enabling two EEC inputs simultaneously? > Some seamless failover? Mainly - redundacy > > > > > > > > > > > > 4. Why these pins are represented as attributes of a netdev and not > as > > > > > attributes of the EEC? That is, why are they represented as output > pins > > > > > of the PHY as opposed to input pins of the EEC? > > > > > > > > They are 2 separate beings. Recovered clock outputs are controlled > > > > separately from EEC inputs. > > > > > > Separate how? What does it mean that they are controlled separately? In > > > which sense? That redirection of recovered frequency to pin is > > > controlled via PHY registers whereas priority setting between EEC inputs > > > is controlled via EEC registers? If so, this is an implementation detail > > > of a specific design. It is not of any importance to user space. > > > > They belong to different devices. EEC registers are physically in the DPLL > > hanging over I2C and recovered clocks are in the PHY/integrated PHY in > > the MAC. Depending on system architecture you may have control over > > one piece only > > These are implementation details of a specific design and should not > influence the design of the uAPI. The uAPI should be influenced by the > logical task that it is trying to achieve. There are 2 logical tasks: 1. Enable clocks that are recovered from a specific netdev 2. Control the EEC They are both needed to get to the full solution, but are independent from each other. You can't put RCLK redirection to the EEC as it's one to many relation and you will need to call the netdev to enable it anyway. Also, when we tried to add EEC state to PTP subsystem the answer was that we can't mix subsystems. The proposal to configure recovered clocks through EEC would mix netdev with EEC. > > > > > > If we mix them it'll be hard to control everything especially that a > > > > single EEC can support multiple devices. > > > > > > Hard how? 
Please provide concrete examples. > > > > From the EEC perspective it's one to many relation - one EEC input pin will > serve > > even 4,16,48 netdevs. I don't see easy way of starting from EEC input of EEC > device > > and figuring out which netdevs are connected to it to talk to the right one. > > In current model it's as simple as: > > - I received QL-PRC on netdev ens4f0 > > - I send back enable recovered clock on pin 0 of the ens4f0 > > - go to EEC that will be linked to it > > - see the state of it - if its locked - report QL-EEC downsteam > > > > How would you this control look in the EEC/DPLL implementation? Maybe > > I missed something. > > Petr already replied. See my response there. > > > > > What do you mean by "multiple devices"? A multi-port adapter with a > > > single EEC or something else? > > > > Multiple MACs that use a single EEC clock. > > > > > > Also if we make those pins attributes of the EEC it'll become extremally > > > hard > > > > to map them to netdevs and control them from the userspace app that > will > > > > receive the ESMC message with a given QL level on netdev X. > > > > > > Hard how? What is the problem with something like: > > > > > > # eec set source 1 type netdev dev swp1 > > > > > > The EEC object should be registered by the same entity that registers > > > the netdevs whose Tx frequency is controlled by the EEC, the MAC > driver. > > > > But the EEC object may not be controlled by the MAC - in which case > > this model won't work. > > Why wouldn't it work? Leave individual kernel modules alone and look at > the kernel. It is registering all the necessary logical objects such > netdevs, PHCs and EECs. There is no way user space knows better than the > kernel how these objects fit together as the purpose of the kernel is to > abstract the hardware to user space. > > User space's request to use the Rx frequency recovered from netdev X as > an input to EEC Y will be processed by the DPLL subsystem. In turn, this > subsystem will invoke whichever kernel modules it needs to fulfill the > request. But how would that call go through the kernel? What would you like to give to the EEC object and how should it react. I'm fine with the changes, but I don't see the solution in that proposal and this model would mix independent subsystems. The netdev -> EEC should be a downstream relation, just like the PTP is now If a netdev wants to check what's the state of EEC driving it - it can do it, but I don't see a way for the EEC subsystem to directly configure something in Potentially couple different MAC chips without calling a kind of netdev API. And that's what those patches address. > > > > > > > > > > > 5. What is the problem with the following model? > > > > > > > > > > - The EEC is a separate object with following attributes: > > > > > * State: Invalid / Freerun / Locked / etc > > > > > * Sources: Netdev / external / etc > > > > > * Potentially more > > > > > > > > > > - Notifications are emitted to user space when the state of the EEC > > > > > changes. Drivers will either poll the state from the device or get > > > > > interrupts > > > > > > > > > > - The mapping from netdev to EEC is queried via ethtool > > > > > > > > Yep - that will be part of the EEC (DPLL) subsystem > > > > > > This model avoids all the problems I pointed out in the current > > > proposal. 
> > > > That's the go-to model, but first we need control over the source as well :) > > The point that we are trying to make is that like the EEC state, the > source is also an EEC attribute and not a netdev attribute. From petrm at nvidia.com Fri Dec 3 16:26:08 2021 From: petrm at nvidia.com (Petr Machata) Date: Fri, 3 Dec 2021 17:26:08 +0100 Subject: [Intel-wired-lan] [PATCH v4 net-next 2/4] ethtool: Add ability to configure recovered clock for SyncE feature In-Reply-To: References: <20211201180208.640179-1-maciej.machnikowski@intel.com> <20211201180208.640179-3-maciej.machnikowski@intel.com> <87pmqdojby.fsf@nvidia.com> Message-ID: <87lf11odsv.fsf@nvidia.com> Machnikowski, Maciej writes: >> -----Original Message----- >> From: Petr Machata >> Sent: Friday, December 3, 2021 3:27 PM >> To: Machnikowski, Maciej >> Subject: Re: [PATCH v4 net-next 2/4] ethtool: Add ability to configure >> recovered clock for SyncE feature >> >> >> Machnikowski, Maciej writes: >> >> >> -----Original Message----- >> >> From: Ido Schimmel >> >> >> >> On Thu, Dec 02, 2021 at 03:17:06PM +0000, Machnikowski, Maciej wrote: >> >> > > -----Original Message----- >> >> > > From: Ido Schimmel >> >> > > >> >> > > On Wed, Dec 01, 2021 at 07:02:06PM +0100, Maciej Machnikowski >> wrote: >> >> > > Looking at the diagram from the previous submission [1]: >> >> > > >> >> > > ??????????????????????? >> >> > > ? RX ? TX ? >> >> > > 1 ? ports ? ports ? 1 >> >> > > ??????????? ? ??????? >> >> > > 2 ? ? ? ? 2 >> >> > > ????????? ? ? ??????? >> >> > > 3 ? ? ? ? ? 3 >> >> > > ??????? ? ? ? ??????? >> >> > > ? ? ? ? ? ? >> >> > > ? ?????? ? ? >> >> > > ? \____/ ? ? >> >> > > ??????????????????????? >> >> > > 1? 2? ? >> >> > > RCLK out? ? ? TX CLK in >> >> > > ? ? ? >> >> > > ??????????????????? >> >> > > ? ? >> >> > > ? SEC ? >> >> > > ? ? >> >> > > ??????????????????? >> >> > > >> >> > > Given a netdev (1, 2 or 3 in the diagram), the RCLK_SET message >> allows >> >> > > me to redirect the frequency recovered from this netdev to the EEC >> via >> >> > > either pin 1, pin 2 or both. >> >> > > >> >> > > Given a netdev, the RCLK_GET message allows me to query the range >> of >> >> > > pins (RCLK out 1-2 in the diagram) through which the frequency can be >> >> > > fed into the EEC. >> >> > > >> >> > > Questions: >> >> > > >> >> > > 1. The query for all the above netdevs will return the same range >> >> > > of pins. How does user space know that these are the same pins? >> >> > > That is, how does user space know that RCLK_SET message to >> >> > > redirect the frequency recovered from netdev 1 to pin 1 will be >> >> > > overridden by the same message but for netdev 2? >> >> > >> >> > We don't have a way to do so right now. When we have EEC subsystem >> >> > in place the right thing to do will be to add EEC input index and >> >> > EEC index as additional arguments >> >> > >> >> > > 2. How does user space know the mapping between a netdev and an >> >> > > EEC? That is, how does user space know that RCLK_SET message for >> >> > > netdev 1 will cause the Tx frequency of netdev 2 to change >> >> > > according to the frequency recovered from netdev 1? >> >> > >> >> > Ditto - currently we don't have any entity to link the pins to ATM, >> >> > but we can address that in userspace just like PTP pins are used >> >> > now >> >> > >> >> > > 3. 
If user space sends two RCLK_SET messages to redirect the >> >> > > frequency recovered from netdev 1 to RCLK out 1 and from netdev 2 >> >> > > to RCLK out 2, how does it know which recovered frequency is >> >> > > actually used an input to the EEC? >> >> >> >> User space doesn't know this as well? >> > >> > In current model it can come from the config file. Once we implement DPLL >> > subsystem we can implement connection between pins and DPLLs if they >> are >> > known. >> > >> >> > > >> >> > > 4. Why these pins are represented as attributes of a netdev and not as >> >> > > attributes of the EEC? That is, why are they represented as output >> pins >> >> > > of the PHY as opposed to input pins of the EEC? >> >> > >> >> > They are 2 separate beings. Recovered clock outputs are controlled >> >> > separately from EEC inputs. >> >> >> >> Separate how? What does it mean that they are controlled separately? In >> >> which sense? That redirection of recovered frequency to pin is >> >> controlled via PHY registers whereas priority setting between EEC inputs >> >> is controlled via EEC registers? If so, this is an implementation detail >> >> of a specific design. It is not of any importance to user space. >> > >> > They belong to different devices. EEC registers are physically in the DPLL >> > hanging over I2C and recovered clocks are in the PHY/integrated PHY in >> > the MAC. Depending on system architecture you may have control over >> > one piece only >> >> What does ETHTOOL_MSG_RCLK_SET actually configure, physically? Say I >> have this message: >> >> ETHTOOL_MSG_RCLK_SET dev = eth0 >> - ETHTOOL_A_RCLK_OUT_PIN_IDX = n >> - ETHTOOL_A_RCLK_PIN_FLAGS |= ETHTOOL_RCLK_PIN_FLAGS_ENA >> >> Eventually this lands in ops->set_rclk_out(dev, out_idx, new_state). >> What does the MAC driver do next? > > It goes to the PTY layer, enables the clock recovery from a given physical lane, > optionally configure the clock divider and pin output muxes. This will be > HW-specific though, but the general concept will look like that. The reason I am asking is that I suspect that by exposing this functionality through netdev, you assume that the NIC driver will do whatever EEC configuration necessary _anyway_. So why couldn't it just instantiate the EEC object as well? >> >> > If we mix them it'll be hard to control everything especially that a >> >> > single EEC can support multiple devices. >> >> >> >> Hard how? Please provide concrete examples. >> > >> > From the EEC perspective it's one to many relation - one EEC input >> > pin will serve even 4,16,48 netdevs. I don't see easy way of >> > starting from EEC input of EEC device and figuring out which >> > netdevs are connected to it to talk to the right one. In current >> > model it's as simple as: >> > - I received QL-PRC on netdev ens4f0 >> > - I send back enable recovered clock on pin 0 of the ens4f0 >> >> How do I know it's pin 0 though? Config file? > > You can find that by sending the ETHTOOL_MSG_RCLK_GET without any pin > index to get the acceptable/supported range. Ha, OK, pin0 means the RCLK pin. OK. >> > - go to EEC that will be linked to it >> > - see the state of it - if its locked - report QL-EEC downsteam >> > >> > How would you this control look in the EEC/DPLL implementation? Maybe >> > I missed something. 
>> >> In the EEC-centric model this is what happens: >> >> - QL-PRC packet is received on ens4f0 >> - Userspace consults a UAPI to figure out what EEC and pin ID this >> netdevice corresponds to >> - Userspace instructs through a UAPI the indicated EEC to use the >> indicated pin as a source >> - Userspace then monitors the indicated EEC through a UAPI. When the EEC >> locks, QL-EEC is reported downstream > > This is still missing the port/lane->pin mapping. This is what will > happen in the EEC/DPLL subsystem. You asked how the control looks in the ECC-centric model. So this is how. That this stuff is missing is fairly obvious, we are talking about a different model. I don't buy the "extremely hard" argument. The set of steps to do might be longer, but they are still just steps. No jumps, hoops, sommersaults. On the flip side we get a proper UAPI that can stay useful for a while. >> >> What do you mean by "multiple devices"? A multi-port adapter with a >> >> single EEC or something else? >> > >> > Multiple MACs that use a single EEC clock. >> > >> >> > Also if we make those pins attributes of the EEC it'll become >> >> > extremally hard to map them to netdevs and control them from the >> >> > userspace app that will receive the ESMC message with a given QL >> >> > level on netdev X. >> >> >> >> Hard how? What is the problem with something like: >> >> >> >> # eec set source 1 type netdev dev swp1 >> >> >> >> The EEC object should be registered by the same entity that registers >> >> the netdevs whose Tx frequency is controlled by the EEC, the MAC driver. >> > >> > But the EEC object may not be controlled by the MAC - in which case >> > this model won't work. >> >> In that case the driver for the device that controls EEC would >> instantiates the object. It doesn't have to be a MAC driver. >> >> But if it is controlled by the MAC, the MAC driver instantiates it. And >> can set up the connection between the MAC and the EEC, so that in the >> shell snippet above "eec" knows how to get the EEC handle from the >> netdevice. > > But it still needs to talk to MAC driver somehow to enable the clock > recovery on a given pin - that's where the API defined here is needed. Yes, there needs to be an API between the EEC object and its owner. That API can be internal though. E.g. a set of callbacks or a notifier chain. This is how loose coupling is typically done in the kernel. >> >> > > 5. What is the problem with the following model? >> >> > > >> >> > > - The EEC is a separate object with following attributes: >> >> > > * State: Invalid / Freerun / Locked / etc >> >> > > * Sources: Netdev / external / etc >> >> > > * Potentially more >> >> > > >> >> > > - Notifications are emitted to user space when the state of the EEC >> >> > > changes. Drivers will either poll the state from the device or get >> >> > > interrupts >> >> > > >> >> > > - The mapping from netdev to EEC is queried via ethtool >> >> > >> >> > Yep - that will be part of the EEC (DPLL) subsystem >> >> >> >> This model avoids all the problems I pointed out in the current >> >> proposal. >> > >> > That's the go-to model, but first we need control over the source as >> > well :) >> >> Why is that? Can you illustrate a case that breaks with the above model? > > If you have 32 port switch chip with 2 recovered clock outputs how will you > tell the chip to get the 18th port to pin 0 and from port 20 to pin 1? That's > the part those patches addresses. 
The further side of "which clock should the > EEC use" belongs to the DPLL subsystem and I agree with that. So the claim is that in some cases the owner of the EEC does not know about the netdevices? If that is the case, how do netdevices know about the EEC, like the netdev-centric model assumes? Anyway, to answer the question, something like the following would happen: - Ask EEC to enumerate all input pins it knows about - Find the one that references swp18 - Ask EEC to forward that input pin to output pin 0 - Repeat for swp20 and output pin 1 The switch driver (or multi-port NIC driver) just instantiates all of netdevices, the EEC object, and pin objects, and therefore can set up arbitrary linking between the three. > Or to put it into different words: > This API will configure given quality level frequency reference outputs on chip's > Dedicated outputs. On a board you will connect those to the EEC's reference inputs. > > The EEC's job is to validate the inputs and lock to them following certain rules, > The PHY/MAC (and this API) job is to deliver reference signals to the EEC. From maciej.machnikowski at intel.com Fri Dec 3 16:50:07 2021 From: maciej.machnikowski at intel.com (Machnikowski, Maciej) Date: Fri, 3 Dec 2021 16:50:07 +0000 Subject: [Intel-wired-lan] [PATCH v4 net-next 2/4] ethtool: Add ability to configure recovered clock for SyncE feature In-Reply-To: <87lf11odsv.fsf@nvidia.com> References: <20211201180208.640179-1-maciej.machnikowski@intel.com> <20211201180208.640179-3-maciej.machnikowski@intel.com> <87pmqdojby.fsf@nvidia.com> <87lf11odsv.fsf@nvidia.com> Message-ID: > -----Original Message----- > From: Petr Machata > Sent: Friday, December 3, 2021 5:26 PM > To: Machnikowski, Maciej > Subject: Re: [PATCH v4 net-next 2/4] ethtool: Add ability to configure > recovered clock for SyncE feature > > > Machnikowski, Maciej writes: > > >> -----Original Message----- > >> From: Petr Machata > >> Sent: Friday, December 3, 2021 3:27 PM > >> To: Machnikowski, Maciej > >> Subject: Re: [PATCH v4 net-next 2/4] ethtool: Add ability to configure > >> recovered clock for SyncE feature > >> > >> > >> Machnikowski, Maciej writes: > >> > >> >> -----Original Message----- > >> >> From: Ido Schimmel > >> >> > >> >> On Thu, Dec 02, 2021 at 03:17:06PM +0000, Machnikowski, Maciej > wrote: > >> >> > > -----Original Message----- > >> >> > > From: Ido Schimmel > >> >> > > > >> >> > > On Wed, Dec 01, 2021 at 07:02:06PM +0100, Maciej Machnikowski > >> wrote: > >> >> > > Looking at the diagram from the previous submission [1]: > >> >> > > > >> >> > > ??????????????????????? > >> >> > > ? RX ? TX ? > >> >> > > 1 ? ports ? ports ? 1 > >> >> > > ??????????? ? ??????? > >> >> > > 2 ? ? ? ? 2 > >> >> > > ????????? ? ? ??????? > >> >> > > 3 ? ? ? ? ? 3 > >> >> > > ??????? ? ? ? ??????? > >> >> > > ? ? ? ? ? ? > >> >> > > ? ?????? ? ? > >> >> > > ? \____/ ? ? > >> >> > > ??????????????????????? > >> >> > > 1? 2? ? > >> >> > > RCLK out? ? ? TX CLK in > >> >> > > ? ? ? > >> >> > > ??????????????????? > >> >> > > ? ? > >> >> > > ? SEC ? > >> >> > > ? ? > >> >> > > ??????????????????? > >> >> > > > >> >> > > Given a netdev (1, 2 or 3 in the diagram), the RCLK_SET message > >> allows > >> >> > > me to redirect the frequency recovered from this netdev to the > EEC > >> via > >> >> > > either pin 1, pin 2 or both. 
> >> >> > > > >> >> > > Given a netdev, the RCLK_GET message allows me to query the > range > >> of > >> >> > > pins (RCLK out 1-2 in the diagram) through which the frequency can > be > >> >> > > fed into the EEC. > >> >> > > > >> >> > > Questions: > >> >> > > > >> >> > > 1. The query for all the above netdevs will return the same range > >> >> > > of pins. How does user space know that these are the same pins? > >> >> > > That is, how does user space know that RCLK_SET message to > >> >> > > redirect the frequency recovered from netdev 1 to pin 1 will be > >> >> > > overridden by the same message but for netdev 2? > >> >> > > >> >> > We don't have a way to do so right now. When we have EEC > subsystem > >> >> > in place the right thing to do will be to add EEC input index and > >> >> > EEC index as additional arguments > >> >> > > >> >> > > 2. How does user space know the mapping between a netdev and > an > >> >> > > EEC? That is, how does user space know that RCLK_SET message > for > >> >> > > netdev 1 will cause the Tx frequency of netdev 2 to change > >> >> > > according to the frequency recovered from netdev 1? > >> >> > > >> >> > Ditto - currently we don't have any entity to link the pins to ATM, > >> >> > but we can address that in userspace just like PTP pins are used > >> >> > now > >> >> > > >> >> > > 3. If user space sends two RCLK_SET messages to redirect the > >> >> > > frequency recovered from netdev 1 to RCLK out 1 and from netdev > 2 > >> >> > > to RCLK out 2, how does it know which recovered frequency is > >> >> > > actually used an input to the EEC? > >> >> > >> >> User space doesn't know this as well? > >> > > >> > In current model it can come from the config file. Once we implement > DPLL > >> > subsystem we can implement connection between pins and DPLLs if > they > >> are > >> > known. > >> > > >> >> > > > >> >> > > 4. Why these pins are represented as attributes of a netdev and > not as > >> >> > > attributes of the EEC? That is, why are they represented as output > >> pins > >> >> > > of the PHY as opposed to input pins of the EEC? > >> >> > > >> >> > They are 2 separate beings. Recovered clock outputs are controlled > >> >> > separately from EEC inputs. > >> >> > >> >> Separate how? What does it mean that they are controlled separately? > In > >> >> which sense? That redirection of recovered frequency to pin is > >> >> controlled via PHY registers whereas priority setting between EEC > inputs > >> >> is controlled via EEC registers? If so, this is an implementation detail > >> >> of a specific design. It is not of any importance to user space. > >> > > >> > They belong to different devices. EEC registers are physically in the DPLL > >> > hanging over I2C and recovered clocks are in the PHY/integrated PHY in > >> > the MAC. Depending on system architecture you may have control over > >> > one piece only > >> > >> What does ETHTOOL_MSG_RCLK_SET actually configure, physically? Say I > >> have this message: > >> > >> ETHTOOL_MSG_RCLK_SET dev = eth0 > >> - ETHTOOL_A_RCLK_OUT_PIN_IDX = n > >> - ETHTOOL_A_RCLK_PIN_FLAGS |= ETHTOOL_RCLK_PIN_FLAGS_ENA > >> > >> Eventually this lands in ops->set_rclk_out(dev, out_idx, new_state). > >> What does the MAC driver do next? > > > > It goes to the PTY layer, enables the clock recovery from a given physical > lane, > > optionally configure the clock divider and pin output muxes. This will be > > HW-specific though, but the general concept will look like that. 
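A rough sketch of the driver-side hook described just above, written as plain C with made-up register helpers; the real sequence is hardware-specific, and the names here (port_set_rclk_out, the phy_* helpers) are hypothetical rather than taken from any Intel driver:

/* Hypothetical sketch, not actual driver code: models the steps behind
 * the proposed ops->set_rclk_out(dev, out_idx, new_state) as described above.
 */
#include <stdbool.h>
#include <stdio.h>

struct rclk_port {
	int lane;          /* physical PHY lane backing this netdev */
	int rclk_out_pins; /* number of recovered-clock output pins */
};

/* Stand-ins for the MDIO/register writes a real driver would issue. */
static void phy_enable_clock_recovery(struct rclk_port *p)
{
	printf("lane %d: enable CDR-derived reference clock\n", p->lane);
}

static void phy_set_clock_divider(struct rclk_port *p, int div)
{
	printf("lane %d: program recovered-clock divider /%d\n", p->lane, div);
}

static void phy_mux_lane_to_pin(struct rclk_port *p, int pin, bool ena)
{
	printf("lane %d: %s output mux for RCLK pin %d\n",
	       p->lane, ena ? "select" : "deselect", pin);
}

/* Rough equivalent of the proposed set_rclk_out() callback. */
static int port_set_rclk_out(struct rclk_port *p, int out_idx, bool ena)
{
	if (out_idx >= p->rclk_out_pins)
		return -1; /* would be -EINVAL in a real driver */

	if (ena) {
		phy_enable_clock_recovery(p);
		phy_set_clock_divider(p, 4); /* e.g. divide the lane clock down */
	}
	phy_mux_lane_to_pin(p, out_idx, ena);
	return 0;
}

int main(void)
{
	struct rclk_port swp18 = { .lane = 17, .rclk_out_pins = 2 };

	return port_set_rclk_out(&swp18, 0, true);
}

The shape of the operation is the point: pick the lane backing the netdev, enable clock recovery on it, and mux the recovered frequency onto the requested package pin.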
> > The reason I am asking is that I suspect that by exposing this > functionality through netdev, you assume that the NIC driver will do > whatever EEC configuration necessary _anyway_. So why couldn't it just > instantiate the EEC object as well? Not necessarily. The EEC can be supported by totally different driver. I.e there are Renesas DPLL drivers available now in the ptp subsystem. The DPLL can be connected anywhere in the system. > >> >> > If we mix them it'll be hard to control everything especially that a > >> >> > single EEC can support multiple devices. > >> >> > >> >> Hard how? Please provide concrete examples. > >> > > >> > From the EEC perspective it's one to many relation - one EEC input > >> > pin will serve even 4,16,48 netdevs. I don't see easy way of > >> > starting from EEC input of EEC device and figuring out which > >> > netdevs are connected to it to talk to the right one. In current > >> > model it's as simple as: > >> > - I received QL-PRC on netdev ens4f0 > >> > - I send back enable recovered clock on pin 0 of the ens4f0 > >> > >> How do I know it's pin 0 though? Config file? > > > > You can find that by sending the ETHTOOL_MSG_RCLK_GET without any > pin > > index to get the acceptable/supported range. > > Ha, OK, pin0 means the RCLK pin. OK. > > >> > - go to EEC that will be linked to it > >> > - see the state of it - if its locked - report QL-EEC downsteam > >> > > >> > How would you this control look in the EEC/DPLL implementation? > Maybe > >> > I missed something. > >> > >> In the EEC-centric model this is what happens: > >> > >> - QL-PRC packet is received on ens4f0 > >> - Userspace consults a UAPI to figure out what EEC and pin ID this > >> netdevice corresponds to > >> - Userspace instructs through a UAPI the indicated EEC to use the > >> indicated pin as a source > >> - Userspace then monitors the indicated EEC through a UAPI. When the > EEC > >> locks, QL-EEC is reported downstream > > > > This is still missing the port/lane->pin mapping. This is what will > > happen in the EEC/DPLL subsystem. > > You asked how the control looks in the ECC-centric model. So this is > how. That this stuff is missing is fairly obvious, we are talking about > a different model. > > I don't buy the "extremely hard" argument. The set of steps to do might > be longer, but they are still just steps. No jumps, hoops, sommersaults. > On the flip side we get a proper UAPI that can stay useful for a while. > > >> >> What do you mean by "multiple devices"? A multi-port adapter with a > >> >> single EEC or something else? > >> > > >> > Multiple MACs that use a single EEC clock. > >> > > >> >> > Also if we make those pins attributes of the EEC it'll become > >> >> > extremally hard to map them to netdevs and control them from the > >> >> > userspace app that will receive the ESMC message with a given QL > >> >> > level on netdev X. > >> >> > >> >> Hard how? What is the problem with something like: > >> >> > >> >> # eec set source 1 type netdev dev swp1 > >> >> > >> >> The EEC object should be registered by the same entity that registers > >> >> the netdevs whose Tx frequency is controlled by the EEC, the MAC > driver. > >> > > >> > But the EEC object may not be controlled by the MAC - in which case > >> > this model won't work. > >> > >> In that case the driver for the device that controls EEC would > >> instantiates the object. It doesn't have to be a MAC driver. > >> > >> But if it is controlled by the MAC, the MAC driver instantiates it. 
And > >> can set up the connection between the MAC and the EEC, so that in the > >> shell snippet above "eec" knows how to get the EEC handle from the > >> netdevice. > > > > But it still needs to talk to MAC driver somehow to enable the clock > > recovery on a given pin - that's where the API defined here is needed. > > Yes, there needs to be an API between the EEC object and its owner. That > API can be internal though. E.g. a set of callbacks or a notifier chain. > This is how loose coupling is typically done in the kernel. > > >> >> > > 5. What is the problem with the following model? > >> >> > > > >> >> > > - The EEC is a separate object with following attributes: > >> >> > > * State: Invalid / Freerun / Locked / etc > >> >> > > * Sources: Netdev / external / etc > >> >> > > * Potentially more > >> >> > > > >> >> > > - Notifications are emitted to user space when the state of the EEC > >> >> > > changes. Drivers will either poll the state from the device or get > >> >> > > interrupts > >> >> > > > >> >> > > - The mapping from netdev to EEC is queried via ethtool > >> >> > > >> >> > Yep - that will be part of the EEC (DPLL) subsystem > >> >> > >> >> This model avoids all the problems I pointed out in the current > >> >> proposal. > >> > > >> > That's the go-to model, but first we need control over the source as > >> > well :) > >> > >> Why is that? Can you illustrate a case that breaks with the above model? > > > > If you have 32 port switch chip with 2 recovered clock outputs how will you > > tell the chip to get the 18th port to pin 0 and from port 20 to pin 1? That's > > the part those patches addresses. The further side of "which clock should > the > > EEC use" belongs to the DPLL subsystem and I agree with that. > > So the claim is that in some cases the owner of the EEC does not know > about the netdevices? > > If that is the case, how do netdevices know about the EEC, like the > netdev-centric model assumes? > > Anyway, to answer the question, something like the following would > happen: > > - Ask EEC to enumerate all input pins it knows about > - Find the one that references swp18 > - Ask EEC to forward that input pin to output pin 0 > - Repeat for swp20 and output pin 1 > > The switch driver (or multi-port NIC driver) just instantiates all of > netdevices, the EEC object, and pin objects, and therefore can set up > arbitrary linking between the three. This will end up with a model in which pin X of the EEC will link to dozens ports - userspace tool would need to find out the relation between them and EECs somehow. It's far more convenient if a given netdev knows where it is connected to and which pin can it drive. I.e. send the netdev swp20 ETHTOOL_MSG_RCLK_GET and get the pin indexes of the EEC and send the future message to find which EEC that is (or even return EEC index in RCLK_GET?). Set the recovered clock on that pin with the ETHTOOL_MSG_RCLK_SET. Then go to the given EEC and configure it to use the pin that was returned before as a frequency source and monitor the EEC state. Additionally, the EEC device may be instantiated by a totally different driver, in which case the relation between its pins and netdevs may not even be known. > > Or to put it into different words: > > This API will configure given quality level frequency reference outputs on > chip's > > Dedicated outputs. On a board you will connect those to the EEC's > reference inputs. 
> > > > The EEC's job is to validate the inputs and lock to them following certain > rules, > > The PHY/MAC (and this API) job is to deliver reference signals to the EEC. From hayeswang at realtek.com Fri Dec 3 07:57:08 2021 From: hayeswang at realtek.com (Hayes Wang) Date: Fri, 3 Dec 2021 07:57:08 +0000 Subject: [Intel-wired-lan] [RFC PATCH 0/4] r8169: support dash In-Reply-To: <20211130190926.7c1d735d@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com> References: <20211129101315.16372-381-nic_swsd@realtek.com> <20211129095947.547a765f@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com> <918d75ea873a453ab2ba588a35d66ab6@realtek.com> <20211130190926.7c1d735d@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com> Message-ID: Jakub Kicinski > Sent: Wednesday, December 1, 2021 11:09 AM [...] > I'm not sure how relevant it will be to you but this is the > documentation we have: > > https://www.kernel.org/doc/html/latest/networking/devlink/index.html > https://www.kernel.org/doc/html/latest/networking/devlink/devlink-params.ht > ml > > You'll need to add a generic parameter (define + a short description) > like 325e0d0aa683 ("devlink: Add 'enable_iwarp' generic device param") > > In terms of driver changes I think the most relevant example to you > will be: > > drivers/net/ethernet/ti/cpsw_new.c > > You need to call devlink_alloc(), devlink_register and > devlink_params_register() (and the inverse functions). I have studied the devlink briefly. However, I find some problems. First, our settings are dependent on the design of both the hardware and firmware. That is, I don't think the others need to do the settings as the same as us. The devlink seems to let everyone could use the same command to do the same setting. However, most of our settings are useless for the other devices. Second, according to the design of our CMAC, the application has to read and write data with variable length from/to the firmware. Each custom has his own requests. Therefore, our customs would get different firmware with different behavior. Only the application and the firmware know how to communicate with each other. The driver only passes the data between them. Like the Ethernet driver, it doesn't need to know the contend of the packet. I could implement the CMAC through sysfs, but I don't know how to do by devlink. In brief, CMAC is our major method to configure the firmware and get response from the firmware. Except for certain information, the other settings are not standard and useless for the other vendors. Is the devlink the only method I could use? Actually, we use IOCTL now. We wish to convert it to sysfs for upstream driver. 
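For reference, a bare-bones sketch of the registration sequence Jakub points at (devlink_alloc(), devlink_params_register(), devlink_register()); the enable_dash parameter, the priv structure and the helper below are hypothetical, the exact call signatures depend on the kernel version, and error unwinding is omitted:

/* Hypothetical sketch only, not the actual r8169 code.  Assumes a new
 * generic parameter (say DEVLINK_PARAM_GENERIC_ID_ENABLE_DASH) has been
 * added to include/net/devlink.h with a name and type, and documented in
 * Documentation/networking/devlink/devlink-params.rst, as suggested above.
 */
static int rtl_dash_get(struct devlink *dl, u32 id,
			struct devlink_param_gset_ctx *ctx)
{
	struct rtl8169_private *tp = devlink_priv(dl);

	ctx->val.vbool = tp->dash_enabled;	/* hypothetical field */
	return 0;
}

static int rtl_dash_set(struct devlink *dl, u32 id,
			struct devlink_param_gset_ctx *ctx)
{
	struct rtl8169_private *tp = devlink_priv(dl);

	return rtl_dash_set_state(tp, ctx->val.vbool);	/* hypothetical helper */
}

static const struct devlink_param rtl_dl_params[] = {
	DEVLINK_PARAM_GENERIC(ENABLE_DASH,	/* hypothetical generic id */
			      BIT(DEVLINK_PARAM_CMODE_RUNTIME),
			      rtl_dash_get, rtl_dash_set, NULL),
};

static const struct devlink_ops rtl_dl_ops = {};

static int rtl_devlink_init(struct pci_dev *pdev)
{
	struct devlink *dl;

	dl = devlink_alloc(&rtl_dl_ops, sizeof(struct rtl8169_private),
			   &pdev->dev);
	if (!dl)
		return -ENOMEM;

	devlink_params_register(dl, rtl_dl_params, ARRAY_SIZE(rtl_dl_params));
	devlink_register(dl);
	return 0;
}

With something like that in place the knob shows up under "devlink dev param" and can be flipped at runtime, which is the style of interface devlink is meant for; an arbitrary binary data channel to the firmware is a different matter, as Jakub notes elsewhere in the thread.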
Best Regards, Hayes From petrm at nvidia.com Fri Dec 3 18:21:33 2021 From: petrm at nvidia.com (Petr Machata) Date: Fri, 3 Dec 2021 19:21:33 +0100 Subject: [Intel-wired-lan] [PATCH v4 net-next 2/4] ethtool: Add ability to configure recovered clock for SyncE feature In-Reply-To: References: <20211201180208.640179-1-maciej.machnikowski@intel.com> <20211201180208.640179-3-maciej.machnikowski@intel.com> Message-ID: <87ilw5o8gi.fsf@nvidia.com> Machnikowski, Maciej writes: >> -----Original Message----- >> From: Ido Schimmel >> Sent: Friday, December 3, 2021 4:46 PM >> To: Machnikowski, Maciej >> Subject: Re: [PATCH v4 net-next 2/4] ethtool: Add ability to configure >> recovered clock for SyncE feature >> >> On Thu, Dec 02, 2021 at 05:20:24PM +0000, Machnikowski, Maciej wrote: >> > > -----Original Message----- >> > > From: Ido Schimmel >> > > Sent: Thursday, December 2, 2021 5:36 PM >> > > To: Machnikowski, Maciej >> > > Subject: Re: [PATCH v4 net-next 2/4] ethtool: Add ability to configure >> > > recovered clock for SyncE feature >> > > >> > > On Thu, Dec 02, 2021 at 03:17:06PM +0000, Machnikowski, Maciej wrote: >> > > > > -----Original Message----- >> > > > > From: Ido Schimmel >> > > > > Sent: Thursday, December 2, 2021 1:44 PM >> > > > > To: Machnikowski, Maciej >> > > > > Subject: Re: [PATCH v4 net-next 2/4] ethtool: Add ability to configure >> > > > > recovered clock for SyncE feature >> > > > > >> > > > > On Wed, Dec 01, 2021 at 07:02:06PM +0100, Maciej Machnikowski wrote: >> > > > > Looking at the diagram from the previous submission [1]: >> > > > > >> > > > > ??????????????????????? >> > > > > ? RX ? TX ? >> > > > > 1 ? ports ? ports ? 1 >> > > > > ??????????? ? ??????? >> > > > > 2 ? ? ? ? 2 >> > > > > ????????? ? ? ??????? >> > > > > 3 ? ? ? ? ? 3 >> > > > > ??????? ? ? ? ??????? >> > > > > ? ? ? ? ? ? >> > > > > ? ?????? ? ? >> > > > > ? \____/ ? ? >> > > > > ??????????????????????? >> > > > > 1? 2? ? >> > > > > RCLK out? ? ? TX CLK in >> > > > > ? ? ? >> > > > > ??????????????????? >> > > > > ? ? >> > > > > ? SEC ? >> > > > > ? ? >> > > > > ??????????????????? >> > > > > >> > > > > Given a netdev (1, 2 or 3 in the diagram), the RCLK_SET >> > > > > message allows me to redirect the frequency recovered from >> > > > > this netdev to the EEC via either pin 1, pin 2 or both. >> > > > > >> > > > > Given a netdev, the RCLK_GET message allows me to query the >> > > > > range of pins (RCLK out 1-2 in the diagram) through which the >> > > > > frequency can be fed into the EEC. >> > > > > >> > > > > Questions: >> > > > > >> > > > > 1. The query for all the above netdevs will return the same >> > > > > range of pins. How does user space know that these are the >> > > > > same pins? That is, how does user space know that RCLK_SET >> > > > > message to redirect the frequency recovered from netdev 1 to >> > > > > pin 1 will be overridden by the same message but for netdev >> > > > > 2? >> > > > >> > > > We don't have a way to do so right now. When we have EEC >> > > > subsystem in place the right thing to do will be to add EEC >> > > > input index and EEC index as additional arguments >> > > > >> > > > > 2. How does user space know the mapping between a netdev and >> > > > > an EEC? That is, how does user space know that RCLK_SET >> > > > > message for netdev 1 will cause the Tx frequency of netdev 2 >> > > > > to change according to the frequency recovered from netdev 1? 
>> > > > >> > > > Ditto - currently we don't have any entity to link the pins to >> > > > ATM, but we can address that in userspace just like PTP pins >> > > > are used now >> > > > >> > > > > 3. If user space sends two RCLK_SET messages to redirect the >> > > > > frequency recovered from netdev 1 to RCLK out 1 and from >> > > > > netdev 2 to RCLK out 2, how does it know which recovered >> > > > > frequency is actually used an input to the EEC? >> > > >> > > User space doesn't know this as well? >> > >> > In current model it can come from the config file. Once we >> > implement DPLL subsystem we can implement connection between pins >> > and DPLLs if they are known. >> >> To be clear, no SyncE patches should be accepted before we have a >> DPLL subsystem or however the subsystem that will model the EEC is >> going to be called. >> >> You are asking us to buy into a new uAPI that can never be removed. >> We pointed out numerous problems with this uAPI and suggested a model >> that solves them. When asked why it can't work we are answered with >> vague arguments about this model being "hard". > > My argument was never "it's hard" - the answer is we need both APIs. > >> In addition, without a representation of the EEC, these patches have >> no value for user space. They basically allow user space to redirect >> the recovered frequency from a netdev to an object that does not >> exist. User space doesn't know if the object is successfully tracking >> the frequency (the EEC state) and does not know which other >> components are utilizing this recovered frequency as input (e.g., >> other netdevs, PHC). > > That's also not true - the proposed uAPI lets you enable recovered > frequency output pins and redirect the right clock to them. In some > implementations you may not have anything else. Wait, are there EEC deployments where there is no way to determine the EEC state? >> BTW, what is the use case for enabling two EEC inputs simultaneously? >> Some seamless failover? > > Mainly - redundacy > >> > >> > > > > >> > > > > 4. Why these pins are represented as attributes of a netdev >> > > > > and not as attributes of the EEC? That is, why are they >> > > > > represented as output pins of the PHY as opposed to input >> > > > > pins of the EEC? >> > > > >> > > > They are 2 separate beings. Recovered clock outputs are >> > > > controlled separately from EEC inputs. >> > > >> > > Separate how? What does it mean that they are controlled >> > > separately? In which sense? That redirection of recovered >> > > frequency to pin is controlled via PHY registers whereas priority >> > > setting between EEC inputs is controlled via EEC registers? If >> > > so, this is an implementation detail of a specific design. It is >> > > not of any importance to user space. >> > >> > They belong to different devices. EEC registers are physically in >> > the DPLL hanging over I2C and recovered clocks are in the >> > PHY/integrated PHY in the MAC. Depending on system architecture you >> > may have control over one piece only >> >> These are implementation details of a specific design and should not >> influence the design of the uAPI. The uAPI should be influenced by >> the logical task that it is trying to achieve. > > There are 2 logical tasks: > 1. Enable clocks that are recovered from a specific netdev > 2. Control the EEC > > They are both needed to get to the full solution, but are independent > from each other. 
You can't put RCLK redirection to the EEC as it's one > to many relation and you will need to call the netdev to enable it > anyway. "Call the netdev"? When EEC decides a configuration needs to be done, it will defer to a callback set up by whoever created the EEC object. EEC doesn't care. If you have a disk that somehow contains an EEC to syntonize disk spinning across the data center, go ahead and create the object from a disk driver. Then the EEC object will invoke disk driver code. > Also, when we tried to add EEC state to PTP subsystem the answer was > that we can't mix subsystems. The proposal to configure recovered > clocks through EEC would mix netdev with EEC. Involving MAC driver through an abstract interface is not mixing subsystems. It's just loose coupling. >> > > What do you mean by "multiple devices"? A multi-port adapter with >> > > a single EEC or something else? >> > >> > Multiple MACs that use a single EEC clock. >> > >> > > > Also if we make those pins attributes of the EEC it'll become >> > > > extremally hard to map them to netdevs and control them from >> > > > the userspace app that will receive the ESMC message with a >> > > > given QL level on netdev X. >> > > >> > > Hard how? What is the problem with something like: >> > > >> > > # eec set source 1 type netdev dev swp1 >> > > >> > > The EEC object should be registered by the same entity that >> > > registers the netdevs whose Tx frequency is controlled by the >> > > EEC, the MAC driver. >> > >> > But the EEC object may not be controlled by the MAC - in which case >> > this model won't work. >> >> Why wouldn't it work? Leave individual kernel modules alone and look >> at the kernel. It is registering all the necessary logical objects >> such netdevs, PHCs and EECs. There is no way user space knows better >> than the kernel how these objects fit together as the purpose of the >> kernel is to abstract the hardware to user space. >> >> User space's request to use the Rx frequency recovered from netdev X >> as an input to EEC Y will be processed by the DPLL subsystem. In >> turn, this subsystem will invoke whichever kernel modules it needs to >> fulfill the request. > > But how would that call go through the kernel? What would you like to > give to the EEC object and how should it react. I'm fine with the > changes, but I don't see the solution in that proposal You will give EEC object handle, RCLK source handle, and a handle of the output pin to configure. These are all objects in the EEC subsystem. Some of the RCLK sources are pre-attached to a netdevice, so they carry an ifindex reference. Some are external and do not have a netdevice (that's for NIC-to-NIC frequency bridges, external GPS's and whatnot). Eventually to implement the request, the EEC object would call its creator through a callback appropriate for the request. > and this model would mix independent subsystems. The only place where netdevices are tightly coupled to the EEC are those pre-attached pins. But OK, EEC just happens to be very, very often part of a NIC, and being able to say, this RCLK comes from swp1, is just very, very handy. But it is not a requirement. The EEC model can just as easily represent external pins, or weird stuff like boards that have nothing _but_ external pins. 
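To make that callback model a bit more concrete, a rough sketch of what such an object could look like is below. Nothing like this exists in the kernel today; every identifier is invented purely to illustrate the "EEC defers to its creator" idea.

#include <linux/errno.h>

/* Hypothetical sketch only -- there is no EEC/DPLL subsystem yet. */
struct eec_device;

struct eec_ops {
	/* Route the recovered clock of source @src_idx (netdev-backed or
	 * external) to EEC input pin @pin_idx and start tracking it.
	 */
	int (*set_source)(struct eec_device *eec, u32 src_idx, u32 pin_idx);
	/* EEC_STATE_FREERUN / EEC_STATE_LOCKED / ... */
	int (*get_state)(struct eec_device *eec);
};

struct eec_device {
	const struct eec_ops	*ops;	/* supplied by the creating driver */
	void			*priv;	/* creator's private context */
};

/* What the core would do on a user space "track source X on pin Y"
 * request: no PHY or MAC knowledge here, it simply calls back into
 * whoever registered the EEC -- a NIC driver, a switch driver, or a
 * standalone DPLL driver.
 */
static int eec_set_source(struct eec_device *eec, u32 src_idx, u32 pin_idx)
{
	if (!eec->ops->set_source)
		return -EOPNOTSUPP;
	return eec->ops->set_source(eec, src_idx, pin_idx);
}

A driver that packages the EEC together with its ports would implement set_source() by programming its own PHY/MAC registers; a standalone DPLL driver would only touch the DPLL and leave the wiring of the external pins to the administrator, as discussed further down in the thread.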
> The netdev -> EEC should be a downstream relation, just like the PTP > is now If a netdev wants to check what's the state of EEC driving it - > it can do it, but I don't see a way for the EEC subsystem to directly > configure something in Potentially couple different MAC chips without > calling a kind of netdev API. And that's what those patches address. Either the device packages everything, e.g. a switch, or an EEC-enabled NIC. In that case, the NIC driver instantiates the EEC, and pins, and RCLK sources, and netdevices. EEC configuration ends up getting handled by this device driver, because that's the way it set things up. Or we have a NIC separate from the EEC, but there is still an option to hook those up somehow. That looks like something that should probably be represented by an EEC with some external RCLK sources. (Or maybe they are just inout pins or whatever, that is a detail.) Then the EEC driver ends up instantiating the object, and implementing the requests. And the admin needs to have external information to know that external pin such and such is actually connected to PHY such and such. From petrm at nvidia.com Fri Dec 3 18:44:57 2021 From: petrm at nvidia.com (Petr Machata) Date: Fri, 3 Dec 2021 19:44:57 +0100 Subject: [Intel-wired-lan] [PATCH v4 net-next 2/4] ethtool: Add ability to configure recovered clock for SyncE feature In-Reply-To: References: <20211201180208.640179-1-maciej.machnikowski@intel.com> <20211201180208.640179-3-maciej.machnikowski@intel.com> <87pmqdojby.fsf@nvidia.com> <87lf11odsv.fsf@nvidia.com> Message-ID: <87fsr9o7di.fsf@nvidia.com> Machnikowski, Maciej writes: >> -----Original Message----- >> From: Petr Machata >> >> Machnikowski, Maciej writes: >> >> >> -----Original Message----- >> >> From: Petr Machata >> >> >> >> Machnikowski, Maciej writes: >> >> >> >> >> -----Original Message----- >> >> >> From: Ido Schimmel >> >> >> >> >> >> On Thu, Dec 02, 2021 at 03:17:06PM +0000, Machnikowski, Maciej wrote: >> >> >> > > -----Original Message----- >> >> >> > > From: Ido Schimmel >> >> >> > > >> >> >> > > 4. Why these pins are represented as attributes of a netdev >> >> >> > > and not as attributes of the EEC? That is, why are they >> >> >> > > represented as output pins of the PHY as opposed to input >> >> >> > > pins of the EEC? >> >> >> > >> >> >> > They are 2 separate beings. Recovered clock outputs are >> >> >> > controlled separately from EEC inputs. >> >> >> >> >> >> Separate how? What does it mean that they are controlled >> >> >> separately? In which sense? That redirection of recovered >> >> >> frequency to pin is controlled via PHY registers whereas >> >> >> priority setting between EEC inputs is controlled via EEC >> >> >> registers? If so, this is an implementation detail of a >> >> >> specific design. It is not of any importance to user space. >> >> > >> >> > They belong to different devices. EEC registers are physically >> >> > in the DPLL hanging over I2C and recovered clocks are in the >> >> > PHY/integrated PHY in the MAC. Depending on system architecture >> >> > you may have control over one piece only >> >> >> >> What does ETHTOOL_MSG_RCLK_SET actually configure, physically? Say >> >> I have this message: >> >> >> >> ETHTOOL_MSG_RCLK_SET dev = eth0 >> >> - ETHTOOL_A_RCLK_OUT_PIN_IDX = n >> >> - ETHTOOL_A_RCLK_PIN_FLAGS |= ETHTOOL_RCLK_PIN_FLAGS_ENA >> >> >> >> Eventually this lands in ops->set_rclk_out(dev, out_idx, >> >> new_state). What does the MAC driver do next? 
>> > >> > It goes to the PTY layer, enables the clock recovery from a given >> > physical lane, optionally configure the clock divider and pin >> > output muxes. This will be HW-specific though, but the general >> > concept will look like that. >> >> The reason I am asking is that I suspect that by exposing this >> functionality through netdev, you assume that the NIC driver will do >> whatever EEC configuration necessary _anyway_. So why couldn't it just >> instantiate the EEC object as well? > > Not necessarily. The EEC can be supported by totally different driver. > I.e there are Renesas DPLL drivers available now in the ptp subsystem. > The DPLL can be connected anywhere in the system. > >> >> >> > > 5. What is the problem with the following model? >> >> >> > > >> >> >> > > - The EEC is a separate object with following attributes: >> >> >> > > * State: Invalid / Freerun / Locked / etc >> >> >> > > * Sources: Netdev / external / etc >> >> >> > > * Potentially more >> >> >> > > >> >> >> > > - Notifications are emitted to user space when the state of >> >> >> > > the EEC changes. Drivers will either poll the state from >> >> >> > > the device or get interrupts >> >> >> > > >> >> >> > > - The mapping from netdev to EEC is queried via ethtool >> >> >> > >> >> >> > Yep - that will be part of the EEC (DPLL) subsystem >> >> >> >> >> >> This model avoids all the problems I pointed out in the current >> >> >> proposal. >> >> > >> >> > That's the go-to model, but first we need control over the >> >> > source as well :) >> >> >> >> Why is that? Can you illustrate a case that breaks with the above >> >> model? >> > >> > If you have 32 port switch chip with 2 recovered clock outputs how >> > will you tell the chip to get the 18th port to pin 0 and from port >> > 20 to pin 1? That's the part those patches addresses. The further >> > side of "which clock should the EEC use" belongs to the DPLL >> > subsystem and I agree with that. >> >> So the claim is that in some cases the owner of the EEC does not know >> about the netdevices? >> >> If that is the case, how do netdevices know about the EEC, like the >> netdev-centric model assumes? >> >> Anyway, to answer the question, something like the following would >> happen: >> >> - Ask EEC to enumerate all input pins it knows about >> - Find the one that references swp18 >> - Ask EEC to forward that input pin to output pin 0 >> - Repeat for swp20 and output pin 1 >> >> The switch driver (or multi-port NIC driver) just instantiates all of >> netdevices, the EEC object, and pin objects, and therefore can set up >> arbitrary linking between the three. > > This will end up with a model in which pin X of the EEC will link to >dozens ports - userspace tool would need to find out the relation >between them and EECs somehow. Indeed. If you have EEC connected to a bunch of ports, the EEC object is related to a bunch of netdevices. The UAPI needs to have tools to dump these objects so that it is possible to discover what is connected where. This configuration will also not change during the lifetime of the EEC object, so tools can cache it. > It's far more convenient if a given netdev knows where it is connected > to and which pin can it drive. Yeah, it is of course possible to add references from the netdevice to the EEC object directly, so that the tool just needs to ask a netdevice what EEC / RCLK source ID it maps to. This has mostly nothing to do with the model itself. > I.e. 
send the netdev swp20 ETHTOOL_MSG_RCLK_GET and get the pin > indexes of the EEC and send the future message to find which EEC that > is (or even return EEC index in RCLK_GET?). Since the pin index on its own is useless, it would make sense to return both pieces of information at the same time. > Set the recovered clock on that pin with the ETHTOOL_MSG_RCLK_SET. Nope. > Then go to the given EEC and configure it to use the pin that was > returned before as a frequency source and monitor the EEC state. Yep. EEC will invoke a callback to set up the tracking. If something special needs to be done to "set the recovered clock on that pin", the handler of that callback will do it. > Additionally, the EEC device may be instantiated by a totally > different driver, in which case the relation between its pins and > netdevs may not even be known. Like an EEC, some PHYs, but the MAC driver does not know about both pieces? Who sets up the connection between the two? The box admin through some cabling? SoC designer? Also, what does the external EEC actually do with the signal from the PHY? Tune to it and forward to the other PHYs in the complex? From maciej.machnikowski at intel.com Fri Dec 3 19:07:15 2021 From: maciej.machnikowski at intel.com (Machnikowski, Maciej) Date: Fri, 3 Dec 2021 19:07:15 +0000 Subject: [Intel-wired-lan] [PATCH v4 net-next 2/4] ethtool: Add ability to configure recovered clock for SyncE feature In-Reply-To: <87fsr9o7di.fsf@nvidia.com> References: <20211201180208.640179-1-maciej.machnikowski@intel.com> <20211201180208.640179-3-maciej.machnikowski@intel.com> <87pmqdojby.fsf@nvidia.com> <87lf11odsv.fsf@nvidia.com> <87fsr9o7di.fsf@nvidia.com> Message-ID: > -----Original Message----- > From: Petr Machata > Sent: Friday, December 3, 2021 7:45 PM > To: Machnikowski, Maciej > Subject: Re: [PATCH v4 net-next 2/4] ethtool: Add ability to configure > recovered clock for SyncE feature > > > Machnikowski, Maciej writes: > > >> -----Original Message----- > >> From: Petr Machata > >> > >> Machnikowski, Maciej writes: > >> > >> >> -----Original Message----- > >> >> From: Petr Machata > >> >> > >> >> Machnikowski, Maciej writes: > >> >> > >> >> >> -----Original Message----- > >> >> >> From: Ido Schimmel > >> >> >> > >> >> >> On Thu, Dec 02, 2021 at 03:17:06PM +0000, Machnikowski, Maciej > wrote: > >> >> >> > > -----Original Message----- > >> >> >> > > From: Ido Schimmel > >> >> >> > > > >> >> >> > > 4. Why these pins are represented as attributes of a netdev > >> >> >> > > and not as attributes of the EEC? That is, why are they > >> >> >> > > represented as output pins of the PHY as opposed to input > >> >> >> > > pins of the EEC? > >> >> >> > > >> >> >> > They are 2 separate beings. Recovered clock outputs are > >> >> >> > controlled separately from EEC inputs. > >> >> >> > >> >> >> Separate how? What does it mean that they are controlled > >> >> >> separately? In which sense? That redirection of recovered > >> >> >> frequency to pin is controlled via PHY registers whereas > >> >> >> priority setting between EEC inputs is controlled via EEC > >> >> >> registers? If so, this is an implementation detail of a > >> >> >> specific design. It is not of any importance to user space. > >> >> > > >> >> > They belong to different devices. EEC registers are physically > >> >> > in the DPLL hanging over I2C and recovered clocks are in the > >> >> > PHY/integrated PHY in the MAC. 
Depending on system architecture > >> >> > you may have control over one piece only > >> >> > >> >> What does ETHTOOL_MSG_RCLK_SET actually configure, physically? > Say > >> >> I have this message: > >> >> > >> >> ETHTOOL_MSG_RCLK_SET dev = eth0 > >> >> - ETHTOOL_A_RCLK_OUT_PIN_IDX = n > >> >> - ETHTOOL_A_RCLK_PIN_FLAGS |= ETHTOOL_RCLK_PIN_FLAGS_ENA > >> >> > >> >> Eventually this lands in ops->set_rclk_out(dev, out_idx, > >> >> new_state). What does the MAC driver do next? > >> > > >> > It goes to the PTY layer, enables the clock recovery from a given > >> > physical lane, optionally configure the clock divider and pin > >> > output muxes. This will be HW-specific though, but the general > >> > concept will look like that. > >> > >> The reason I am asking is that I suspect that by exposing this > >> functionality through netdev, you assume that the NIC driver will do > >> whatever EEC configuration necessary _anyway_. So why couldn't it just > >> instantiate the EEC object as well? > > > > Not necessarily. The EEC can be supported by totally different driver. > > I.e there are Renesas DPLL drivers available now in the ptp subsystem. > > The DPLL can be connected anywhere in the system. > > > >> >> >> > > 5. What is the problem with the following model? > >> >> >> > > > >> >> >> > > - The EEC is a separate object with following attributes: > >> >> >> > > * State: Invalid / Freerun / Locked / etc > >> >> >> > > * Sources: Netdev / external / etc > >> >> >> > > * Potentially more > >> >> >> > > > >> >> >> > > - Notifications are emitted to user space when the state of > >> >> >> > > the EEC changes. Drivers will either poll the state from > >> >> >> > > the device or get interrupts > >> >> >> > > > >> >> >> > > - The mapping from netdev to EEC is queried via ethtool > >> >> >> > > >> >> >> > Yep - that will be part of the EEC (DPLL) subsystem > >> >> >> > >> >> >> This model avoids all the problems I pointed out in the current > >> >> >> proposal. > >> >> > > >> >> > That's the go-to model, but first we need control over the > >> >> > source as well :) > >> >> > >> >> Why is that? Can you illustrate a case that breaks with the above > >> >> model? > >> > > >> > If you have 32 port switch chip with 2 recovered clock outputs how > >> > will you tell the chip to get the 18th port to pin 0 and from port > >> > 20 to pin 1? That's the part those patches addresses. The further > >> > side of "which clock should the EEC use" belongs to the DPLL > >> > subsystem and I agree with that. > >> > >> So the claim is that in some cases the owner of the EEC does not know > >> about the netdevices? > >> > >> If that is the case, how do netdevices know about the EEC, like the > >> netdev-centric model assumes? > >> > >> Anyway, to answer the question, something like the following would > >> happen: > >> > >> - Ask EEC to enumerate all input pins it knows about > >> - Find the one that references swp18 > >> - Ask EEC to forward that input pin to output pin 0 > >> - Repeat for swp20 and output pin 1 > >> > >> The switch driver (or multi-port NIC driver) just instantiates all of > >> netdevices, the EEC object, and pin objects, and therefore can set up > >> arbitrary linking between the three. > > > > This will end up with a model in which pin X of the EEC will link to > >dozens ports - userspace tool would need to find out the relation > >between them and EECs somehow. > > Indeed. If you have EEC connected to a bunch of ports, the EEC object is > related to a bunch of netdevices. 
The UAPI needs to have tools to dump > these objects so that it is possible to discover what is connected > where. > > This configuration will also not change during the lifetime of the EEC > object, so tools can cache it. > > > It's far more convenient if a given netdev knows where it is connected > > to and which pin can it drive. > > Yeah, it is of course possible to add references from the netdevice to > the EEC object directly, so that the tool just needs to ask a netdevice > what EEC / RCLK source ID it maps to. > > This has mostly nothing to do with the model itself. > > > I.e. send the netdev swp20 ETHTOOL_MSG_RCLK_GET and get the pin > > indexes of the EEC and send the future message to find which EEC that > > is (or even return EEC index in RCLK_GET?). > > Since the pin index on its own is useless, it would make sense to return > both pieces of information at the same time. > > > Set the recovered clock on that pin with the ETHTOOL_MSG_RCLK_SET. > > Nope. > > > Then go to the given EEC and configure it to use the pin that was > > returned before as a frequency source and monitor the EEC state. > > Yep. > > EEC will invoke a callback to set up the tracking. If something special > needs to be done to "set the recovered clock on that pin", the handler > of that callback will do it. > > > Additionally, the EEC device may be instantiated by a totally > > different driver, in which case the relation between its pins and > > netdevs may not even be known. > > Like an EEC, some PHYs, but the MAC driver does not know about both > pieces? Who sets up the connection between the two? The box admin > through some cabling? SoC designer? > > Also, what does the external EEC actually do with the signal from the > PHY? Tune to it and forward to the other PHYs in the complex? Yes - it can also apply HW filters to it. The EEC model will not work when you have the following system: SoC with some ethernet ports with driver A Switch chip with N ports with driver B EEC/DPLL with driver C Both SoC and Switch ASIC can recover clock and use the cleaned clock from the DPLL. In that case you can't create any relation between EEC and recover clock pins that would enable the EEC subsystem to control recovered clocks, because you have 3 independent drivers. The model you proposed assumes that the MAC/Switch is in charge of the DPLL, but that's not always true. The model where recovered clock outputs are controlled independently can support both models and is more flexible. It can also address the mode where you want to use the recovered clock as a source for RF part of your system and don't have any EEC to control from the netdev side. From lkp at intel.com Sat Dec 4 01:03:07 2021 From: lkp at intel.com (kernel test robot) Date: Sat, 04 Dec 2021 09:03:07 +0800 Subject: [Intel-wired-lan] [tnguy-net-queue:dev-queue] BUILD SUCCESS 2765e7c0a88cdb8b7bfaf0b5cbae8cb7dc1cebcc Message-ID: <61aabe4b.fFJylyXMUAiCyQSE%lkp@intel.com> tree/branch: https://git.kernel.org/pub/scm/linux/kernel/git/tnguy/net-queue.git dev-queue branch HEAD: 2765e7c0a88cdb8b7bfaf0b5cbae8cb7dc1cebcc i40e: Fix for failed to init adminq while VF reset elapsed time: 1861m configs tested: 159 configs skipped: 3 The following configs have been built successfully. More configs may be tested in the coming days. 
gcc tested configs: arm defconfig arm allyesconfig arm allmodconfig arm64 defconfig arm64 allyesconfig i386 randconfig-c001-20211202 arm hisi_defconfig powerpc kmeter1_defconfig powerpc adder875_defconfig m68k m5208evb_defconfig s390 allyesconfig powerpc arches_defconfig arc axs101_defconfig m68k m5407c3_defconfig mips maltaup_defconfig sparc sparc64_defconfig mips vocore2_defconfig arm shmobile_defconfig arm imx_v6_v7_defconfig um i386_defconfig powerpc sequoia_defconfig arm ep93xx_defconfig arm pxa3xx_defconfig sh alldefconfig nios2 10m50_defconfig powerpc ppa8548_defconfig mips maltaaprp_defconfig arm hackkit_defconfig powerpc pseries_defconfig h8300 h8300h-sim_defconfig powerpc mpc866_ads_defconfig sh kfr2r09_defconfig powerpc tqm8555_defconfig m68k q40_defconfig mips cobalt_defconfig arm jornada720_defconfig sh landisk_defconfig um defconfig arm omap2plus_defconfig powerpc bluestone_defconfig arm omap1_defconfig powerpc cell_defconfig arm bcm2835_defconfig arc axs103_smp_defconfig arm palmz72_defconfig xtensa common_defconfig powerpc mpc8272_ads_defconfig powerpc pcm030_defconfig powerpc powernv_defconfig arm spear6xx_defconfig parisc generic-32bit_defconfig sh sh7770_generic_defconfig arm rpc_defconfig powerpc gamecube_defconfig sh shmin_defconfig arm aspeed_g4_defconfig sparc alldefconfig powerpc warp_defconfig microblaze defconfig arm randconfig-c002-20211203 ia64 allmodconfig ia64 defconfig ia64 allyesconfig m68k allmodconfig m68k defconfig m68k allyesconfig nios2 defconfig arc allyesconfig nds32 allnoconfig nds32 defconfig nios2 allyesconfig csky defconfig alpha defconfig alpha allyesconfig xtensa allyesconfig h8300 allyesconfig arc defconfig sh allmodconfig parisc defconfig s390 allmodconfig parisc allyesconfig s390 defconfig i386 allyesconfig sparc allyesconfig sparc defconfig i386 defconfig i386 debian-10.3-kselftests i386 debian-10.3 mips allyesconfig mips allmodconfig powerpc allyesconfig powerpc allmodconfig powerpc allnoconfig x86_64 randconfig-a006-20211203 x86_64 randconfig-a005-20211203 x86_64 randconfig-a001-20211203 x86_64 randconfig-a002-20211203 x86_64 randconfig-a004-20211203 x86_64 randconfig-a003-20211203 i386 randconfig-a001-20211203 i386 randconfig-a005-20211203 i386 randconfig-a002-20211203 i386 randconfig-a003-20211203 i386 randconfig-a006-20211203 i386 randconfig-a004-20211203 x86_64 randconfig-a016-20211202 x86_64 randconfig-a011-20211202 x86_64 randconfig-a013-20211202 x86_64 randconfig-a014-20211202 x86_64 randconfig-a012-20211202 x86_64 randconfig-a015-20211202 i386 randconfig-a013-20211204 i386 randconfig-a016-20211204 i386 randconfig-a011-20211204 i386 randconfig-a014-20211204 i386 randconfig-a012-20211204 i386 randconfig-a015-20211204 riscv nommu_k210_defconfig riscv allyesconfig riscv nommu_virt_defconfig riscv allnoconfig riscv defconfig riscv rv32_defconfig riscv allmodconfig um x86_64_defconfig x86_64 allyesconfig x86_64 defconfig x86_64 rhel-8.3 x86_64 rhel-8.3-func x86_64 kexec x86_64 rhel-8.3-kselftests clang tested configs: x86_64 randconfig-a006-20211202 x86_64 randconfig-a005-20211202 x86_64 randconfig-a001-20211202 x86_64 randconfig-a002-20211202 x86_64 randconfig-a004-20211202 x86_64 randconfig-a003-20211202 i386 randconfig-a001-20211202 i386 randconfig-a005-20211202 i386 randconfig-a002-20211202 i386 randconfig-a003-20211202 i386 randconfig-a006-20211202 i386 randconfig-a004-20211202 x86_64 randconfig-a016-20211203 x86_64 randconfig-a011-20211203 x86_64 randconfig-a013-20211203 x86_64 randconfig-a014-20211203 x86_64 
randconfig-a015-20211203 x86_64 randconfig-a012-20211203 i386 randconfig-a016-20211203 i386 randconfig-a013-20211203 i386 randconfig-a011-20211203 i386 randconfig-a014-20211203 i386 randconfig-a012-20211203 i386 randconfig-a015-20211203 hexagon randconfig-r045-20211203 s390 randconfig-r044-20211203 hexagon randconfig-r041-20211203 riscv randconfig-r042-20211203 --- 0-DAY CI Kernel Test Service, Intel Corporation https://lists.01.org/hyperkitty/list/kbuild-all at lists.01.org From alexandr.lobakin at intel.com Sat Dec 4 01:08:29 2021 From: alexandr.lobakin at intel.com (Alexander Lobakin) Date: Sat, 4 Dec 2021 02:08:29 +0100 Subject: [Intel-wired-lan] [RFC PATCH 0/4] r8169: support dash In-Reply-To: <20211203070410.1b4abc4d@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com> References: <20211129101315.16372-381-nic_swsd@realtek.com> <20211129095947.547a765f@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com> <918d75ea873a453ab2ba588a35d66ab6@realtek.com> <20211130190926.7c1d735d@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com> <20211203070410.1b4abc4d@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com> Message-ID: <20211204010829.7796-1-alexandr.lobakin@intel.com> From: Jakub Kicinski Date: Fri, 3 Dec 2021 07:04:10 -0800 > On Fri, 3 Dec 2021 07:57:08 +0000 Hayes Wang wrote: > > Jakub Kicinski > > > I'm not sure how relevant it will be to you but this is the > > > documentation we have: > > > > > > https://www.kernel.org/doc/html/latest/networking/devlink/index.html > > > https://www.kernel.org/doc/html/latest/networking/devlink/devlink-params.ht > > > ml > > > > > > You'll need to add a generic parameter (define + a short description) > > > like 325e0d0aa683 ("devlink: Add 'enable_iwarp' generic device param") > > > > > > In terms of driver changes I think the most relevant example to you > > > will be: > > > > > > drivers/net/ethernet/ti/cpsw_new.c > > > > > > You need to call devlink_alloc(), devlink_register and > > > devlink_params_register() (and the inverse functions). > > > > I have studied the devlink briefly. > > > > However, I find some problems. First, our > > settings are dependent on the design of > > both the hardware and firmware. That is, > > I don't think the others need to do the > > settings as the same as us. The devlink > > seems to let everyone could use the same > > command to do the same setting. However, > > most of our settings are useless for the > > other devices. > > > > Second, according to the design of our > > CMAC, the application has to read and > > write data with variable length from/to > > the firmware. Each custom has his own > > requests. Therefore, our customs would > > get different firmware with different > > behavior. Only the application and the > > firmware know how to communicate with > > each other. The driver only passes the > > data between them. Like the Ethernet > > driver, it doesn't need to know the > > contend of the packet. I could implement > > the CMAC through sysfs, but I don't > > know how to do by devlink. > > > > In brief, CMAC is our major method to > > configure the firmware and get response > > from the firmware. Except for certain information, > > the other settings are not standard and useless > > for the other vendors. > > > > Is the devlink the only method I could use? > > Actually, we use IOCTL now. We wish to > > convert it to sysfs for upstream driver. > > Ah, I've only spotted the enable/disable knob in the patch. > If you're exchanging arbitrary binary data with the FW we > can't help you. It's not going to fly upstream. Uhm. 
I'm not saying sysfs is a proper way to do that, not at all, buuut... We have a ton of different subsystems providing a communication channel between userspace and HW/FW. Chardevices all over the tree, highly used rpmsg for remoteproc, uio. We have register dump in Ethtool, as well as get/set for EEPROM, I'd count them as well. So it probably isn't a bad idea to provide some standard API for network drivers to talk to HW/FW from userspace, like get/set or rx/tx (when having enough caps for sure)? It could be Devlink ops or Ethtool ops, the latter fits more to me. Al From kuba at kernel.org Sat Dec 4 01:32:03 2021 From: kuba at kernel.org (Jakub Kicinski) Date: Fri, 3 Dec 2021 17:32:03 -0800 Subject: [Intel-wired-lan] [RFC PATCH 0/4] r8169: support dash In-Reply-To: <20211204010829.7796-1-alexandr.lobakin@intel.com> References: <20211129101315.16372-381-nic_swsd@realtek.com> <20211129095947.547a765f@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com> <918d75ea873a453ab2ba588a35d66ab6@realtek.com> <20211130190926.7c1d735d@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com> <20211203070410.1b4abc4d@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com> <20211204010829.7796-1-alexandr.lobakin@intel.com> Message-ID: <20211203173203.285dc75f@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com> On Sat, 4 Dec 2021 02:08:29 +0100 Alexander Lobakin wrote: > > Ah, I've only spotted the enable/disable knob in the patch. > > If you're exchanging arbitrary binary data with the FW we > > can't help you. It's not going to fly upstream. > > Uhm. I'm not saying sysfs is a proper way to do that, not at all, > buuut... > We have a ton of different subsystems providing a communication > channel between userspace and HW/FW. Chardevices all over the > tree, highly used rpmsg for remoteproc, uio. Not in Ethernet. > We have register dump in Ethtool, Read only. > as well as get/set for EEPROM, I'd count them as well. EEPROM writes are supposed to update FW images, not send random messages. > So it probably isn't a bad idea to provide some standard API for > network drivers to talk to HW/FW from userspace, like get/set or > rx/tx (when having enough caps for sure)? It could be Devlink ops > or Ethtool ops, the latter fits more to me. I'm not saying it's wrong to merge shim drivers into the kernel and let the user space talk to device FW. I'm saying it's counter to what netdev's policy has always been and counter to my personal interests. What is a standard API for custom, proprietary FW message interface? We want standards at a functional level. Once you open up a raw FW write interface there is no policing of what goes thru it. I CCed Intel since you also have the (infamous) ME, but I never heard of the need to communicate from the OS to the ME via the netdev driver... Not sure why things are different for Realtek. From lkp at intel.com Sat Dec 4 02:37:22 2021 From: lkp at intel.com (kernel test robot) Date: Sat, 04 Dec 2021 10:37:22 +0800 Subject: [Intel-wired-lan] [tnguy-next-queue:dev-queue] BUILD SUCCESS 404189e29907502e06179af67790aea01d826f30 Message-ID: <61aad462.kbTs2qEgp7+JQ4C0%lkp@intel.com> tree/branch: https://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue.git dev-queue branch HEAD: 404189e29907502e06179af67790aea01d826f30 ice: remove dead store on XSK hotpath elapsed time: 1880m configs tested: 164 configs skipped: 3 The following configs have been built successfully. More configs may be tested in the coming days. 
gcc tested configs: arm allmodconfig arm allyesconfig arm defconfig arm64 allyesconfig arm64 defconfig i386 randconfig-c001-20211202 i386 randconfig-c001-20211203 arm lubbock_defconfig xtensa common_defconfig sh migor_defconfig microblaze defconfig arm shmobile_defconfig arm imx_v6_v7_defconfig um i386_defconfig powerpc sequoia_defconfig arm ep93xx_defconfig arm davinci_all_defconfig h8300 edosk2674_defconfig sh se7724_defconfig powerpc cm5200_defconfig powerpc ppa8548_defconfig nios2 defconfig powerpc mpc866_ads_defconfig sh kfr2r09_defconfig mips mtx1_defconfig powerpc tqm8555_defconfig powerpc tqm8541_defconfig arm mini2440_defconfig sh apsh4ad0a_defconfig arm pxa_defconfig arm hackkit_defconfig parisc defconfig powerpc mpc83xx_defconfig powerpc mpc836x_rdk_defconfig arm at91_dt_defconfig powerpc powernv_defconfig um x86_64_defconfig arm qcom_defconfig arm eseries_pxa_defconfig ia64 tiger_defconfig mips tb0226_defconfig arm multi_v4t_defconfig arm simpad_defconfig powerpc mpc832x_rdb_defconfig sh apsh4a3a_defconfig arm pleb_defconfig h8300 alldefconfig arm socfpga_defconfig m68k m5208evb_defconfig powerpc mvme5100_defconfig arm multi_v7_defconfig sh se7751_defconfig sh microdev_defconfig powerpc eiger_defconfig sh sdk7786_defconfig m68k allmodconfig powerpc gamecube_defconfig arm h5000_defconfig m68k defconfig arm randconfig-c002-20211203 arm randconfig-c002-20211202 ia64 allmodconfig ia64 defconfig ia64 allyesconfig m68k allyesconfig arc allyesconfig nds32 allnoconfig nds32 defconfig nios2 allyesconfig csky defconfig alpha defconfig alpha allyesconfig xtensa allyesconfig h8300 allyesconfig arc defconfig sh allmodconfig s390 allmodconfig parisc allyesconfig s390 defconfig s390 allyesconfig i386 allyesconfig sparc allyesconfig sparc defconfig i386 defconfig i386 debian-10.3-kselftests i386 debian-10.3 mips allmodconfig mips allyesconfig powerpc allyesconfig powerpc allmodconfig powerpc allnoconfig x86_64 randconfig-a006-20211203 x86_64 randconfig-a005-20211203 x86_64 randconfig-a001-20211203 x86_64 randconfig-a002-20211203 x86_64 randconfig-a004-20211203 x86_64 randconfig-a003-20211203 i386 randconfig-a001-20211203 i386 randconfig-a005-20211203 i386 randconfig-a002-20211203 i386 randconfig-a003-20211203 i386 randconfig-a006-20211203 i386 randconfig-a004-20211203 x86_64 randconfig-a016-20211202 x86_64 randconfig-a011-20211202 x86_64 randconfig-a013-20211202 x86_64 randconfig-a014-20211202 x86_64 randconfig-a012-20211202 x86_64 randconfig-a015-20211202 i386 randconfig-a016-20211202 i386 randconfig-a013-20211202 i386 randconfig-a011-20211202 i386 randconfig-a014-20211202 i386 randconfig-a012-20211202 i386 randconfig-a015-20211202 arc randconfig-r043-20211202 s390 randconfig-r044-20211202 riscv randconfig-r042-20211202 riscv nommu_k210_defconfig riscv allyesconfig riscv nommu_virt_defconfig riscv allnoconfig riscv defconfig riscv rv32_defconfig riscv allmodconfig x86_64 rhel-8.3-kselftests x86_64 allyesconfig x86_64 defconfig x86_64 rhel-8.3 x86_64 rhel-8.3-func x86_64 kexec clang tested configs: arm randconfig-c002-20211203 x86_64 randconfig-c007-20211203 riscv randconfig-c006-20211203 mips randconfig-c004-20211203 i386 randconfig-c001-20211203 powerpc randconfig-c003-20211203 s390 randconfig-c005-20211203 x86_64 randconfig-a006-20211202 x86_64 randconfig-a005-20211202 x86_64 randconfig-a001-20211202 x86_64 randconfig-a002-20211202 x86_64 randconfig-a004-20211202 x86_64 randconfig-a003-20211202 i386 randconfig-a001-20211202 i386 randconfig-a005-20211202 i386 randconfig-a002-20211202 i386 
randconfig-a003-20211202 i386 randconfig-a006-20211202 i386 randconfig-a004-20211202 x86_64 randconfig-a016-20211203 x86_64 randconfig-a011-20211203 x86_64 randconfig-a013-20211203 x86_64 randconfig-a014-20211203 x86_64 randconfig-a015-20211203 x86_64 randconfig-a012-20211203 i386 randconfig-a016-20211203 i386 randconfig-a013-20211203 i386 randconfig-a011-20211203 i386 randconfig-a014-20211203 i386 randconfig-a012-20211203 i386 randconfig-a015-20211203 hexagon randconfig-r045-20211203 s390 randconfig-r044-20211203 hexagon randconfig-r041-20211203 riscv randconfig-r042-20211203 --- 0-DAY CI Kernel Test Service, Intel Corporation https://lists.01.org/hyperkitty/list/kbuild-all at lists.01.org From idosch at idosch.org Sun Dec 5 12:24:08 2021 From: idosch at idosch.org (Ido Schimmel) Date: Sun, 5 Dec 2021 14:24:08 +0200 Subject: [Intel-wired-lan] [PATCH v4 net-next 2/4] ethtool: Add ability to configure recovered clock for SyncE feature In-Reply-To: References: <20211201180208.640179-1-maciej.machnikowski@intel.com> <20211201180208.640179-3-maciej.machnikowski@intel.com> Message-ID: On Fri, Dec 03, 2021 at 04:18:18PM +0000, Machnikowski, Maciej wrote: > > -----Original Message----- > > From: Ido Schimmel > > Sent: Friday, December 3, 2021 4:46 PM > > To: Machnikowski, Maciej > > Subject: Re: [PATCH v4 net-next 2/4] ethtool: Add ability to configure > > recovered clock for SyncE feature > > > > On Thu, Dec 02, 2021 at 05:20:24PM +0000, Machnikowski, Maciej wrote: > > > > -----Original Message----- > > > > From: Ido Schimmel > > > > Sent: Thursday, December 2, 2021 5:36 PM > > > > To: Machnikowski, Maciej > > > > Subject: Re: [PATCH v4 net-next 2/4] ethtool: Add ability to configure > > > > recovered clock for SyncE feature > > > > > > > > On Thu, Dec 02, 2021 at 03:17:06PM +0000, Machnikowski, Maciej wrote: > > > > > > -----Original Message----- > > > > > > From: Ido Schimmel > > > > > > Sent: Thursday, December 2, 2021 1:44 PM > > > > > > To: Machnikowski, Maciej > > > > > > Subject: Re: [PATCH v4 net-next 2/4] ethtool: Add ability to configure > > > > > > recovered clock for SyncE feature > > > > > > > > > > > > On Wed, Dec 01, 2021 at 07:02:06PM +0100, Maciej Machnikowski > > wrote: > > > > > > Looking at the diagram from the previous submission [1]: > > > > > > > > > > > > ??????????????????????? > > > > > > ? RX ? TX ? > > > > > > 1 ? ports ? ports ? 1 > > > > > > ??????????? ? ??????? > > > > > > 2 ? ? ? ? 2 > > > > > > ????????? ? ? ??????? > > > > > > 3 ? ? ? ? ? 3 > > > > > > ??????? ? ? ? ??????? > > > > > > ? ? ? ? ? ? > > > > > > ? ?????? ? ? > > > > > > ? \____/ ? ? > > > > > > ??????????????????????? > > > > > > 1? 2? ? > > > > > > RCLK out? ? ? TX CLK in > > > > > > ? ? ? > > > > > > ??????????????????? > > > > > > ? ? > > > > > > ? SEC ? > > > > > > ? ? > > > > > > ??????????????????? > > > > > > > > > > > > Given a netdev (1, 2 or 3 in the diagram), the RCLK_SET message > > allows > > > > > > me to redirect the frequency recovered from this netdev to the EEC > > via > > > > > > either pin 1, pin 2 or both. > > > > > > > > > > > > Given a netdev, the RCLK_GET message allows me to query the range > > of > > > > > > pins (RCLK out 1-2 in the diagram) through which the frequency can > > be > > > > > > fed into the EEC. > > > > > > > > > > > > Questions: > > > > > > > > > > > > 1. The query for all the above netdevs will return the same range of > > > > > > pins. How does user space know that these are the same pins? 
That > > is, > > > > > > how does user space know that RCLK_SET message to redirect the > > > > frequency > > > > > > recovered from netdev 1 to pin 1 will be overridden by the same > > > > message > > > > > > but for netdev 2? > > > > > > > > > > We don't have a way to do so right now. When we have EEC subsystem > > in > > > > place > > > > > the right thing to do will be to add EEC input index and EEC index as > > > > additional > > > > > arguments > > > > > > > > > > > 2. How does user space know the mapping between a netdev and an > > > > EEC? > > > > > > That is, how does user space know that RCLK_SET message for netdev > > 1 > > > > > > will cause the Tx frequency of netdev 2 to change according to the > > > > > > frequency recovered from netdev 1? > > > > > > > > > > Ditto - currently we don't have any entity to link the pins to ATM, > > > > > but we can address that in userspace just like PTP pins are used now > > > > > > > > > > > 3. If user space sends two RCLK_SET messages to redirect the > > frequency > > > > > > recovered from netdev 1 to RCLK out 1 and from netdev 2 to RCLK out > > 2, > > > > > > how does it know which recovered frequency is actually used an input > > to > > > > > > the EEC? > > > > > > > > User space doesn't know this as well? > > > > > > In current model it can come from the config file. Once we implement DPLL > > > subsystem we can implement connection between pins and DPLLs if they > > are > > > known. > > > > To be clear, no SyncE patches should be accepted before we have a DPLL > > subsystem or however the subsystem that will model the EEC is going to > > be called. > > > > You are asking us to buy into a new uAPI that can never be removed. We > > pointed out numerous problems with this uAPI and suggested a model that > > solves them. When asked why it can't work we are answered with vague > > arguments about this model being "hard". > > My argument was never "it's hard" - the answer is we need both APIs. We are discussing whether two APIs are actually necessary or whether EEC source configuration can be done via the EEC. The answer cannot be "the answer is we need both APIs". > > > In addition, without a representation of the EEC, these patches have no > > value for user space. They basically allow user space to redirect the > > recovered frequency from a netdev to an object that does not exist. > > User space doesn't know if the object is successfully tracking the > > frequency (the EEC state) and does not know which other components are > > utilizing this recovered frequency as input (e.g., other netdevs, PHC). > > That's also not true - the proposed uAPI lets you enable recovered frequency > output pins and redirect the right clock to them. In some implementations > you may not have anything else. What isn't true? That these patches have no value for user space? This is 100% true. You admitted that this is incomplete work. There is no reason to merge one API without the other. At the very least, we need to see an explanation of how the two APIs work together. This is missing from the patchset, which prompted these questions: https://lore.kernel.org/netdev/Yai%2Fe5jz3NZAg0pm at shredder/ > > > BTW, what is the use case for enabling two EEC inputs simultaneously? > > Some seamless failover? > > Mainly - redundacy > > > > > > > > > > > > > > > > 4. Why these pins are represented as attributes of a netdev and not > > as > > > > > > attributes of the EEC? 
That is, why are they represented as output > > pins > > > > > > of the PHY as opposed to input pins of the EEC? > > > > > > > > > > They are 2 separate beings. Recovered clock outputs are controlled > > > > > separately from EEC inputs. > > > > > > > > Separate how? What does it mean that they are controlled separately? In > > > > which sense? That redirection of recovered frequency to pin is > > > > controlled via PHY registers whereas priority setting between EEC inputs > > > > is controlled via EEC registers? If so, this is an implementation detail > > > > of a specific design. It is not of any importance to user space. > > > > > > They belong to different devices. EEC registers are physically in the DPLL > > > hanging over I2C and recovered clocks are in the PHY/integrated PHY in > > > the MAC. Depending on system architecture you may have control over > > > one piece only > > > > These are implementation details of a specific design and should not > > influence the design of the uAPI. The uAPI should be influenced by the > > logical task that it is trying to achieve. > > There are 2 logical tasks: > 1. Enable clocks that are recovered from a specific netdev I already replied about this here: https://lore.kernel.org/netdev/Yao+nK40D0+u8UKL at shredder/ If the recovered clock outputs are only meaningful as EEC inputs, then there is no reason not to configure them through the EEC object. The fact that you think that the *internal* kernel plumbing (that can be improved over time) will be "hard" is not a reason to end up with a *user* API (that cannot be changed) where the *Ethernet* Equipment Clock is ignorant of its *Ethernet* ports. With your proposal where the EEC is only aware of pins, how does user space answer the question of what is the source of the EEC? It needs to issue RCLK_GET dump? How does it even know that the source is a netdev and not an external one? And if the EEC object knows that the source is a netdev, how come it does not know which netdev? > 2. Control the EEC > > They are both needed to get to the full solution, but are independent from > each other. You can't put RCLK redirection to the EEC as it's one to many > relation and you will need to call the netdev to enable it anyway. So what if I need to call the netdev? The EEC cannot be so disjoint from the associated netdevs. After all, EEC stands for *Ethernet* Equipment Clock. In the common case, the EEC will transfer the frequency from one netdev to another. In the less common case, it will transfer the frequency from an external source to a netdev. > > Also, when we tried to add EEC state to PTP subsystem the answer was > that we can't mix subsystems. SyncE doesn't belong in PTP because PTP can work without SyncE and SyncE can work without PTP. The fact that the primary use case for SyncE might be PTP doesn't mean that SyncE belongs in PTP subsystem. > The proposal to configure recovered clocks through EEC would mix > netdev with EEC. I don't believe that *Ethernet* Equipment Clock and *Ethernet* ports should be so disjoint so that the EEC doesn't know about: a. The netdev from which it is recovering its frequency b. The netdevs that it is controlling If the netdevs are smart enough to report the EEC input pins and EEC association to user space, then they are also smart enough to register themselves internally in the kernel with the EEC. They can all appear as virtual input/output pins of the EEC that can be enabled/disabled by user space. 
In addition, you can have physical (named) pins for external sources / outputs and another virtual output pin towards the PHC. > > > > > > > > > If we mix them it'll be hard to control everything especially that a > > > > > single EEC can support multiple devices. > > > > > > > > Hard how? Please provide concrete examples. > > > > > > From the EEC perspective it's one to many relation - one EEC input pin will > > serve > > > even 4,16,48 netdevs. I don't see easy way of starting from EEC input of EEC > > device > > > and figuring out which netdevs are connected to it to talk to the right one. > > > In current model it's as simple as: > > > - I received QL-PRC on netdev ens4f0 > > > - I send back enable recovered clock on pin 0 of the ens4f0 > > > - go to EEC that will be linked to it > > > - see the state of it - if its locked - report QL-EEC downsteam > > > > > > How would you this control look in the EEC/DPLL implementation? Maybe > > > I missed something. > > > > Petr already replied. > > See my response there. > > > > > > > > What do you mean by "multiple devices"? A multi-port adapter with a > > > > single EEC or something else? > > > > > > Multiple MACs that use a single EEC clock. > > > > > > > > Also if we make those pins attributes of the EEC it'll become extremally > > > > hard > > > > > to map them to netdevs and control them from the userspace app that > > will > > > > > receive the ESMC message with a given QL level on netdev X. > > > > > > > > Hard how? What is the problem with something like: > > > > > > > > # eec set source 1 type netdev dev swp1 > > > > > > > > The EEC object should be registered by the same entity that registers > > > > the netdevs whose Tx frequency is controlled by the EEC, the MAC > > driver. > > > > > > But the EEC object may not be controlled by the MAC - in which case > > > this model won't work. > > > > Why wouldn't it work? Leave individual kernel modules alone and look at > > the kernel. It is registering all the necessary logical objects such > > netdevs, PHCs and EECs. There is no way user space knows better than the > > kernel how these objects fit together as the purpose of the kernel is to > > abstract the hardware to user space. > > > > User space's request to use the Rx frequency recovered from netdev X as > > an input to EEC Y will be processed by the DPLL subsystem. In turn, this > > subsystem will invoke whichever kernel modules it needs to fulfill the > > request. > > But how would that call go through the kernel? What would you like to give > to the EEC object and how should it react. I'm fine with the changes, but > I don't see the solution in that proposal and this model would mix independent > subsystems. > The netdev -> EEC should be a downstream relation, just like the PTP is now > If a netdev wants to check what's the state of EEC driving it - it can do it, but > I don't see a way for the EEC subsystem to directly configure something in > Potentially couple different MAC chips without calling a kind of netdev API. > And that's what those patches address. > > > > > > > > > > > > > > > 5. What is the problem with the following model? > > > > > > > > > > > > - The EEC is a separate object with following attributes: > > > > > > * State: Invalid / Freerun / Locked / etc > > > > > > * Sources: Netdev / external / etc > > > > > > * Potentially more > > > > > > > > > > > > - Notifications are emitted to user space when the state of the EEC > > > > > > changes. 
Drivers will either poll the state from the device or get > > > > > > interrupts > > > > > > > > > > > > - The mapping from netdev to EEC is queried via ethtool > > > > > > > > > > Yep - that will be part of the EEC (DPLL) subsystem > > > > > > > > This model avoids all the problems I pointed out in the current > > > > proposal. > > > > > > That's the go-to model, but first we need control over the source as well :) > > > > The point that we are trying to make is that like the EEC state, the > > source is also an EEC attribute and not a netdev attribute.
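To illustrate that last point, the kind of information an EEC-centric query could return might be sketched as follows. These structures are invented here for illustration only and do not come from any submitted patch.

/* Hypothetical shape of an EEC-centric status query. */
enum eec_source_type {
	EEC_SOURCE_NETDEV,	/* frequency recovered from a port */
	EEC_SOURCE_EXTERNAL,	/* named external input, e.g. GPS/1PPS */
};

enum eec_state {
	EEC_STATE_INVALID,
	EEC_STATE_FREERUN,
	EEC_STATE_LOCKED,
};

struct eec_status {
	enum eec_state		state;		/* is the EEC locked? */
	enum eec_source_type	source_type;	/* what is it locked to? */
	int			source_ifindex;	/* valid for EEC_SOURCE_NETDEV */
	char			source_label[16]; /* valid for EEC_SOURCE_EXTERNAL */
};

A single query of this form answers both whether the clock is locked and what it is locked to, which is exactly the information that the per-netdev RCLK_GET/RCLK_SET messages alone cannot provide.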