Merge tag 'net-6.3-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
Pull networking fixes from Jakub Kicinski:
"Including fixes from wireless and netfilter.
The notable fixes here are the EEE fix, which restores boot for many
embedded platforms (real and QEMU); the WiFi warning suppression; and
the ICE Kconfig cleanup.
Current release - regressions:
- phy: multiple fixes for EEE rework
- wifi: wext: warn about usage only once
- wifi: ath11k: allow system suspend to survive ath11k
Current release - new code bugs:
- mlx5: Fix memory leak in IPsec RoCE creation
- ibmvnic: assign XPS map to correct queue index
Previous releases - regressions:
- netfilter: ip6t_rpfilter: Fix regression with VRF interfaces
- netfilter: ctnetlink: make event listener tracking global
- nf_tables: allow to fetch set elements when table has an owner
- mlx5:
- fix skb leak while fifo resync and push
- fix possible ptp queue fifo use-after-free
Previous releases - always broken:
- sched: fix action bind logic
- ptp: vclock: use mutex to fix "sleep on atomic" bug if driver also
uses a mutex
- netfilter: conntrack: fix rmmod double-free race
- netfilter: xt_length: use skb len to match in length_mt6, avoid
issues with BIG TCP
Misc:
- ice: remove unnecessary CONFIG_ICE_GNSS
- mlx5e: remove hairpin write debugfs files
- sched: act_api: move TCA_EXT_WARN_MSG to the correct hierarchy"
* tag 'net-6.3-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (53 commits)
tcp: tcp_check_req() can be called from process context
net: phy: c45: fix network interface initialization failures on xtensa, arm:cubieboard
xen-netback: remove unused variables pending_idx and index
net/sched: act_api: move TCA_EXT_WARN_MSG to the correct hierarchy
net: dsa: ocelot_ext: remove unnecessary phylink.h include
net: mscc: ocelot: fix duplicate driver name error
net: dsa: felix: fix internal MDIO controller resource length
net: dsa: seville: ignore mscc-miim read errors from Lynx PCS
net/sched: act_sample: fix action bind logic
net/sched: act_mpls: fix action bind logic
net/sched: act_pedit: fix action bind logic
wifi: wext: warn about usage only once
wifi: mt76: usb: fix use-after-free in mt76u_free_rx_queue
qede: avoid uninitialized entries in coal_entry array
nfc: fix memory leak of se_io context in nfc_genl_se_io
ice: remove unnecessary CONFIG_ICE_GNSS
net/sched: cls_api: Move call to tcf_exts_miss_cookie_base_destroy()
ibmvnic: Assign XPS map to correct queue index
docs: net: fix inaccuracies in msg_zerocopy.rst
tools: net: add __pycache__ to gitignore
...
commit 5ca26d6039
@@ -28,7 +28,7 @@ definitions:
       -
         name: hw-offload
         doc:
-          This feature informs if netdev supports XDP hw oflloading.
+          This feature informs if netdev supports XDP hw offloading.
       -
         name: rx-sg
         doc:
@@ -15,7 +15,7 @@ Opportunity and Caveats

 Copying large buffers between user process and kernel can be
 expensive. Linux supports various interfaces that eschew copying,
-such as sendpage and splice. The MSG_ZEROCOPY flag extends the
+such as sendfile and splice. The MSG_ZEROCOPY flag extends the
 underlying copy avoidance mechanism to common socket send calls.

 Copy avoidance is not a free lunch. As implemented, with page pinning,
@@ -83,8 +83,8 @@ Pass the new flag.
   ret = send(fd, buf, sizeof(buf), MSG_ZEROCOPY);

 A zerocopy failure will return -1 with errno ENOBUFS. This happens if
-the socket option was not set, the socket exceeds its optmem limit or
-the user exceeds its ulimit on locked pages.
+the socket exceeds its optmem limit or the user exceeds their ulimit on
+locked pages.


 Mixing copy avoidance and copying
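The documentation hunk above describes the MSG_ZEROCOPY interface; as a quick illustration of the API it documents, here is a minimal, hedged usage sketch in C (not part of the patch; error handling trimmed, fd/buf assumed to be set up by the caller, and it assumes a kernel/libc that expose SO_ZEROCOPY and MSG_ZEROCOPY):

#include <errno.h>
#include <stddef.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Enable zerocopy once, then request it per send(); fall back to a plain
 * copying send() if the kernel reports ENOBUFS, as the text above suggests. */
static int send_zerocopy(int fd, const void *buf, size_t len)
{
	int one = 1;
	ssize_t ret;

	if (setsockopt(fd, SOL_SOCKET, SO_ZEROCOPY, &one, sizeof(one)) < 0)
		return -errno;

	ret = send(fd, buf, len, MSG_ZEROCOPY);
	if (ret < 0 && errno == ENOBUFS)
		ret = send(fd, buf, len, 0);

	return ret < 0 ? -errno : 0;
}

/* Completion notifications arrive on the socket's error queue; the buffer
 * must not be reused until the matching notification has been read. */
static void reap_zerocopy_completion(int fd)
{
	char control[128];
	struct msghdr msg = { .msg_control = control,
			      .msg_controllen = sizeof(control) };

	recvmsg(fd, &msg, MSG_ERRQUEUE);
}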
@@ -177,7 +177,7 @@ static const struct mfd_cell vsc7512_devs[] = {
		.num_resources = ARRAY_SIZE(vsc7512_miim1_resources),
		.resources = vsc7512_miim1_resources,
	}, {
-		.name = "ocelot-switch",
+		.name = "ocelot-ext-switch",
		.of_compatible = "mscc,vsc7512-switch",
		.num_resources = ARRAY_SIZE(vsc7512_switch_resources),
		.resources = vsc7512_switch_resources,
@@ -554,7 +554,7 @@ static const char * const vsc9959_resource_names[TARGET_MAX] = {
  * SGMII/QSGMII MAC PCS can be found.
  */
 static const struct resource vsc9959_imdio_res =
-	DEFINE_RES_MEM_NAMED(0x8030, 0x8040, "imdio");
+	DEFINE_RES_MEM_NAMED(0x8030, 0x10, "imdio");

 static const struct reg_field vsc9959_regfields[REGFIELD_MAX] = {
	[ANA_ADVLEARN_VLAN_CHK] = REG_FIELD(ANA_ADVLEARN, 6, 6),
@@ -4,7 +4,6 @@
  */

 #include <linux/mfd/ocelot.h>
-#include <linux/phylink.h>
 #include <linux/platform_device.h>
 #include <linux/regmap.h>
 #include <soc/mscc/ocelot.h>
@@ -149,7 +148,7 @@ MODULE_DEVICE_TABLE(of, ocelot_ext_switch_of_match);

 static struct platform_driver ocelot_ext_switch_driver = {
	.driver = {
-		.name = "ocelot-switch",
+		.name = "ocelot-ext-switch",
		.of_match_table = of_match_ptr(ocelot_ext_switch_of_match),
	},
	.probe = ocelot_ext_probe,
@@ -893,8 +893,8 @@ static int vsc9953_mdio_bus_alloc(struct ocelot *ocelot)

	rc = mscc_miim_setup(dev, &bus, "VSC9953 internal MDIO bus",
			     ocelot->targets[GCB],
-			     ocelot->map[GCB][GCB_MIIM_MII_STATUS & REG_MASK]);
+			     ocelot->map[GCB][GCB_MIIM_MII_STATUS & REG_MASK],
+			     true);
	if (rc) {
		dev_err(dev, "failed to setup MDIO bus\n");
		return rc;
@@ -296,10 +296,10 @@ static void ibmvnic_set_affinity(struct ibmvnic_adapter *adapter)

		rc = __netif_set_xps_queue(adapter->netdev,
					   cpumask_bits(queue->affinity_mask),
-					   i, XPS_CPUS);
+					   i_txqs - 1, XPS_CPUS);
		if (rc)
			netdev_warn(adapter->netdev, "%s: Set XPS on queue %d failed, rc = %d.\n",
-				    __func__, i, rc);
+				    __func__, i_txqs - 1, rc);
	}

 out:
@@ -296,6 +296,7 @@ config ICE
	default n
	depends on PCI_MSI
	depends on PTP_1588_CLOCK_OPTIONAL
+	depends on GNSS || GNSS = n
	select AUXILIARY_BUS
	select DIMLIB
	select NET_DEVLINK
@@ -337,9 +338,6 @@ config ICE_HWTS
	  the PTP clock driver precise cross-timestamp ioctl
	  (PTP_SYS_OFFSET_PRECISE).

-config ICE_GNSS
-	def_bool GNSS = y || GNSS = ICE
-
 config FM10K
	tristate "Intel(R) FM10000 Ethernet Switch Host Interface Support"
	default n
@@ -47,4 +47,4 @@ ice-$(CONFIG_DCB) += ice_dcb.o ice_dcb_nl.o ice_dcb_lib.o
 ice-$(CONFIG_RFS_ACCEL) += ice_arfs.o
 ice-$(CONFIG_XDP_SOCKETS) += ice_xsk.o
 ice-$(CONFIG_ICE_SWITCHDEV) += ice_eswitch.o
-ice-$(CONFIG_ICE_GNSS) += ice_gnss.o
+ice-$(CONFIG_GNSS) += ice_gnss.o
@@ -45,7 +45,7 @@ struct gnss_serial {
	struct list_head queue;
 };

-#if IS_ENABLED(CONFIG_ICE_GNSS)
+#if IS_ENABLED(CONFIG_GNSS)
 void ice_gnss_init(struct ice_pf *pf);
 void ice_gnss_exit(struct ice_pf *pf);
 bool ice_gnss_is_gps_present(struct ice_hw *hw);
@@ -56,5 +56,5 @@ static inline bool ice_gnss_is_gps_present(struct ice_hw *hw)
 {
	return false;
 }
-#endif /* IS_ENABLED(CONFIG_ICE_GNSS) */
+#endif /* IS_ENABLED(CONFIG_GNSS) */
 #endif /* _ICE_GNSS_H_ */
@@ -793,7 +793,7 @@ static int otx2_prepare_ipv6_flow(struct ethtool_rx_flow_spec *fsp,

	/* NPC profile doesn't extract AH/ESP header fields */
	if ((ah_esp_mask->spi & ah_esp_hdr->spi) ||
-	    (ah_esp_mask->tclass & ah_esp_mask->tclass))
+	    (ah_esp_mask->tclass & ah_esp_hdr->tclass))
		return -EOPNOTSUPP;

	if (flow_type == AH_V6_FLOW)
@@ -10,6 +10,7 @@
 #include <net/tso.h>
 #include <linux/bpf.h>
 #include <linux/bpf_trace.h>
+#include <net/ip6_checksum.h>

 #include "otx2_reg.h"
 #include "otx2_common.h"
@@ -699,7 +700,7 @@ static void otx2_sqe_add_ext(struct otx2_nic *pfvf, struct otx2_snd_queue *sq,

 static void otx2_sqe_add_mem(struct otx2_snd_queue *sq, int *offset,
			     int alg, u64 iova, int ptp_offset,
-			     u64 base_ns, int udp_csum)
+			     u64 base_ns, bool udp_csum_crt)
 {
	struct nix_sqe_mem_s *mem;
@@ -711,7 +712,7 @@ static void otx2_sqe_add_mem(struct otx2_snd_queue *sq, int *offset,

	if (ptp_offset) {
		mem->start_offset = ptp_offset;
-		mem->udp_csum_crt = udp_csum;
+		mem->udp_csum_crt = !!udp_csum_crt;
		mem->base_ns = base_ns;
		mem->step_type = 1;
	}
@@ -986,10 +987,11 @@ static bool otx2_validate_network_transport(struct sk_buff *skb)
	return false;
 }

-static bool otx2_ptp_is_sync(struct sk_buff *skb, int *offset, int *udp_csum)
+static bool otx2_ptp_is_sync(struct sk_buff *skb, int *offset, bool *udp_csum_crt)
 {
	struct ethhdr *eth = (struct ethhdr *)(skb->data);
	u16 nix_offload_hlen = 0, inner_vhlen = 0;
+	bool udp_hdr_present = false, is_sync;
	u8 *data = skb->data, *msgtype;
	__be16 proto = eth->h_proto;
	int network_depth = 0;
@@ -1029,45 +1031,81 @@ static bool otx2_ptp_is_sync(struct sk_buff *skb, int *offset, int *udp_csum)
		if (!otx2_validate_network_transport(skb))
			return false;

-		*udp_csum = 1;
		*offset = nix_offload_hlen + skb_transport_offset(skb) +
			  sizeof(struct udphdr);
+		udp_hdr_present = true;
	}

	msgtype = data + *offset;

	/* Check PTP messageId is SYNC or not */
-	return (*msgtype & 0xf) == 0;
+	is_sync = !(*msgtype & 0xf);
+	if (is_sync)
+		*udp_csum_crt = udp_hdr_present;
+	else
+		*offset = 0;
+
+	return is_sync;
 }

 static void otx2_set_txtstamp(struct otx2_nic *pfvf, struct sk_buff *skb,
			      struct otx2_snd_queue *sq, int *offset)
 {
+	struct ethhdr *eth = (struct ethhdr *)(skb->data);
	struct ptpv2_tstamp *origin_tstamp;
-	int ptp_offset = 0, udp_csum = 0;
+	bool udp_csum_crt = false;
+	unsigned int udphoff;
	struct timespec64 ts;
+	int ptp_offset = 0;
+	__wsum skb_csum;
	u64 iova;

	if (unlikely(!skb_shinfo(skb)->gso_size &&
		     (skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP))) {
-		if (unlikely(pfvf->flags & OTX2_FLAG_PTP_ONESTEP_SYNC)) {
-			if (otx2_ptp_is_sync(skb, &ptp_offset, &udp_csum)) {
+		if (unlikely(pfvf->flags & OTX2_FLAG_PTP_ONESTEP_SYNC &&
+			     otx2_ptp_is_sync(skb, &ptp_offset, &udp_csum_crt))) {
			origin_tstamp = (struct ptpv2_tstamp *)
					((u8 *)skb->data + ptp_offset +
					 PTP_SYNC_SEC_OFFSET);
			ts = ns_to_timespec64(pfvf->ptp->tstamp);
			origin_tstamp->seconds_msb = htons((ts.tv_sec >> 32) & 0xffff);
			origin_tstamp->seconds_lsb = htonl(ts.tv_sec & 0xffffffff);
			origin_tstamp->nanoseconds = htonl(ts.tv_nsec);
			/* Point to correction field in PTP packet */
			ptp_offset += 8;

+			/* When user disables hw checksum, stack calculates the csum,
+			 * but it does not cover ptp timestamp which is added later.
+			 * Recalculate the checksum manually considering the timestamp.
+			 */
+			if (udp_csum_crt) {
+				struct udphdr *uh = udp_hdr(skb);
+
+				if (skb->ip_summed != CHECKSUM_PARTIAL && uh->check != 0) {
+					udphoff = skb_transport_offset(skb);
+					uh->check = 0;
+					skb_csum = skb_checksum(skb, udphoff, skb->len - udphoff,
+								0);
+					if (ntohs(eth->h_proto) == ETH_P_IPV6)
+						uh->check = csum_ipv6_magic(&ipv6_hdr(skb)->saddr,
+									    &ipv6_hdr(skb)->daddr,
+									    skb->len - udphoff,
+									    ipv6_hdr(skb)->nexthdr,
+									    skb_csum);
+					else
+						uh->check = csum_tcpudp_magic(ip_hdr(skb)->saddr,
+									      ip_hdr(skb)->daddr,
+									      skb->len - udphoff,
+									      IPPROTO_UDP,
+									      skb_csum);
+				}
			}
		} else {
			skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
		}
		iova = sq->timestamps->iova + (sq->head * sizeof(u64));
		otx2_sqe_add_mem(sq, offset, NIX_SENDMEMALG_E_SETTSTMP, iova,
-				 ptp_offset, pfvf->ptp->base_ns, udp_csum);
+				 ptp_offset, pfvf->ptp->base_ns, udp_csum_crt);
	} else {
		skb_tx_timestamp(skb);
	}
@@ -98,4 +98,8 @@ void mlx5_ec_cleanup(struct mlx5_core_dev *dev)
	err = mlx5_wait_for_pages(dev, &dev->priv.page_counters[MLX5_HOST_PF]);
	if (err)
		mlx5_core_warn(dev, "Timeout reclaiming external host PF pages err(%d)\n", err);
+
+	err = mlx5_wait_for_pages(dev, &dev->priv.page_counters[MLX5_VF]);
+	if (err)
+		mlx5_core_warn(dev, "Timeout reclaiming external host VFs pages err(%d)\n", err);
 }
@@ -86,7 +86,19 @@ static bool mlx5e_ptp_ts_cqe_drop(struct mlx5e_ptpsq *ptpsq, u16 skb_cc, u16 skb
	return (ptpsq->ts_cqe_ctr_mask && (skb_cc != skb_id));
 }

-static void mlx5e_ptp_skb_fifo_ts_cqe_resync(struct mlx5e_ptpsq *ptpsq, u16 skb_cc, u16 skb_id)
+static bool mlx5e_ptp_ts_cqe_ooo(struct mlx5e_ptpsq *ptpsq, u16 skb_id)
+{
+	u16 skb_cc = PTP_WQE_CTR2IDX(ptpsq->skb_fifo_cc);
+	u16 skb_pc = PTP_WQE_CTR2IDX(ptpsq->skb_fifo_pc);
+
+	if (PTP_WQE_CTR2IDX(skb_id - skb_cc) >= PTP_WQE_CTR2IDX(skb_pc - skb_cc))
+		return true;
+
+	return false;
+}
+
+static void mlx5e_ptp_skb_fifo_ts_cqe_resync(struct mlx5e_ptpsq *ptpsq, u16 skb_cc,
+					     u16 skb_id, int budget)
 {
	struct skb_shared_hwtstamps hwts = {};
	struct sk_buff *skb;
@@ -98,6 +110,7 @@ static void mlx5e_ptp_skb_fifo_ts_cqe_resync(struct mlx5e_ptpsq *ptpsq, u16 skb_
		hwts.hwtstamp = mlx5e_skb_cb_get_hwts(skb)->cqe_hwtstamp;
		skb_tstamp_tx(skb, &hwts);
		ptpsq->cq_stats->resync_cqe++;
+		napi_consume_skb(skb, budget);
		skb_cc = PTP_WQE_CTR2IDX(ptpsq->skb_fifo_cc);
	}
 }
@@ -118,8 +131,14 @@ static void mlx5e_ptp_handle_ts_cqe(struct mlx5e_ptpsq *ptpsq,
		goto out;
	}

-	if (mlx5e_ptp_ts_cqe_drop(ptpsq, skb_cc, skb_id))
-		mlx5e_ptp_skb_fifo_ts_cqe_resync(ptpsq, skb_cc, skb_id);
+	if (mlx5e_ptp_ts_cqe_drop(ptpsq, skb_cc, skb_id)) {
+		if (mlx5e_ptp_ts_cqe_ooo(ptpsq, skb_id)) {
+			/* already handled by a previous resync */
+			ptpsq->cq_stats->ooo_cqe_drop++;
+			return;
+		}
+		mlx5e_ptp_skb_fifo_ts_cqe_resync(ptpsq, skb_cc, skb_id, budget);
+	}

	skb = mlx5e_skb_fifo_pop(&ptpsq->skb_fifo);
	hwtstamp = mlx5e_cqe_ts_to_ns(sq->ptp_cyc2time, sq->clock, get_cqe_ts(cqe));
@@ -710,8 +710,7 @@ void mlx5e_rep_tc_receive(struct mlx5_cqe64 *cqe, struct mlx5e_rq *rq,
	else
		napi_gro_receive(rq->cq.napi, skb);

-	if (tc_priv.fwd_dev)
-		dev_put(tc_priv.fwd_dev);
+	dev_put(tc_priv.fwd_dev);

	return;
@@ -37,7 +37,7 @@ mlx5e_tc_act_stats_create(void)
	int err;

	handle = kvzalloc(sizeof(*handle), GFP_KERNEL);
-	if (IS_ERR(handle))
+	if (!handle)
		return ERR_PTR(-ENOMEM);

	err = rhashtable_init(&handle->ht, &act_counters_ht_params);
@@ -86,7 +86,7 @@ void mlx5e_free_txqsq_descs(struct mlx5e_txqsq *sq);
 static inline bool
 mlx5e_skb_fifo_has_room(struct mlx5e_skb_fifo *fifo)
 {
-	return (*fifo->pc - *fifo->cc) < fifo->mask;
+	return (u16)(*fifo->pc - *fifo->cc) < fifo->mask;
 }

 static inline bool
@@ -302,6 +302,8 @@ void mlx5e_skb_fifo_push(struct mlx5e_skb_fifo *fifo, struct sk_buff *skb)
 static inline
 struct sk_buff *mlx5e_skb_fifo_pop(struct mlx5e_skb_fifo *fifo)
 {
+	WARN_ON_ONCE(*fifo->pc == *fifo->cc);
+
	return *mlx5e_skb_fifo_get(fifo, (*fifo->cc)++);
 }
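The first hunk above adds a u16 cast in mlx5e_skb_fifo_has_room(); the following is a small, self-contained sketch (illustration only, not from the patch) of why that cast matters for free-running 16-bit producer/consumer counters:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint16_t cc = 65400;		/* consumer counter */
	uint16_t pc = cc + 255;		/* producer wrapped: 65655 truncates to 119 */
	unsigned int mask = 255;	/* as used by a has_room-style check */

	/* With plain int promotion the difference is negative (-65281), so a
	 * completely full fifo is reported as still having room. */
	printf("signed compare: %d\n", (pc - cc) < (int)mask);

	/* Truncating the difference back to u16 yields the real occupancy
	 * (255), and the comparison correctly reports "no room". */
	printf("u16 compare:    %d\n", (uint16_t)(pc - cc) < mask);
	return 0;
}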
@@ -2138,6 +2138,7 @@ static const struct counter_desc ptp_cq_stats_desc[] = {
	{ MLX5E_DECLARE_PTP_CQ_STAT(struct mlx5e_ptp_cq_stats, abort_abs_diff_ns) },
	{ MLX5E_DECLARE_PTP_CQ_STAT(struct mlx5e_ptp_cq_stats, resync_cqe) },
	{ MLX5E_DECLARE_PTP_CQ_STAT(struct mlx5e_ptp_cq_stats, resync_event) },
+	{ MLX5E_DECLARE_PTP_CQ_STAT(struct mlx5e_ptp_cq_stats, ooo_cqe_drop) },
 };

 static const struct counter_desc ptp_rq_stats_desc[] = {
@@ -461,6 +461,7 @@ struct mlx5e_ptp_cq_stats {
	u64 abort_abs_diff_ns;
	u64 resync_cqe;
	u64 resync_event;
+	u64 ooo_cqe_drop;
 };

 struct mlx5e_rep_stats {
@@ -1048,61 +1048,6 @@ static int mlx5e_hairpin_get_prio(struct mlx5e_priv *priv,
	return 0;
 }

-static int debugfs_hairpin_queues_set(void *data, u64 val)
-{
-	struct mlx5e_hairpin_params *hp = data;
-
-	if (!val) {
-		mlx5_core_err(hp->mdev,
-			      "Number of hairpin queues must be > 0\n");
-		return -EINVAL;
-	}
-
-	hp->num_queues = val;
-
-	return 0;
-}
-
-static int debugfs_hairpin_queues_get(void *data, u64 *val)
-{
-	struct mlx5e_hairpin_params *hp = data;
-
-	*val = hp->num_queues;
-
-	return 0;
-}
-DEFINE_DEBUGFS_ATTRIBUTE(fops_hairpin_queues, debugfs_hairpin_queues_get,
-			 debugfs_hairpin_queues_set, "%llu\n");
-
-static int debugfs_hairpin_queue_size_set(void *data, u64 val)
-{
-	struct mlx5e_hairpin_params *hp = data;
-
-	if (val > BIT(MLX5_CAP_GEN(hp->mdev, log_max_hairpin_num_packets))) {
-		mlx5_core_err(hp->mdev,
-			      "Invalid hairpin queue size, must be <= %lu\n",
-			      BIT(MLX5_CAP_GEN(hp->mdev,
-					       log_max_hairpin_num_packets)));
-		return -EINVAL;
-	}
-
-	hp->queue_size = roundup_pow_of_two(val);
-
-	return 0;
-}
-
-static int debugfs_hairpin_queue_size_get(void *data, u64 *val)
-{
-	struct mlx5e_hairpin_params *hp = data;
-
-	*val = hp->queue_size;
-
-	return 0;
-}
-DEFINE_DEBUGFS_ATTRIBUTE(fops_hairpin_queue_size,
-			 debugfs_hairpin_queue_size_get,
-			 debugfs_hairpin_queue_size_set, "%llu\n");
-
 static int debugfs_hairpin_num_active_get(void *data, u64 *val)
 {
	struct mlx5e_tc_table *tc = data;
@@ -1148,10 +1093,6 @@ static void mlx5e_tc_debugfs_init(struct mlx5e_tc_table *tc,

	tc->dfs_root = debugfs_create_dir("tc", dfs_root);

-	debugfs_create_file("hairpin_num_queues", 0644, tc->dfs_root,
-			    &tc->hairpin_params, &fops_hairpin_queues);
-	debugfs_create_file("hairpin_queue_size", 0644, tc->dfs_root,
-			    &tc->hairpin_params, &fops_hairpin_queue_size);
	debugfs_create_file("hairpin_num_active", 0444, tc->dfs_root, tc,
			    &fops_hairpin_num_active);
	debugfs_create_file("hairpin_table_dump", 0444, tc->dfs_root, tc,
@@ -869,7 +869,8 @@ mlx5_eswitch_add_send_to_vport_rule(struct mlx5_eswitch *on_esw,
	dest.vport.flags |= MLX5_FLOW_DEST_VPORT_VHCA_ID;
	flow_act.action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST;

-	if (rep->vport == MLX5_VPORT_UPLINK)
+	if (MLX5_CAP_ESW_FLOWTABLE(on_esw->dev, flow_source) &&
+	    rep->vport == MLX5_VPORT_UPLINK)
		spec->flow_context.flow_source = MLX5_FLOW_CONTEXT_FLOW_SOURCE_LOCAL_VPORT;

	flow_rule = mlx5_add_flow_rules(mlx5_eswitch_get_slow_fdb(on_esw),
@@ -105,6 +105,7 @@ int mlx5_geneve_tlv_option_add(struct mlx5_geneve *geneve, struct geneve_opt *op
		geneve->opt_type = opt->type;
		geneve->obj_id = res;
		geneve->refcount++;
+		res = 0;
	}

 unlock:
@@ -162,7 +162,7 @@ int mlx5_ipsec_fs_roce_tx_create(struct mlx5_core_dev *mdev,
	if (IS_ERR(ft)) {
		err = PTR_ERR(ft);
		mlx5_core_err(mdev, "Fail to create RoCE IPsec tx ft err=%d\n", err);
-		return err;
+		goto free_in;
	}

	roce->ft = ft;
@@ -174,22 +174,25 @@ int mlx5_ipsec_fs_roce_tx_create(struct mlx5_core_dev *mdev,
	if (IS_ERR(g)) {
		err = PTR_ERR(g);
		mlx5_core_err(mdev, "Fail to create RoCE IPsec tx group err=%d\n", err);
-		goto fail;
+		goto destroy_table;
	}
	roce->g = g;

	err = ipsec_fs_roce_tx_rule_setup(mdev, roce, pol_ft);
	if (err) {
		mlx5_core_err(mdev, "Fail to create RoCE IPsec tx rules err=%d\n", err);
-		goto rule_fail;
+		goto destroy_group;
	}

+	kvfree(in);
	return 0;

-rule_fail:
+destroy_group:
	mlx5_destroy_flow_group(roce->g);
-fail:
+destroy_table:
	mlx5_destroy_flow_table(ft);
+free_in:
+	kvfree(in);
	return err;
 }
@@ -147,6 +147,10 @@ mlx5_device_disable_sriov(struct mlx5_core_dev *dev, int num_vfs, bool clear_vf)

	mlx5_eswitch_disable_sriov(dev->priv.eswitch, clear_vf);

+	/* For ECPFs, skip waiting for host VF pages until ECPF is destroyed */
+	if (mlx5_core_is_ecpf(dev))
+		return;
+
	if (mlx5_wait_for_pages(dev, &dev->priv.page_counters[MLX5_VF]))
		mlx5_core_warn(dev, "timeout reclaiming VFs pages\n");
 }
@@ -963,7 +963,6 @@ static int qede_alloc_fp_array(struct qede_dev *edev)
 {
	u8 fp_combined, fp_rx = edev->fp_num_rx;
	struct qede_fastpath *fp;
-	void *mem;
	int i;

	edev->fp_array = kcalloc(QEDE_QUEUE_CNT(edev),
@@ -974,21 +973,15 @@ static int qede_alloc_fp_array(struct qede_dev *edev)
	}

	if (!edev->coal_entry) {
-		mem = kcalloc(QEDE_MAX_RSS_CNT(edev),
-			      sizeof(*edev->coal_entry), GFP_KERNEL);
-	} else {
-		mem = krealloc(edev->coal_entry,
-			       QEDE_QUEUE_CNT(edev) * sizeof(*edev->coal_entry),
-			       GFP_KERNEL);
+		edev->coal_entry = kcalloc(QEDE_MAX_RSS_CNT(edev),
+					   sizeof(*edev->coal_entry),
+					   GFP_KERNEL);
+		if (!edev->coal_entry) {
+			DP_ERR(edev, "coalesce entry allocation failed\n");
+			goto err;
+		}
	}

-	if (!mem) {
-		DP_ERR(edev, "coalesce entry allocation failed\n");
-		kfree(edev->coal_entry);
-		goto err;
-	}
-	edev->coal_entry = mem;
-
	fp_combined = QEDE_QUEUE_CNT(edev) - fp_rx - edev->fp_num_tx;

	/* Allocate the FP elements for Rx queues followed by combined and then
@@ -2894,8 +2894,10 @@ static int happy_meal_pci_probe(struct pci_dev *pdev,
		goto err_out_clear_quattro;
	}

-	hpreg_res = devm_request_region(&pdev->dev, pci_resource_start(pdev, 0),
-					pci_resource_len(pdev, 0), DRV_NAME);
+	hpreg_res = devm_request_mem_region(&pdev->dev,
+					    pci_resource_start(pdev, 0),
+					    pci_resource_len(pdev, 0),
+					    DRV_NAME);
	if (!hpreg_res) {
		err = -EBUSY;
		dev_err(&pdev->dev, "Cannot obtain PCI resources, aborting.\n");
@@ -52,6 +52,7 @@ struct mscc_miim_info {
 struct mscc_miim_dev {
	struct regmap *regs;
	int mii_status_offset;
+	bool ignore_read_errors;
	struct regmap *phy_regs;
	const struct mscc_miim_info *info;
	struct clk *clk;
@@ -135,7 +136,7 @@ static int mscc_miim_read(struct mii_bus *bus, int mii_id, int regnum)
		goto out;
	}

-	if (val & MSCC_MIIM_DATA_ERROR) {
+	if (!miim->ignore_read_errors && !!(val & MSCC_MIIM_DATA_ERROR)) {
		ret = -EIO;
		goto out;
	}
@@ -212,7 +213,8 @@ static const struct regmap_config mscc_miim_phy_regmap_config = {
 };

 int mscc_miim_setup(struct device *dev, struct mii_bus **pbus, const char *name,
-		    struct regmap *mii_regmap, int status_offset)
+		    struct regmap *mii_regmap, int status_offset,
+		    bool ignore_read_errors)
 {
	struct mscc_miim_dev *miim;
	struct mii_bus *bus;
@@ -234,6 +236,7 @@ int mscc_miim_setup(struct device *dev, struct mii_bus **pbus, const char *name,

	miim->regs = mii_regmap;
	miim->mii_status_offset = status_offset;
+	miim->ignore_read_errors = ignore_read_errors;

	*pbus = bus;

@@ -285,7 +288,7 @@ static int mscc_miim_probe(struct platform_device *pdev)
		return dev_err_probe(dev, PTR_ERR(phy_regmap),
				     "Unable to create phy register regmap\n");

-	ret = mscc_miim_setup(dev, &bus, "mscc_miim", mii_regmap, 0);
+	ret = mscc_miim_setup(dev, &bus, "mscc_miim", mii_regmap, 0, false);
	if (ret < 0) {
		dev_err(dev, "Unable to setup the MDIO bus\n");
		return ret;
@@ -262,7 +262,7 @@ int genphy_c45_an_config_aneg(struct phy_device *phydev)
	linkmode_and(phydev->advertising, phydev->advertising,
		     phydev->supported);

-	ret = genphy_c45_write_eee_adv(phydev, phydev->supported_eee);
+	ret = genphy_c45_an_config_eee_aneg(phydev);
	if (ret < 0)
		return ret;
	else if (ret)
@@ -672,9 +672,9 @@ EXPORT_SYMBOL_GPL(genphy_c45_read_mdix);
  */
 int genphy_c45_write_eee_adv(struct phy_device *phydev, unsigned long *adv)
 {
-	int val, changed;
+	int val, changed = 0;

-	if (linkmode_intersects(phydev->supported, PHY_EEE_CAP1_FEATURES)) {
+	if (linkmode_intersects(phydev->supported_eee, PHY_EEE_CAP1_FEATURES)) {
		val = linkmode_to_mii_eee_cap1_t(adv);

		/* In eee_broken_modes are stored MDIO_AN_EEE_ADV specific raw
@@ -721,12 +721,11 @@ int genphy_c45_write_eee_adv(struct phy_device *phydev, unsigned long *adv)
  * @phydev: target phy_device struct
  * @adv: the linkmode advertisement status
  */
-static int genphy_c45_read_eee_adv(struct phy_device *phydev,
-				   unsigned long *adv)
+int genphy_c45_read_eee_adv(struct phy_device *phydev, unsigned long *adv)
 {
	int val;

-	if (linkmode_intersects(phydev->supported, PHY_EEE_CAP1_FEATURES)) {
+	if (linkmode_intersects(phydev->supported_eee, PHY_EEE_CAP1_FEATURES)) {
		/* IEEE 802.3-2018 45.2.7.13 EEE advertisement 1
		 * (Register 7.60)
		 */
@@ -762,7 +761,7 @@ static int genphy_c45_read_eee_lpa(struct phy_device *phydev,
 {
	int val;

-	if (linkmode_intersects(phydev->supported, PHY_EEE_CAP1_FEATURES)) {
+	if (linkmode_intersects(phydev->supported_eee, PHY_EEE_CAP1_FEATURES)) {
		/* IEEE 802.3-2018 45.2.7.14 EEE link partner ability 1
		 * (Register 7.61)
		 */
@@ -858,6 +857,21 @@ int genphy_c45_read_eee_abilities(struct phy_device *phydev)
 }
 EXPORT_SYMBOL_GPL(genphy_c45_read_eee_abilities);

+/**
+ * genphy_c45_an_config_eee_aneg - configure EEE advertisement
+ * @phydev: target phy_device struct
+ */
+int genphy_c45_an_config_eee_aneg(struct phy_device *phydev)
+{
+	if (!phydev->eee_enabled) {
+		__ETHTOOL_DECLARE_LINK_MODE_MASK(adv) = {};
+
+		return genphy_c45_write_eee_adv(phydev, adv);
+	}
+
+	return genphy_c45_write_eee_adv(phydev, phydev->advertising_eee);
+}
+
 /**
  * genphy_c45_pma_read_abilities - read supported link modes from PMA
  * @phydev: target phy_device struct
@@ -1421,17 +1435,33 @@ EXPORT_SYMBOL(genphy_c45_ethtool_get_eee);
 int genphy_c45_ethtool_set_eee(struct phy_device *phydev,
			       struct ethtool_eee *data)
 {
-	__ETHTOOL_DECLARE_LINK_MODE_MASK(adv) = {};
	int ret;

	if (data->eee_enabled) {
-		if (data->advertised)
-			adv[0] = data->advertised;
-		else
-			linkmode_copy(adv, phydev->supported_eee);
+		if (data->advertised) {
+			__ETHTOOL_DECLARE_LINK_MODE_MASK(adv);
+
+			ethtool_convert_legacy_u32_to_link_mode(adv,
+								data->advertised);
+			linkmode_andnot(adv, adv, phydev->supported_eee);
+			if (!linkmode_empty(adv)) {
+				phydev_warn(phydev, "At least some EEE link modes are not supported.\n");
+				return -EINVAL;
+			}
+
+			ethtool_convert_legacy_u32_to_link_mode(phydev->advertising_eee,
+								data->advertised);
+		} else {
+			linkmode_copy(phydev->advertising_eee,
+				      phydev->supported_eee);
+		}
+
+		phydev->eee_enabled = true;
+	} else {
+		phydev->eee_enabled = false;
	}

-	ret = genphy_c45_write_eee_adv(phydev, adv);
+	ret = genphy_c45_an_config_eee_aneg(phydev);
	if (ret < 0)
		return ret;
	if (ret > 0)
@@ -2231,7 +2231,7 @@ int __genphy_config_aneg(struct phy_device *phydev, bool changed)
 {
	int err;

-	err = genphy_c45_write_eee_adv(phydev, phydev->supported_eee);
+	err = genphy_c45_an_config_eee_aneg(phydev);
	if (err < 0)
		return err;
	else if (err)
@@ -3141,6 +3141,25 @@ static int phy_probe(struct device *dev)
	of_set_phy_supported(phydev);
	phy_advertise_supported(phydev);

+	/* Get PHY default EEE advertising modes and handle them as potentially
+	 * safe initial configuration.
+	 */
+	err = genphy_c45_read_eee_adv(phydev, phydev->advertising_eee);
+	if (err)
+		return err;
+
+	/* There is no "enabled" flag. If PHY is advertising, assume it is
+	 * kind of enabled.
+	 */
+	phydev->eee_enabled = !linkmode_empty(phydev->advertising_eee);
+
+	/* Some PHYs may advertise, by default, not support EEE modes. So,
+	 * we need to clean them.
+	 */
+	if (phydev->eee_enabled)
+		linkmode_and(phydev->advertising_eee, phydev->supported_eee,
+			     phydev->advertising_eee);
+
	/* Get the EEE modes we want to prohibit. We will ask
	 * the PHY stop advertising these mode later on
	 */
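The set_eee hunk above validates a requested EEE advertisement by masking it against the supported set and rejecting anything left over; here is a tiny stand-alone illustration of that subset test (not from the patch, using plain bitmasks instead of linkmode arrays):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* "requested & ~supported" must be empty, otherwise the request is rejected. */
static bool is_subset(uint64_t requested, uint64_t supported)
{
	return (requested & ~supported) == 0;
}

int main(void)
{
	uint64_t supported = 0x0f;	/* modes the device can do */

	printf("%d\n", is_subset(0x05, supported));	/* 1: accepted */
	printf("%d\n", is_subset(0x15, supported));	/* 0: rejected, bit 4 unsupported */
	return 0;
}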
@@ -981,7 +981,7 @@ static __maybe_unused int ath11k_pci_pm_suspend(struct device *dev)
	if (ret)
		ath11k_warn(ab, "failed to suspend core: %d\n", ret);

-	return ret;
+	return 0;
 }

 static __maybe_unused int ath11k_pci_pm_resume(struct device *dev)
@@ -706,6 +706,7 @@ mt76u_free_rx_queue(struct mt76_dev *dev, struct mt76_queue *q)
		q->entry[i].urb = NULL;
	}
	page_pool_destroy(q->page_pool);
+	q->page_pool = NULL;
 }

 static void mt76u_free_rx(struct mt76_dev *dev)
@@ -883,11 +883,9 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
		struct xen_netif_tx_request txfrags[XEN_NETBK_LEGACY_SLOTS_MAX];
		struct xen_netif_extra_info extras[XEN_NETIF_EXTRA_TYPE_MAX-1];
		unsigned int extra_count;
-		u16 pending_idx;
		RING_IDX idx;
		int work_to_do;
		unsigned int data_len;
-		pending_ring_idx_t index;

		if (queue->tx.sring->req_prod - queue->tx.req_cons >
		    XEN_NETIF_TX_RING_SIZE) {
@@ -983,9 +981,6 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
			break;
		}

-		index = pending_index(queue->pending_cons);
-		pending_idx = queue->pending_ring[index];
-
		if (ret >= XEN_NETBK_LEGACY_SLOTS_MAX - 1 && data_len < txreq.size)
			data_len = txreq.size;
@@ -672,6 +672,12 @@ int st_nci_se_io(struct nci_dev *ndev, u32 se_idx,
						ST_NCI_EVT_TRANSMIT_DATA, apdu,
						apdu_length);
	default:
+		/* Need to free cb_context here as at the moment we can't
+		 * clearly indicate to the caller if the callback function
+		 * would be called (and free it) or not. In both cases a
+		 * negative value may be returned to the caller.
+		 */
+		kfree(cb_context);
		return -ENODEV;
	}
 }
@@ -236,6 +236,12 @@ int st21nfca_hci_se_io(struct nfc_hci_dev *hdev, u32 se_idx,
					      ST21NFCA_EVT_TRANSMIT_DATA,
					      apdu, apdu_length);
	default:
+		/* Need to free cb_context here as at the moment we can't
+		 * clearly indicate to the caller if the callback function
+		 * would be called (and free it) or not. In both cases a
+		 * negative value may be returned to the caller.
+		 */
+		kfree(cb_context);
		return -ENODEV;
	}
 }
@@ -66,7 +66,7 @@ struct ptp_vclock {
	struct hlist_node vclock_hash_node;
	struct cyclecounter cc;
	struct timecounter tc;
-	spinlock_t lock;	/* protects tc/cc */
+	struct mutex lock;	/* protects tc/cc */
 };

 /*
@@ -43,16 +43,16 @@ static void ptp_vclock_hash_del(struct ptp_vclock *vclock)
 static int ptp_vclock_adjfine(struct ptp_clock_info *ptp, long scaled_ppm)
 {
	struct ptp_vclock *vclock = info_to_vclock(ptp);
-	unsigned long flags;
	s64 adj;

	adj = (s64)scaled_ppm << PTP_VCLOCK_FADJ_SHIFT;
	adj = div_s64(adj, PTP_VCLOCK_FADJ_DENOMINATOR);

-	spin_lock_irqsave(&vclock->lock, flags);
+	if (mutex_lock_interruptible(&vclock->lock))
+		return -EINTR;
	timecounter_read(&vclock->tc);
	vclock->cc.mult = PTP_VCLOCK_CC_MULT + adj;
-	spin_unlock_irqrestore(&vclock->lock, flags);
+	mutex_unlock(&vclock->lock);

	return 0;
 }
@@ -60,11 +60,11 @@ static int ptp_vclock_adjfine(struct ptp_clock_info *ptp, long scaled_ppm)
 static int ptp_vclock_adjtime(struct ptp_clock_info *ptp, s64 delta)
 {
	struct ptp_vclock *vclock = info_to_vclock(ptp);
-	unsigned long flags;

-	spin_lock_irqsave(&vclock->lock, flags);
+	if (mutex_lock_interruptible(&vclock->lock))
+		return -EINTR;
	timecounter_adjtime(&vclock->tc, delta);
-	spin_unlock_irqrestore(&vclock->lock, flags);
+	mutex_unlock(&vclock->lock);

	return 0;
 }
@@ -73,12 +73,12 @@ static int ptp_vclock_gettime(struct ptp_clock_info *ptp,
			      struct timespec64 *ts)
 {
	struct ptp_vclock *vclock = info_to_vclock(ptp);
-	unsigned long flags;
	u64 ns;

-	spin_lock_irqsave(&vclock->lock, flags);
+	if (mutex_lock_interruptible(&vclock->lock))
+		return -EINTR;
	ns = timecounter_read(&vclock->tc);
-	spin_unlock_irqrestore(&vclock->lock, flags);
+	mutex_unlock(&vclock->lock);
	*ts = ns_to_timespec64(ns);

	return 0;
@@ -91,7 +91,6 @@ static int ptp_vclock_gettimex(struct ptp_clock_info *ptp,
	struct ptp_vclock *vclock = info_to_vclock(ptp);
	struct ptp_clock *pptp = vclock->pclock;
	struct timespec64 pts;
-	unsigned long flags;
	int err;
	u64 ns;

@@ -99,9 +98,10 @@ static int ptp_vclock_gettimex(struct ptp_clock_info *ptp,
	if (err)
		return err;

-	spin_lock_irqsave(&vclock->lock, flags);
+	if (mutex_lock_interruptible(&vclock->lock))
+		return -EINTR;
	ns = timecounter_cyc2time(&vclock->tc, timespec64_to_ns(&pts));
-	spin_unlock_irqrestore(&vclock->lock, flags);
+	mutex_unlock(&vclock->lock);

	*ts = ns_to_timespec64(ns);

@@ -113,11 +113,11 @@ static int ptp_vclock_settime(struct ptp_clock_info *ptp,
 {
	struct ptp_vclock *vclock = info_to_vclock(ptp);
	u64 ns = timespec64_to_ns(ts);
-	unsigned long flags;

-	spin_lock_irqsave(&vclock->lock, flags);
+	if (mutex_lock_interruptible(&vclock->lock))
+		return -EINTR;
	timecounter_init(&vclock->tc, &vclock->cc, ns);
-	spin_unlock_irqrestore(&vclock->lock, flags);
+	mutex_unlock(&vclock->lock);

	return 0;
 }
@@ -127,7 +127,6 @@ static int ptp_vclock_getcrosststamp(struct ptp_clock_info *ptp,
 {
	struct ptp_vclock *vclock = info_to_vclock(ptp);
	struct ptp_clock *pptp = vclock->pclock;
-	unsigned long flags;
	int err;
	u64 ns;

@@ -135,9 +134,10 @@ static int ptp_vclock_getcrosststamp(struct ptp_clock_info *ptp,
	if (err)
		return err;

-	spin_lock_irqsave(&vclock->lock, flags);
+	if (mutex_lock_interruptible(&vclock->lock))
+		return -EINTR;
	ns = timecounter_cyc2time(&vclock->tc, ktime_to_ns(xtstamp->device));
-	spin_unlock_irqrestore(&vclock->lock, flags);
+	mutex_unlock(&vclock->lock);

	xtstamp->device = ns_to_ktime(ns);

@@ -205,7 +205,7 @@ struct ptp_vclock *ptp_vclock_register(struct ptp_clock *pclock)

	INIT_HLIST_NODE(&vclock->vclock_hash_node);

-	spin_lock_init(&vclock->lock);
+	mutex_init(&vclock->lock);

	vclock->clock = ptp_clock_register(&vclock->info, &pclock->dev);
	if (IS_ERR_OR_NULL(vclock->clock)) {
@@ -269,7 +269,6 @@ ktime_t ptp_convert_timestamp(const ktime_t *hwtstamp, int vclock_index)
 {
	unsigned int hash = vclock_index % HASH_SIZE(vclock_hash);
	struct ptp_vclock *vclock;
-	unsigned long flags;
	u64 ns;
	u64 vclock_ns = 0;

@@ -281,9 +280,10 @@ ktime_t ptp_convert_timestamp(const ktime_t *hwtstamp, int vclock_index)
		if (vclock->clock->index != vclock_index)
			continue;

-		spin_lock_irqsave(&vclock->lock, flags);
+		if (mutex_lock_interruptible(&vclock->lock))
+			break;
		vclock_ns = timecounter_cyc2time(&vclock->tc, ns);
-		spin_unlock_irqrestore(&vclock->lock, flags);
+		mutex_unlock(&vclock->lock);
		break;
	}

@@ -14,6 +14,6 @@

 int mscc_miim_setup(struct device *device, struct mii_bus **bus,
		    const char *name, struct regmap *mii_regmap,
-		    int status_offset);
+		    int status_offset, bool ignore_read_errors);

 #endif
@@ -491,4 +491,9 @@ extern const struct nfnl_ct_hook __rcu *nfnl_ct_hook;
  */
 DECLARE_PER_CPU(bool, nf_skb_duplicated);

+/**
+ * Contains bitmask of ctnetlink event subscribers, if any.
+ * Can't be pernet due to NETLINK_LISTEN_ALL_NSID setsockopt flag.
+ */
+extern u8 nf_ctnetlink_has_listener;
 #endif /*__LINUX_NETFILTER_H*/
|
|
@ -575,6 +575,8 @@ struct macsec_ops;
|
||||||
* @advertising: Currently advertised linkmodes
|
* @advertising: Currently advertised linkmodes
|
||||||
* @adv_old: Saved advertised while power saving for WoL
|
* @adv_old: Saved advertised while power saving for WoL
|
||||||
* @supported_eee: supported PHY EEE linkmodes
|
* @supported_eee: supported PHY EEE linkmodes
|
||||||
|
* @advertising_eee: Currently advertised EEE linkmodes
|
||||||
|
* @eee_enabled: Flag indicating whether the EEE feature is enabled
|
||||||
* @lp_advertising: Current link partner advertised linkmodes
|
* @lp_advertising: Current link partner advertised linkmodes
|
||||||
* @host_interfaces: PHY interface modes supported by host
|
* @host_interfaces: PHY interface modes supported by host
|
||||||
* @eee_broken_modes: Energy efficient ethernet modes which should be prohibited
|
* @eee_broken_modes: Energy efficient ethernet modes which should be prohibited
|
||||||
|
|
@ -681,6 +683,8 @@ struct phy_device {
|
||||||
__ETHTOOL_DECLARE_LINK_MODE_MASK(adv_old);
|
__ETHTOOL_DECLARE_LINK_MODE_MASK(adv_old);
|
||||||
/* used for eee validation */
|
/* used for eee validation */
|
||||||
__ETHTOOL_DECLARE_LINK_MODE_MASK(supported_eee);
|
__ETHTOOL_DECLARE_LINK_MODE_MASK(supported_eee);
|
||||||
|
__ETHTOOL_DECLARE_LINK_MODE_MASK(advertising_eee);
|
||||||
|
bool eee_enabled;
|
||||||
|
|
||||||
/* Host supported PHY interface types. Should be ignored if empty. */
|
/* Host supported PHY interface types. Should be ignored if empty. */
|
||||||
DECLARE_PHY_INTERFACE_MASK(host_interfaces);
|
DECLARE_PHY_INTERFACE_MASK(host_interfaces);
|
||||||
|
|
@ -1765,6 +1769,8 @@ int genphy_c45_ethtool_get_eee(struct phy_device *phydev,
|
||||||
int genphy_c45_ethtool_set_eee(struct phy_device *phydev,
|
int genphy_c45_ethtool_set_eee(struct phy_device *phydev,
|
||||||
struct ethtool_eee *data);
|
struct ethtool_eee *data);
|
||||||
int genphy_c45_write_eee_adv(struct phy_device *phydev, unsigned long *adv);
|
int genphy_c45_write_eee_adv(struct phy_device *phydev, unsigned long *adv);
|
||||||
|
int genphy_c45_an_config_eee_aneg(struct phy_device *phydev);
|
||||||
|
int genphy_c45_read_eee_adv(struct phy_device *phydev, unsigned long *adv);
|
||||||
|
|
||||||
/* Generic C45 PHY driver */
|
/* Generic C45 PHY driver */
|
||||||
extern struct phy_driver genphy_c45_driver;
|
extern struct phy_driver genphy_c45_driver;
|
||||||
|
|
|
||||||
|
|
@@ -95,7 +95,6 @@ struct nf_ip_net {
 
 struct netns_ct {
 #ifdef CONFIG_NF_CONNTRACK_EVENTS
-	u8 ctnetlink_has_listener;
 	bool ecache_dwork_pending;
 #endif
 	u8 sysctl_log_invalid;	/* Log invalid packets */
@@ -1412,6 +1412,7 @@ struct sctp_stream_priorities {
 	/* The next stream in line */
 	struct sctp_stream_out_ext *next;
 	__u16 prio;
+	__u16 users;
 };
 
 struct sctp_stream_out_ext {
@@ -19,7 +19,7 @@
 * @NETDEV_XDP_ACT_XSK_ZEROCOPY: This feature informs if netdev supports AF_XDP
 *   in zero copy mode.
 * @NETDEV_XDP_ACT_HW_OFFLOAD: This feature informs if netdev supports XDP hw
- *   oflloading.
+ *   offloading.
 * @NETDEV_XDP_ACT_RX_SG: This feature informs if netdev implements non-linear
 *   XDP buffer support in the driver napi callback.
 * @NETDEV_XDP_ACT_NDO_XMIT_SG: This feature informs if netdev implements
@@ -1090,7 +1090,7 @@ static int do_replace_finish(struct net *net, struct ebt_replace *repl,
 
 	audit_log_nfcfg(repl->name, AF_BRIDGE, repl->nentries,
 			AUDIT_XT_OP_REPLACE, GFP_KERNEL);
-	return ret;
+	return 0;
 
 free_unlock:
 	mutex_unlock(&ebt_mutex);
@@ -3134,8 +3134,10 @@ void __dev_kfree_skb_any(struct sk_buff *skb, enum skb_free_reason reason)
 {
 	if (in_hardirq() || irqs_disabled())
 		__dev_kfree_skb_irq(skb, reason);
+	else if (unlikely(reason == SKB_REASON_DROPPED))
+		kfree_skb(skb);
 	else
-		dev_kfree_skb(skb);
+		consume_skb(skb);
 }
 EXPORT_SYMBOL(__dev_kfree_skb_any);
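The hunk above makes __dev_kfree_skb_any() tell buffers freed because they were dropped apart from buffers freed after normal processing, so drop accounting only sees real drops. A rough user-space sketch of the same split, with all names invented for illustration:

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for the reason-based free: dropped buffers and
 * consumed buffers take different accounting paths, mirroring the
 * kfree_skb() vs consume_skb() distinction. */
enum free_reason { REASON_CONSUMED, REASON_DROPPED };

static unsigned long drops, consumed;

static void buf_free(void *buf, enum free_reason reason)
{
	if (reason == REASON_DROPPED)
		drops++;	/* a kernel would feed drop tracepoints here */
	else
		consumed++;	/* normal completion, not an error */
	free(buf);
}

int main(void)
{
	buf_free(malloc(64), REASON_DROPPED);
	buf_free(malloc(64), REASON_CONSUMED);
	printf("drops=%lu consumed=%lu\n", drops, consumed);
	return 0;
}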
@@ -1525,6 +1525,10 @@ int arpt_register_table(struct net *net,
 
 	new_table = xt_register_table(net, table, &bootstrap, newinfo);
 	if (IS_ERR(new_table)) {
+		struct arpt_entry *iter;
+
+		xt_entry_foreach(iter, loc_cpu_entry, newinfo->size)
+			cleanup_entry(iter, net);
 		xt_free_table_info(newinfo);
 		return PTR_ERR(new_table);
 	}
@@ -1045,7 +1045,6 @@ __do_replace(struct net *net, const char *name, unsigned int valid_hooks,
 	struct xt_counters *counters;
 	struct ipt_entry *iter;
 
-	ret = 0;
 	counters = xt_counters_alloc(num_counters);
 	if (!counters) {
 		ret = -ENOMEM;
@@ -1091,7 +1090,7 @@ __do_replace(struct net *net, const char *name, unsigned int valid_hooks,
 		net_warn_ratelimited("iptables: counters copy to user failed while replacing table\n");
 	}
 	vfree(counters);
-	return ret;
+	return 0;
 
  put_module:
 	module_put(t->me);
@@ -1742,6 +1741,10 @@ int ipt_register_table(struct net *net, const struct xt_table *table,
 
 	new_table = xt_register_table(net, table, &bootstrap, newinfo);
 	if (IS_ERR(new_table)) {
+		struct ipt_entry *iter;
+
+		xt_entry_foreach(iter, loc_cpu_entry, newinfo->size)
+			cleanup_entry(iter, net);
 		xt_free_table_info(newinfo);
 		return PTR_ERR(new_table);
 	}
@@ -597,6 +597,9 @@ EXPORT_SYMBOL(tcp_create_openreq_child);
  * validation and inside tcp_v4_reqsk_send_ack(). Can we do better?
  *
  * We don't need to initialize tmp_opt.sack_ok as we don't use the results
+ *
+ * Note: If @fastopen is true, this can be called from process context.
+ *       Otherwise, this is from BH context.
  */
 
 struct sock *tcp_check_req(struct sock *sk, struct sk_buff *skb,
@@ -748,7 +751,7 @@ struct sock *tcp_check_req(struct sock *sk, struct sk_buff *skb,
 					  &tcp_rsk(req)->last_oow_ack_time))
 			req->rsk_ops->send_ack(sk, skb, req);
 		if (paws_reject)
-			__NET_INC_STATS(sock_net(sk), LINUX_MIB_PAWSESTABREJECTED);
+			NET_INC_STATS(sock_net(sk), LINUX_MIB_PAWSESTABREJECTED);
 		return NULL;
 	}
 
@@ -767,7 +770,7 @@ struct sock *tcp_check_req(struct sock *sk, struct sk_buff *skb,
 	 * "fourth, check the SYN bit"
 	 */
 	if (flg & (TCP_FLAG_RST|TCP_FLAG_SYN)) {
-		__TCP_INC_STATS(sock_net(sk), TCP_MIB_ATTEMPTFAILS);
+		TCP_INC_STATS(sock_net(sk), TCP_MIB_ATTEMPTFAILS);
 		goto embryonic_reset;
 	}
@@ -1062,7 +1062,6 @@ __do_replace(struct net *net, const char *name, unsigned int valid_hooks,
 	struct xt_counters *counters;
 	struct ip6t_entry *iter;
 
-	ret = 0;
 	counters = xt_counters_alloc(num_counters);
 	if (!counters) {
 		ret = -ENOMEM;
@@ -1108,7 +1107,7 @@ __do_replace(struct net *net, const char *name, unsigned int valid_hooks,
 		net_warn_ratelimited("ip6tables: counters copy to user failed while replacing table\n");
 	}
 	vfree(counters);
-	return ret;
+	return 0;
 
  put_module:
 	module_put(t->me);
@@ -1751,6 +1750,10 @@ int ip6t_register_table(struct net *net, const struct xt_table *table,
 
 	new_table = xt_register_table(net, table, &bootstrap, newinfo);
 	if (IS_ERR(new_table)) {
+		struct ip6t_entry *iter;
+
+		xt_entry_foreach(iter, loc_cpu_entry, newinfo->size)
+			cleanup_entry(iter, net);
 		xt_free_table_info(newinfo);
 		return PTR_ERR(new_table);
 	}
@@ -72,7 +72,9 @@ static bool rpfilter_lookup_reverse6(struct net *net, const struct sk_buff *skb,
 		goto out;
 	}
 
-	if (rt->rt6i_idev->dev == dev || (flags & XT_RPFILTER_LOOSE))
+	if (rt->rt6i_idev->dev == dev ||
+	    l3mdev_master_ifindex_rcu(rt->rt6i_idev->dev) == dev->ifindex ||
+	    (flags & XT_RPFILTER_LOOSE))
 		ret = true;
  out:
 	ip6_rt_put(rt);
@@ -5533,16 +5533,17 @@ static size_t rt6_nlmsg_size(struct fib6_info *f6i)
 		nexthop_for_each_fib6_nh(f6i->nh, rt6_nh_nlmsg_size,
 					 &nexthop_len);
 	} else {
+		struct fib6_info *sibling, *next_sibling;
 		struct fib6_nh *nh = f6i->fib6_nh;
 
 		nexthop_len = 0;
 		if (f6i->fib6_nsiblings) {
-			nexthop_len = nla_total_size(0)	 /* RTA_MULTIPATH */
-				    + NLA_ALIGN(sizeof(struct rtnexthop))
-				    + nla_total_size(16) /* RTA_GATEWAY */
-				    + lwtunnel_get_encap_size(nh->fib_nh_lws);
+			rt6_nh_nlmsg_size(nh, &nexthop_len);
 
-			nexthop_len *= f6i->fib6_nsiblings;
+			list_for_each_entry_safe(sibling, next_sibling,
+						 &f6i->fib6_siblings, fib6_siblings) {
+				rt6_nh_nlmsg_size(sibling->fib6_nh, &nexthop_len);
+			}
 		}
 		nexthop_len += lwtunnel_get_encap_size(nh->fib_nh_lws);
 	}
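The reworked rt6_nlmsg_size() walks each sibling nexthop instead of multiplying one sibling's size by the sibling count, because siblings may carry differently sized encapsulation. A stand-alone sketch of why the two estimates diverge, using invented numbers:

#include <stdio.h>

int main(void)
{
	/* Hypothetical per-sibling encapsulation sizes in bytes. */
	int encap[3] = { 0, 24, 40 };
	int base = 16;			/* fixed per-nexthop overhead */
	int multiplied, summed = 0, i;

	multiplied = (base + encap[0]) * 3;	/* old-style estimate */
	for (i = 0; i < 3; i++)
		summed += base + encap[i];	/* per-sibling walk */

	printf("multiplied=%d summed=%d\n", multiplied, summed);
	return 0;
}

With these numbers the multiplied estimate (48) undershoots the per-sibling sum (112), which is the kind of shortfall that makes the route notification fail with "Message too long" in the selftest added later in this series.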
@@ -669,6 +669,9 @@ const struct nf_ct_hook __rcu *nf_ct_hook __read_mostly;
 EXPORT_SYMBOL_GPL(nf_ct_hook);
 
 #if IS_ENABLED(CONFIG_NF_CONNTRACK)
+u8 nf_ctnetlink_has_listener;
+EXPORT_SYMBOL_GPL(nf_ctnetlink_has_listener);
+
 const struct nf_nat_hook __rcu *nf_nat_hook __read_mostly;
 EXPORT_SYMBOL_GPL(nf_nat_hook);
@@ -381,7 +381,6 @@ __bpf_kfunc struct nf_conn *bpf_ct_insert_entry(struct nf_conn___init *nfct_i)
 	struct nf_conn *nfct = (struct nf_conn *)nfct_i;
 	int err;
 
-	nfct->status |= IPS_CONFIRMED;
 	err = nf_conntrack_hash_check_insert(nfct);
 	if (err < 0) {
 		nf_conntrack_free(nfct);
@@ -884,10 +884,8 @@ nf_conntrack_hash_check_insert(struct nf_conn *ct)
 
 	zone = nf_ct_zone(ct);
 
-	if (!nf_ct_ext_valid_pre(ct->ext)) {
-		NF_CT_STAT_INC_ATOMIC(net, insert_failed);
-		return -ETIMEDOUT;
-	}
+	if (!nf_ct_ext_valid_pre(ct->ext))
+		return -EAGAIN;
 
 	local_bh_disable();
 	do {
@@ -922,6 +920,19 @@ nf_conntrack_hash_check_insert(struct nf_conn *ct)
 			goto chaintoolong;
 	}
 
+	/* If genid has changed, we can't insert anymore because ct
+	 * extensions could have stale pointers and nf_ct_iterate_destroy
+	 * might have completed its table scan already.
+	 *
+	 * Increment of the ext genid right after this check is fine:
+	 * nf_ct_iterate_destroy blocks until locks are released.
+	 */
+	if (!nf_ct_ext_valid_post(ct->ext)) {
+		err = -EAGAIN;
+		goto out;
+	}
+
+	ct->status |= IPS_CONFIRMED;
 	smp_wmb();
 	/* The caller holds a reference to this object */
 	refcount_set(&ct->ct_general.use, 2);
@@ -930,12 +941,6 @@ nf_conntrack_hash_check_insert(struct nf_conn *ct)
 	NF_CT_STAT_INC(net, insert);
 	local_bh_enable();
 
-	if (!nf_ct_ext_valid_post(ct->ext)) {
-		nf_ct_kill(ct);
-		NF_CT_STAT_INC_ATOMIC(net, drop);
-		return -ETIMEDOUT;
-	}
-
 	return 0;
 chaintoolong:
 	NF_CT_STAT_INC(net, chaintoolong);
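With the reordering above, the extension generation is re-checked and IPS_CONFIRMED is set before the entry is published through its reference count, so a stale entry can be rejected with -EAGAIN instead of having to be killed after it already became visible. A minimal single-threaded sketch of that validate-then-publish ordering, with hypothetical names:

#include <stdio.h>
#include <stdbool.h>

struct entry {
	bool confirmed;
	int refs;
	int ext_genid;
};

static int current_genid = 1;

static int entry_insert(struct entry *e)
{
	if (e->ext_genid != current_genid)
		return -1;		/* nothing was published, caller can retry */

	e->confirmed = true;		/* status set before... */
	e->refs = 2;			/* ...the entry becomes reachable */
	return 0;
}

int main(void)
{
	struct entry ok = { .ext_genid = 1 };
	struct entry stale = { .ext_genid = 0 };

	printf("ok=%d stale=%d\n", entry_insert(&ok), entry_insert(&stale));
	return 0;
}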
@@ -309,7 +309,7 @@ bool nf_ct_ecache_ext_add(struct nf_conn *ct, u16 ctmask, u16 expmask, gfp_t gfp)
 			break;
 		return true;
 	case 2: /* autodetect: no event listener, don't allocate extension. */
-		if (!READ_ONCE(net->ct.ctnetlink_has_listener))
+		if (!READ_ONCE(nf_ctnetlink_has_listener))
 			return true;
 		fallthrough;
 	case 1:
@@ -2316,9 +2316,6 @@ ctnetlink_create_conntrack(struct net *net,
 	nfct_seqadj_ext_add(ct);
 	nfct_synproxy_ext_add(ct);
 
-	/* we must add conntrack extensions before confirmation. */
-	ct->status |= IPS_CONFIRMED;
-
 	if (cda[CTA_STATUS]) {
 		err = ctnetlink_change_status(ct, cda);
 		if (err < 0)
@@ -2375,12 +2372,15 @@ ctnetlink_create_conntrack(struct net *net,
 
 	err = nf_conntrack_hash_check_insert(ct);
 	if (err < 0)
-		goto err2;
+		goto err3;
 
 	rcu_read_unlock();
 
 	return ct;
 
+err3:
+	if (ct->master)
+		nf_ct_put(ct->master);
 err2:
 	rcu_read_unlock();
 err1:
@@ -5507,7 +5507,7 @@ static int nf_tables_getsetelem(struct sk_buff *skb,
 	int rem, err = 0;
 
 	table = nft_table_lookup(net, nla[NFTA_SET_ELEM_LIST_TABLE], family,
-				 genmask, NETLINK_CB(skb).portid);
+				 genmask, 0);
 	if (IS_ERR(table)) {
 		NL_SET_BAD_ATTR(extack, nla[NFTA_SET_ELEM_LIST_TABLE]);
 		return PTR_ERR(table);
@@ -29,6 +29,7 @@
 
 #include <net/netlink.h>
 #include <net/netns/generic.h>
+#include <linux/netfilter.h>
 #include <linux/netfilter/nfnetlink.h>
 
 MODULE_LICENSE("GPL");
@@ -685,12 +686,12 @@ static void nfnetlink_bind_event(struct net *net, unsigned int group)
 	group_bit = (1 << group);
 
 	spin_lock(&nfnl_grp_active_lock);
-	v = READ_ONCE(net->ct.ctnetlink_has_listener);
+	v = READ_ONCE(nf_ctnetlink_has_listener);
 	if ((v & group_bit) == 0) {
 		v |= group_bit;
 
 		/* read concurrently without nfnl_grp_active_lock held. */
-		WRITE_ONCE(net->ct.ctnetlink_has_listener, v);
+		WRITE_ONCE(nf_ctnetlink_has_listener, v);
 	}
 
 	spin_unlock(&nfnl_grp_active_lock);
@@ -744,12 +745,12 @@ static void nfnetlink_unbind(struct net *net, int group)
 
 	spin_lock(&nfnl_grp_active_lock);
 	if (!nfnetlink_has_listeners(net, group)) {
-		u8 v = READ_ONCE(net->ct.ctnetlink_has_listener);
+		u8 v = READ_ONCE(nf_ctnetlink_has_listener);
 
 		v &= ~group_bit;
 
 		/* read concurrently without nfnl_grp_active_lock held. */
-		WRITE_ONCE(net->ct.ctnetlink_has_listener, v);
+		WRITE_ONCE(nf_ctnetlink_has_listener, v);
 	}
 	spin_unlock(&nfnl_grp_active_lock);
 #endif
@@ -30,8 +30,7 @@ static bool
 length_mt6(const struct sk_buff *skb, struct xt_action_param *par)
 {
 	const struct xt_length_info *info = par->matchinfo;
-	const u_int16_t pktlen = ntohs(ipv6_hdr(skb)->payload_len) +
-				 sizeof(struct ipv6hdr);
+	u32 pktlen = skb->len;
 
 	return (pktlen >= info->min && pktlen <= info->max) ^ info->invert;
 }
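Matching on skb->len sidesteps recomputing the packet length from the IPv6 header; with BIG TCP the fixed header's payload length field reads zero and the real size is carried elsewhere, so the header-derived value badly under-reports. A toy calculation of the mismatch, with made-up values:

#include <stdio.h>

int main(void)
{
	unsigned int ipv6hdr_len = 40;
	unsigned int payload_len_field = 0;	/* jumbogram: field is zero */
	unsigned int skb_len = 70000;		/* length the stack actually knows */

	unsigned int derived = payload_len_field + ipv6hdr_len;

	printf("header-derived=%u actual=%u\n", derived, skb_len);
	return 0;
}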
@@ -1442,7 +1442,11 @@ static int nfc_se_io(struct nfc_dev *dev, u32 se_idx,
 	rc = dev->ops->se_io(dev, se_idx, apdu,
 			     apdu_length, cb, cb_context);
 
+	device_unlock(&dev->dev);
+	return rc;
+
 error:
+	kfree(cb_context);
 	device_unlock(&dev->dev);
 	return rc;
 }
@@ -1596,12 +1596,12 @@ static int tca_get_fill(struct sk_buff *skb, struct tc_action *actions[],
 	if (tcf_action_dump(skb, actions, bind, ref, false) < 0)
 		goto out_nlmsg_trim;
 
-	nla_nest_end(skb, nest);
-
 	if (extack && extack->_msg &&
 	    nla_put_string(skb, TCA_EXT_WARN_MSG, extack->_msg))
 		goto out_nlmsg_trim;
 
+	nla_nest_end(skb, nest);
+
 	nlh->nlmsg_len = skb_tail_pointer(skb) - b;
 
 	return skb->len;
@@ -190,40 +190,67 @@ static int tcf_mpls_init(struct net *net, struct nlattr *nla,
 	parm = nla_data(tb[TCA_MPLS_PARMS]);
 	index = parm->index;
 
+	err = tcf_idr_check_alloc(tn, &index, a, bind);
+	if (err < 0)
+		return err;
+	exists = err;
+	if (exists && bind)
+		return 0;
+
+	if (!exists) {
+		ret = tcf_idr_create(tn, index, est, a, &act_mpls_ops, bind,
+				     true, flags);
+		if (ret) {
+			tcf_idr_cleanup(tn, index);
+			return ret;
+		}
+
+		ret = ACT_P_CREATED;
+	} else if (!(flags & TCA_ACT_FLAGS_REPLACE)) {
+		tcf_idr_release(*a, bind);
+		return -EEXIST;
+	}
+
 	/* Verify parameters against action type. */
 	switch (parm->m_action) {
 	case TCA_MPLS_ACT_POP:
 		if (!tb[TCA_MPLS_PROTO]) {
 			NL_SET_ERR_MSG_MOD(extack, "Protocol must be set for MPLS pop");
-			return -EINVAL;
+			err = -EINVAL;
+			goto release_idr;
 		}
 		if (!eth_proto_is_802_3(nla_get_be16(tb[TCA_MPLS_PROTO]))) {
 			NL_SET_ERR_MSG_MOD(extack, "Invalid protocol type for MPLS pop");
-			return -EINVAL;
+			err = -EINVAL;
+			goto release_idr;
 		}
 		if (tb[TCA_MPLS_LABEL] || tb[TCA_MPLS_TTL] || tb[TCA_MPLS_TC] ||
 		    tb[TCA_MPLS_BOS]) {
 			NL_SET_ERR_MSG_MOD(extack, "Label, TTL, TC or BOS cannot be used with MPLS pop");
-			return -EINVAL;
+			err = -EINVAL;
+			goto release_idr;
 		}
 		break;
 	case TCA_MPLS_ACT_DEC_TTL:
 		if (tb[TCA_MPLS_PROTO] || tb[TCA_MPLS_LABEL] ||
 		    tb[TCA_MPLS_TTL] || tb[TCA_MPLS_TC] || tb[TCA_MPLS_BOS]) {
 			NL_SET_ERR_MSG_MOD(extack, "Label, TTL, TC, BOS or protocol cannot be used with MPLS dec_ttl");
-			return -EINVAL;
+			err = -EINVAL;
+			goto release_idr;
 		}
 		break;
 	case TCA_MPLS_ACT_PUSH:
 	case TCA_MPLS_ACT_MAC_PUSH:
 		if (!tb[TCA_MPLS_LABEL]) {
 			NL_SET_ERR_MSG_MOD(extack, "Label is required for MPLS push");
-			return -EINVAL;
+			err = -EINVAL;
+			goto release_idr;
 		}
 		if (tb[TCA_MPLS_PROTO] &&
 		    !eth_p_mpls(nla_get_be16(tb[TCA_MPLS_PROTO]))) {
 			NL_SET_ERR_MSG_MOD(extack, "Protocol must be an MPLS type for MPLS push");
-			return -EPROTONOSUPPORT;
+			err = -EPROTONOSUPPORT;
+			goto release_idr;
 		}
 		/* Push needs a TTL - if not specified, set a default value. */
 		if (!tb[TCA_MPLS_TTL]) {
@@ -238,33 +265,14 @@ static int tcf_mpls_init(struct net *net, struct nlattr *nla,
 	case TCA_MPLS_ACT_MODIFY:
 		if (tb[TCA_MPLS_PROTO]) {
 			NL_SET_ERR_MSG_MOD(extack, "Protocol cannot be used with MPLS modify");
-			return -EINVAL;
+			err = -EINVAL;
+			goto release_idr;
 		}
 		break;
 	default:
 		NL_SET_ERR_MSG_MOD(extack, "Unknown MPLS action");
-		return -EINVAL;
-	}
-
-	err = tcf_idr_check_alloc(tn, &index, a, bind);
-	if (err < 0)
-		return err;
-	exists = err;
-	if (exists && bind)
-		return 0;
-
-	if (!exists) {
-		ret = tcf_idr_create(tn, index, est, a,
-				     &act_mpls_ops, bind, true, flags);
-		if (ret) {
-			tcf_idr_cleanup(tn, index);
-			return ret;
-		}
-
-		ret = ACT_P_CREATED;
-	} else if (!(flags & TCA_ACT_FLAGS_REPLACE)) {
-		tcf_idr_release(*a, bind);
-		return -EEXIST;
+		err = -EINVAL;
+		goto release_idr;
 	}
 
 	err = tcf_action_check_ctrlact(parm->action, tp, &goto_ch, extack);
@@ -181,26 +181,6 @@ static int tcf_pedit_init(struct net *net, struct nlattr *nla,
 	}
 
 	parm = nla_data(pattr);
-	if (!parm->nkeys) {
-		NL_SET_ERR_MSG_MOD(extack, "Pedit requires keys to be passed");
-		return -EINVAL;
-	}
-	ksize = parm->nkeys * sizeof(struct tc_pedit_key);
-	if (nla_len(pattr) < sizeof(*parm) + ksize) {
-		NL_SET_ERR_MSG_ATTR(extack, pattr, "Length of TCA_PEDIT_PARMS or TCA_PEDIT_PARMS_EX pedit attribute is invalid");
-		return -EINVAL;
-	}
-
-	nparms = kzalloc(sizeof(*nparms), GFP_KERNEL);
-	if (!nparms)
-		return -ENOMEM;
-
-	nparms->tcfp_keys_ex =
-		tcf_pedit_keys_ex_parse(tb[TCA_PEDIT_KEYS_EX], parm->nkeys);
-	if (IS_ERR(nparms->tcfp_keys_ex)) {
-		ret = PTR_ERR(nparms->tcfp_keys_ex);
-		goto out_free;
-	}
-
 	index = parm->index;
 	err = tcf_idr_check_alloc(tn, &index, a, bind);
@@ -209,25 +189,49 @@ static int tcf_pedit_init(struct net *net, struct nlattr *nla,
 				     &act_pedit_ops, bind, flags);
 		if (ret) {
 			tcf_idr_cleanup(tn, index);
-			goto out_free_ex;
+			return ret;
 		}
 		ret = ACT_P_CREATED;
 	} else if (err > 0) {
 		if (bind)
-			goto out_free;
+			return 0;
 		if (!(flags & TCA_ACT_FLAGS_REPLACE)) {
 			ret = -EEXIST;
 			goto out_release;
 		}
 	} else {
-		ret = err;
-		goto out_free_ex;
+		return err;
+	}
+
+	if (!parm->nkeys) {
+		NL_SET_ERR_MSG_MOD(extack, "Pedit requires keys to be passed");
+		ret = -EINVAL;
+		goto out_release;
+	}
+	ksize = parm->nkeys * sizeof(struct tc_pedit_key);
+	if (nla_len(pattr) < sizeof(*parm) + ksize) {
+		NL_SET_ERR_MSG_ATTR(extack, pattr, "Length of TCA_PEDIT_PARMS or TCA_PEDIT_PARMS_EX pedit attribute is invalid");
+		ret = -EINVAL;
+		goto out_release;
+	}
+
+	nparms = kzalloc(sizeof(*nparms), GFP_KERNEL);
+	if (!nparms) {
+		ret = -ENOMEM;
+		goto out_release;
+	}
+
+	nparms->tcfp_keys_ex =
+		tcf_pedit_keys_ex_parse(tb[TCA_PEDIT_KEYS_EX], parm->nkeys);
+	if (IS_ERR(nparms->tcfp_keys_ex)) {
+		ret = PTR_ERR(nparms->tcfp_keys_ex);
+		goto out_free;
 	}
 
 	err = tcf_action_check_ctrlact(parm->action, tp, &goto_ch, extack);
 	if (err < 0) {
 		ret = err;
-		goto out_release;
+		goto out_free_ex;
 	}
 
 	nparms->tcfp_off_max_hint = 0;
@@ -278,12 +282,12 @@ static int tcf_pedit_init(struct net *net, struct nlattr *nla,
 put_chain:
 	if (goto_ch)
 		tcf_chain_put_by_act(goto_ch);
-out_release:
-	tcf_idr_release(*a, bind);
 out_free_ex:
 	kfree(nparms->tcfp_keys_ex);
 out_free:
 	kfree(nparms);
+out_release:
+	tcf_idr_release(*a, bind);
 	return ret;
 }
@@ -55,8 +55,8 @@ static int tcf_sample_init(struct net *net, struct nlattr *nla,
 					  sample_policy, NULL);
 	if (ret < 0)
 		return ret;
-	if (!tb[TCA_SAMPLE_PARMS] || !tb[TCA_SAMPLE_RATE] ||
-	    !tb[TCA_SAMPLE_PSAMPLE_GROUP])
+
+	if (!tb[TCA_SAMPLE_PARMS])
 		return -EINVAL;
 
 	parm = nla_data(tb[TCA_SAMPLE_PARMS]);
@@ -80,6 +80,13 @@ static int tcf_sample_init(struct net *net, struct nlattr *nla,
 		tcf_idr_release(*a, bind);
 		return -EEXIST;
 	}
+
+	if (!tb[TCA_SAMPLE_RATE] || !tb[TCA_SAMPLE_PSAMPLE_GROUP]) {
+		NL_SET_ERR_MSG(extack, "sample rate and group are required");
+		err = -EINVAL;
+		goto release_idr;
+	}
+
 	err = tcf_action_check_ctrlact(parm->action, tp, &goto_ch, extack);
 	if (err < 0)
 		goto release_idr;
@@ -3241,9 +3241,9 @@ EXPORT_SYMBOL(tcf_exts_init_ex);
 
 void tcf_exts_destroy(struct tcf_exts *exts)
 {
-#ifdef CONFIG_NET_CLS_ACT
 	tcf_exts_miss_cookie_base_destroy(exts);
 
+#ifdef CONFIG_NET_CLS_ACT
 	if (exts->actions) {
 		tcf_action_destroy(exts->actions, TCA_ACT_UNBIND);
 		kfree(exts->actions);
@@ -25,6 +25,18 @@
 
 static void sctp_sched_prio_unsched_all(struct sctp_stream *stream);
 
+static struct sctp_stream_priorities *sctp_sched_prio_head_get(struct sctp_stream_priorities *p)
+{
+	p->users++;
+	return p;
+}
+
+static void sctp_sched_prio_head_put(struct sctp_stream_priorities *p)
+{
+	if (p && --p->users == 0)
+		kfree(p);
+}
+
 static struct sctp_stream_priorities *sctp_sched_prio_new_head(
 			struct sctp_stream *stream, int prio, gfp_t gfp)
 {
@@ -38,6 +50,7 @@ static struct sctp_stream_priorities *sctp_sched_prio_new_head(
 	INIT_LIST_HEAD(&p->active);
 	p->next = NULL;
 	p->prio = prio;
+	p->users = 1;
 
 	return p;
 }
@@ -53,7 +66,7 @@ static struct sctp_stream_priorities *sctp_sched_prio_get_head(
 	 */
 	list_for_each_entry(p, &stream->prio_list, prio_sched) {
 		if (p->prio == prio)
-			return p;
+			return sctp_sched_prio_head_get(p);
 		if (p->prio > prio)
 			break;
 	}
@@ -70,7 +83,7 @@ static struct sctp_stream_priorities *sctp_sched_prio_get_head(
 			 */
 			break;
 		if (p->prio == prio)
-			return p;
+			return sctp_sched_prio_head_get(p);
 	}
 
 	/* If not even there, allocate a new one. */
@@ -154,32 +167,21 @@ static int sctp_sched_prio_set(struct sctp_stream *stream, __u16 sid,
 	struct sctp_stream_out_ext *soute = sout->ext;
 	struct sctp_stream_priorities *prio_head, *old;
 	bool reschedule = false;
-	int i;
+
+	old = soute->prio_head;
+	if (old && old->prio == prio)
+		return 0;
 
 	prio_head = sctp_sched_prio_get_head(stream, prio, gfp);
 	if (!prio_head)
 		return -ENOMEM;
 
 	reschedule = sctp_sched_prio_unsched(soute);
-	old = soute->prio_head;
 	soute->prio_head = prio_head;
 	if (reschedule)
 		sctp_sched_prio_sched(stream, soute);
 
-	if (!old)
-		/* Happens when we set the priority for the first time */
-		return 0;
-
-	for (i = 0; i < stream->outcnt; i++) {
-		soute = SCTP_SO(stream, i)->ext;
-		if (soute && soute->prio_head == old)
-			/* It's still in use, nothing else to do here. */
-			return 0;
-	}
-
-	/* No hits, we are good to free it. */
-	kfree(old);
-
+	sctp_sched_prio_head_put(old);
 	return 0;
 }
@@ -206,20 +208,8 @@ static int sctp_sched_prio_init_sid(struct sctp_stream *stream, __u16 sid,
 
 static void sctp_sched_prio_free_sid(struct sctp_stream *stream, __u16 sid)
 {
-	struct sctp_stream_priorities *prio = SCTP_SO(stream, sid)->ext->prio_head;
-	int i;
-
-	if (!prio)
-		return;
-
+	sctp_sched_prio_head_put(SCTP_SO(stream, sid)->ext->prio_head);
 	SCTP_SO(stream, sid)->ext->prio_head = NULL;
-	for (i = 0; i < stream->outcnt; i++) {
-		if (SCTP_SO(stream, i)->ext &&
-		    SCTP_SO(stream, i)->ext->prio_head == prio)
-			return;
-	}
-
-	kfree(prio);
 }
 
 static void sctp_sched_prio_enqueue(struct sctp_outq *q,
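The users counter introduced above lets a priority head be released by plain reference counting rather than by scanning every stream for remaining users. A small user-space sketch of the get/put pattern, with hypothetical names:

#include <stdio.h>
#include <stdlib.h>

struct prio_head {
	int prio;
	int users;
};

static struct prio_head *head_get(struct prio_head *p)
{
	p->users++;
	return p;
}

static void head_put(struct prio_head *p)
{
	if (p && --p->users == 0) {
		printf("freeing head prio=%d\n", p->prio);
		free(p);
	}
}

int main(void)
{
	struct prio_head *p = malloc(sizeof(*p));

	if (!p)
		return 1;
	p->prio = 3;
	p->users = 1;	/* creator's reference */

	head_get(p);	/* a second stream reuses the same head */
	head_put(p);	/* one stream drops its priority */
	head_put(p);	/* last reference gone: freed here */
	return 0;
}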
@@ -641,8 +641,8 @@ static void wireless_warn_cfg80211_wext(void)
 {
 	char name[sizeof(current->comm)];
 
-	pr_warn_ratelimited("warning: `%s' uses wireless extensions that are deprecated for modern drivers; use nl80211\n",
+	pr_warn_once("warning: `%s' uses wireless extensions which will stop working for Wi-Fi 7 hardware; use nl80211\n",
 		     get_task_comm(name, current));
 }
 #endif
@@ -19,7 +19,7 @@
 * @NETDEV_XDP_ACT_XSK_ZEROCOPY: This feature informs if netdev supports AF_XDP
 *   in zero copy mode.
 * @NETDEV_XDP_ACT_HW_OFFLOAD: This feature informs if netdev supports XDP hw
- *   oflloading.
+ *   offloading.
 * @NETDEV_XDP_ACT_RX_SG: This feature informs if netdev implements non-linear
 *   XDP buffer support in the driver napi callback.
 * @NETDEV_XDP_ACT_NDO_XMIT_SG: This feature informs if netdev implements
@@ -0,0 +1 @@
+__pycache__/
@@ -3,7 +3,6 @@
 import collections
 import importlib
 import os
-import traceback
 import yaml
 
 
@@ -234,8 +233,7 @@ class SpecFamily(SpecElement):
                 resolved.append(elem)
 
             if len(resolved) == 0:
-                traceback.print_exception(last_exception)
-                raise Exception("Could not resolve any spec element, infinite loop?")
+                raise last_exception
 
     def new_attr_set(self, elem):
         return SpecAttrSet(self, elem)
@@ -546,7 +546,7 @@ class Struct:
         max_val = 0
         self.attr_max_val = None
         for name, attr in self.attr_list:
-            if attr.value > max_val:
+            if attr.value >= max_val:
                 max_val = attr.value
                 self.attr_max_val = attr
             self.attrs[name] = attr
@@ -9,7 +9,7 @@ ret=0
 ksft_skip=4
 
 # all tests in this script. Can be overridden with -t option
-TESTS="unregister down carrier nexthop suppress ipv6_rt ipv4_rt ipv6_addr_metric ipv4_addr_metric ipv6_route_metrics ipv4_route_metrics ipv4_route_v6_gw rp_filter ipv4_del_addr ipv4_mangle ipv6_mangle ipv4_bcast_neigh"
+TESTS="unregister down carrier nexthop suppress ipv6_notify ipv4_notify ipv6_rt ipv4_rt ipv6_addr_metric ipv4_addr_metric ipv6_route_metrics ipv4_route_metrics ipv4_route_v6_gw rp_filter ipv4_del_addr ipv4_mangle ipv6_mangle ipv4_bcast_neigh"
 
 VERBOSE=0
 PAUSE_ON_FAIL=no
@@ -655,6 +655,98 @@ fib_nexthop_test()
 	cleanup
 }
 
+fib6_notify_test()
+{
+	setup
+
+	echo
+	echo "Fib6 info length calculation in route notify test"
+	set -e
+
+	for i in 10 20 30 40 50 60 70;
+	do
+		$IP link add dummy_$i type dummy
+		$IP link set dev dummy_$i up
+		$IP -6 address add 2001:$i::1/64 dev dummy_$i
+	done
+
+	$NS_EXEC ip monitor route &> errors.txt &
+	sleep 2
+
+	$IP -6 route add 2001::/64 \
+		nexthop via 2001:10::2 dev dummy_10 \
+		nexthop encap ip6 dst 2002::20 via 2001:20::2 dev dummy_20 \
+		nexthop encap ip6 dst 2002::30 via 2001:30::2 dev dummy_30 \
+		nexthop encap ip6 dst 2002::40 via 2001:40::2 dev dummy_40 \
+		nexthop encap ip6 dst 2002::50 via 2001:50::2 dev dummy_50 \
+		nexthop encap ip6 dst 2002::60 via 2001:60::2 dev dummy_60 \
+		nexthop encap ip6 dst 2002::70 via 2001:70::2 dev dummy_70
+
+	set +e
+
+	err=`cat errors.txt |grep "Message too long"`
+	if [ -z "$err" ];then
+		ret=0
+	else
+		ret=1
+	fi
+
+	log_test $ret 0 "ipv6 route add notify"
+
+	{ kill %% && wait %%; } 2>/dev/null
+
+	#rm errors.txt
+
+	cleanup &> /dev/null
+}
+
+fib_notify_test()
+{
+	setup
+
+	echo
+	echo "Fib4 info length calculation in route notify test"
+
+	set -e
+
+	for i in 10 20 30 40 50 60 70;
+	do
+		$IP link add dummy_$i type dummy
+		$IP link set dev dummy_$i up
+		$IP address add 20.20.$i.2/24 dev dummy_$i
+	done
+
+	$NS_EXEC ip monitor route &> errors.txt &
+	sleep 2
+
+	$IP route add 10.0.0.0/24 \
+		nexthop via 20.20.10.1 dev dummy_10 \
+		nexthop encap ip dst 192.168.10.20 via 20.20.20.1 dev dummy_20 \
+		nexthop encap ip dst 192.168.10.30 via 20.20.30.1 dev dummy_30 \
+		nexthop encap ip dst 192.168.10.40 via 20.20.40.1 dev dummy_40 \
+		nexthop encap ip dst 192.168.10.50 via 20.20.50.1 dev dummy_50 \
+		nexthop encap ip dst 192.168.10.60 via 20.20.60.1 dev dummy_60 \
+		nexthop encap ip dst 192.168.10.70 via 20.20.70.1 dev dummy_70
+
+	set +e
+
+	err=`cat errors.txt |grep "Message too long"`
+	if [ -z "$err" ];then
+		ret=0
+	else
+		ret=1
+	fi
+
+	log_test $ret 0 "ipv4 route add notify"
+
+	{ kill %% && wait %%; } 2>/dev/null
+
+	rm errors.txt
+
+	cleanup &> /dev/null
+}
+
 fib_suppress_test()
 {
 	echo
@@ -2111,6 +2203,8 @@ do
 	fib_carrier_test|carrier)	fib_carrier_test;;
 	fib_rp_filter_test|rp_filter)	fib_rp_filter_test;;
 	fib_nexthop_test|nexthop)	fib_nexthop_test;;
+	fib_notify_test|ipv4_notify)	fib_notify_test;;
+	fib6_notify_test|ipv6_notify)	fib6_notify_test;;
 	fib_suppress_test|suppress)	fib_suppress_test;;
 	ipv6_route_test|ipv6_rt)	ipv6_route_test;;
 	ipv4_route_test|ipv4_rt)	ipv4_route_test;;
@@ -62,10 +62,16 @@ ip -net "$ns1" a a fec0:42::2/64 dev v0 nodad
 ip -net "$ns2" a a fec0:42::1/64 dev d0 nodad
 
 # firewall matches to test
-[ -n "$iptables" ] && ip netns exec "$ns2" \
-	"$iptables" -t raw -A PREROUTING -s 192.168.0.0/16 -m rpfilter
-[ -n "$ip6tables" ] && ip netns exec "$ns2" \
-	"$ip6tables" -t raw -A PREROUTING -s fec0::/16 -m rpfilter
+[ -n "$iptables" ] && {
+	common='-t raw -A PREROUTING -s 192.168.0.0/16'
+	ip netns exec "$ns2" "$iptables" $common -m rpfilter
+	ip netns exec "$ns2" "$iptables" $common -m rpfilter --invert
+}
+[ -n "$ip6tables" ] && {
+	common='-t raw -A PREROUTING -s fec0::/16'
+	ip netns exec "$ns2" "$ip6tables" $common -m rpfilter
+	ip netns exec "$ns2" "$ip6tables" $common -m rpfilter --invert
+}
 [ -n "$nft" ] && ip netns exec "$ns2" $nft -f - <<EOF
 table inet t {
 	chain c {
@@ -89,6 +95,11 @@ ipt_zero_rule() { # (command)
 	[ -n "$1" ] || return 0
 	ip netns exec "$ns2" "$1" -t raw -vS | grep -q -- "-m rpfilter -c 0 0"
 }
+ipt_zero_reverse_rule() { # (command)
+	[ -n "$1" ] || return 0
+	ip netns exec "$ns2" "$1" -t raw -vS | \
+		grep -q -- "-m rpfilter --invert -c 0 0"
+}
 nft_zero_rule() { # (family)
 	[ -n "$nft" ] || return 0
 	ip netns exec "$ns2" "$nft" list chain inet t c | \
@@ -101,8 +112,7 @@ netns_ping() { # (netns, args...)
 	ip netns exec "$netns" ping -q -c 1 -W 1 "$@" >/dev/null
 }
 
-testrun() {
-	# clear counters first
+clear_counters() {
 	[ -n "$iptables" ] && ip netns exec "$ns2" "$iptables" -t raw -Z
 	[ -n "$ip6tables" ] && ip netns exec "$ns2" "$ip6tables" -t raw -Z
 	if [ -n "$nft" ]; then
@@ -111,6 +121,10 @@ testrun() {
 		 ip netns exec "$ns2" $nft -s list table inet t;
 		) | ip netns exec "$ns2" $nft -f -
 	fi
+}
+
+testrun() {
+	clear_counters
 
 	# test 1: martian traffic should fail rpfilter matches
 	netns_ping "$ns1" -I v0 192.168.42.1 && \
@@ -120,9 +134,13 @@ testrun() {
 
 	ipt_zero_rule "$iptables" || die "iptables matched martian"
 	ipt_zero_rule "$ip6tables" || die "ip6tables matched martian"
+	ipt_zero_reverse_rule "$iptables" && die "iptables not matched martian"
+	ipt_zero_reverse_rule "$ip6tables" && die "ip6tables not matched martian"
 	nft_zero_rule ip || die "nft IPv4 matched martian"
 	nft_zero_rule ip6 || die "nft IPv6 matched martian"
 
+	clear_counters
+
 	# test 2: rpfilter match should pass for regular traffic
 	netns_ping "$ns1" 192.168.23.1 || \
 		die "regular ping 192.168.23.1 failed"
@@ -131,6 +149,8 @@ testrun() {
 
 	ipt_zero_rule "$iptables" && die "iptables match not effective"
 	ipt_zero_rule "$ip6tables" && die "ip6tables match not effective"
+	ipt_zero_reverse_rule "$iptables" || die "iptables match over-effective"
+	ipt_zero_reverse_rule "$ip6tables" || die "ip6tables match over-effective"
 	nft_zero_rule ip && die "nft IPv4 match not effective"
 	nft_zero_rule ip6 && die "nft IPv6 match not effective"