Merge tag 'net-6.18-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
 "Including fixes from can. Slim pickings, I'm guessing people haven't
  really started testing.

  Current release - new code bugs:

   - eth: mlx5e:
       - psp: avoid 'accel' NULL pointer dereference
       - skip PPHCR register query for FEC histogram if not supported

  Previous releases - regressions:

   - bonding: update the slave array for broadcast mode

   - rtnetlink: re-allow deleting FDB entries in user namespace

   - eth: dpaa2: fix the pointer passed to PTR_ALIGN on Tx path

  Previous releases - always broken:

   - can: drop skb on xmit if device is in listen-only mode

   - gro: clear skb_shinfo(skb)->hwtstamps in napi_reuse_skb()

   - eth: mlx5e
       - RX, fix generating skb from non-linear xdp_buff if program
         trims frags
       - make devcom init failures non-fatal, fix races with IPSec

  Misc:

   - some documentation formatting 'fixes'"

* tag 'net-6.18-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (47 commits)
  net/mlx5: Fix IPsec cleanup over MPV device
  net/mlx5: Refactor devcom to return NULL on failure
  net/mlx5e: Skip PPHCR register query if not supported by the device
  net/mlx5: Add PPHCR to PCAM supported registers mask
  virtio-net: zero unused hash fields
  net: phy: micrel: always set shared->phydev for LAN8814
  vsock: fix lock inversion in vsock_assign_transport()
  ovpn: use datagram_poll_queue for socket readiness in TCP
  espintcp: use datagram_poll_queue for socket readiness
  net: datagram: introduce datagram_poll_queue for custom receive queues
  net: bonding: fix possible peer notify event loss or dup issue
  net: hsr: prevent creation of HSR device with slaves from another netns
  sctp: avoid NULL dereference when chunk data buffer is missing
  ptp: ocp: Fix typo using index 1 instead of i in SMA initialization loop
  net: ravb: Ensure memory write completes before ringing TX doorbell
  net: ravb: Enforce descriptor type ordering
  net: hibmcge: select FIXED_PHY
  net: dlink: use dev_kfree_skb_any instead of dev_kfree_skb
  Documentation: networking: ax25: update the mailing list info.
  net: gro_cells: fix lock imbalance in gro_cells_receive()
  ...
Linus Torvalds 2025-10-23 07:03:18 -10:00
commit ab431bc397
50 changed files with 451 additions and 263 deletions


@ -11,6 +11,7 @@ found on https://linux-ax25.in-berlin.de.
There is a mailing list for discussing Linux amateur radio matters
called linux-hams@vger.kernel.org. To subscribe to it, send a message to
majordomo@vger.kernel.org with the words "subscribe linux-hams" in the body
of the message, the subject field is ignored. You don't need to be
subscribed to post but of course that means you might miss an answer.
linux-hams+subscribe@vger.kernel.org or use the web interface at
https://vger.kernel.org. The subject and body of the message are
ignored. You don't need to be subscribed to post but of course that
means you might miss an answer.


@ -137,16 +137,20 @@ d. Checksum offload header v5
Checksum offload header fields are in big endian format.
Packet format::
Bit 0 - 6 7 8-15 16-31
Function Header Type Next Header Checksum Valid Reserved
Header Type is to indicate the type of header, this usually is set to CHECKSUM
Header types
= ==========================================
= ===============
0 Reserved
1 Reserved
2 checksum header
= ===============
Checksum Valid is to indicate whether the header checksum is valid. Value of 1
implies that checksum is calculated on this packet and is valid, value of 0
@ -183,9 +187,11 @@ rmnet in a single linear skb. rmnet will process the individual
packets and either ACK the MAP command or deliver the IP packet to the
network stack as needed
MAP header|IP Packet|Optional padding|MAP header|IP Packet|Optional padding....
Packet format::
MAP header|IP Packet|Optional padding|MAP header|Command Packet|Optional pad...
MAP header|IP Packet|Optional padding|MAP header|IP Packet|Optional padding....
MAP header|IP Packet|Optional padding|MAP header|Command Packet|Optional pad...
3. Userspace configuration
==========================
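For illustration only (this sketch is not part of the patch above, and the struct and field names are not the in-tree rmnet definitions): the table gives the v5 checksum offload header as a header type in bits 0-6, a next-header flag in bit 7, a checksum-valid octet in bits 8-15 and a reserved halfword in bits 16-31. Assuming "bit 0" means the most significant bit of the first octet, the fields can be pulled out as follows:

    #include <linux/types.h>

    /* Illustrative names only, not the in-tree rmnet structures. */
    struct map_v5_csum_fields {
            u8 header_type;  /* bits 0-6, usually 2: checksum header */
            u8 next_hdr;     /* bit 7, set when another header follows */
            u8 csum_valid;   /* bits 8-15, 1 if hardware validated the checksum */
    };

    static void parse_map_v5_csum(const u8 *hdr, struct map_v5_csum_fields *out)
    {
            out->header_type = hdr[0] >> 1;   /* top seven bits of octet 0 */
            out->next_hdr    = hdr[0] & 0x1;  /* least significant bit of octet 0 */
            out->csum_valid  = hdr[1];        /* octet 1 */
            /* octets 2 and 3 are reserved */
    }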


@ -96,9 +96,8 @@ needed to these network configuration daemons to make sure that an IP is
received only on the 'failover' device.
Below is the patch snippet used with 'cloud-ifupdown-helper' script found on
Debian cloud images:
Debian cloud images::
::
@@ -27,6 +27,8 @@ do_setup() {
local working="$cfgdir/.$INTERFACE"
local final="$cfgdir/$INTERFACE"
@ -172,9 +171,8 @@ appropriate FDB entry is added.
The following script is executed on the destination hypervisor once migration
completes, and it reattaches the VF to the VM and brings down the virtio-net
interface.
interface::
::
# reattach-vf.sh
#!/bin/bash


@ -2287,7 +2287,9 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev,
unblock_netpoll_tx();
}
if (bond_mode_can_use_xmit_hash(bond))
/* broadcast mode uses the all_slaves to loop through slaves. */
if (bond_mode_can_use_xmit_hash(bond) ||
BOND_MODE(bond) == BOND_MODE_BROADCAST)
bond_update_slave_arr(bond, NULL);
if (!slave_dev->netdev_ops->ndo_bpf ||
@ -2463,7 +2465,8 @@ static int __bond_release_one(struct net_device *bond_dev,
bond_upper_dev_unlink(bond, slave);
if (bond_mode_can_use_xmit_hash(bond))
if (bond_mode_can_use_xmit_hash(bond) ||
BOND_MODE(bond) == BOND_MODE_BROADCAST)
bond_update_slave_arr(bond, slave);
slave_info(bond_dev, slave_dev, "Releasing %s interface\n",
@ -2871,7 +2874,7 @@ static void bond_mii_monitor(struct work_struct *work)
{
struct bonding *bond = container_of(work, struct bonding,
mii_work.work);
bool should_notify_peers = false;
bool should_notify_peers;
bool commit;
unsigned long delay;
struct slave *slave;
@ -2883,30 +2886,33 @@ static void bond_mii_monitor(struct work_struct *work)
goto re_arm;
rcu_read_lock();
should_notify_peers = bond_should_notify_peers(bond);
commit = !!bond_miimon_inspect(bond);
if (bond->send_peer_notif) {
rcu_read_unlock();
if (rtnl_trylock()) {
bond->send_peer_notif--;
rtnl_unlock();
}
} else {
rcu_read_unlock();
}
if (commit) {
rcu_read_unlock();
if (commit || bond->send_peer_notif) {
/* Race avoidance with bond_close cancel of workqueue */
if (!rtnl_trylock()) {
delay = 1;
should_notify_peers = false;
goto re_arm;
}
bond_for_each_slave(bond, slave, iter) {
bond_commit_link_state(slave, BOND_SLAVE_NOTIFY_LATER);
if (commit) {
bond_for_each_slave(bond, slave, iter) {
bond_commit_link_state(slave,
BOND_SLAVE_NOTIFY_LATER);
}
bond_miimon_commit(bond);
}
if (bond->send_peer_notif) {
bond->send_peer_notif--;
if (should_notify_peers)
call_netdevice_notifiers(NETDEV_NOTIFY_PEERS,
bond->dev);
}
bond_miimon_commit(bond);
rtnl_unlock(); /* might sleep, hold no other locks */
}
@ -2914,13 +2920,6 @@ static void bond_mii_monitor(struct work_struct *work)
re_arm:
if (bond->params.miimon)
queue_delayed_work(bond->wq, &bond->mii_work, delay);
if (should_notify_peers) {
if (!rtnl_trylock())
return;
call_netdevice_notifiers(NETDEV_NOTIFY_PEERS, bond->dev);
rtnl_unlock();
}
}
static int bond_upper_dev_walk(struct net_device *upper,


@ -842,7 +842,7 @@ static netdev_tx_t bxcan_start_xmit(struct sk_buff *skb,
u32 id;
int i, j;
if (can_dropped_invalid_skb(ndev, skb))
if (can_dev_dropped_skb(ndev, skb))
return NETDEV_TX_OK;
if (bxcan_tx_busy(priv))


@ -452,7 +452,9 @@ static int can_changelink(struct net_device *dev, struct nlattr *tb[],
}
if (data[IFLA_CAN_RESTART_MS]) {
if (!priv->do_set_mode) {
unsigned int restart_ms = nla_get_u32(data[IFLA_CAN_RESTART_MS]);
if (restart_ms != 0 && !priv->do_set_mode) {
NL_SET_ERR_MSG(extack,
"Device doesn't support restart from Bus Off");
return -EOPNOTSUPP;
@ -461,7 +463,7 @@ static int can_changelink(struct net_device *dev, struct nlattr *tb[],
/* Do not allow changing restart delay while running */
if (dev->flags & IFF_UP)
return -EBUSY;
priv->restart_ms = nla_get_u32(data[IFLA_CAN_RESTART_MS]);
priv->restart_ms = restart_ms;
}
if (data[IFLA_CAN_RESTART]) {


@ -254,7 +254,7 @@ netdev_tx_t acc_start_xmit(struct sk_buff *skb, struct net_device *netdev)
u32 acc_id;
u32 acc_dlc;
if (can_dropped_invalid_skb(netdev, skb))
if (can_dev_dropped_skb(netdev, skb))
return NETDEV_TX_OK;
/* Access core->tx_fifo_tail only once because it may be changed


@ -72,7 +72,7 @@ netdev_tx_t rkcanfd_start_xmit(struct sk_buff *skb, struct net_device *ndev)
int err;
u8 i;
if (can_dropped_invalid_skb(ndev, skb))
if (can_dev_dropped_skb(ndev, skb))
return NETDEV_TX_OK;
if (!netif_subqueue_maybe_stop(priv->ndev, 0,


@ -733,7 +733,7 @@ start_xmit (struct sk_buff *skb, struct net_device *dev)
u64 tfc_vlan_tag = 0;
if (np->link_status == 0) { /* Link Down */
dev_kfree_skb(skb);
dev_kfree_skb_any(skb);
return NETDEV_TX_OK;
}
entry = np->cur_tx % TX_RING_SIZE;


@ -1077,8 +1077,7 @@ static int dpaa2_eth_build_single_fd(struct dpaa2_eth_priv *priv,
dma_addr_t addr;
buffer_start = skb->data - dpaa2_eth_needed_headroom(skb);
aligned_start = PTR_ALIGN(buffer_start - DPAA2_ETH_TX_BUF_ALIGN,
DPAA2_ETH_TX_BUF_ALIGN);
aligned_start = PTR_ALIGN(buffer_start, DPAA2_ETH_TX_BUF_ALIGN);
if (aligned_start >= skb->head)
buffer_start = aligned_start;
else


@ -1595,6 +1595,8 @@ static int enetc_clean_rx_ring(struct enetc_bdr *rx_ring,
/* next descriptor to process */
i = rx_ring->next_to_clean;
enetc_lock_mdio();
while (likely(rx_frm_cnt < work_limit)) {
union enetc_rx_bd *rxbd;
struct sk_buff *skb;
@ -1630,7 +1632,9 @@ static int enetc_clean_rx_ring(struct enetc_bdr *rx_ring,
rx_byte_cnt += skb->len + ETH_HLEN;
rx_frm_cnt++;
enetc_unlock_mdio();
napi_gro_receive(napi, skb);
enetc_lock_mdio();
}
rx_ring->next_to_clean = i;
@ -1638,6 +1642,8 @@ static int enetc_clean_rx_ring(struct enetc_bdr *rx_ring,
rx_ring->stats.packets += rx_frm_cnt;
rx_ring->stats.bytes += rx_byte_cnt;
enetc_unlock_mdio();
return rx_frm_cnt;
}
@ -1947,6 +1953,8 @@ static int enetc_clean_rx_ring_xdp(struct enetc_bdr *rx_ring,
/* next descriptor to process */
i = rx_ring->next_to_clean;
enetc_lock_mdio();
while (likely(rx_frm_cnt < work_limit)) {
union enetc_rx_bd *rxbd, *orig_rxbd;
struct xdp_buff xdp_buff;
@ -2010,7 +2018,9 @@ static int enetc_clean_rx_ring_xdp(struct enetc_bdr *rx_ring,
*/
enetc_bulk_flip_buff(rx_ring, orig_i, i);
enetc_unlock_mdio();
napi_gro_receive(napi, skb);
enetc_lock_mdio();
break;
case XDP_TX:
tx_ring = priv->xdp_tx_ring[rx_ring->index];
@ -2045,7 +2055,9 @@ static int enetc_clean_rx_ring_xdp(struct enetc_bdr *rx_ring,
}
break;
case XDP_REDIRECT:
enetc_unlock_mdio();
err = xdp_do_redirect(rx_ring->ndev, &xdp_buff, prog);
enetc_lock_mdio();
if (unlikely(err)) {
enetc_xdp_drop(rx_ring, orig_i, i);
rx_ring->stats.xdp_redirect_failures++;
@ -2065,8 +2077,11 @@ static int enetc_clean_rx_ring_xdp(struct enetc_bdr *rx_ring,
rx_ring->stats.packets += rx_frm_cnt;
rx_ring->stats.bytes += rx_byte_cnt;
if (xdp_redirect_frm_cnt)
if (xdp_redirect_frm_cnt) {
enetc_unlock_mdio();
xdp_do_flush();
enetc_lock_mdio();
}
if (xdp_tx_frm_cnt)
enetc_update_tx_ring_tail(tx_ring);
@ -2075,6 +2090,8 @@ static int enetc_clean_rx_ring_xdp(struct enetc_bdr *rx_ring,
enetc_refill_rx_ring(rx_ring, enetc_bd_unused(rx_ring) -
rx_ring->xdp.xdp_tx_in_flight);
enetc_unlock_mdio();
return rx_frm_cnt;
}
@ -2093,6 +2110,7 @@ static int enetc_poll(struct napi_struct *napi, int budget)
for (i = 0; i < v->count_tx_rings; i++)
if (!enetc_clean_tx_ring(&v->tx_ring[i], budget))
complete = false;
enetc_unlock_mdio();
prog = rx_ring->xdp.prog;
if (prog)
@ -2104,10 +2122,8 @@ static int enetc_poll(struct napi_struct *napi, int budget)
if (work_done)
v->rx_napi_work = true;
if (!complete) {
enetc_unlock_mdio();
if (!complete)
return budget;
}
napi_complete_done(napi, work_done);
@ -2116,6 +2132,7 @@ static int enetc_poll(struct napi_struct *napi, int budget)
v->rx_napi_work = false;
enetc_lock_mdio();
/* enable interrupts */
enetc_wr_reg_hot(v->rbier, ENETC_RBIER_RXTIE);


@ -76,7 +76,7 @@ struct enetc_lso_t {
#define ENETC_LSO_MAX_DATA_LEN SZ_256K
#define ENETC_RX_MAXFRM_SIZE ENETC_MAC_MAXFRM_SIZE
#define ENETC_RXB_TRUESIZE 2048 /* PAGE_SIZE >> 1 */
#define ENETC_RXB_TRUESIZE (PAGE_SIZE >> 1)
#define ENETC_RXB_PAD NET_SKB_PAD /* add extra space if needed */
#define ENETC_RXB_DMA_SIZE \
(SKB_WITH_OVERHEAD(ENETC_RXB_TRUESIZE) - ENETC_RXB_PAD)


@ -148,6 +148,7 @@ config HIBMCGE
tristate "Hisilicon BMC Gigabit Ethernet Device Support"
depends on PCI && PCI_MSI
select PHYLIB
select FIXED_PHY
select MOTORCOMM_PHY
select REALTEK_PHY
help


@ -100,7 +100,7 @@ u8 mlx5e_mpwrq_umr_entry_size(enum mlx5e_mpwrq_umr_mode mode)
return sizeof(struct mlx5_ksm) * 4;
}
WARN_ONCE(1, "MPWRQ UMR mode %d is not known\n", mode);
return 0;
return 1;
}
u8 mlx5e_mpwrq_log_wqe_sz(struct mlx5_core_dev *mdev, u8 page_shift,


@ -342,6 +342,7 @@ void mlx5e_ipsec_build_accel_xfrm_attrs(struct mlx5e_ipsec_sa_entry *sa_entry,
void mlx5e_ipsec_handle_mpv_event(int event, struct mlx5e_priv *slave_priv,
struct mlx5e_priv *master_priv);
void mlx5e_ipsec_send_event(struct mlx5e_priv *priv, int event);
void mlx5e_ipsec_disable_events(struct mlx5e_priv *priv);
static inline struct mlx5_core_dev *
mlx5e_ipsec_sa2dev(struct mlx5e_ipsec_sa_entry *sa_entry)
@ -387,6 +388,10 @@ static inline void mlx5e_ipsec_handle_mpv_event(int event, struct mlx5e_priv *sl
static inline void mlx5e_ipsec_send_event(struct mlx5e_priv *priv, int event)
{
}
static inline void mlx5e_ipsec_disable_events(struct mlx5e_priv *priv)
{
}
#endif
#endif /* __MLX5E_IPSEC_H__ */


@ -2893,9 +2893,30 @@ void mlx5e_ipsec_handle_mpv_event(int event, struct mlx5e_priv *slave_priv,
void mlx5e_ipsec_send_event(struct mlx5e_priv *priv, int event)
{
if (!priv->ipsec)
return; /* IPsec not supported */
if (!priv->ipsec || mlx5_devcom_comp_get_size(priv->devcom) < 2)
return; /* IPsec not supported or no peers */
mlx5_devcom_send_event(priv->devcom, event, event, priv);
wait_for_completion(&priv->ipsec->comp);
}
void mlx5e_ipsec_disable_events(struct mlx5e_priv *priv)
{
struct mlx5_devcom_comp_dev *tmp = NULL;
struct mlx5e_priv *peer_priv;
if (!priv->devcom)
return;
if (!mlx5_devcom_for_each_peer_begin(priv->devcom))
goto out;
peer_priv = mlx5_devcom_get_next_peer_data(priv->devcom, &tmp);
if (peer_priv)
complete_all(&peer_priv->ipsec->comp);
mlx5_devcom_for_each_peer_end(priv->devcom);
out:
mlx5_devcom_unregister_component(priv->devcom);
priv->devcom = NULL;
}


@ -242,8 +242,8 @@ static int mlx5e_devcom_init_mpv(struct mlx5e_priv *priv, u64 *data)
&attr,
mlx5e_devcom_event_mpv,
priv);
if (IS_ERR(priv->devcom))
return PTR_ERR(priv->devcom);
if (!priv->devcom)
return -EINVAL;
if (mlx5_core_is_mp_master(priv->mdev)) {
mlx5_devcom_send_event(priv->devcom, MPV_DEVCOM_MASTER_UP,
@ -256,7 +256,7 @@ static int mlx5e_devcom_init_mpv(struct mlx5e_priv *priv, u64 *data)
static void mlx5e_devcom_cleanup_mpv(struct mlx5e_priv *priv)
{
if (IS_ERR_OR_NULL(priv->devcom))
if (!priv->devcom)
return;
if (mlx5_core_is_mp_master(priv->mdev)) {
@ -266,6 +266,7 @@ static void mlx5e_devcom_cleanup_mpv(struct mlx5e_priv *priv)
}
mlx5_devcom_unregister_component(priv->devcom);
priv->devcom = NULL;
}
static int blocking_event(struct notifier_block *nb, unsigned long event, void *data)
@ -6120,6 +6121,7 @@ static void mlx5e_nic_disable(struct mlx5e_priv *priv)
if (mlx5e_monitor_counter_supported(priv))
mlx5e_monitor_counter_cleanup(priv);
mlx5e_ipsec_disable_events(priv);
mlx5e_disable_blocking_events(priv);
mlx5e_disable_async_events(priv);
mlx5_lag_remove_netdev(mdev, priv->netdev);


@ -1794,14 +1794,27 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi
}
prog = rcu_dereference(rq->xdp_prog);
if (prog && mlx5e_xdp_handle(rq, prog, mxbuf)) {
if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) {
struct mlx5e_wqe_frag_info *pwi;
if (prog) {
u8 nr_frags_free, old_nr_frags = sinfo->nr_frags;
for (pwi = head_wi; pwi < wi; pwi++)
pwi->frag_page->frags++;
if (mlx5e_xdp_handle(rq, prog, mxbuf)) {
if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT,
rq->flags)) {
struct mlx5e_wqe_frag_info *pwi;
wi -= old_nr_frags - sinfo->nr_frags;
for (pwi = head_wi; pwi < wi; pwi++)
pwi->frag_page->frags++;
}
return NULL; /* page/packet was consumed by XDP */
}
nr_frags_free = old_nr_frags - sinfo->nr_frags;
if (unlikely(nr_frags_free)) {
wi -= nr_frags_free;
truesize -= nr_frags_free * frag_info->frag_stride;
}
return NULL; /* page/packet was consumed by XDP */
}
skb = mlx5e_build_linear_skb(
@ -2027,6 +2040,7 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
u32 byte_cnt = cqe_bcnt;
struct skb_shared_info *sinfo;
unsigned int truesize = 0;
u32 pg_consumed_bytes;
struct bpf_prog *prog;
struct sk_buff *skb;
u32 linear_frame_sz;
@ -2080,7 +2094,8 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
while (byte_cnt) {
/* Non-linear mode, hence non-XSK, which always uses PAGE_SIZE. */
u32 pg_consumed_bytes = min_t(u32, PAGE_SIZE - frag_offset, byte_cnt);
pg_consumed_bytes =
min_t(u32, PAGE_SIZE - frag_offset, byte_cnt);
if (test_bit(MLX5E_RQ_STATE_SHAMPO, &rq->state))
truesize += pg_consumed_bytes;
@ -2096,10 +2111,15 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
}
if (prog) {
u8 nr_frags_free, old_nr_frags = sinfo->nr_frags;
u32 len;
if (mlx5e_xdp_handle(rq, prog, mxbuf)) {
if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) {
struct mlx5e_frag_page *pfp;
frag_page -= old_nr_frags - sinfo->nr_frags;
for (pfp = head_page; pfp < frag_page; pfp++)
pfp->frags++;
@ -2110,9 +2130,19 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
return NULL; /* page/packet was consumed by XDP */
}
nr_frags_free = old_nr_frags - sinfo->nr_frags;
if (unlikely(nr_frags_free)) {
frag_page -= nr_frags_free;
truesize -= (nr_frags_free - 1) * PAGE_SIZE +
ALIGN(pg_consumed_bytes,
BIT(rq->mpwqe.log_stride_sz));
}
len = mxbuf->xdp.data_end - mxbuf->xdp.data;
skb = mlx5e_build_linear_skb(
rq, mxbuf->xdp.data_hard_start, linear_frame_sz,
mxbuf->xdp.data - mxbuf->xdp.data_hard_start, 0,
mxbuf->xdp.data - mxbuf->xdp.data_hard_start, len,
mxbuf->xdp.data - mxbuf->xdp.data_meta);
if (unlikely(!skb)) {
mlx5e_page_release_fragmented(rq->page_pool,
@ -2137,8 +2167,11 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
do
pagep->frags++;
while (++pagep < frag_page);
headlen = min_t(u16, MLX5E_RX_MAX_HEAD - len,
skb->data_len);
__pskb_pull_tail(skb, headlen);
}
__pskb_pull_tail(skb, headlen);
} else {
dma_addr_t addr;


@ -1614,7 +1614,9 @@ void mlx5e_stats_fec_get(struct mlx5e_priv *priv,
fec_set_corrected_bits_total(priv, fec_stats);
fec_set_block_stats(priv, mode, fec_stats);
fec_set_histograms_stats(priv, mode, hist);
if (MLX5_CAP_PCAM_REG(priv->mdev, pphcr))
fec_set_histograms_stats(priv, mode, hist);
}
#define PPORT_ETH_EXT_OFF(c) \


@ -256,7 +256,7 @@ mlx5e_tx_wqe_inline_mode(struct mlx5e_txqsq *sq, struct sk_buff *skb,
u8 mode;
#ifdef CONFIG_MLX5_EN_TLS
if (accel && accel->tls.tls_tisn)
if (accel->tls.tls_tisn)
return MLX5_INLINE_MODE_TCP_UDP;
#endif
@ -982,6 +982,7 @@ void mlx5i_sq_xmit(struct mlx5e_txqsq *sq, struct sk_buff *skb,
struct mlx5e_tx_attr attr;
struct mlx5i_tx_wqe *wqe;
struct mlx5e_accel_tx_state accel = {};
struct mlx5_wqe_datagram_seg *datagram;
struct mlx5_wqe_ctrl_seg *cseg;
struct mlx5_wqe_eth_seg *eseg;
@ -992,7 +993,7 @@ void mlx5i_sq_xmit(struct mlx5e_txqsq *sq, struct sk_buff *skb,
int num_dma;
u16 pi;
mlx5e_sq_xmit_prepare(sq, skb, NULL, &attr);
mlx5e_sq_xmit_prepare(sq, skb, &accel, &attr);
mlx5i_sq_calc_wqe_attr(skb, &attr, &wqe_attr);
pi = mlx5e_txqsq_get_next_pi(sq, wqe_attr.num_wqebbs);
@ -1009,7 +1010,7 @@ void mlx5i_sq_xmit(struct mlx5e_txqsq *sq, struct sk_buff *skb,
mlx5i_txwqe_build_datagram(av, dqpn, dqkey, datagram);
mlx5e_txwqe_build_eseg_csum(sq, skb, NULL, eseg);
mlx5e_txwqe_build_eseg_csum(sq, skb, &accel, eseg);
eseg->mss = attr.mss;


@ -3129,7 +3129,7 @@ void mlx5_esw_offloads_devcom_init(struct mlx5_eswitch *esw,
attr,
mlx5_esw_offloads_devcom_event,
esw);
if (IS_ERR(esw->devcom))
if (!esw->devcom)
return;
mlx5_devcom_send_event(esw->devcom,
@ -3140,7 +3140,7 @@ void mlx5_esw_offloads_devcom_init(struct mlx5_eswitch *esw,
void mlx5_esw_offloads_devcom_cleanup(struct mlx5_eswitch *esw)
{
if (IS_ERR_OR_NULL(esw->devcom))
if (!esw->devcom)
return;
mlx5_devcom_send_event(esw->devcom,


@ -1430,11 +1430,10 @@ static int mlx5_lag_register_hca_devcom_comp(struct mlx5_core_dev *dev)
mlx5_devcom_register_component(dev->priv.devc,
MLX5_DEVCOM_HCA_PORTS,
&attr, NULL, dev);
if (IS_ERR(dev->priv.hca_devcom_comp)) {
if (!dev->priv.hca_devcom_comp) {
mlx5_core_err(dev,
"Failed to register devcom HCA component, err: %ld\n",
PTR_ERR(dev->priv.hca_devcom_comp));
return PTR_ERR(dev->priv.hca_devcom_comp);
"Failed to register devcom HCA component.");
return -EINVAL;
}
return 0;


@ -1444,7 +1444,7 @@ static void mlx5_shared_clock_register(struct mlx5_core_dev *mdev, u64 key)
compd = mlx5_devcom_register_component(mdev->priv.devc,
MLX5_DEVCOM_SHARED_CLOCK,
&attr, NULL, mdev);
if (IS_ERR(compd))
if (!compd)
return;
mdev->clock_state->compdev = compd;


@ -76,20 +76,18 @@ mlx5_devcom_dev_alloc(struct mlx5_core_dev *dev)
struct mlx5_devcom_dev *
mlx5_devcom_register_device(struct mlx5_core_dev *dev)
{
struct mlx5_devcom_dev *devc;
struct mlx5_devcom_dev *devc = NULL;
mutex_lock(&dev_list_lock);
if (devcom_dev_exists(dev)) {
devc = ERR_PTR(-EEXIST);
mlx5_core_err(dev, "devcom device already exists");
goto out;
}
devc = mlx5_devcom_dev_alloc(dev);
if (!devc) {
devc = ERR_PTR(-ENOMEM);
if (!devc)
goto out;
}
list_add_tail(&devc->list, &devcom_dev_list);
out:
@ -110,8 +108,10 @@ mlx5_devcom_dev_release(struct kref *ref)
void mlx5_devcom_unregister_device(struct mlx5_devcom_dev *devc)
{
if (!IS_ERR_OR_NULL(devc))
kref_put(&devc->ref, mlx5_devcom_dev_release);
if (!devc)
return;
kref_put(&devc->ref, mlx5_devcom_dev_release);
}
static struct mlx5_devcom_comp *
@ -122,7 +122,7 @@ mlx5_devcom_comp_alloc(u64 id, const struct mlx5_devcom_match_attr *attr,
comp = kzalloc(sizeof(*comp), GFP_KERNEL);
if (!comp)
return ERR_PTR(-ENOMEM);
return NULL;
comp->id = id;
comp->key.key = attr->key;
@ -160,7 +160,7 @@ devcom_alloc_comp_dev(struct mlx5_devcom_dev *devc,
devcom = kzalloc(sizeof(*devcom), GFP_KERNEL);
if (!devcom)
return ERR_PTR(-ENOMEM);
return NULL;
kref_get(&devc->ref);
devcom->devc = devc;
@ -240,31 +240,28 @@ mlx5_devcom_register_component(struct mlx5_devcom_dev *devc,
mlx5_devcom_event_handler_t handler,
void *data)
{
struct mlx5_devcom_comp_dev *devcom;
struct mlx5_devcom_comp_dev *devcom = NULL;
struct mlx5_devcom_comp *comp;
if (IS_ERR_OR_NULL(devc))
return ERR_PTR(-EINVAL);
if (!devc)
return NULL;
mutex_lock(&comp_list_lock);
comp = devcom_component_get(devc, id, attr, handler);
if (IS_ERR(comp)) {
devcom = ERR_PTR(-EINVAL);
if (IS_ERR(comp))
goto out_unlock;
}
if (!comp) {
comp = mlx5_devcom_comp_alloc(id, attr, handler);
if (IS_ERR(comp)) {
devcom = ERR_CAST(comp);
if (!comp)
goto out_unlock;
}
list_add_tail(&comp->comp_list, &devcom_comp_list);
}
mutex_unlock(&comp_list_lock);
devcom = devcom_alloc_comp_dev(devc, comp, data);
if (IS_ERR(devcom))
if (!devcom)
kref_put(&comp->ref, mlx5_devcom_comp_release);
return devcom;
@ -276,8 +273,10 @@ mlx5_devcom_register_component(struct mlx5_devcom_dev *devc,
void mlx5_devcom_unregister_component(struct mlx5_devcom_comp_dev *devcom)
{
if (!IS_ERR_OR_NULL(devcom))
devcom_free_comp_dev(devcom);
if (!devcom)
return;
devcom_free_comp_dev(devcom);
}
int mlx5_devcom_comp_get_size(struct mlx5_devcom_comp_dev *devcom)
@ -296,7 +295,7 @@ int mlx5_devcom_send_event(struct mlx5_devcom_comp_dev *devcom,
int err = 0;
void *data;
if (IS_ERR_OR_NULL(devcom))
if (!devcom)
return -ENODEV;
comp = devcom->comp;
@ -338,7 +337,7 @@ void mlx5_devcom_comp_set_ready(struct mlx5_devcom_comp_dev *devcom, bool ready)
bool mlx5_devcom_comp_is_ready(struct mlx5_devcom_comp_dev *devcom)
{
if (IS_ERR_OR_NULL(devcom))
if (!devcom)
return false;
return READ_ONCE(devcom->comp->ready);
@ -348,7 +347,7 @@ bool mlx5_devcom_for_each_peer_begin(struct mlx5_devcom_comp_dev *devcom)
{
struct mlx5_devcom_comp *comp;
if (IS_ERR_OR_NULL(devcom))
if (!devcom)
return false;
comp = devcom->comp;
@ -421,21 +420,21 @@ void *mlx5_devcom_get_next_peer_data_rcu(struct mlx5_devcom_comp_dev *devcom,
void mlx5_devcom_comp_lock(struct mlx5_devcom_comp_dev *devcom)
{
if (IS_ERR_OR_NULL(devcom))
if (!devcom)
return;
down_write(&devcom->comp->sem);
}
void mlx5_devcom_comp_unlock(struct mlx5_devcom_comp_dev *devcom)
{
if (IS_ERR_OR_NULL(devcom))
if (!devcom)
return;
up_write(&devcom->comp->sem);
}
int mlx5_devcom_comp_trylock(struct mlx5_devcom_comp_dev *devcom)
{
if (IS_ERR_OR_NULL(devcom))
if (!devcom)
return 0;
return down_write_trylock(&devcom->comp->sem);
}


@ -221,8 +221,8 @@ static int sd_register(struct mlx5_core_dev *dev)
attr.net = mlx5_core_net(dev);
devcom = mlx5_devcom_register_component(dev->priv.devc, MLX5_DEVCOM_SD_GROUP,
&attr, NULL, dev);
if (IS_ERR(devcom))
return PTR_ERR(devcom);
if (!devcom)
return -EINVAL;
sd->devcom = devcom;


@ -978,9 +978,8 @@ static int mlx5_init_once(struct mlx5_core_dev *dev)
int err;
dev->priv.devc = mlx5_devcom_register_device(dev);
if (IS_ERR(dev->priv.devc))
mlx5_core_warn(dev, "failed to register devcom device %pe\n",
dev->priv.devc);
if (!dev->priv.devc)
mlx5_core_warn(dev, "failed to register devcom device\n");
err = mlx5_query_board_id(dev);
if (err) {


@ -2211,15 +2211,35 @@ static netdev_tx_t ravb_start_xmit(struct sk_buff *skb, struct net_device *ndev)
skb_tx_timestamp(skb);
}
/* Descriptor type must be set after all the above writes */
dma_wmb();
if (num_tx_desc > 1) {
desc->die_dt = DT_FEND;
desc--;
/* When using multi-descriptors, DT_FEND needs to get written
* before DT_FSTART, but the compiler may reorder the memory
* writes in an attempt to optimize the code.
* Use a dma_wmb() barrier to make sure DT_FEND and DT_FSTART
* are written exactly in the order shown in the code.
* This is particularly important for cases where the DMA engine
* is already running when we are running this code. If the DMA
* sees DT_FSTART without the corresponding DT_FEND it will enter
* an error condition.
*/
dma_wmb();
desc->die_dt = DT_FSTART;
} else {
/* Descriptor type must be set after all the above writes */
dma_wmb();
desc->die_dt = DT_FSINGLE;
}
/* Before ringing the doorbell we need to make sure that the latest
* writes have been committed to memory, otherwise it could delay
* things until the doorbell is rang again.
* This is in replacement of the read operation mentioned in the HW
* manuals.
*/
dma_wmb();
ravb_modify(ndev, TCCR, TCCR_TSRQ0 << q, TCCR_TSRQ0 << q);
priv->cur_tx[q] += num_tx_desc;


@ -1446,14 +1446,15 @@ static int gmac_clk_enable(struct rk_priv_data *bsp_priv, bool enable)
}
} else {
if (bsp_priv->clk_enabled) {
if (bsp_priv->ops && bsp_priv->ops->set_clock_selection) {
bsp_priv->ops->set_clock_selection(bsp_priv,
bsp_priv->clock_input, false);
}
clk_bulk_disable_unprepare(bsp_priv->num_clks,
bsp_priv->clks);
clk_disable_unprepare(bsp_priv->clk_phy);
if (bsp_priv->ops && bsp_priv->ops->set_clock_selection)
bsp_priv->ops->set_clock_selection(bsp_priv,
bsp_priv->clock_input, false);
bsp_priv->clk_enabled = false;
}
}


@ -163,7 +163,9 @@ struct am65_cpts {
struct device_node *clk_mux_np;
struct clk *refclk;
u32 refclk_freq;
struct list_head events;
/* separate lists to handle TX and RX timestamp independently */
struct list_head events_tx;
struct list_head events_rx;
struct list_head pool;
struct am65_cpts_event pool_data[AM65_CPTS_MAX_EVENTS];
spinlock_t lock; /* protects events lists*/
@ -227,6 +229,24 @@ static void am65_cpts_disable(struct am65_cpts *cpts)
am65_cpts_write32(cpts, 0, int_enable);
}
static int am65_cpts_purge_event_list(struct am65_cpts *cpts,
struct list_head *events)
{
struct list_head *this, *next;
struct am65_cpts_event *event;
int removed = 0;
list_for_each_safe(this, next, events) {
event = list_entry(this, struct am65_cpts_event, list);
if (time_after(jiffies, event->tmo)) {
list_del_init(&event->list);
list_add(&event->list, &cpts->pool);
++removed;
}
}
return removed;
}
static int am65_cpts_event_get_port(struct am65_cpts_event *event)
{
return (event->event1 & AM65_CPTS_EVENT_1_PORT_NUMBER_MASK) >>
@ -239,20 +259,12 @@ static int am65_cpts_event_get_type(struct am65_cpts_event *event)
AM65_CPTS_EVENT_1_EVENT_TYPE_SHIFT;
}
static int am65_cpts_cpts_purge_events(struct am65_cpts *cpts)
static int am65_cpts_purge_events(struct am65_cpts *cpts)
{
struct list_head *this, *next;
struct am65_cpts_event *event;
int removed = 0;
list_for_each_safe(this, next, &cpts->events) {
event = list_entry(this, struct am65_cpts_event, list);
if (time_after(jiffies, event->tmo)) {
list_del_init(&event->list);
list_add(&event->list, &cpts->pool);
++removed;
}
}
removed += am65_cpts_purge_event_list(cpts, &cpts->events_tx);
removed += am65_cpts_purge_event_list(cpts, &cpts->events_rx);
if (removed)
dev_dbg(cpts->dev, "event pool cleaned up %d\n", removed);
@ -287,7 +299,7 @@ static int __am65_cpts_fifo_read(struct am65_cpts *cpts)
struct am65_cpts_event, list);
if (!event) {
if (am65_cpts_cpts_purge_events(cpts)) {
if (am65_cpts_purge_events(cpts)) {
dev_err(cpts->dev, "cpts: event pool empty\n");
ret = -1;
goto out;
@ -306,11 +318,21 @@ static int __am65_cpts_fifo_read(struct am65_cpts *cpts)
cpts->timestamp);
break;
case AM65_CPTS_EV_RX:
event->tmo = jiffies +
msecs_to_jiffies(AM65_CPTS_EVENT_RX_TX_TIMEOUT);
list_move_tail(&event->list, &cpts->events_rx);
dev_dbg(cpts->dev,
"AM65_CPTS_EV_RX e1:%08x e2:%08x t:%lld\n",
event->event1, event->event2,
event->timestamp);
break;
case AM65_CPTS_EV_TX:
event->tmo = jiffies +
msecs_to_jiffies(AM65_CPTS_EVENT_RX_TX_TIMEOUT);
list_move_tail(&event->list, &cpts->events);
list_move_tail(&event->list, &cpts->events_tx);
dev_dbg(cpts->dev,
"AM65_CPTS_EV_TX e1:%08x e2:%08x t:%lld\n",
@ -828,7 +850,7 @@ static bool am65_cpts_match_tx_ts(struct am65_cpts *cpts,
return found;
}
static void am65_cpts_find_ts(struct am65_cpts *cpts)
static void am65_cpts_find_tx_ts(struct am65_cpts *cpts)
{
struct am65_cpts_event *event;
struct list_head *this, *next;
@ -837,7 +859,7 @@ static void am65_cpts_find_ts(struct am65_cpts *cpts)
LIST_HEAD(events);
spin_lock_irqsave(&cpts->lock, flags);
list_splice_init(&cpts->events, &events);
list_splice_init(&cpts->events_tx, &events);
spin_unlock_irqrestore(&cpts->lock, flags);
list_for_each_safe(this, next, &events) {
@ -850,7 +872,7 @@ static void am65_cpts_find_ts(struct am65_cpts *cpts)
}
spin_lock_irqsave(&cpts->lock, flags);
list_splice_tail(&events, &cpts->events);
list_splice_tail(&events, &cpts->events_tx);
list_splice_tail(&events_free, &cpts->pool);
spin_unlock_irqrestore(&cpts->lock, flags);
}
@ -861,7 +883,7 @@ static long am65_cpts_ts_work(struct ptp_clock_info *ptp)
unsigned long flags;
long delay = -1;
am65_cpts_find_ts(cpts);
am65_cpts_find_tx_ts(cpts);
spin_lock_irqsave(&cpts->txq.lock, flags);
if (!skb_queue_empty(&cpts->txq))
@ -905,7 +927,7 @@ static u64 am65_cpts_find_rx_ts(struct am65_cpts *cpts, u32 skb_mtype_seqid)
spin_lock_irqsave(&cpts->lock, flags);
__am65_cpts_fifo_read(cpts);
list_for_each_safe(this, next, &cpts->events) {
list_for_each_safe(this, next, &cpts->events_rx) {
event = list_entry(this, struct am65_cpts_event, list);
if (time_after(jiffies, event->tmo)) {
list_move(&event->list, &cpts->pool);
@ -1155,7 +1177,8 @@ struct am65_cpts *am65_cpts_create(struct device *dev, void __iomem *regs,
return ERR_PTR(ret);
mutex_init(&cpts->ptp_clk_lock);
INIT_LIST_HEAD(&cpts->events);
INIT_LIST_HEAD(&cpts->events_tx);
INIT_LIST_HEAD(&cpts->events_rx);
INIT_LIST_HEAD(&cpts->pool);
spin_lock_init(&cpts->lock);
skb_queue_head_init(&cpts->txq);


@ -560,16 +560,34 @@ static void ovpn_tcp_close(struct sock *sk, long timeout)
static __poll_t ovpn_tcp_poll(struct file *file, struct socket *sock,
poll_table *wait)
{
__poll_t mask = datagram_poll(file, sock, wait);
struct sk_buff_head *queue = &sock->sk->sk_receive_queue;
struct ovpn_socket *ovpn_sock;
struct ovpn_peer *peer = NULL;
__poll_t mask;
rcu_read_lock();
ovpn_sock = rcu_dereference_sk_user_data(sock->sk);
if (ovpn_sock && ovpn_sock->peer &&
!skb_queue_empty(&ovpn_sock->peer->tcp.user_queue))
mask |= EPOLLIN | EPOLLRDNORM;
/* if we landed in this callback, we expect to have a
* meaningful state. The ovpn_socket lifecycle would
* prevent it otherwise.
*/
if (WARN(!ovpn_sock || !ovpn_sock->peer,
"ovpn: null state in ovpn_tcp_poll!")) {
rcu_read_unlock();
return 0;
}
if (ovpn_peer_hold(ovpn_sock->peer)) {
peer = ovpn_sock->peer;
queue = &peer->tcp.user_queue;
}
rcu_read_unlock();
mask = datagram_poll_queue(file, sock, wait, queue);
if (peer)
ovpn_peer_put(peer);
return mask;
}


@ -4262,6 +4262,8 @@ static int __lan8814_ptp_probe_once(struct phy_device *phydev, char *pin_name,
{
struct lan8814_shared_priv *shared = phy_package_get_priv(phydev);
shared->phydev = phydev;
/* Initialise shared lock for clock*/
mutex_init(&shared->shared_lock);
@ -4317,8 +4319,6 @@ static int __lan8814_ptp_probe_once(struct phy_device *phydev, char *pin_name,
phydev_dbg(phydev, "successfully registered ptp clock\n");
shared->phydev = phydev;
/* The EP.4 is shared between all the PHYs in the package and also it
* can be accessed by any of the PHYs
*/


@ -154,7 +154,7 @@
#define RTL_8211FVD_PHYID 0x001cc878
#define RTL_8221B 0x001cc840
#define RTL_8221B_VB_CG 0x001cc849
#define RTL_8221B_VN_CG 0x001cc84a
#define RTL_8221B_VM_CG 0x001cc84a
#define RTL_8251B 0x001cc862
#define RTL_8261C 0x001cc890
@ -1523,16 +1523,16 @@ static int rtl8221b_vb_cg_c45_match_phy_device(struct phy_device *phydev,
return rtlgen_is_c45_match(phydev, RTL_8221B_VB_CG, true);
}
static int rtl8221b_vn_cg_c22_match_phy_device(struct phy_device *phydev,
static int rtl8221b_vm_cg_c22_match_phy_device(struct phy_device *phydev,
const struct phy_driver *phydrv)
{
return rtlgen_is_c45_match(phydev, RTL_8221B_VN_CG, false);
return rtlgen_is_c45_match(phydev, RTL_8221B_VM_CG, false);
}
static int rtl8221b_vn_cg_c45_match_phy_device(struct phy_device *phydev,
static int rtl8221b_vm_cg_c45_match_phy_device(struct phy_device *phydev,
const struct phy_driver *phydrv)
{
return rtlgen_is_c45_match(phydev, RTL_8221B_VN_CG, true);
return rtlgen_is_c45_match(phydev, RTL_8221B_VM_CG, true);
}
static int rtl_internal_nbaset_match_phy_device(struct phy_device *phydev,
@ -1879,7 +1879,7 @@ static struct phy_driver realtek_drvs[] = {
.suspend = genphy_c45_pma_suspend,
.resume = rtlgen_c45_resume,
}, {
.match_phy_device = rtl8221b_vn_cg_c22_match_phy_device,
.match_phy_device = rtl8221b_vm_cg_c22_match_phy_device,
.name = "RTL8221B-VM-CG 2.5Gbps PHY (C22)",
.probe = rtl822x_probe,
.get_features = rtl822x_get_features,
@ -1892,8 +1892,8 @@ static struct phy_driver realtek_drvs[] = {
.read_page = rtl821x_read_page,
.write_page = rtl821x_write_page,
}, {
.match_phy_device = rtl8221b_vn_cg_c45_match_phy_device,
.name = "RTL8221B-VN-CG 2.5Gbps PHY (C45)",
.match_phy_device = rtl8221b_vm_cg_c45_match_phy_device,
.name = "RTL8221B-VM-CG 2.5Gbps PHY (C45)",
.probe = rtl822x_probe,
.config_init = rtl822xb_config_init,
.get_rate_matching = rtl822xb_get_rate_matching,


@ -685,9 +685,16 @@ static netdev_tx_t rtl8150_start_xmit(struct sk_buff *skb,
rtl8150_t *dev = netdev_priv(netdev);
int count, res;
/* pad the frame and ensure terminating USB packet, datasheet 9.2.3 */
count = max(skb->len, ETH_ZLEN);
if (count % 64 == 0)
count++;
if (skb_padto(skb, count)) {
netdev->stats.tx_dropped++;
return NETDEV_TX_OK;
}
netif_stop_queue(netdev);
count = (skb->len < 60) ? 60 : skb->len;
count = (count & 0x3f) ? count : count + 1;
dev->tx_skb = skb;
usb_fill_bulk_urb(dev->tx_urb, dev->udev, usb_sndbulkpipe(dev->udev, 2),
skb->data, count, write_bulk_callback, dev);
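To make the padding arithmetic above concrete: a 45-byte frame is first brought up to ETH_ZLEN (60 bytes) and, since 60 is not a multiple of 64, sent as-is; a 128-byte frame is an exact multiple of the 64-byte boundary the driver checks, so it is padded to 129 bytes and the transfer ends in a short USB packet, which is the terminating-packet requirement the datasheet reference in the comment describes.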


@ -2548,7 +2548,7 @@ ptp_ocp_sma_fb_init(struct ptp_ocp *bp)
for (i = 0; i < OCP_SMA_NUM; i++) {
bp->sma[i].fixed_fcn = true;
bp->sma[i].fixed_dir = true;
bp->sma[1].dpll_prop.capabilities &=
bp->sma[i].dpll_prop.capabilities &=
~DPLL_PIN_CAPABILITIES_DIRECTION_CAN_CHANGE;
}
return;


@ -10833,7 +10833,9 @@ struct mlx5_ifc_pcam_regs_5000_to_507f_bits {
u8 port_access_reg_cap_mask_127_to_96[0x20];
u8 port_access_reg_cap_mask_95_to_64[0x20];
u8 port_access_reg_cap_mask_63_to_36[0x1c];
u8 port_access_reg_cap_mask_63[0x1];
u8 pphcr[0x1];
u8 port_access_reg_cap_mask_61_to_36[0x1a];
u8 pplm[0x1];
u8 port_access_reg_cap_mask_34_to_32[0x3];


@ -4204,6 +4204,9 @@ struct sk_buff *__skb_recv_datagram(struct sock *sk,
struct sk_buff_head *sk_queue,
unsigned int flags, int *off, int *err);
struct sk_buff *skb_recv_datagram(struct sock *sk, unsigned int flags, int *err);
__poll_t datagram_poll_queue(struct file *file, struct socket *sock,
struct poll_table_struct *wait,
struct sk_buff_head *rcv_queue);
__poll_t datagram_poll(struct file *file, struct socket *sock,
struct poll_table_struct *wait);
int skb_copy_datagram_iter(const struct sk_buff *from, int offset,


@ -401,6 +401,10 @@ virtio_net_hdr_tnl_from_skb(const struct sk_buff *skb,
if (!tnl_hdr_negotiated)
return -EINVAL;
vhdr->hash_hdr.hash_value = 0;
vhdr->hash_hdr.hash_report = 0;
vhdr->hash_hdr.padding = 0;
/* Let the basic parsing deal with plain GSO features. */
skb_shinfo(skb)->gso_type &= ~tnl_gso_type;
ret = virtio_net_hdr_from_skb(skb, hdr, true, false, vlan_hlen);


@ -920,21 +920,22 @@ int skb_copy_and_csum_datagram_msg(struct sk_buff *skb,
EXPORT_SYMBOL(skb_copy_and_csum_datagram_msg);
/**
* datagram_poll - generic datagram poll
* datagram_poll_queue - same as datagram_poll, but on a specific receive
* queue
* @file: file struct
* @sock: socket
* @wait: poll table
* @rcv_queue: receive queue to poll
*
* Datagram poll: Again totally generic. This also handles
* sequenced packet sockets providing the socket receive queue
* is only ever holding data ready to receive.
* Performs polling on the given receive queue, handling shutdown, error,
* and connection state. This is useful for protocols that deliver
* userspace-bound packets through a custom queue instead of
* sk->sk_receive_queue.
*
* Note: when you *don't* use this routine for this protocol,
* and you use a different write policy from sock_writeable()
* then please supply your own write_space callback.
* Return: poll bitmask indicating the socket's current state
*/
__poll_t datagram_poll(struct file *file, struct socket *sock,
poll_table *wait)
__poll_t datagram_poll_queue(struct file *file, struct socket *sock,
poll_table *wait, struct sk_buff_head *rcv_queue)
{
struct sock *sk = sock->sk;
__poll_t mask;
@ -956,7 +957,7 @@ __poll_t datagram_poll(struct file *file, struct socket *sock,
mask |= EPOLLHUP;
/* readable? */
if (!skb_queue_empty_lockless(&sk->sk_receive_queue))
if (!skb_queue_empty_lockless(rcv_queue))
mask |= EPOLLIN | EPOLLRDNORM;
/* Connection-based need to check for termination and startup */
@ -978,4 +979,27 @@ __poll_t datagram_poll(struct file *file, struct socket *sock,
return mask;
}
EXPORT_SYMBOL(datagram_poll_queue);
/**
* datagram_poll - generic datagram poll
* @file: file struct
* @sock: socket
* @wait: poll table
*
* Datagram poll: Again totally generic. This also handles
* sequenced packet sockets providing the socket receive queue
* is only ever holding data ready to receive.
*
* Note: when you *don't* use this routine for this protocol,
* and you use a different write policy from sock_writeable()
* then please supply your own write_space callback.
*
* Return: poll bitmask indicating the socket's current state
*/
__poll_t datagram_poll(struct file *file, struct socket *sock, poll_table *wait)
{
return datagram_poll_queue(file, sock, wait,
&sock->sk->sk_receive_queue);
}
EXPORT_SYMBOL(datagram_poll);
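As a usage illustration (not part of this series; the my_proto_* names are hypothetical), a protocol that queues userspace-bound packets on its own list can build its poll callback directly on the new helper, as the ovpn and espintcp conversions elsewhere in this series do:

    #include <linux/poll.h>
    #include <linux/skbuff.h>
    #include <net/sock.h>

    /* Hypothetical protocol socket carrying a private receive queue. */
    struct my_proto_sock {
            struct sock sk;                 /* must be first */
            struct sk_buff_head user_queue; /* userspace-bound packets */
    };

    static __poll_t my_proto_poll(struct file *file, struct socket *sock,
                                  poll_table *wait)
    {
            struct my_proto_sock *ps = (struct my_proto_sock *)sock->sk;

            /* Same shutdown/error/connection handling as datagram_poll(),
             * but EPOLLIN reflects the private queue instead of
             * sk->sk_receive_queue.
             */
            return datagram_poll_queue(file, sock, wait, &ps->user_queue);
    }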


@ -639,6 +639,8 @@ EXPORT_SYMBOL(gro_receive_skb);
static void napi_reuse_skb(struct napi_struct *napi, struct sk_buff *skb)
{
struct skb_shared_info *shinfo;
if (unlikely(skb->pfmemalloc)) {
consume_skb(skb);
return;
@ -655,8 +657,12 @@ static void napi_reuse_skb(struct napi_struct *napi, struct sk_buff *skb)
skb->encapsulation = 0;
skb->ip_summed = CHECKSUM_NONE;
skb_shinfo(skb)->gso_type = 0;
skb_shinfo(skb)->gso_size = 0;
shinfo = skb_shinfo(skb);
shinfo->gso_type = 0;
shinfo->gso_size = 0;
shinfo->hwtstamps.hwtstamp = 0;
if (unlikely(skb->slow_gro)) {
skb_orphan(skb);
skb_ext_reset(skb);


@ -43,12 +43,11 @@ int gro_cells_receive(struct gro_cells *gcells, struct sk_buff *skb)
if (skb_queue_len(&cell->napi_skbs) == 1)
napi_schedule(&cell->napi);
if (have_bh_lock)
local_unlock_nested_bh(&gcells->cells->bh_lock);
res = NET_RX_SUCCESS;
unlock:
if (have_bh_lock)
local_unlock_nested_bh(&gcells->cells->bh_lock);
rcu_read_unlock();
return res;
}


@ -4715,9 +4715,6 @@ static int rtnl_fdb_del(struct sk_buff *skb, struct nlmsghdr *nlh,
int err;
u16 vid;
if (!netlink_capable(skb, CAP_NET_ADMIN))
return -EPERM;
if (!del_bulk) {
err = nlmsg_parse_deprecated(nlh, sizeof(*ndm), tb, NDA_MAX,
NULL, extack);


@ -34,12 +34,18 @@ static int hsr_newlink(struct net_device *dev,
struct netlink_ext_ack *extack)
{
struct net *link_net = rtnl_newlink_link_net(params);
struct net_device *link[2], *interlink = NULL;
struct nlattr **data = params->data;
enum hsr_version proto_version;
unsigned char multicast_spec;
u8 proto = HSR_PROTOCOL_HSR;
struct net_device *link[2], *interlink = NULL;
if (!net_eq(link_net, dev_net(dev))) {
NL_SET_ERR_MSG_MOD(extack,
"HSR slaves/interlink must be on the same net namespace than HSR link");
return -EINVAL;
}
if (!data) {
NL_SET_ERR_MSG_MOD(extack, "No slave devices specified");
return -EINVAL;


@ -370,6 +370,10 @@ static void mptcp_pm_create_subflow_or_signal_addr(struct mptcp_sock *msk)
}
subflow:
/* No need to try establishing subflows to remote id0 if not allowed */
if (mptcp_pm_add_addr_c_flag_case(msk))
goto exit;
/* check if should create a new subflow */
while (msk->pm.local_addr_used < endp_subflow_max &&
msk->pm.extra_subflows < limit_extra_subflows) {
@ -401,6 +405,8 @@ static void mptcp_pm_create_subflow_or_signal_addr(struct mptcp_sock *msk)
__mptcp_subflow_connect(sk, &local, &addrs[i]);
spin_lock_bh(&msk->pm.lock);
}
exit:
mptcp_pm_nl_check_work_pending(msk);
}


@ -169,13 +169,14 @@ struct sctp_chunk *sctp_inq_pop(struct sctp_inq *queue)
chunk->head_skb = chunk->skb;
/* skbs with "cover letter" */
if (chunk->head_skb && chunk->skb->data_len == chunk->skb->len)
if (chunk->head_skb && chunk->skb->data_len == chunk->skb->len) {
if (WARN_ON(!skb_shinfo(chunk->skb)->frag_list)) {
__SCTP_INC_STATS(dev_net(chunk->skb->dev),
SCTP_MIB_IN_PKT_DISCARDS);
sctp_chunk_free(chunk);
goto next_chunk;
}
chunk->skb = skb_shinfo(chunk->skb)->frag_list;
if (WARN_ON(!chunk->skb)) {
__SCTP_INC_STATS(dev_net(chunk->skb->dev), SCTP_MIB_IN_PKT_DISCARDS);
sctp_chunk_free(chunk);
goto next_chunk;
}
}


@ -56,7 +56,6 @@ static struct inet_protosw smc_inet_protosw = {
.protocol = IPPROTO_SMC,
.prot = &smc_inet_prot,
.ops = &smc_inet_stream_ops,
.flags = INET_PROTOSW_ICSK,
};
#if IS_ENABLED(CONFIG_IPV6)
@ -104,27 +103,15 @@ static struct inet_protosw smc_inet6_protosw = {
.protocol = IPPROTO_SMC,
.prot = &smc_inet6_prot,
.ops = &smc_inet6_stream_ops,
.flags = INET_PROTOSW_ICSK,
};
#endif /* CONFIG_IPV6 */
static unsigned int smc_sync_mss(struct sock *sk, u32 pmtu)
{
/* No need pass it through to clcsock, mss can always be set by
* sock_create_kern or smc_setsockopt.
*/
return 0;
}
static int smc_inet_init_sock(struct sock *sk)
{
struct net *net = sock_net(sk);
/* init common smc sock */
smc_sk_init(net, sk, IPPROTO_SMC);
inet_csk(sk)->icsk_sync_mss = smc_sync_mss;
/* create clcsock */
return smc_create_clcsk(net, sk, sk->sk_family);
}


@ -487,12 +487,26 @@ int vsock_assign_transport(struct vsock_sock *vsk, struct vsock_sock *psk)
goto err;
}
if (vsk->transport) {
if (vsk->transport == new_transport) {
ret = 0;
goto err;
}
if (vsk->transport && vsk->transport == new_transport) {
ret = 0;
goto err;
}
/* We increase the module refcnt to prevent the transport unloading
* while there are open sockets assigned to it.
*/
if (!new_transport || !try_module_get(new_transport->module)) {
ret = -ENODEV;
goto err;
}
/* It's safe to release the mutex after a successful try_module_get().
* Whichever transport `new_transport` points at, it won't go away until
* the last module_put() below or in vsock_deassign_transport().
*/
mutex_unlock(&vsock_register_mutex);
if (vsk->transport) {
/* transport->release() must be called with sock lock acquired.
* This path can only be taken during vsock_connect(), where we
* have already held the sock lock. In the other cases, this
@ -512,20 +526,6 @@ int vsock_assign_transport(struct vsock_sock *vsk, struct vsock_sock *psk)
vsk->peer_shutdown = 0;
}
/* We increase the module refcnt to prevent the transport unloading
* while there are open sockets assigned to it.
*/
if (!new_transport || !try_module_get(new_transport->module)) {
ret = -ENODEV;
goto err;
}
/* It's safe to release the mutex after a successful try_module_get().
* Whichever transport `new_transport` points at, it won't go away until
* the last module_put() below or in vsock_deassign_transport().
*/
mutex_unlock(&vsock_register_mutex);
if (sk->sk_type == SOCK_SEQPACKET) {
if (!new_transport->seqpacket_allow ||
!new_transport->seqpacket_allow(remote_cid)) {


@ -555,14 +555,10 @@ static void espintcp_close(struct sock *sk, long timeout)
static __poll_t espintcp_poll(struct file *file, struct socket *sock,
poll_table *wait)
{
__poll_t mask = datagram_poll(file, sock, wait);
struct sock *sk = sock->sk;
struct espintcp_ctx *ctx = espintcp_getctx(sk);
if (!skb_queue_empty(&ctx->ike_queue))
mask |= EPOLLIN | EPOLLRDNORM;
return mask;
return datagram_poll_queue(file, sock, wait, &ctx->ike_queue);
}
static void build_protos(struct proto *espintcp_prot,


@ -2324,7 +2324,7 @@ laminar_endp_tests()
{
# no laminar endpoints: routing rules are used
if reset_with_tcp_filter "without a laminar endpoint" ns1 10.0.2.2 REJECT &&
mptcp_lib_kallsyms_has "mptcp_pm_get_endp_laminar_max$"; then
continue_if mptcp_lib_kallsyms_has "mptcp_pm_get_endp_laminar_max$"; then
pm_nl_set_limits $ns1 0 2
pm_nl_set_limits $ns2 2 2
pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
@ -2336,7 +2336,7 @@ laminar_endp_tests()
# laminar endpoints: this endpoint is used
if reset_with_tcp_filter "with a laminar endpoint" ns1 10.0.2.2 REJECT &&
mptcp_lib_kallsyms_has "mptcp_pm_get_endp_laminar_max$"; then
continue_if mptcp_lib_kallsyms_has "mptcp_pm_get_endp_laminar_max$"; then
pm_nl_set_limits $ns1 0 2
pm_nl_set_limits $ns2 2 2
pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
@ -2348,7 +2348,7 @@ laminar_endp_tests()
# laminar endpoints: these endpoints are used
if reset_with_tcp_filter "with multiple laminar endpoints" ns1 10.0.2.2 REJECT &&
mptcp_lib_kallsyms_has "mptcp_pm_get_endp_laminar_max$"; then
continue_if mptcp_lib_kallsyms_has "mptcp_pm_get_endp_laminar_max$"; then
pm_nl_set_limits $ns1 0 2
pm_nl_set_limits $ns2 2 2
pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
@ -2363,7 +2363,7 @@ laminar_endp_tests()
# laminar endpoints: only one endpoint is used
if reset_with_tcp_filter "single laminar endpoint" ns1 10.0.2.2 REJECT &&
mptcp_lib_kallsyms_has "mptcp_pm_get_endp_laminar_max$"; then
continue_if mptcp_lib_kallsyms_has "mptcp_pm_get_endp_laminar_max$"; then
pm_nl_set_limits $ns1 0 2
pm_nl_set_limits $ns2 2 2
pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
@ -2376,7 +2376,7 @@ laminar_endp_tests()
# laminar endpoints: subflow and laminar flags
if reset_with_tcp_filter "sublow + laminar endpoints" ns1 10.0.2.2 REJECT &&
mptcp_lib_kallsyms_has "mptcp_pm_get_endp_laminar_max$"; then
continue_if mptcp_lib_kallsyms_has "mptcp_pm_get_endp_laminar_max$"; then
pm_nl_set_limits $ns1 0 4
pm_nl_set_limits $ns2 2 4
pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
@ -3939,7 +3939,7 @@ endpoint_tests()
# subflow_rebuild_header is needed to support the implicit flag
# userspace pm type prevents add_addr
if reset "implicit EP" &&
mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then
continue_if mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then
pm_nl_set_limits $ns1 2 2
pm_nl_set_limits $ns2 2 2
pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
@ -3964,7 +3964,7 @@ endpoint_tests()
fi
if reset_with_tcp_filter "delete and re-add" ns2 10.0.3.2 REJECT OUTPUT &&
mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then
continue_if mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then
start_events
pm_nl_set_limits $ns1 0 3
pm_nl_set_limits $ns2 0 3
@ -4040,7 +4040,7 @@ endpoint_tests()
# remove and re-add
if reset_with_events "delete re-add signal" &&
mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then
continue_if mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then
ip netns exec $ns1 sysctl -q net.mptcp.add_addr_timeout=0
pm_nl_set_limits $ns1 0 3
pm_nl_set_limits $ns2 3 3
@ -4115,7 +4115,7 @@ endpoint_tests()
# flush and re-add
if reset_with_tcp_filter "flush re-add" ns2 10.0.3.2 REJECT OUTPUT &&
mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then
continue_if mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then
pm_nl_set_limits $ns1 0 2
pm_nl_set_limits $ns2 1 2
# broadcast IP: no packet for this address will be received on ns1


@ -29,7 +29,6 @@ static void set_addr(struct sockaddr_storage *ss, char *ip, char *port, int *len
static int do_client(int argc, char *argv[])
{
struct sockaddr_storage ss;
char buf[] = "hello";
int csk, ret, len;
if (argc < 5) {
@ -56,16 +55,10 @@ static int do_client(int argc, char *argv[])
set_addr(&ss, argv[3], argv[4], &len);
ret = connect(csk, (struct sockaddr *)&ss, len);
if (ret < 0) {
printf("failed to connect to peer\n");
if (ret < 0)
return -1;
}
ret = send(csk, buf, strlen(buf) + 1, 0);
if (ret < 0) {
printf("failed to send msg %d\n", ret);
return -1;
}
recv(csk, NULL, 0, 0);
close(csk);
return 0;
@ -75,7 +68,6 @@ int main(int argc, char *argv[])
{
struct sockaddr_storage ss;
int lsk, csk, ret, len;
char buf[20];
if (argc < 2 || (strcmp(argv[1], "server") && strcmp(argv[1], "client"))) {
printf("%s server|client ...\n", argv[0]);
@ -125,11 +117,6 @@ int main(int argc, char *argv[])
return -1;
}
ret = recv(csk, buf, sizeof(buf), 0);
if (ret <= 0) {
printf("failed to recv msg %d\n", ret);
return -1;
}
close(csk);
close(lsk);


@ -20,9 +20,9 @@ setup() {
modprobe sctp_diag
setup_ns CLIENT_NS1 CLIENT_NS2 SERVER_NS
ip net exec $CLIENT_NS1 sysctl -w net.ipv6.conf.default.accept_dad=0 2>&1 >/dev/null
ip net exec $CLIENT_NS2 sysctl -w net.ipv6.conf.default.accept_dad=0 2>&1 >/dev/null
ip net exec $SERVER_NS sysctl -w net.ipv6.conf.default.accept_dad=0 2>&1 >/dev/null
ip net exec $CLIENT_NS1 sysctl -wq net.ipv6.conf.default.accept_dad=0
ip net exec $CLIENT_NS2 sysctl -wq net.ipv6.conf.default.accept_dad=0
ip net exec $SERVER_NS sysctl -wq net.ipv6.conf.default.accept_dad=0
ip -n $SERVER_NS link add veth1 type veth peer name veth1 netns $CLIENT_NS1
ip -n $SERVER_NS link add veth2 type veth peer name veth1 netns $CLIENT_NS2
@ -62,17 +62,40 @@ setup() {
}
cleanup() {
ip netns exec $SERVER_NS pkill sctp_hello 2>&1 >/dev/null
wait_client $CLIENT_NS1
wait_client $CLIENT_NS2
stop_server
cleanup_ns $CLIENT_NS1 $CLIENT_NS2 $SERVER_NS
}
wait_server() {
start_server() {
local IFACE=$1
local CNT=0
until ip netns exec $SERVER_NS ss -lS src $SERVER_IP:$SERVER_PORT | \
grep LISTEN | grep "$IFACE" 2>&1 >/dev/null; do
[ $((CNT++)) = "20" ] && { RET=3; return $RET; }
ip netns exec $SERVER_NS ./sctp_hello server $AF $SERVER_IP $SERVER_PORT $IFACE &
disown
until ip netns exec $SERVER_NS ss -SlH | grep -q "$IFACE"; do
[ $((CNT++)) -eq 30 ] && { RET=3; return $RET; }
sleep 0.1
done
}
stop_server() {
local CNT=0
ip netns exec $SERVER_NS pkill sctp_hello
while ip netns exec $SERVER_NS ss -SaH | grep -q .; do
[ $((CNT++)) -eq 30 ] && break
sleep 0.1
done
}
wait_client() {
local CLIENT_NS=$1
local CNT=0
while ip netns exec $CLIENT_NS ss -SaH | grep -q .; do
[ $((CNT++)) -eq 30 ] && break
sleep 0.1
done
}
@ -81,14 +104,12 @@ do_test() {
local CLIENT_NS=$1
local IFACE=$2
ip netns exec $SERVER_NS pkill sctp_hello 2>&1 >/dev/null
ip netns exec $SERVER_NS ./sctp_hello server $AF $SERVER_IP \
$SERVER_PORT $IFACE 2>&1 >/dev/null &
disown
wait_server $IFACE || return $RET
start_server $IFACE || return $RET
timeout 3 ip netns exec $CLIENT_NS ./sctp_hello client $AF \
$SERVER_IP $SERVER_PORT $CLIENT_IP $CLIENT_PORT 2>&1 >/dev/null
$SERVER_IP $SERVER_PORT $CLIENT_IP $CLIENT_PORT
RET=$?
wait_client $CLIENT_NS
stop_server
return $RET
}
@ -96,25 +117,21 @@ do_testx() {
local IFACE1=$1
local IFACE2=$2
ip netns exec $SERVER_NS pkill sctp_hello 2>&1 >/dev/null
ip netns exec $SERVER_NS ./sctp_hello server $AF $SERVER_IP \
$SERVER_PORT $IFACE1 2>&1 >/dev/null &
disown
wait_server $IFACE1 || return $RET
ip netns exec $SERVER_NS ./sctp_hello server $AF $SERVER_IP \
$SERVER_PORT $IFACE2 2>&1 >/dev/null &
disown
wait_server $IFACE2 || return $RET
start_server $IFACE1 || return $RET
start_server $IFACE2 || return $RET
timeout 3 ip netns exec $CLIENT_NS1 ./sctp_hello client $AF \
$SERVER_IP $SERVER_PORT $CLIENT_IP $CLIENT_PORT 2>&1 >/dev/null && \
$SERVER_IP $SERVER_PORT $CLIENT_IP $CLIENT_PORT && \
timeout 3 ip netns exec $CLIENT_NS2 ./sctp_hello client $AF \
$SERVER_IP $SERVER_PORT $CLIENT_IP $CLIENT_PORT 2>&1 >/dev/null
$SERVER_IP $SERVER_PORT $CLIENT_IP $CLIENT_PORT
RET=$?
wait_client $CLIENT_NS1
wait_client $CLIENT_NS2
stop_server
return $RET
}
testup() {
ip netns exec $SERVER_NS sysctl -w net.sctp.l3mdev_accept=1 2>&1 >/dev/null
ip netns exec $SERVER_NS sysctl -wq net.sctp.l3mdev_accept=1
echo -n "TEST 01: nobind, connect from client 1, l3mdev_accept=1, Y "
do_test $CLIENT_NS1 || { echo "[FAIL]"; return $RET; }
echo "[PASS]"
@ -123,7 +140,7 @@ testup() {
do_test $CLIENT_NS2 && { echo "[FAIL]"; return $RET; }
echo "[PASS]"
ip netns exec $SERVER_NS sysctl -w net.sctp.l3mdev_accept=0 2>&1 >/dev/null
ip netns exec $SERVER_NS sysctl -wq net.sctp.l3mdev_accept=0
echo -n "TEST 03: nobind, connect from client 1, l3mdev_accept=0, N "
do_test $CLIENT_NS1 && { echo "[FAIL]"; return $RET; }
echo "[PASS]"
@ -160,7 +177,7 @@ testup() {
do_testx vrf-1 vrf-2 || { echo "[FAIL]"; return $RET; }
echo "[PASS]"
echo -n "TEST 12: bind vrf-2 & 1 in server, connect from client 1 & 2, N "
echo -n "TEST 12: bind vrf-2 & 1 in server, connect from client 1 & 2, Y "
do_testx vrf-2 vrf-1 || { echo "[FAIL]"; return $RET; }
echo "[PASS]"
}