[Intel-wired-lan] [PATCH net-next v6 4/4] ice: Add txbalancing devlink param
kernel test robot
lkp at intel.com
Wed Jul 20 17:17:16 UTC 2022
Hi Michal,
Thank you for the patch! Perhaps something to improve:
[auto build test WARNING on net-next/master]
url: https://github.com/intel-lab-lkp/linux/commits/Michal-Wilczynski/ice-Support-5-layer-tx-scheduler-topology/20220720-224322
base: https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git 5fb859f79f4f49d9df16bac2b3a84a6fa3aaccf1
config: x86_64-randconfig-a002 (https://download.01.org/0day-ci/archive/20220721/202207210108.7ZpVcgDQ-lkp@intel.com/config)
compiler: gcc-11 (Debian 11.3.0-3) 11.3.0
reproduce (this is a W=1 build):
# https://github.com/intel-lab-lkp/linux/commit/15b804e74b266402a1af3d04b1b3106d06670c23
git remote add linux-review https://github.com/intel-lab-lkp/linux
git fetch --no-tags linux-review Michal-Wilczynski/ice-Support-5-layer-tx-scheduler-topology/20220720-224322
git checkout 15b804e74b266402a1af3d04b1b3106d06670c23
# save the config file
mkdir build_dir && cp config build_dir/.config
make W=1 O=build_dir ARCH=x86_64 SHELL=/bin/bash
If you fix the issue, kindly add the following tag where applicable
Reported-by: kernel test robot <lkp at intel.com>
All warnings (new ones prefixed by >>):
>> drivers/net/ethernet/intel/ice/ice_devlink.c:389:5: warning: no previous prototype for 'ice_get_tx_topo_user_sel' [-Wmissing-prototypes]
389 | int ice_get_tx_topo_user_sel(struct ice_pf *pf, bool *txbalance_ena)
| ^~~~~~~~~~~~~~~~~~~~~~~~
>> drivers/net/ethernet/intel/ice/ice_devlink.c:421:1: warning: no previous prototype for 'ice_update_tx_topo_user_sel' [-Wmissing-prototypes]
421 | ice_update_tx_topo_user_sel(struct ice_pf *pf, bool txbalance_ena)
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~
vim +/ice_get_tx_topo_user_sel +389 drivers/net/ethernet/intel/ice/ice_devlink.c
379
380 /**
381 * ice_get_tx_topo_user_sel - Read user's choice from flash
382 * @pf: pointer to pf structure
383 * @txbalance_ena: value read from flash will be saved here
384 *
385 * Reads the user's preference for the Tx scheduler topology tree from
386 * the PFA TLV.
387 *
388 * Returns zero when the read was successful, a negative error code otherwise.
388 */
> 389 int ice_get_tx_topo_user_sel(struct ice_pf *pf, bool *txbalance_ena)
390 {
391 struct ice_aqc_nvm_tx_topo_user_sel usr_sel = {};
392 struct ice_hw *hw = &pf->hw;
393 int status;
394
395 status = ice_acquire_nvm(hw, ICE_RES_READ);
396 if (status)
397 return status;
398
399 status = ice_aq_read_nvm(hw, ICE_AQC_NVM_TX_TOPO_MOD_ID, 0,
400 sizeof(usr_sel), &usr_sel, true, true, NULL);
401 ice_release_nvm(hw);
402
403 *txbalance_ena = usr_sel.data & ICE_AQC_NVM_TX_TOPO_USER_SEL;
404
405 return status;
406 }
407
408 /**
409 * ice_update_tx_topo_user_sel - Save user's preference in flash
410 * @pf: pointer to pf structure
411 * @txbalance_ena: value to be saved in flash
412 *
413 * When txbalance_ena is true, the user prefers the five-layer Tx
414 * scheduler topology tree; when false, the nine-layer one. This choice
415 * is stored in the PFA TLV field and picked up by the driver on the
416 * next init.
417 *
418 * Returns zero when the save was successful, a negative error code otherwise.
419 */
420 int
> 421 ice_update_tx_topo_user_sel(struct ice_pf *pf, bool txbalance_ena)
422 {
423 struct ice_aqc_nvm_tx_topo_user_sel usr_sel = {};
424 struct ice_hw *hw = &pf->hw;
425 int status;
426
427 status = ice_acquire_nvm(hw, ICE_RES_WRITE);
428 if (status)
429 return status;
430
431 status = ice_aq_read_nvm(hw, ICE_AQC_NVM_TX_TOPO_MOD_ID, 0,
432 sizeof(usr_sel), &usr_sel, true, true, NULL);
433 if (status)
434 goto exit_release_res;
435
436 if (txbalance_ena)
437 usr_sel.data |= ICE_AQC_NVM_TX_TOPO_USER_SEL;
438 else
439 usr_sel.data &= ~ICE_AQC_NVM_TX_TOPO_USER_SEL;
440
441 status = ice_write_one_nvm_block(pf, ICE_AQC_NVM_TX_TOPO_MOD_ID, 2,
442 sizeof(usr_sel.data), &usr_sel.data,
443 true, NULL, NULL);
444
445 exit_release_res:
446 ice_release_nvm(hw);
447
448 return status;
449 }
450
--
0-DAY CI Kernel Test Service
https://01.org/lkp