[Bug Fix] Update Advanced rings pkt processing (#212)
Previously, advanced rings NFs would enqueue packets onto their TX ring and leave it to the TX thread to decide what to do next with each packet. This pull request changes the logic to transmit packets directly, which fixes NF stats reporting in advanced rings mode.

Commit log:

* Small fix in load_balancer

* Added the nf_setup call before run

* Merging

* Scaling example adv rings edit, documentation edits

* Update docs

* Formatting

* Check if tx_batch_size < 32 before flushing

* Fix typo

* Style fixes

Co-authored-by: dennisa <[email protected]>
Co-authored-by: Dennis Afanasev <[email protected]>
6 people authored May 8, 2020
1 parent fe5b253 commit e574c6f
Showing 2 changed files with 7 additions and 14 deletions.
3 changes: 1 addition & 2 deletions docs/NF_Dev.md
@@ -35,8 +35,7 @@ Here are some of the frequently used functions of this library (to see the full
- `int onvm_nflib_run(struct onvm_nf_info* info, void(*handler)(struct rte_mbuf* pkt, struct onvm_pkt_meta* meta))` is the communication protocol between the NF and the manager: the NF provides a pointer to a packet handler function, and the manager uses that function pointer to pass packets to the NF as it routes traffic. This function loops continuously, handing packets one by one to the NF as they arrive.

### Advanced Ring Manipulation
For advanced NFs, calling `onvm_nf_run` (as described above) is actually optional. There is a second mode where NFs can interface directly with the shared data structures. Be warned that using this interface means the NF is responsible for its own packets, and the NF Guest Library can make fewer guarantees about overall system performance. Advanced rings NFs are also responsible for managing their own cores: the NF can call `onvm_threading_core_affinitize(nf_info->core)`, where `nf_info->core` holds the core assigned by the manager. Additionally, the NF is responsible for maintaining its own statistics. An advanced NF can call `onvm_nflib_get_nf(uint16_t id)` to get a reference to its `struct onvm_nf`, which holds the `struct rte_ring *` for RX and TX, a stats structure for that NF, and the `struct onvm_nf_info`. Alternatively, the NF can call `onvm_nflib_get_rx_ring(struct onvm_nf_info *info)` or `onvm_nflib_get_tx_ring(struct onvm_nf_info *info)` to get the `struct rte_ring *` for RX and TX, respectively. Finally, note that using any of these functions precludes calling `onvm_nf_run`, and calling `onvm_nf_run` precludes calling any of these advanced functions (they will return `NULL`); the first interface you use is the one you get. To start receiving packets, the NF must first signal to the manager that it is ready by calling `onvm_nflib_nf_ready`.
Example use of Advanced Rings can be seen in the speed_tester NF or the scaling example NF.
For advanced NFs, calling `onvm_nf_run` (as described above) is actually optional. There is a second mode where NFs can interface directly with the shared data structures. Be warned that using this interface means the NF is responsible for its own packets, and the NF Guest Library can make fewer guarantees about overall system performance. Advanced rings NFs are also responsible for managing their own cores: the NF can call `onvm_threading_core_affinitize(nf_info->core)`, where `nf_info->core` holds the core assigned by the manager. An advanced NF can call `onvm_nflib_get_nf(uint16_t id)` to get a reference to its `struct onvm_nf`, which holds the `struct rte_ring *` for RX and TX, a stats structure for that NF, and the `struct onvm_nf_info`. Alternatively, the NF can call `onvm_nflib_get_rx_ring(struct onvm_nf_info *info)` or `onvm_nflib_get_tx_ring(struct onvm_nf_info *info)` to get the `struct rte_ring *` for RX and TX, respectively. Instead of enqueueing packets directly onto the TX ring, the NF should call `onvm_pkt_process_tx_batch(nf->nf_tx_mgr, pktsTX, tx_batch_size, nf);`, followed by `onvm_pkt_flush_all_nfs(nf->nf_tx_mgr, nf);` when the number of packets dequeued is less than the burst size, `PACKET_READ_SIZE`, to TX packets out of the NF. Here `nf->nf_tx_mgr` is the NF's queue manager, `pktsTX` is a `struct rte_mbuf **` array of packets to be transmitted, `tx_batch_size` is the number of packets in the array, and `nf` is the calling NF's `struct onvm_nf *`. Finally, note that using any of these functions precludes calling `onvm_nf_run`, and calling `onvm_nf_run` precludes calling any of these advanced functions (they will return `NULL`); the first interface you use is the one you get. To start receiving packets, the NF must first signal to the manager that it is ready by calling `onvm_nflib_nf_ready`. Example usage of Advanced Rings can be seen in the scaling_example NF.

### Multithreaded NFs, scaling
NFs can scale by running multiple threads. To launch more threads, the main NF must be started with more than one core. To start a new thread, the NF should call `onvm_nflib_scale(struct onvm_nf_scale_info *scale_info)`. The `struct onvm_nf_scale_info` holds all the information required to start a new child NF: service and instance IDs, NF state data, and the packet-handling functions. The struct can be obtained either by calling `onvm_nflib_get_empty_scaling_config(struct onvm_nf_info *parent_info)` and filling it in manually, or by inheriting the parent's behavior with `onvm_nflib_inherit_parent_config(struct onvm_nf_info *parent_info)`. Because the spawned NFs are threads, they share all global variables with their parent; the `onvm_nf_info->data` void pointer should be used for per-NF state data.
18 changes: 6 additions & 12 deletions examples/scaling_example/scaling.c
@@ -318,11 +318,10 @@ int
thread_main_loop(struct onvm_nf_local_ctx *nf_local_ctx) {
void *pkts[PKT_READ_SIZE];
struct onvm_pkt_meta *meta;
uint16_t i, j, nb_pkts;
void *pktsTX[PKT_READ_SIZE];
uint16_t i, nb_pkts;
struct rte_mbuf *pktsTX[PKT_READ_SIZE];
int tx_batch_size;
struct rte_ring *rx_ring;
struct rte_ring *tx_ring;
struct rte_ring *msg_q;
struct onvm_nf *nf;
struct onvm_nf_msg *msg;
@@ -335,7 +334,6 @@ thread_main_loop(struct onvm_nf_local_ctx *nf_local_ctx) {

/* Get rings from nflib */
rx_ring = nf->rx_q;
tx_ring = nf->tx_q;
msg_q = nf->msg_q;
nf_msg_pool = rte_mempool_lookup(_NF_MSG_POOL_NAME);

@@ -373,14 +371,10 @@ thread_main_loop(struct onvm_nf_local_ctx *nf_local_ctx) {
packet_handler_fwd((struct rte_mbuf *)pkts[i], meta, nf_local_ctx);
pktsTX[tx_batch_size++] = pkts[i];
}

if (unlikely(tx_batch_size > 0 && rte_ring_enqueue_bulk(tx_ring, pktsTX, tx_batch_size, NULL) == 0)) {
nf->stats.tx_drop += tx_batch_size;
for (j = 0; j < tx_batch_size; j++) {
rte_pktmbuf_free(pktsTX[j]);
}
} else {
nf->stats.tx += tx_batch_size;
/* Process all packet actions */
onvm_pkt_process_tx_batch(nf->nf_tx_mgr, pktsTX, tx_batch_size, nf);
if (tx_batch_size < PACKET_READ_SIZE) {
onvm_pkt_flush_all_nfs(nf->nf_tx_mgr, nf);
}
}
return 0;
