This topic contains guidelines for using shared variables effectively in LabVIEW RT applications.
To minimize CPU usage, avoid using too many network-published shared variables. For high-channel-count applications, consider combining the channels into a single array and using a single network-published shared variable to transfer the array.
LabVIEW provides several options for transferring data between tasks running in separate loops. However, most of these options can impact determinism. Therefore, the LabVIEW Real-Time Module provides two RT FIFO options for communication between a deterministic loop and a non-deterministic loop.
For small applications, single-process shared variables with the RT FIFO enabled are a good choice for inter-task communication. However, to create a scalable architecture for large applications, use the RT FIFO functions instead.
| Application Size | Inter-Task Communication Method |
| --- | --- |
| Small | Shared variables with the RT FIFO enabled |
| Large | RT FIFO functions |
When you need a scalable inter-task communication architecture for a large application, use the RT FIFO Create function to create RT FIFOs programmatically. For example, you can use the RT FIFO Create function inside a For Loop to create as many RT FIFOs as you need. You then can use the RT FIFO Read and RT FIFO Write functions inside For Loops to read from and write to all your RT FIFOs consecutively. Using this technique, you can scale your application up to use as many RT FIFOs as you need while keeping the size of your block diagram manageable.
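The create-in-a-loop pattern above can be sketched in text form. This is a conceptual Python analogy, not LabVIEW code: a list of bounded queues stands in for an array of RT FIFOs, and plain `for` loops stand in for the For Loops around RT FIFO Create, Write, and Read.

```python
# Conceptual sketch (Python, not LabVIEW): model the RT FIFO Create /
# Write / Read pattern with a list of bounded queues. The FIFO count
# and depth are illustrative.
from queue import Queue

NUM_FIFOS = 4  # create as many FIFOs as you need, as in a For Loop

# "RT FIFO Create" inside a For Loop
fifos = [Queue(maxsize=16) for _ in range(NUM_FIFOS)]

# "RT FIFO Write" inside a For Loop: write to each FIFO consecutively
for i, fifo in enumerate(fifos):
    fifo.put(i * 10)

# "RT FIFO Read" inside a For Loop: read from each FIFO consecutively
values = [fifo.get() for fifo in fifos]
print(values)  # [0, 10, 20, 30]
```

Because the number of FIFOs is a single constant, the same block diagram scales from four FIFOs to hundreds without growing.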
|Tip Use the polling mode of the RT FIFO Create function to optimize the throughput performance of read and write operations. Use the blocking mode to minimize CPU overhead. The polling mode continually polls the FIFO for new data or an open slot, so it responds more quickly than the blocking mode to new data or new empty slots but requires more CPU overhead. In a multiple-CPU system, consider using polling mode and assigning the polling RT FIFO function to a dedicated CPU.|
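The polling-versus-blocking trade-off can be illustrated with a short sketch. This is conceptual Python, not LabVIEW: a `Queue` stands in for an RT FIFO, and the two reader functions stand in for the two RT FIFO modes.

```python
# Conceptual sketch (Python, not LabVIEW): polling vs. blocking reads
# of a FIFO. The delay constant is illustrative.
import threading
from queue import Queue, Empty

fifo = Queue(maxsize=1)

def polling_read(q):
    """Poll continually: fastest response to new data, but burns CPU
    while spinning. On a multi-CPU system, pin this to a dedicated core."""
    while True:
        try:
            return q.get_nowait()
        except Empty:
            pass  # spin and try again immediately

def blocking_read(q):
    """Block until data arrives: the thread sleeps, so CPU overhead is
    minimal, but waking up adds latency."""
    return q.get()

# A writer delivers data 50 ms from now; the polling reader spins until then.
threading.Timer(0.05, fifo.put, args=(42,)).start()
result = polling_read(fifo)
print(result)  # 42
```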
Buffering consumes CPU and memory resources. Place a checkmark in the Use Buffering checkbox on the Network page of the Shared Variable Properties dialog box only when you need to read every value written to a network-published shared variable. Although buffering can help prevent data loss caused by temporary networking delays, buffering does not guarantee lossless data transfer. Buffers can overflow, causing LabVIEW to overwrite old data and generate a warning.
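The overflow behavior described above can be sketched as follows. This is a conceptual Python analogy, not LabVIEW code: a bounded `deque` stands in for the network buffer, and its silent discard of the oldest element mirrors how an overflowing buffer overwrites old data.

```python
# Conceptual sketch (Python, not LabVIEW): a bounded buffer that drops
# the oldest value on overflow, as a shared variable buffer does when
# the writer outruns the reader. The buffer size is illustrative.
from collections import deque

buffer = deque(maxlen=3)  # like Use Buffering with a buffer size of 3

for value in range(6):    # the writer produces 6 values before any read
    buffer.append(value)  # once full, each append discards the oldest value

print(list(buffer))  # [3, 4, 5] -- values 0, 1, and 2 were lost
```

This is why buffering mitigates, but does not eliminate, data loss: size the buffer for the worst expected networking delay, and treat overflow warnings as a sign the reader is falling behind.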
Improve performance and prevent race conditions by executing shared variable accessors serially. Use the error in and error out terminals of the Shared Variable node or the Shared Variable programmatic access functions to serialize execution. By explicitly serializing execution, you can minimize scheduling overhead and thus improve performance.
For example, the following block diagram reads from three shared variables in parallel.
The following block diagram uses the error in and error out terminals of the Shared Variable nodes to serialize execution.
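The dataflow dependency created by the error wires can be sketched in text form. This is a conceptual Python analogy, not LabVIEW code: threading an error value through each accessor forces the calls to execute one after another, just as wiring error out of one Shared Variable node into error in of the next does.

```python
# Conceptual sketch (Python, not LabVIEW): an error value threaded
# through each read forces strictly serial execution. Variable names
# are illustrative.
order = []  # records the order in which the reads execute

def read_var(name, error):
    """Read a variable; if an upstream error exists, skip the read and
    pass the error through, like a Shared Variable node's error terminals."""
    if error is not None:
        return None, error
    order.append(name)
    return f"value of {name}", None

err = None
a, err = read_var("A", err)  # each call consumes the previous call's
b, err = read_var("B", err)  # error output, so no two reads can be
c, err = read_var("C", err)  # scheduled in parallel
print(order)  # ['A', 'B', 'C']
```

Without the threaded error value, the three calls would have no data dependency and could run in any order, reintroducing the scheduling overhead and race conditions described above.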
To prevent reading the same value repeatedly in a loop, use the ms timeout input of a Shared Variable node or the Read Variable with Timeout function. To add a ms timeout input to the Shared Variable node, right-click the Shared Variable node and select Show Timeout from the shortcut menu. You can enable a timeout period only for Shared Variable nodes configured to read data. You cannot enable a timeout period for nodes that access I/O variables locally.
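A read with a timeout distinguishes fresh data from the absence of new data, instead of returning the last value again and again in a tight loop. The following is a conceptual Python sketch, not LabVIEW code: `read_with_timeout` is a hypothetical stand-in for a Shared Variable node with its ms timeout input wired, and the timeout value is illustrative.

```python
# Conceptual sketch (Python, not LabVIEW): a timed read that reports
# whether it timed out, like a Shared Variable node with ms timeout wired.
from queue import Queue, Empty

fifo = Queue()

def read_with_timeout(q, timeout_s):
    """Return (value, timed_out). timed_out is True when no new value
    arrived within the timeout period."""
    try:
        return q.get(timeout=timeout_s), False
    except Empty:
        return None, True

fifo.put(3.14)
first = read_with_timeout(fifo, 0.01)
second = read_with_timeout(fifo, 0.01)
print(first)   # (3.14, False) -- a fresh value arrived
print(second)  # (None, True)  -- no new data; the loop can skip this iteration
```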
To prevent reading stale data from a previous run of your application, undeploy and redeploy shared variable libraries to clear all shared variable values. Use the Shared Variable Deployment page of the Application Properties dialog box to configure a stand-alone RT application to deploy shared variables before running the application and undeploy shared variable libraries when the application exits.
To safeguard your application against stale shared variable data, initialize all your shared variables before running the ongoing task loops of your application. To initialize an unbuffered network-published shared variable hosted on another computer, write the default value to the shared variable, then wait for the initialized value to propagate through the network to the Shared Variable Engine running on the host computer and back to the RT target. The following block diagram shows an example of unbuffered network-published shared variable initialization.
|Note To initialize buffered network-published shared variables, undeploy and then redeploy the shared variable library.|
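The initialization handshake for an unbuffered variable can be sketched in text form. This is a conceptual Python analogy, not LabVIEW code: `write_variable` and `read_variable` are hypothetical stand-ins for Shared Variable node writes and reads, and a dictionary stands in for the Shared Variable Engine on the host computer.

```python
# Conceptual sketch (Python, not LabVIEW): write a default value, then
# poll until it propagates back before starting the task loops. The
# timeout and poll interval are illustrative.
import time

_engine = {}  # stands in for the Shared Variable Engine on the host PC

def write_variable(name, value):
    _engine[name] = value

def read_variable(name):
    return _engine.get(name)

def initialize(name, default, timeout_s=5.0):
    """Write the default, then wait for it to propagate back to us."""
    write_variable(name, default)
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if read_variable(name) == default:
            return True   # initialized value observed; safe to start loops
        time.sleep(0.01)  # poll periodically instead of spinning
    return False          # propagation did not complete within the timeout

ok = initialize("setpoint", 0.0)
print(ok)  # True
```

On a real network the read-back is not immediate, which is why the sketch polls with a timeout rather than assuming the first read returns the initialized value.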
In some cases, hosting shared variables on the RT target makes sense. In other cases, it is more appropriate to host shared variables on a host PC. Before finalizing your application, ensure that you are hosting shared variables on the most appropriate device. The following table summarizes the advantages and disadvantages of hosting shared variables on an RT target.
| Advantages | Disadvantages |
| --- | --- |
| Multiple PCs can access shared variables hosted by a single RT target | Adds CPU overhead on the RT target |
| High uptime due to the stability of the RT target | Adds memory overhead on the RT target |