|LabVIEW 2015 Real-Time Module Help|
This topic contains guidelines for using shared variables effectively in LabVIEW Real-Time Module applications.
To minimize CPU usage, avoid using too many network-published shared variables. For high-channel-count applications, consider combining the channels into a single array and using a single network-published shared variable to transfer the array.
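The channel-combining idea above can be sketched in Python (LabVIEW block diagrams are graphical, so this is only a conceptual analogue; `publish` and `read_channel` are hypothetical stand-ins for a shared-variable write and a channel acquisition):

```python
# Conceptual sketch, not LabVIEW code: transferring 100 channels as one
# array through a single network-published shared variable instead of
# 100 separate variables.

NUM_CHANNELS = 100

def read_channel(i):
    # Hypothetical placeholder for acquiring one channel's sample.
    return float(i) * 0.5

def publish(name, value):
    # Hypothetical stand-in for one network-published shared-variable
    # write; one call per update instead of one per channel.
    return (name, value)

# One write transfers all channels at once.
samples = [read_channel(i) for i in range(NUM_CHANNELS)]
name, payload = publish("AllChannels", samples)
```

The single array write incurs the publishing overhead once per update rather than once per channel.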
LabVIEW provides several options for transferring data between tasks running in separate loops. However, most of these options can impact determinism. Therefore, the LabVIEW Real-Time Module provides shared variables with the RT FIFO enabled for communication between a deterministic loop and a non-deterministic loop.
Refer to the RT FIFO Variables - local.lvproj project in the labview\examples\Real-Time Module\RT Communication\RT FIFO Variables directory for examples of using shared variables with the RT FIFO enabled.
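The behavior of an RT FIFO between a deterministic writer loop and a non-deterministic reader loop can be approximated in Python (a rough sketch only; a real RT FIFO is a preallocated, lock-free structure, which a `deque` does not replicate):

```python
from collections import deque

# Sketch of a fixed-size FIFO between two loops: writes never block,
# and once the buffer is full the oldest element is dropped, so the
# time-critical writer loop stays deterministic.

FIFO_SIZE = 4
fifo = deque(maxlen=FIFO_SIZE)

# Deterministic loop: write-only, never blocks.
for sample in range(6):
    fifo.append(sample)   # oldest samples (0 and 1) are overwritten

# Non-deterministic loop: drain whatever is currently available.
received = list(fifo)
```

Note that the deterministic side never waits on the reader; data loss, not blocking, is the failure mode.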
Buffering consumes CPU and memory resources. Place a checkmark in the Use Buffering checkbox on the Network page of the Shared Variable Properties dialog box only when you need to read every value written to a network-published shared variable. Although buffering can help prevent data loss caused by temporary networking delays, buffering does not guarantee lossless data transfer. Buffers can overflow, causing LabVIEW to overwrite old data and generate a warning.
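The overflow behavior described above can be illustrated with a small Python sketch (`buffered_write` is a hypothetical helper, not a LabVIEW API; it mimics a bounded buffer that overwrites old data and flags a warning):

```python
from collections import deque

# Sketch of a bounded network buffer: buffering absorbs temporary
# delays, but when the reader falls behind, the oldest value is
# overwritten and a warning is reported instead of blocking the writer.

BUFFER_SIZE = 3

def buffered_write(buf, value):
    overflowed = len(buf) == buf.maxlen
    buf.append(value)      # deque drops the oldest value when full
    return overflowed      # True signals "old data was overwritten"

buf = deque(maxlen=BUFFER_SIZE)
warnings = [buffered_write(buf, v) for v in range(5)]
```

Writes four and five overflow the three-element buffer, so the first two values are lost and warnings are generated, mirroring the lossy overflow the help text warns about.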
Improve performance and prevent race conditions by executing shared variable accessors serially. Use the error in and error out terminals of the Shared Variable node or the Shared Variable programmatic access functions to serialize execution. By explicitly serializing execution, you can minimize scheduling overhead and thus improve performance.
For example, the following block diagram reads from three shared variables in parallel:
The following block diagram uses the error in and error out terminals of the Shared Variable nodes to serialize execution:
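Since the block diagrams themselves are graphical, the error-chaining pattern can be sketched in text as follows (a conceptual Python analogue; `read_variable` is a hypothetical stand-in for a Shared Variable node with error terminals):

```python
# Sketch of the error-wiring pattern: each call accepts the previous
# call's error and returns (value, error), so each read can start only
# after the previous one finishes, just as wiring error out to error in
# serializes Shared Variable nodes on the block diagram.

def read_variable(name, error_in):
    if error_in is not None:
        return None, error_in    # propagate the error without reading
    return len(name), None       # hypothetical stand-in for the read

# Serial chain: the error wire creates a data dependency between calls.
v1, err = read_variable("VarA", None)
v2, err = read_variable("VarB", err)
v3, err = read_variable("VarC", err)
```

The data dependency on `err` is what removes the parallel scheduling overhead; the values themselves are unchanged.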
To prevent reading the same value repeatedly in a loop, use the ms timeout input of a Shared Variable node or the Read Variable with Timeout function. To add a ms timeout input to the Shared Variable node, right-click the Shared Variable node and select Show Timeout from the shortcut menu. You can enable a timeout period only for Shared Variable nodes configured to read data. You cannot enable a timeout period for nodes that access I/O variables locally.
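A timed read of this kind can be sketched in Python using a blocking queue (a conceptual analogue only; `read_with_timeout` is hypothetical and approximates the ms timeout behavior of a Shared Variable read):

```python
import queue

# Sketch of a timed read: block until a new value arrives or the
# timeout elapses, and report a timed-out flag rather than silently
# returning the previous value again.

updates = queue.Queue()

def read_with_timeout(q, timeout_s):
    try:
        return q.get(timeout=timeout_s), False   # (value, timed_out)
    except queue.Empty:
        return None, True

updates.put(42)
value, timed_out = read_with_timeout(updates, 0.01)    # new value ready
_, timed_out2 = read_with_timeout(updates, 0.01)       # no new value
```

Checking the timed-out flag lets the loop distinguish "no new data yet" from a genuine update, which is the point of the ms timeout input.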
To prevent reading stale data from a previous run of your application, undeploy and redeploy shared variable libraries to clear all shared variable values. Use the Shared Variable Deployment page of the Application Properties dialog box to configure a stand-alone RT application to deploy shared variables before running the application and undeploy shared variable libraries when the application exits.
To safeguard your application against stale shared variable data, initialize all your shared variables before running the ongoing task loops of your application. To initialize an unbuffered network-published shared variable hosted on another computer, write the default value to the shared variable, then wait for the initialized value to propagate through the network to the Shared Variable Engine running on the host computer and back to the RT target. The following block diagram shows an example of unbuffered network-published shared variable initialization:
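The write-then-wait initialization above can be sketched in Python (a minimal analogue with a simulated Shared Variable Engine; `write_variable`, `read_variable`, and `initialize` are hypothetical names, and the real round trip crosses the network rather than a local dictionary):

```python
import time

# Sketch of initializing an unbuffered network-published shared
# variable: write the default value, then poll the read-back until the
# value has propagated through the (simulated) Shared Variable Engine
# before starting the application's ongoing task loops.

_engine = {}                       # stands in for the hosted value

def write_variable(name, value):
    _engine[name] = value

def read_variable(name):
    return _engine.get(name)

def initialize(name, default, timeout_s=1.0):
    write_variable(name, default)
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if read_variable(name) == default:
            return True            # initialized value came back
        time.sleep(0.01)
    return False                   # propagation did not complete

ok = initialize("Setpoint", 0.0)
```

The timeout guards against waiting forever if the host computer or network is unavailable.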
|Note To initialize buffered network-published shared variables, undeploy and then redeploy the shared variable library.|
In some cases, hosting shared variables on the RT target makes sense. In other cases, it is more appropriate to host shared variables on a host PC. Before finalizing your application, ensure that you are hosting shared variables on the most appropriate device. The following table summarizes the advantages and disadvantages of hosting shared variables on an RT target:
|Advantages|Disadvantages|
|Multiple PCs can access shared variables hosted by a single RT target|Adds CPU overhead on the RT target|
|High uptime due to the stability of the RT target|Adds memory overhead on the RT target|