Deterministic applications benefit from multithreading and from the priority-based scheduling model of the real-time operating system (RTOS). When creating deterministic applications, divide and separate application tasks to create multithreaded applications that receive dedicated processor resources based on the priorities set by the RTOS. You also can use the Analysis Workspace on Phar Lap ETS targets to reduce jitter in Mathematics and Signal Processing VIs.
Single-CPU computers achieve multitasking by running one application for a short amount of time and then running other applications for short amounts of time. As long as the amount of processor time allocated for each application is small enough, computers appear to have multiple applications running simultaneously. Systems with multiple CPUs, on the other hand, can achieve true parallelism.
Multithreading applies the concept of multitasking to a single application by breaking it into smaller tasks that execute for short amounts of time in different execution system threads. A thread is a completely independent flow of execution for an application within the execution system. Multithreaded applications maximize the efficiency of the processor because the processor does not sit idle if there are other threads ready to run. An application that reads from and writes to a file, performs I/O, or polls the user interface for activity can benefit from multithreading because it can use the processor to run other tasks during breaks in these activities.
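The benefit described above can be sketched in a short simulation. The example below is illustrative only (it uses Python threads, not LabVIEW execution system threads): one thread blocks on a simulated I/O wait while a second thread uses the processor for computation, so the total elapsed time is close to the I/O wait alone rather than the sum of both tasks.

```python
import threading
import time

def slow_io():
    # Simulate a blocking file or device read with a sleep.
    time.sleep(0.2)

def compute(results):
    # CPU-side work that can proceed while the I/O thread waits.
    results.append(sum(i * i for i in range(100_000)))

start = time.perf_counter()
results = []
io_thread = threading.Thread(target=slow_io)
work_thread = threading.Thread(target=compute, args=(results,))
io_thread.start()
work_thread.start()
io_thread.join()
work_thread.join()
elapsed = time.perf_counter() - start
# Because the two tasks overlap, elapsed time is close to the 0.2 s
# I/O wait, not the wait plus the computation time.
print(f"elapsed: {elapsed:.2f} s")
```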
The RTOS on RT targets uses a combination of round robin and preemptive scheduling to execute threads in the execution system.
Round robin scheduling—Applies to threads of equal priority. Equal shares of processor time are allocated among equal priority threads. For example, each normal priority thread is allotted 10 ms to run. The processor executes all the tasks it can in 10 ms and any remaining tasks at the end of that period must wait to complete during the next allocation of time.
Preemptive scheduling—Higher priority threads execute immediately while lower priority threads pause execution. A time-critical priority thread is the highest priority and preempts threads of all other priorities.
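The two policies above can be combined in a toy scheduler model. The following sketch is hypothetical (the task names, priorities, and 10 ms quantum are invented for illustration and do not represent LabVIEW internals): the highest priority level always runs first, and tasks within one level share time slices round robin.

```python
from collections import deque

def schedule(tasks, quantum=10):
    """Toy scheduler: preemptive across priorities, round robin within one.
    tasks: list of (name, priority, remaining_ms); higher number = higher
    priority. Returns task names in completion order."""
    # Group runnable tasks into one round-robin queue per priority level.
    queues = {}
    for name, priority, remaining in tasks:
        queues.setdefault(priority, deque()).append([name, remaining])
    finished = []
    while queues:
        top = max(queues)                  # preemptive: highest level first
        queue = queues[top]
        task = queue.popleft()             # round robin within the level
        task[1] -= quantum                 # run for one time slice
        if task[1] <= 0:
            finished.append(task[0])
        else:
            queue.append(task)             # back of the line at this level
        if not queue:
            del queues[top]
    return finished

order = schedule([("logger", 1, 20), ("control", 2, 30), ("ui", 1, 10)])
print(order)  # → ['control', 'ui', 'logger']
```

The high-priority "control" task monopolizes the processor until it completes; only then do the two equal-priority tasks alternate time slices.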
LabVIEW executes timed structure threads below time-critical priority and above high priority. You can specify the priority level of Timed Loops relative to other timed structures within a VI by setting the priority of the Timed Loop. Use the Configure Timed Loop Dialog Box to configure a timing source, period, priority, and other advanced options for the execution of the Timed Loop.
Deterministic applications depend on deterministic tasks to complete on time, every time. Therefore, deterministic tasks need dedicated processor resources to ensure timely completion. Dividing tasks helps to ensure that each task receives the processor resources it needs to execute on time.
Separate deterministic tasks from all other tasks to ensure deterministic tasks receive enough processor resources. For example, if a control application acquires data at regular intervals and stores the data on disk, you must handle the timing and control of the data acquisition deterministically. However, storing the data on disk is inherently a non-deterministic task because file I/O operations have unpredictable response times that depend on the hardware and the availability of the hardware resource. You can use Timed Loops or VIs with different priorities to control the execution and timing of deterministic tasks.
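One common way to separate the two kinds of task, sketched below with Python threads rather than Timed Loops, is a producer/consumer hand-off: the deterministic loop only enqueues each sample, and a separate lower-priority consumer performs the unpredictable file I/O off the critical path. The names and the list stand-in for a disk write are assumptions for illustration.

```python
import queue
import threading

data_queue = queue.Queue()
SENTINEL = None  # signals the consumer that acquisition is done

def control_loop():
    # Deterministic side: acquire and enqueue; never touches the disk.
    for sample in range(5):
        data_queue.put(sample)    # fast, non-blocking hand-off
    data_queue.put(SENTINEL)

def logger(out):
    # Non-deterministic side: file I/O would happen here, off the
    # critical path, so its unpredictable latency cannot add jitter
    # to the control loop.
    while True:
        item = data_queue.get()
        if item is SENTINEL:
            break
        out.append(item)          # stand-in for a disk write

stored = []
producer = threading.Thread(target=control_loop)
consumer = threading.Thread(target=logger, args=(stored,))
producer.start()
consumer.start()
producer.join()
consumer.join()
print(stored)  # → [0, 1, 2, 3, 4]
```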
|Note Within deterministic tasks, ensure that each operation receives dedicated processor resources by avoiding unnecessary parallelism. In a multiple CPU system, avoid creating more parallel operations than the number of available CPUs. Because it is impossible to determine the execution order of parallel operations, unnecessary parallelism can impede determinism.|
Many of the analysis VIs on the Mathematics and Signal Processing palettes require resource allocations for internal calculations. Because requesting resources from the operating system can introduce jitter, analysis VIs require a preallocated analysis workspace to minimize jitter. Use the Real-Time Analysis Utilities VIs to handle the resource requirements of Mathematics and Signal Processing VIs.
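The principle behind the analysis workspace, allocating once before the time-critical loop so no allocation request reaches the operating system inside it, can be shown in any language. The buffer size, helper function, and calculation below are invented for illustration; only the preallocate-then-reuse pattern reflects the text.

```python
import array

N = 1024
# Preallocate the workspace once, before entering the time-critical
# loop, so no memory request reaches the OS inside the loop.
workspace = array.array("d", [0.0] * N)

def scale_in_place(samples, ws):
    # Illustrative calculation that reuses the preallocated buffer
    # instead of building a new list on every call.
    for i, x in enumerate(samples):
        ws[i] = x * 0.5
    return sum(ws[: len(samples)])

# Time-critical loop: every iteration reuses the same buffer, so the
# per-iteration cost stays constant.
total = 0.0
for step in range(3):
    samples = [float(step)] * 4
    total += scale_in_place(samples, workspace)
print(total)  # → 6.0
```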
|Note The Real-Time Analysis Utilities VIs reduce jitter only when used on RT targets running the Phar Lap ETS real-time operating system. Running the Real-Time Analysis Utilities VIs on NI Linux Real-Time and VxWorks targets does not reduce execution jitter.|
To correctly allocate resources for internal calculations, use the Real-Time Analysis Utilities VIs in the following order:
|Caution You must place the Enable Analysis Workspace and Disable Analysis Workspace VIs in the same time-critical thread as the Mathematics and Signal Processing VIs that use the analysis workspace. Between the Enable Analysis Workspace and Disable Analysis Workspace VIs, do not include any functions that involve wait or timeout intervals. You can use only one analysis workspace in an RT application, and you cannot use the analysis workspace to execute multiple Mathematics and Signal Processing VIs in parallel, even on systems with multiple CPUs. Failure to follow these guidelines can cause race conditions, priority inversions, and other undefined behaviors.|
The following block diagram shows a typical usage of the analysis workspace:
As shown in the previous example, the analysis workspace benefits Mathematics and Signal Processing VIs that request memory for internal calculations. Some Mathematics and Signal Processing VIs do not benefit from the analysis workspace. The analysis workspace reduces execution jitter only for the following VIs: