Setting the CPU affinity is optional. When you select Use Custom CPU Affinity as the CPU Affinity option for threads or executions with Sequence Call Adapter steps, you can specify the CPU affinity mask for a new thread or new execution. Because 32-bit Microsoft Windows uses a 32-bit CPU affinity mask, and 64-bit Windows uses a 64-bit mask, the mask behaves as a pointer-sized integer. 32-bit TestStand expects the affinity mask to be a TestStand Number data type with the default (double) representation. 64-bit TestStand expects the affinity mask to be a Number data type with the 64-bit unsigned integer representation.
If a sequence needs to support only one architecture, use the representation that matches that architecture. If the sequence must support both the 32-bit and the 64-bit architecture, use bitness-conditional code in the mask expression. For example, to enable all CPUs, use the following expression:
RunState.Engine.Is64Bit? -1ui64 : -1
where -1 sets all bits in the mask and therefore uses all CPUs.
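To make the bit layout concrete, the following Python sketch (an illustration only, not TestStand expression code; the helper name is hypothetical) shows how bit n of an affinity mask corresponds to logical processor n, and why -1 in two's complement selects every CPU at either mask width:

```python
def affinity_mask(cpus, width):
    """Build an affinity mask selecting the given CPU indices,
    truncated to the mask width in bits (32 or 64)."""
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu          # bit n corresponds to logical processor n
    return mask & ((1 << width) - 1)

# Select CPUs 0 through 3 on either architecture:
print(hex(affinity_mask(range(4), 64)))    # 0xf

# -1 in two's complement has every bit set, so it selects all CPUs:
print(hex(-1 & ((1 << 32) - 1)))   # 0xffffffff          (32-bit mask)
print(hex(-1 & ((1 << 64) - 1)))   # 0xffffffffffffffff  (64-bit mask)
```

This is why a single -1 literal works for both architectures in the conditional expression above: the same all-bits-set pattern results regardless of the mask width.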
Only the Use Custom CPU Affinity option requires specifying the mask. The Use Station Option for CPU Affinity, Use CPU Affinity of Caller, and Use All CPUs options do not require or allow specification of the affinity mask and therefore do not require bitness-conditional code.
In TestStand 2014 or later, the Default CPU Affinity for Threads option on the Preferences tab of the Station Options dialog box is a 64-bit unsigned integer for both 32-bit and 64-bit TestStand. However, because affinity masks in 32-bit Windows are limited to 32 bits, 32-bit TestStand cannot specify affinity for processors beyond the first 32 on systems with more than 32 processors.
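The following Python sketch (an illustration only, not TestStand code) shows the effect of this limitation: any bit above position 31 falls outside what a 32-bit mask can represent, so the corresponding processor cannot be selected:

```python
# Select processors 40 and 2 in a 64-bit affinity mask:
mask_64 = (1 << 40) | (1 << 2)

# A 32-bit mask can only hold the low 32 bits:
mask_32 = mask_64 & 0xFFFFFFFF

print(hex(mask_64))   # 0x10000000004 - both processors selected
print(hex(mask_32))   # 0x4 - processor 40's bit is lost; only processor 2 remains
```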
When you use the TestStand Engine API to set the default CPU affinity mask, use the StationOptions.DefaultCPUAffinityForThreadsEx property for both 32-bit and 64-bit applications instead of the obsolete StationOptions.DefaultCPUAffinityForThreads property.