Xenomai  3.0-rc7
Scheduling management

Cobalt/POSIX scheduling management services. More...

Functions

int pthread_setschedparam (pthread_t thread, int policy, const struct sched_param *param)
 Set the scheduling policy and parameters of the specified thread. More...
 
int pthread_setschedparam_ex (pthread_t thread, int policy, const struct sched_param_ex *param_ex)
 Set extended scheduling policy of thread. More...
 
int pthread_getschedparam (pthread_t thread, int *__restrict__ policy, struct sched_param *__restrict__ param)
 Get the scheduling policy and parameters of the specified thread. More...
 
int pthread_getschedparam_ex (pthread_t thread, int *__restrict__ policy_r, struct sched_param_ex *__restrict__ param_ex)
 Get extended scheduling policy of thread. More...
 
int sched_yield (void)
 Yield the processor. More...
 
int sched_get_priority_min (int policy)
 Get minimum priority of the specified scheduling policy. More...
 
int sched_get_priority_min_ex (int policy)
 Get extended minimum priority of the specified scheduling policy. More...
 
int sched_get_priority_max (int policy)
 Get maximum priority of the specified scheduling policy. More...
 
int sched_get_priority_max_ex (int policy)
 Get extended maximum priority of the specified scheduling policy. More...
 
int pthread_yield (void)
 Yield the processor. More...
 
int sched_setconfig_np (int cpu, int policy, const union sched_config *config, size_t len)
 Set CPU-specific scheduler settings for a policy. More...
 
ssize_t sched_getconfig_np (int cpu, int policy, union sched_config *config, size_t *len_r)
 Retrieve CPU-specific scheduler settings for a policy. More...
 

Detailed Description

Cobalt/POSIX scheduling management services.

Function Documentation

int pthread_getschedparam ( pthread_t  thread,
int *__restrict__  policy,
struct sched_param *__restrict__  param 
)

Get the scheduling policy and parameters of the specified thread.

This service returns, at the addresses policy and param, the current scheduling policy and scheduling parameters (i.e. priority) of the Xenomai POSIX skin thread thread. If this service is called from user-space and thread is not the identifier of a Xenomai POSIX skin thread, this service falls back to the regular Linux pthread_getschedparam() service.

Parameters
thread    target thread;
policy    address where the scheduling policy of thread is stored on success;
param    address where the scheduling parameters of thread are stored on success.
Returns
0 on success;
an error number if:
  • ESRCH, thread is invalid.
See also
Specification.

References pthread_getschedparam_ex().

Referenced by pthread_getschedparam_ex().

int pthread_getschedparam_ex ( pthread_t  thread,
int *__restrict__  policy_r,
struct sched_param_ex *__restrict__  param_ex 
)

Get extended scheduling policy of thread.

This service is an extended version of the regular pthread_getschedparam() service, which also supports Xenomai-specific or additional POSIX scheduling policies, not available with the host Linux environment.

Parameters
thread    target thread;
policy_r    address where the scheduling policy of thread is stored on success;
param_ex    address where the scheduling parameters of thread are stored on success.
Returns
0 on success;
an error number if:
  • ESRCH, thread is invalid.
See also
Specification.

References pthread_getschedparam().

Referenced by pthread_getschedparam().
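
For illustration, a minimal sketch of querying the calling thread's current settings with this service; it assumes a Cobalt-enabled build whose <pthread.h> declares pthread_getschedparam_ex() and struct sched_param_ex:

#include <stdio.h>
#include <string.h>
#include <pthread.h>

static void show_my_sched_settings(void)
{
        struct sched_param_ex param_ex;
        int policy, ret;

        /* Query the extended scheduling settings of the calling thread. */
        ret = pthread_getschedparam_ex(pthread_self(), &policy, &param_ex);
        if (ret) {
                fprintf(stderr, "pthread_getschedparam_ex: %s\n", strerror(ret));
                return;
        }

        printf("policy=%d, priority=%d\n", policy, param_ex.sched_priority);
}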

int pthread_setschedparam ( pthread_t  thread,
int  policy,
const struct sched_param *  param 
)

Set the scheduling policy and parameters of the specified thread.

This service sets the scheduling policy of the Xenomai POSIX skin thread thread to the value policy, and its scheduling parameters (i.e. its priority) to the value pointed to by param.

When used in user-space, passing the current thread ID as the thread argument, this service turns the current thread into a Xenomai POSIX skin thread. If thread is neither the identifier of the current thread nor the identifier of a Xenomai POSIX skin thread, this service falls back to the regular pthread_setschedparam() service, thereby causing the current thread to switch to secondary mode if it is a Xenomai thread.

Parameters
thread    target thread;
policy    scheduling policy, one of SCHED_FIFO, SCHED_RR, SCHED_SPORADIC, SCHED_TP or SCHED_OTHER;
param    scheduling parameters address.
Returns
0 on success;
an error number if:
  • ESRCH, thread is invalid;
  • EINVAL, policy or param->sched_priority is invalid;
  • EAGAIN, in user-space, insufficient memory exists in the system heap, increase CONFIG_XENO_OPT_SYS_HEAPSZ;
  • EFAULT, in user-space, param is an invalid address;
  • EPERM, in user-space, the calling process does not have superuser permissions.
See also
Specification.
Note

When creating or shadowing a Xenomai thread for the first time in user-space, Xenomai installs a handler for the SIGSHADOW signal. If you had installed a handler before that, it will be automatically called by Xenomai for SIGSHADOW signals that it has not sent.

If, however, you install a signal handler for SIGSHADOW after creating or shadowing the first Xenomai thread, you have to explicitly call the function cobalt_sigshadow_handler at the beginning of your signal handler, using its return value to know whether the signal was in fact an internal signal of Xenomai (in which case it returns 1), or whether you should handle the signal (in which case it returns 0). The cobalt_sigshadow_handler prototype is:

int cobalt_sigshadow_handler(int sig, siginfo_t *si, void *ctxt);

This means that you should register your handler with sigaction(), using the SA_SIGINFO flag, and pass all the arguments you received to cobalt_sigshadow_handler.

References pthread_setschedparam_ex().
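
As a sketch of the user-space scenario described above, the calling thread can shadow itself into a Xenomai thread by applying a real-time policy to its own identifier; priority 80 is an arbitrary example value:

#include <pthread.h>

/* Turn the calling thread into a Xenomai (Cobalt) thread by applying
 * SCHED_FIFO to its own thread identifier. */
static int make_me_realtime(void)
{
        struct sched_param param = { .sched_priority = 80 };

        return pthread_setschedparam(pthread_self(), SCHED_FIFO, &param);
}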

int pthread_setschedparam_ex ( pthread_t  thread,
int  policy,
const struct sched_param_ex *  param_ex 
)

Set extended scheduling policy of thread.

This service is an extended version of the regular pthread_setschedparam() service, which supports Xenomai-specific or additional scheduling policies, not available with the host Linux environment.

This service sets the scheduling policy of the Xenomai thread thread to the value policy, and its scheduling parameters (e.g. its priority) to the value pointed to by param_ex.

If thread does not match the identifier of a Xenomai thread, this action falls back to the regular pthread_setschedparam() service.

Parameters
thread    target Cobalt thread;
policy    scheduling policy, one of SCHED_WEAK, SCHED_FIFO, SCHED_COBALT, SCHED_RR, SCHED_SPORADIC, SCHED_TP, SCHED_QUOTA or SCHED_NORMAL;
param_ex    scheduling parameters address. As a special exception, a negative sched_priority value is interpreted as if SCHED_WEAK was given in policy, using the absolute value of this parameter as the weak priority level.

When CONFIG_XENO_OPT_SCHED_WEAK is enabled, SCHED_WEAK exhibits priority levels in the [0..99] range (inclusive). Otherwise, sched_priority must be zero for the SCHED_WEAK policy.

Returns
0 on success;
an error number if:
  • ESRCH, thread is invalid;
  • EINVAL, policy or param_ex->sched_priority is invalid;
  • EAGAIN, in user-space, insufficient memory exists in the system heap, increase CONFIG_XENO_OPT_SYS_HEAPSZ;
  • EFAULT, in user-space, param_ex is an invalid address;
  • EPERM, in user-space, the calling process does not have superuser permissions.
See also
Specification.
Note

When creating or shadowing a Xenomai thread for the first time in user-space, Xenomai installs a handler for the SIGSHADOW signal. If you had installed a handler before that, it will be automatically called by Xenomai for SIGSHADOW signals that it has not sent.

If, however, you install a signal handler for SIGSHADOW after creating or shadowing the first Xenomai thread, you have to explicitly call the function cobalt_sigshadow_handler at the beginning of your signal handler, using its return value to know whether the signal was in fact an internal signal of Xenomai (in which case it returns 1), or whether you should handle the signal (in which case it returns 0). The cobalt_sigshadow_handler prototype is:

int cobalt_sigshadow_handler(int sig, siginfo_t *si, void *ctxt);

This means that you should register your handler with sigaction(), using the SA_SIGINFO flag, and pass all the arguments you received to cobalt_sigshadow_handler.

pthread_setschedparam_ex() may switch the caller to secondary mode.
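
The sketch below illustrates this chaining; the SIGSHADOW definition and the cobalt_sigshadow_handler() declaration normally come from the Cobalt headers and are restated here only as assumptions, to keep the snippet self-contained:

#include <signal.h>
#include <string.h>

#ifndef SIGSHADOW
#define SIGSHADOW SIGWINCH      /* assumption: Cobalt maps SIGSHADOW to SIGWINCH */
#endif
int cobalt_sigshadow_handler(int sig, siginfo_t *si, void *ctxt);

static void my_sigshadow_handler(int sig, siginfo_t *si, void *ctxt)
{
        /* Give Xenomai a chance to consume its internal signal first. */
        if (cobalt_sigshadow_handler(sig, si, ctxt))
                return; /* internal Xenomai notification, nothing to do */

        /* The signal was meant for the application: handle it here. */
}

static int install_sigshadow_chain(void)
{
        struct sigaction sa;

        memset(&sa, 0, sizeof(sa));
        sigemptyset(&sa.sa_mask);
        sa.sa_sigaction = my_sigshadow_handler;
        sa.sa_flags = SA_SIGINFO;

        return sigaction(SIGSHADOW, &sa, NULL);
}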

Referenced by pthread_setschedparam().
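
A minimal usage sketch of the extended call, applying SCHED_FIFO at priority 42 (an arbitrary example value) to an existing thread; policy-specific members of struct sched_param_ex, when required by policies such as SCHED_SPORADIC or SCHED_QUOTA, would be filled in addition to sched_priority:

#include <pthread.h>

static int set_fifo_42(pthread_t tid)
{
        /* Members other than sched_priority are left zero-initialized. */
        struct sched_param_ex param_ex = { .sched_priority = 42 };

        return pthread_setschedparam_ex(tid, SCHED_FIFO, &param_ex);
}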

int pthread_yield ( void  )

Yield the processor.

This function moves the current thread to the end of its priority group.

Return values
0
See also
Specification.

References sched_yield().

int sched_get_priority_max ( int  policy)

Get maximum priority of the specified scheduling policy.

This service returns the maximum priority of the scheduling policy policy.

Parameters
policyscheduling policy.
Returns
the maximum priority of policy on success;
-1 with errno set if:
  • EINVAL, policy is invalid.
See also
Specification.

Referenced by sched_get_priority_max_ex().

int sched_get_priority_max_ex ( int  policy)

Get extended maximum priority of the specified scheduling policy.

This service returns the maximum priority of the scheduling policy policy, reflecting any Cobalt extension to standard classes.

Parameters
policyscheduling policy.
Returns
the maximum priority of policy on success;
-1 with errno set if:
  • EINVAL, policy is invalid.
See also
Specification.

References sched_get_priority_max().

int sched_get_priority_min ( int  policy)

Get minimum priority of the specified scheduling policy.

This service returns the minimum priority of the scheduling policy policy.

Parameters
policyscheduling policy.
Returns
the minimum priority of policy on success;
-1 with errno set if:
  • EINVAL, policy is invalid.
See also
Specification.

Referenced by sched_get_priority_min_ex().

int sched_get_priority_min_ex ( int  policy)

Get extended minimum priority of the specified scheduling policy.

This service returns the minimum priority of the scheduling policy policy, reflecting any Cobalt extension to the standard classes.

Parameters
policyscheduling policy.
Returns
the minimum priority of policy on success;
-1 with errno set if:
  • EINVAL, policy is invalid.
See also
Specification.

References sched_get_priority_min().
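
As an illustration, the following sketch queries the priority range effectively supported for SCHED_FIFO, assuming a Cobalt-enabled <sched.h> which declares the _ex variants:

#include <stdio.h>
#include <sched.h>

static void print_fifo_range(void)
{
        int pmin, pmax;

        pmin = sched_get_priority_min_ex(SCHED_FIFO);
        pmax = sched_get_priority_max_ex(SCHED_FIFO);
        if (pmin < 0 || pmax < 0) {
                perror("sched_get_priority_{min,max}_ex");
                return;
        }

        printf("SCHED_FIFO priorities: [%d..%d]\n", pmin, pmax);
}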

ssize_t sched_getconfig_np ( int  cpu,
int  policy,
union sched_config *  config,
size_t *  len_r 
)

Retrieve CPU-specific scheduler settings for a policy.

A configuration is strictly local to the target cpu, and may differ from other processors.

Parameters
cpu    processor to retrieve the configuration of.
policy    scheduling policy to which the configuration data applies. Currently, only SCHED_TP and SCHED_QUOTA are valid input.
config    a pointer to a memory area which receives the configuration settings upon success of this call.
SCHED_TP specifics

On successful return, config->tp contains the TP schedule active on cpu.

SCHED_QUOTA specifics

On entry, config->quota.get.tgid must contain the thread group identifier to inquire about.

On successful exit, config->quota.info contains the information related to the thread group referred to by config->quota.get.tgid.

Parameters
[in,out]  len_r    a pointer to a variable for collecting the overall length of the configuration data returned (in bytes). This variable must contain the amount of space available in config when the request is issued.
Returns
the number of bytes copied to config on success;
a negative error number if:
  • EINVAL, cpu is invalid, policy is unsupported by the current kernel configuration, or the length passed in *len_r cannot hold the retrieved configuration data.
  • ESRCH, with policy equal to SCHED_QUOTA, if the group identifier required to perform the operation is not valid (i.e. config->quota.get.tgid is invalid).
  • ENOMEM, lack of memory to perform the operation.
  • ENOSPC, the length passed in *len_r is too short.
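
For instance, a hedged sketch of inquiring about an existing SCHED_QUOTA group on CPU 0 might look as follows; the group identifier is assumed to originate from a prior sched_quota_add request (see sched_setconfig_np() below), and sizeof(union sched_config) is assumed to be sufficient space for the returned settings:

#include <sys/types.h>
#include <sched.h>

static ssize_t get_quota_info(int tgid, union sched_config *config)
{
        size_t len = sizeof(*config);

        /* Tell the core which thread group is being asked about. */
        config->quota.get.tgid = tgid;

        /* On success, config->quota.info describes the group. */
        return sched_getconfig_np(0, SCHED_QUOTA, config, &len);
}
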
int sched_setconfig_np ( int  cpu,
int  policy,
const union sched_config *  config,
size_t  len 
)

Set CPU-specific scheduler settings for a policy.

A configuration is strictly local to the target cpu, and may differ from other processors.

Parameters
cpu    processor to load the configuration of.
policy    scheduling policy to which the configuration data applies. Currently, SCHED_TP and SCHED_QUOTA are valid.
config    a pointer to the configuration data to load on cpu, applicable to policy.
Settings applicable to SCHED_TP

This call controls the temporal partitions for cpu, depending on the operation requested.

  • config.tp.op specifies the operation to perform:
  • sched_tp_install installs a new TP schedule on cpu, defined by config.tp.windows[]. The global time frame is not activated upon return from this request yet; a sched_tp_start request must be issued to activate the temporal scheduling on cpu.
  • sched_tp_uninstall removes the current TP schedule from cpu, releasing all the attached resources. If no TP schedule exists on cpu, this request has no effect.
  • sched_tp_start enables the temporal scheduling on cpu, starting the global time frame. If no TP schedule exists on cpu, this action has no effect.
  • sched_tp_stop disables the temporal scheduling on cpu. The current TP schedule is not uninstalled though, and may be re-started later by a sched_tp_start request. As a consequence of this request, threads assigned to the un-scheduled partitions may be starved from CPU time.
  • for a sched_tp_install operation, config.tp.nr_windows indicates the number of elements present in the config.tp.windows[] array. If config.tp.nr_windows is zero, the action taken is identical to sched_tp_uninstall.
  • if config.tp.nr_windows is non-zero, config.tp.windows[] is a set of scheduling time slots for threads assigned to cpu. Each window is specified by its offset from the start of the global time frame (windows[].offset), its duration (windows[].duration), and the partition id it should activate during that period of time (windows[].ptid). This field is not considered for requests other than sched_tp_install.

Time slots must be strictly contiguous, i.e. windows[n].offset + windows[n].duration shall equal windows[n + 1].offset. If windows[].ptid is in the range [0..CONFIG_XENO_OPT_SCHED_TP_NRPART-1], SCHED_TP threads which belong to the partition being referred to may be given CPU time on cpu, from time windows[].offset to windows[].offset + windows[].duration, provided those threads are in a runnable state.

Time holes between valid time slots may be defined using windows activating the pseudo partition -1. When such a window is active in the global time frame, no CPU time is available to SCHED_TP threads on cpu.

Note
The sched_tp_confsz(nr_windows) macro returns the length of config.tp depending on the number of time slots to be defined in config.tp.windows[], as specified by config.tp.nr_windows.
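
Putting the SCHED_TP settings above together, the following hedged sketch (assuming struct timespec offsets and durations in config.tp.windows[], and a Cobalt-enabled <sched.h> providing union sched_config and the sched_tp_* definitions) installs and then starts a 1-ms global time frame on CPU 0, split into two 500-microsecond windows activating partitions 0 and 1:

#include <stdlib.h>
#include <errno.h>
#include <time.h>
#include <sched.h>

static int setup_tp_cpu0(void)
{
        size_t len = sched_tp_confsz(2);
        union sched_config *config;
        int ret;

        config = malloc(len);
        if (config == NULL)
                return ENOMEM;

        config->tp.op = sched_tp_install;
        config->tp.nr_windows = 2;
        config->tp.windows[0].offset = (struct timespec){ 0, 0 };
        config->tp.windows[0].duration = (struct timespec){ 0, 500000 };
        config->tp.windows[0].ptid = 0;
        config->tp.windows[1].offset = (struct timespec){ 0, 500000 };
        config->tp.windows[1].duration = (struct timespec){ 0, 500000 };
        config->tp.windows[1].ptid = 1;

        ret = sched_setconfig_np(0, SCHED_TP, config, len);
        if (ret == 0) {
                /* Activate the global time frame just installed; the window
                 * array is ignored for this request, so the same buffer and
                 * length are reused for simplicity (an assumption). */
                config->tp.op = sched_tp_start;
                ret = sched_setconfig_np(0, SCHED_TP, config, len);
        }

        free(config);
        return ret;
}
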
Settings applicable to SCHED_QUOTA

This call manages thread groups running on cpu, defining per-group quota for limiting their CPU consumption.

  • config.quota.op should define the operation to be carried out. Valid operations are:
    • sched_quota_add for creating a new thread group on cpu. The new group identifier will be written back to config.quota.info.tgid upon success. A new group is given no initial runtime budget when created. sched_quota_set should be issued to enable it.
    • sched_quota_remove for deleting a thread group on cpu. The group identifier should be passed in config.quota.remove.tgid.
    • sched_quota_set for updating the scheduling parameters of a thread group defined on cpu. The group identifier should be passed in config.quota.set.tgid, along with the allotted percentage of the quota interval (config.quota.set.quota), and the peak percentage allowed (config.quota.set.quota_peak).

All three operations fill in the config.quota.info structure with the information reflecting the state of the scheduler on cpu with respect to policy, after the requested changes have been applied.
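
A hedged sketch of those operations, creating a group on CPU 0 and then allotting it 20% of the quota interval with a 50% peak allowance (example figures); passing sizeof(union sched_config) as the length is an assumption of this sketch:

#include <string.h>
#include <sched.h>

/* Returns the new group identifier, or a negative error value. */
static int create_quota_group_cpu0(void)
{
        union sched_config config;
        int ret, tgid;

        /* Create a new thread group on CPU 0. */
        memset(&config, 0, sizeof(config));
        config.quota.op = sched_quota_add;
        ret = sched_setconfig_np(0, SCHED_QUOTA, &config, sizeof(config));
        if (ret)
                return -ret;

        tgid = config.quota.info.tgid;  /* written back on success */

        /* Give the new group a runtime budget so that it can run. */
        config.quota.op = sched_quota_set;
        config.quota.set.tgid = tgid;
        config.quota.set.quota = 20;
        config.quota.set.quota_peak = 50;
        ret = sched_setconfig_np(0, SCHED_QUOTA, &config, sizeof(config));

        return ret ? -ret : tgid;
}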

Parameters
len    overall length of the configuration data (in bytes).
Returns
0 on success;
an error number if:
  • EINVAL, cpu is invalid, policy is unsupported by the current kernel configuration, len is invalid, or config contains invalid parameters.
  • ENOMEM, lack of memory to perform the operation.
  • EBUSY, with policy equal to SCHED_QUOTA, if an attempt is made to remove a thread group which still manages threads.
  • ESRCH, with policy equal to SCHED_QUOTA, if the group identifier required to perform the operation is not valid.
int sched_yield ( void  )

Yield the processor.

This function moves the current thread to the end of its priority group.

Return values
0
See also
Specification.

References XNRELAX, and XNWEAK.

Referenced by pthread_yield().