SimGrid  3.10
Versatile Simulation of Distributed Systems
Simgrid options and configurations

A number of options can be given at runtime to change the default SimGrid behavior. For a complete list of all configuration options accepted by the SimGrid version used in your simulator, simply pass the --help configuration flag to your program. If some of the options are not documented on this page, this is a bug that you should please report so that we can fix it. Note that some of the options presented here may not be available in your simulators, depending on the compile-time options that you used.

Passing configuration options to the simulators

There are several ways to pass configuration options to the simulators. The most common one is the --cfg command line argument. For example, to set the item Item to the value Value, simply type the following:

my_simulator --cfg=Item:Value (other arguments)

Several --cfg command line arguments can naturally be used. If you need to include spaces in the argument, don't forget to quote the argument. You can even escape the included quotes (write \' for ' if you have your argument between ').
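
For example (Item and the value are placeholders, as above):

my_simulator --cfg='Item:Value with spaces' (other arguments)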

Another solution is to use the <config> tag in the platform file. The only restriction is that this tag must occur before the first platform element (be it <AS>, <cluster>, <peer> or whatever). The <config> tag takes an id attribute, but it is currently ignored so you don't really need to pass it. The important part is that within that tag, you can pass one or several <prop> tags to specify the configuration to use. For example, setting Item to Value can be done by adding the following to the beginning of your platform file:

<config>
  <prop id="Item" value="Value"/>
</config>

A last solution is to pass your configuration directly using the C interface. If you happen to use the MSG interface, this is very easy with the MSG_config() function. If you do not use MSG, that's a bit more complex, as you have to mess with the internal configuration set directly as follows. Check the relevant page for details on all the functions you can use in this context, _sg_cfg_set being the only configuration set currently used in SimGrid.

#include <xbt/config.h>
#include <simdag/simdag.h> /* for SD_init(); adapt to the interface you actually use */

extern xbt_cfg_t _sg_cfg_set;

int main(int argc, char *argv[]) {
  SD_init(&argc, argv);
  /* Prefer MSG_config() if you use MSG!! */
  xbt_cfg_set_parse(_sg_cfg_set, "Item:Value");
  /* Rest of your code */
  return 0;
}

Configuring the platform models

Selecting the platform models

SimGrid comes with several network and CPU models built in, and you can change which model is used at runtime through the configuration. The three main configuration items are given below. For each of these items, passing the special value help gives you a short description of all possible values. Also, --help-models should provide information about all models for all existing resources.

  • network/model: specify the used network model
  • cpu/model: specify the used CPU model
  • workstation/model: specify the used workstation model
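
For example, to explicitly select the SMPI network model together with the Cas01 CPU model (as noted below, changing the network and CPU models automatically selects the compound workstation model):

my_simulator --cfg=network/model:SMPI --cfg=cpu/model:Cas01 (other arguments)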

As of this writing, the accepted network models are the following. Over time, new models may be added and some experimental models removed; check the values on your simulators for up-to-date information. Note that the CM02 model is described in the research report A Network Model for Simulation of Grid Application while LV08 is described in Accuracy Study and Improvement of Network Simulation in the SimGrid Framework.

  • LV08 (default one): Realistic network analytic model (slow-start modeled by multiplying latency by 10.4, bandwidth by .92; bottleneck sharing uses a payload of S=8775 for evaluating RTT)
  • Constant: Simplistic network model where all communications take a constant time (one second). This model provides the lowest realism, but is (marginally) faster.
  • SMPI: Realistic network model specifically tailored for HPC settings (accurate modeling of slow start with correction factors on three intervals: < 1KiB, < 64 KiB, >= 64 KiB). See also this section for more info.
  • CM02: Legacy network analytic model (Very similar to LV08, but without corrective factors. The timings of small messages are thus poorly modeled)
  • Reno: Model from Steven H. Low using lagrange_solve instead of lmm_solve (experts only; check the code for more info).
  • Reno2: Model from Steven H. Low using lagrange_solve instead of lmm_solve (experts only; check the code for more info).
  • Vegas: Model from Steven H. Low using lagrange_solve instead of lmm_solve (experts only; check the code for more info).

If you compiled SimGrid accordingly, you can use packet-level network simulators as network models (see Packet level simulation). In that case, you have two extra models, described below, and some specific additional configuration flags.

  • GTNets: Network pseudo-model using the GTNets simulator instead of an analytic model
  • NS3: Network pseudo-model using the NS3 tcp model instead of an analytic model

Concerning the CPU, we have only one model for now:

  • Cas01: Simplistic CPU model (time=size/power)

The workstation concept is the aggregation of a CPU with a network card. Three models exist, but only two of them are actually interesting. The "compound" one is simply due to the way our internal code is organized, and can easily be ignored. So in the end, you have two workstation models: the default one aggregates an existing CPU model with an existing network model, but does not allow parallel tasks because these beasts need some collaboration between the network and CPU model. That is why ptask_L07 is used by default when using SimDag.

  • default: Default workstation model. Currently, CPU:Cas01 and network:LV08 (with cross traffic enabled)
  • compound: Workstation model that is automatically chosen if you change the network and CPU models
  • ptask_L07: Workstation model somehow similar to Cas01+CM02 but allowing parallel tasks

Optimization level of the platform models

The network and CPU models that are based on lmm_solve (that is, all our analytical models) accept specific optimization configurations.

  • items network/optim and cpu/optim (both default to 'Lazy'):
    • Lazy: Lazy action management (partial invalidation in lmm + heap in action remaining).
    • TI: Trace integration. Highly optimized mode when using availability traces (only available for the Cas01 CPU model for now).
    • Full: Full update of remaining and variables. Slow but may be useful when debugging.
  • items network/maxmin_selective_update and cpu/maxmin_selective_update: configure whether the underlying linear system should be lazily updated or not. It should have no impact on the computed timings, but should speed up the computation.

It is still possible to disable the maxmin_selective_update feature because it can prove counter-productive in very specific scenarios where the interaction level is high. In particular, if all your communications share a given backbone link, you should disable it: without maxmin_selective_update, all communications are updated at each step through a simple loop over them. With that feature enabled, every communication still gets updated in this case (because of the dependency induced by the backbone), but through a complicated pattern aiming at following the actual dependencies.
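
For example, in the backbone scenario above you would disable the feature as follows (assuming the item accepts 0/1 like the other boolean items on this page):

my_simulator --cfg=network/maxmin_selective_update:0 (other arguments)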

Numerical precision of the platform models

The analytical models handle a lot of floating point values. It is possible to change the epsilon used to update and compare them through the maxmin/precision item (default value: 0.00001). Changing it may speed up the simulation by discarding very small actions, at the price of a reduced numerical precision.
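
For example, to trade some precision for speed (the value is illustrative):

my_simulator --cfg=maxmin/precision:0.001 (other arguments)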

Parallel threads for model updates

By default, Surf computes the analytical models sequentially to share their resources and update their actions. It is possible to run them in parallel, using the surf/nthreads item (default value: 1). If you use a negative or null value, the number of available cores is automatically detected and used instead.

Depending on the workload of the models and their complexity, you may get a speedup or a slowdown because of the synchronization costs of threads.
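
For example, to let SimGrid auto-detect and use all available cores:

my_simulator --cfg=surf/nthreads:0 (other arguments)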

Configuring the Network model

Maximal TCP window size

The analytical models need to know the maximal TCP window size to take the TCP congestion mechanism into account. This is set to 20000 by default, but can be changed using the network/TCP_gamma item.

On Linux, this value can be retrieved using the following commands. Both give a set of values, and you should use the last one, which is the maximal size.

cat /proc/sys/net/ipv4/tcp_rmem # gives the receiver window
cat /proc/sys/net/ipv4/tcp_wmem # gives the sender window
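
For example, on a typical Linux box tcp_rmem might report "4096 87380 6291456"; you would then pass the last value:

my_simulator --cfg=network/TCP_gamma:6291456 (other arguments)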

Corrective simulation factors

These factors make it possible to better take the slow start into account. The corresponding values were computed through data fitting on the timings of packet-level simulators. You should not change these values unless you are really certain of what you are doing. See Accuracy Study and Improvement of Network Simulation in the SimGrid Framework for more information about these coefficients.

If you are using the SMPI model, these correction coefficients are themselves corrected by constant values depending on the size of the exchange. Again, only hardcore experts should bother about this fact.

Simulating cross-traffic

As of SimGrid v3.7, cross-traffic effects can be taken into account in analytical simulations. It means that outgoing and incoming communication flows are treated independently. In addition, the LV08 model adds 0.05 of usage on the opposite direction for each newly created flow. This can be useful to simulate some important TCP phenomena such as ack compression.

For that to work, your platform must have two links for each pair of interconnected hosts. An example of usable platform is available in examples/msg/gtnets/crosstraffic-p.xml.

This is activated through the network/crosstraffic item, that can be set to 0 (disable this feature) or 1 (enable it).

Note that with the default workstation model this option is activated by default.

Coordinate-based network models

When you want to use network coordinates, as it happens when you use an <AS> with Vivaldi routing in your platform file, you must set the network/coordinates item to yes so that all mandatory initializations are done in the simulator.
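
In other words:

my_simulator --cfg=network/coordinates:yes (other arguments)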

Simulating sender gap

(this configuration item is experimental and may change or disappear)

It is possible to specify a timing gap between consecutive emissions on the same network card through the network/sender_gap item. This is still under investigation as of this writing, and the default value is to wait 10 microseconds (1e-5 seconds) between emissions.

Simulating asynchronous send

(this configuration item is experimental and may change or disappear)

It is possible to specify that messages below a certain size will be sent as soon as the call to MPI_Send is issued, without waiting for the corresponding receive. This threshold can be configured through the smpi/async_small_thres item. The default value is 0. This behavior can also be manually set for MSG mailboxes, by setting the receiving mode of the mailbox with a call to MSG_mailbox_set_async. For MSG, all messages sent to this mailbox will have this behavior, so consider using two mailboxes if needed.

This value needs to be smaller than or equal to the threshold set at Simulating MPI detached send, because asynchronous messages are meant to be detached as well.
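
For example, to send all messages below 8 KiB asynchronously (an illustrative value, kept below the default detached-send threshold described later):

smpirun --cfg=smpi/async_small_thres:8192 (other arguments)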

Configuring packet-level pseudo-models

When using the packet-level pseudo-models, several specific configuration flags are provided to configure the associated tools. There are by far not enough such SimGrid flags to cover every aspect of the associated tools, since we only added the items that we needed ourselves. Feel free to request more items (or even better: provide patches adding more items).

When using NS3, the only existing item is ns3/TcpModel, corresponding to the ns3::TcpL4Protocol::SocketType configuration item in NS3. The only valid values (enforced on the SimGrid side) are 'NewReno', 'Reno' and 'Tahoe'.
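
For example:

my_simulator --cfg=ns3/TcpModel:NewReno (other arguments)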

When using GTNeTS, two items exist:

  • gtnets/jitter, a double value used to oscillate the link latency, uniformly in the random interval [-latency*gtnets_jitter, latency*gtnets_jitter). It defaults to 0.
  • gtnets/jitter_seed, the positive seed used to reproduce jittered results. Its value must be in [1,1e8] and defaults to 10.
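
For example, to add a 10% latency oscillation with a fixed seed (both values are illustrative):

my_simulator --cfg=gtnets/jitter:0.1 --cfg=gtnets/jitter_seed:42 (other arguments)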

Configuring the Model-Checking

To enable the experimental SimGrid model-checking support, the program should be executed with the command line argument

--cfg=model-check:1

Safety properties are expressed as assertions using the function

void MC_assert(int prop);
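
A minimal sketch of such an assertion is shown below; the state variable and the bound are purely illustrative, and in real code the check would sit inside one of your simulated processes (see the examples shipped with SimGrid for complete programs). We assume MC_assert() is declared in <simgrid/modelchecker.h>.

#include <simgrid/modelchecker.h> /* assumed location of MC_assert() */

static int tokens_in_flight = 0;   /* illustrative state variable */

static void on_token_received(void)
{
  tokens_in_flight++;
  /* The model-checker reports any execution path violating this property */
  MC_assert(tokens_in_flight <= 1);
}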

Specifying a liveness property

If you want to specify liveness properties (beware, that's experimental), you have to pass them on the command line, specifying the name of the file containing the property, as formatted by the ltl2ba program.

--cfg=model-check/property:<filename>

Of course, specifying a liveness property enables the model-checking so that you don't have to give --cfg=model-check:1 in addition.

Going for stateful verification

By default, the system is backtracked to its initial state to explore another path instead of backtracking to the exact step before the fork that we want to explore (this is called stateless verification). This is done this way because saving intermediate states can rapidly exhaust the available memory. If you want, you can change the value of the model-check/checkpoint variable. For example, the following configuration will ask to take a checkpoint every step. Beware, this will certainly explode your memory. Larger values are probably better; make sure to experiment a bit to find the right setting for your specific system.

--cfg=model-check/checkpoint:1

Of course, specifying this option enables the model-checking so that you don't have to give --cfg=model-check:1 in addition.

Specifying the kind of reduction

The main issue when using the model-checking is the state space explosion. To counter that problem, several exploration reduction techniques can be used. There is unfortunately no silver bullet here, and the most efficient reduction techniques cannot be applied to every property. In particular, the DPOR method cannot be applied on liveness properties since it may break some cycles in the exploration that are important to the property validity.

--cfg=model-check/reduction:<technique>

For now, this configuration variable can take two values:

  • none: Do not apply any kind of reduction (mandatory for now for liveness properties)
  • dpor: Apply Dynamic Partial Order Reduction. Only valid if you verify local safety properties.

Of course, specifying a reduction technique enables the model-checking so that you don't have to give --cfg=model-check:1 in addition.

Configuring the User Process Virtualization

Selecting the virtualization factory

In SimGrid, the user code is virtualized through a specific mechanism allowing the simulation kernel to control its execution: when a user process requires a blocking action (such as sending a message), it is interrupted, and only gets released when the simulated clock reaches the point where the blocking operation is done.

In SimGrid, the containers in which user processes are virtualized are called contexts. Several context factories are provided, and you can select the one you want to use with the contexts/factory configuration item. Some of the following may not exist on your machine because of portability issues. In any case, the default one should be the most efficient one (please report bugs if the auto-detection fails for you). They are sorted here from the slowest to the most efficient:

  • thread: very slow factory using full-featured threads (either pthreads or Windows native threads)
  • ucontext: fast factory using System V contexts (or a portability layer of our own on top of Windows fibers)
  • raw: amazingly fast factory using a context switching mechanism of our own, directly implemented in assembly (only available for x86 and amd64 platforms for now)

The only reason to change this setting is when the debugging tools get fooled by the optimized context factories. Threads are the most debugging-friendly contexts.
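
For example, to fall back to the debugging-friendly thread factory:

my_simulator --cfg=contexts/factory:thread (other arguments)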

Adapting the used stack size

Each virtualized user process is executed using a specific system stack. The size of this stack has a huge impact on the simulation scalability, but its default value is rather large. This is because the symptoms of a too-small stack are rather disturbing: the stack overflows (overwriting other stacks), leading to segfaults with corrupted stack traces.

If you want to push the scalability limits of your code, you really want to reduce the contexts/stack_size item. Its default value is 128 KiB, while our Chord simulation works with stacks as small as 16 KiB, for example. For the thread factory, the default value is the system's one; if it is too large or too small, it has to be set with this parameter.
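
For example (the item is expressed in KiB):

my_simulator --cfg=contexts/stack_size:16 (other arguments)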

Running user code in parallel

Parallel execution of the user code is only considered stable in SimGrid v3.7 and higher. It is described in INRIA RR-7653.

If you are using the ucontext or raw context factories, you can request to execute the user code in parallel. Several threads are launched, each of them handling a share of the user contexts at each run. To activate this, set the contexts/nthreads item to the number of cores of your computer (or to a value lower than 1 to have the number of cores auto-detected).

Even if you asked for several worker threads using the previous option, you can request to start the parallel execution (and pay the associated synchronization costs) only if the potential parallelism is large enough. For that, set the contexts/parallel_threshold item to the minimal amount of user contexts needed to start the parallel execution. In any given simulation round, if that amount is not reached, the contexts will be run sequentially directly by the main thread (thus saving the synchronization costs). Note that this option is mainly useful when the grain of the user code is very fine, because our synchronization is now very efficient.

When parallel execution is activated, you can choose the synchronization schema used with the contexts/synchro item, whose value is one of:

  • futex: ultra optimized synchronization schema, based on futexes (fast user-mode mutexes), and thus only available on Linux systems. This is the default mode when available.
  • posix: slow but portable synchronization using only POSIX primitives.
  • busy_wait: not really a synchronization: the worker threads constantly request new contexts to execute. It should be the most efficient synchronization schema, but it loads all the cores of your machine for no good reason. You probably prefer the other, less eager schemas.
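
Putting it together, a parallel run on four cores could look like this (the threshold value is illustrative):

my_simulator --cfg=contexts/nthreads:4 --cfg=contexts/parallel_threshold:20 (other arguments)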

Configuring the tracing subsystem

The tracing subsystem can be configured in several different ways depending on the nature of the simulator (MSG, SimDag, SMPI) and the kind of traces that need to be obtained. See the Tracing Configuration Options subsection to get a detailed description of each configuration option.

We detail here a simple way to get the traces working for you, even if you never used the tracing API.

  • Any SimGrid-based simulator (MSG, SimDag, SMPI, ...) and raw traces:
    --cfg=tracing:yes --cfg=tracing/uncategorized:yes --cfg=triva/uncategorized:uncat.plist
    
    The first parameter activates the tracing subsystem, the second tells it to trace host and link utilization (without any categorization) and the third creates a graph configuration file to configure Triva when analysing the resulting trace file.
  • MSG or SimDag-based simulator and categorized traces (you need to declare categories and classify your tasks according to them)
    --cfg=tracing:yes --cfg=tracing/categorized:yes --cfg=triva/categorized:cat.plist
    
    The first parameter activates the tracing subsystem, the second tells it to trace host and link categorized utilization and the third creates a graph configuration file to configure Triva when analysing the resulting trace file.
  • SMPI simulator and traces for a space/time view:
    smpirun -trace ...
    
    The -trace parameter for the smpirun script runs the simulation with --cfg=tracing:yes and --cfg=tracing/smpi:yes. Check the smpirun's -help parameter for additional tracing options.

Sometimes you might want to put additional information on the trace to correctly identify them later, or to provide data that can be used to reproduce an experiment. You have two ways to do that:

  • Add a string on top of the trace file as comment:
    --cfg=tracing/comment:my_simulation_identifier
    
  • Add the contents of a textual file on top of the trace file as comment:
    --cfg=tracing/comment_file:my_file_with_additional_information.txt
    

Please, use these two parameters (for comments) to make reproducible simulations. For additional details about this and all tracing options, see the Tracing Configuration Options subsection.

Configuring SMPI

The SMPI interface provides several specific configuration items. These are easy to overlook, since the code is usually launched through the smpirun script.

Automatic benchmarking of SMPI code

In SMPI, the sequential code is automatically benchmarked, and these computations are automatically reported to the simulator. That is to say that if you have a large computation between a MPI_Recv() and a MPI_Send(), SMPI will automatically benchmark the duration of this code, and create an execution task within the simulator to take it into account. For that, the actual duration is measured on the host machine and then scaled to the power of the corresponding simulated machine. The smpi/running_power item allows you to specify the computational power of the host machine (in flop/s) to use when scaling the execution times. It defaults to 20000, but you really want to update it to get accurate simulation results.

When the code consists of numerous consecutive MPI calls, the previous mechanism feeds the simulation kernel with numerous tiny computations. The smpi/cpu_threshold item becomes handy when this badly impacts the simulation performance. It specifies a threshold (in seconds) under which the execution chunks are not reported to the simulation kernel (default value: 1e-6). Please note that in some circumstances, this optimization can hinder the simulation accuracy.
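
For example, on a host able to compute around 2 Gflop/s (an illustrative figure; measure your own machine), you could run:

smpirun --cfg=smpi/running_power:2000000000 (other arguments)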

Reporting simulation time

Most of the time, you run MPI code through SMPI to compute the time it would take to run it on a platform that you don't have. But since the code is run through the smpirun script, you don't have any control on the launcher code, making it difficult to report the simulated time when the simulation ends. If you set the smpi/display_timing item to 1, smpirun will display this information when the simulation ends.

Simulation time: 1e3 seconds.

Simulating MPI detached send

(this configuration item is experimental and may change or disappear)

This threshold specifies the size in bytes under which the send will return immediately. This is different from the threshold detailed in Simulating asynchronous send because the message is not effectively sent when the send is posted. SMPI still waits for the corresponding receive to be posted to perform the communication operation. This threshold can be set by changing the smpi/send_is_detached item. The default value is 65536.

Simulating MPI collective algorithms

SMPI implements more than 100 different algorithms for MPI collective communication, to accurately simulate the behavior of most of the existing MPI libraries. The smpi/coll_selector item can be used to apply the decision logic of either the OpenMPI or the MPICH library (values: ompi or mpich; by default, SMPI uses naive versions of the collective operations). Each collective operation can also be manually selected with an item of the form smpi/collective_name:algo_name. Available algorithms are listed in Simulating collective operations.
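
For example, to mimic the OpenMPI decision logic:

smpirun --cfg=smpi/coll_selector:ompi (other arguments)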

Configuring other aspects of SimGrid

XML file inclusion path

It is possible to specify a list of directories to search into for the <include> tag in XML files by using the path configuration item. To add several directories to the path, set the configuration item several times, as in

--cfg=path:toto --cfg=path:tutu

Behavior on Ctrl-C

By default, when Ctrl-C is pressed, the status of all existing simulated processes is displayed. This is very useful to debug your code, but it can prove troublesome in some cases (such as when the number of processes becomes really big). This behavior is disabled when verbose-exit is set to 0 (it is set to 1 by default).
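
For example, to disable it:

my_simulator --cfg=verbose-exit:0 (other arguments)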

Logging Configuration

Logging is handled through XBT. See the Logging support section for more details.

Index of all existing configuration items