ParallelLibrary Class Reference

Class for partitioning multiple levels of parallelism and managing message passing within these levels.

Public Member Functions

 ParallelLibrary ()
 default constructor (used for dummy_lib)
 
 ParallelLibrary (const MPIManager &mpi_mgr, ProgramOptions &prog_opts, OutputManager &output_mgr)
 stand-alone and default library mode constructor; options are not required
 
 ~ParallelLibrary ()
 destructor
 
const ParallelLevel & init_iterator_communicators (int iterator_servers, int procs_per_iterator, int min_procs_per_iterator, int max_procs_per_iterator, int max_iterator_concurrency, short default_config, short iterator_scheduling, bool peer_dynamic_avail)
 split MPI_COMM_WORLD into iterator communicators
 
const ParallelLevel & init_evaluation_communicators (int evaluation_servers, int procs_per_evaluation, int min_procs_per_eval, int max_procs_per_eval, int max_evaluation_concurrency, int asynch_local_evaluation_concurrency, short default_config, short evaluation_scheduling, bool peer_dynamic_avail)
 split an iterator communicator into evaluation communicators
 
const ParallelLevel & init_analysis_communicators (int analysis_servers, int procs_per_analysis, int min_procs_per_analysis, int max_procs_per_analysis, int max_analysis_concurrency, int asynch_local_analysis_concurrency, short default_config, short analysis_scheduling, bool peer_dynamic_avail)
 split an evaluation communicator into analysis communicators
 
void print_configuration ()
 print the parallel level settings for a particular parallel configuration
 
void push_output_tag (const ParallelLevel &pl)
 conditionally append an iterator server id tag to the hierarchical output tag, manage restart, and rebind cout/cerr
 
void pop_output_tag (const ParallelLevel &pl)
 pop the last output tag and rebind streams as needed; pl isn't yet used, but may be in the future when we generalize to arbitrary output context switching
 
void write_restart (const ParamResponsePair &prp)
 write a parameter/response set to the restart file
 
ProgramOptions & program_options ()
 return programOptions reference
 
OutputManager & output_manager ()
 return outputManager reference
 
void terminate_modelcenter ()
 terminate ModelCenter if running
 
void abort_helper (int code)
 finalize MPI with correct communicator for abort
 
bool command_line_check () const
 return checkFlag
 
bool command_line_pre_run () const
 return preRunFlag
 
bool command_line_run () const
 return runFlag
 
bool command_line_post_run () const
 return postRunFlag
 
bool command_line_user_modes () const
 return userModesFlag
 
const String & command_line_pre_run_input () const
 preRunInput filename
 
const String & command_line_pre_run_output () const
 preRunOutput filename
 
const String & command_line_run_input () const
 runInput filename
 
const String & command_line_run_output () const
 runOutput filename
 
const String & command_line_post_run_input () const
 postRunInput filename
 
const String & command_line_post_run_output () const
 postRunOutput filename
 
void send (MPIPackBuffer &send_buff, int dest, int tag, const ParallelLevel &parent_pl, const ParallelLevel &child_pl)
 blocking buffer send at the current communication level
 
void send (int &send_int, int dest, int tag, const ParallelLevel &parent_pl, const ParallelLevel &child_pl)
 blocking integer send at the current communication level
 
void isend (MPIPackBuffer &send_buff, int dest, int tag, MPI_Request &send_req, const ParallelLevel &parent_pl, const ParallelLevel &child_pl)
 nonblocking buffer send at the current communication level
 
void isend (int &send_int, int dest, int tag, MPI_Request &send_req, const ParallelLevel &parent_pl, const ParallelLevel &child_pl)
 nonblocking integer send at the current communication level
 
void recv (MPIUnpackBuffer &recv_buff, int source, int tag, MPI_Status &status, const ParallelLevel &parent_pl, const ParallelLevel &child_pl)
 blocking buffer receive at the current communication level
 
void recv (int &recv_int, int source, int tag, MPI_Status &status, const ParallelLevel &parent_pl, const ParallelLevel &child_pl)
 blocking integer receive at the current communication level
 
void irecv (MPIUnpackBuffer &recv_buff, int source, int tag, MPI_Request &recv_req, const ParallelLevel &parent_pl, const ParallelLevel &child_pl)
 nonblocking buffer receive at the current communication level
 
void irecv (int &recv_int, int source, int tag, MPI_Request &recv_req, const ParallelLevel &parent_pl, const ParallelLevel &child_pl)
 nonblocking integer receive at the current communication level
 
void check_mi_index (size_t &index) const
 process _NPOS default and perform error checks
 
void send_mi (int &send_int, int dest, int tag, size_t index=_NPOS)
 blocking send at the metaiterator-iterator communication level
 
void isend_mi (int &send_int, int dest, int tag, MPI_Request &send_req, size_t index=_NPOS)
 nonblocking send at the metaiterator-iterator communication level
 
void recv_mi (int &recv_int, int source, int tag, MPI_Status &status, size_t index=_NPOS)
 blocking receive at the metaiterator-iterator communication level
 
void irecv_mi (int &recv_int, int source, int tag, MPI_Request &recv_req, size_t index=_NPOS)
 nonblocking receive at the metaiterator-iterator communication level
 
void send_mi (MPIPackBuffer &send_buff, int dest, int tag, size_t index=_NPOS)
 blocking send at the metaiterator-iterator communication level
 
void isend_mi (MPIPackBuffer &send_buff, int dest, int tag, MPI_Request &send_req, size_t index=_NPOS)
 nonblocking send at the metaiterator-iterator communication level
 
void recv_mi (MPIUnpackBuffer &recv_buff, int source, int tag, MPI_Status &status, size_t index=_NPOS)
 blocking receive at the metaiterator-iterator communication level
 
void irecv_mi (MPIUnpackBuffer &recv_buff, int source, int tag, MPI_Request &recv_req, size_t index=_NPOS)
 nonblocking receive at the metaiterator-iterator communication level
 
void send_ie (int &send_int, int dest, int tag)
 blocking send at the iterator-evaluation communication level
 
void isend_ie (int &send_int, int dest, int tag, MPI_Request &send_req)
 nonblocking send at the iterator-evaluation communication level
 
void recv_ie (int &recv_int, int source, int tag, MPI_Status &status)
 blocking receive at the iterator-evaluation communication level
 
void irecv_ie (int &recv_int, int source, int tag, MPI_Request &recv_req)
 nonblocking receive at the iterator-evaluation communication level
 
void send_ie (MPIPackBuffer &send_buff, int dest, int tag)
 blocking send at the iterator-evaluation communication level
 
void isend_ie (MPIPackBuffer &send_buff, int dest, int tag, MPI_Request &send_req)
 nonblocking send at the iterator-evaluation communication level
 
void recv_ie (MPIUnpackBuffer &recv_buff, int source, int tag, MPI_Status &status)
 blocking receive at the iterator-evaluation communication level
 
void irecv_ie (MPIUnpackBuffer &recv_buff, int source, int tag, MPI_Request &recv_req)
 nonblocking receive at the iterator-evaluation communication level
 
void send_ea (int &send_int, int dest, int tag)
 blocking send at the evaluation-analysis communication level
 
void isend_ea (int &send_int, int dest, int tag, MPI_Request &send_req)
 nonblocking send at the evaluation-analysis communication level
 
void recv_ea (int &recv_int, int source, int tag, MPI_Status &status)
 blocking receive at the evaluation-analysis communication level
 
void irecv_ea (int &recv_int, int source, int tag, MPI_Request &recv_req)
 nonblocking receive at the evaluation-analysis communication level
 
void bcast (int &data, const ParallelLevel &pl)
 broadcast an integer across the serverIntraComm of a ParallelLevel
 
void bcast (short &data, const ParallelLevel &pl)
 broadcast a short integer across the serverIntraComm of a ParallelLevel
 
void bcast (MPIPackBuffer &send_buff, const ParallelLevel &pl)
 broadcast an MPIPackBuffer across the serverIntraComm of a ParallelLevel
 
void bcast (MPIUnpackBuffer &recv_buff, const ParallelLevel &pl)
 broadcast an MPIUnpackBuffer across the serverIntraComm of a ParallelLevel
 
void bcast_hs (int &data, const ParallelLevel &pl)
 broadcast an integer across the hubServerIntraComm of a ParallelLevel
 
void bcast_hs (MPIPackBuffer &send_buff, const ParallelLevel &pl)
 broadcast an MPIPackBuffer across the hubServerIntraComm of a ParallelLevel
 
void bcast_hs (MPIUnpackBuffer &recv_buff, const ParallelLevel &pl)
 broadcast an MPIUnpackBuffer across the hubServerIntraComm of a ParallelLevel
 
void bcast_w (int &data)
 broadcast an integer across MPI_COMM_WORLD
 
void bcast_i (int &data, size_t index=_NPOS)
 broadcast an integer across an iterator communicator
 
void bcast_i (short &data, size_t index=_NPOS)
 broadcast a short integer across an iterator communicator
 
void bcast_e (int &data)
 broadcast an integer across an evaluation communicator
 
void bcast_a (int &data)
 broadcast an integer across an analysis communicator
 
void bcast_mi (int &data, size_t index=_NPOS)
 broadcast an integer across a metaiterator-iterator intra communicator
 
void bcast_w (MPIPackBuffer &send_buff)
 broadcast a packed buffer across MPI_COMM_WORLD
 
void bcast_i (MPIPackBuffer &send_buff, size_t index=_NPOS)
 broadcast a packed buffer across an iterator communicator
 
void bcast_e (MPIPackBuffer &send_buff)
 broadcast a packed buffer across an evaluation communicator
 
void bcast_a (MPIPackBuffer &send_buff)
 broadcast a packed buffer across an analysis communicator
 
void bcast_mi (MPIPackBuffer &send_buff, size_t index=_NPOS)
 broadcast a packed buffer across a metaiterator-iterator intra communicator
 
void bcast_w (MPIUnpackBuffer &recv_buff)
 matching receive for packed buffer broadcast across MPI_COMM_WORLD
 
void bcast_i (MPIUnpackBuffer &recv_buff, size_t index=_NPOS)
 matching receive for packed buffer bcast across an iterator communicator
 
void bcast_e (MPIUnpackBuffer &recv_buff)
 matching receive for packed buffer bcast across an evaluation communicator
 
void bcast_a (MPIUnpackBuffer &recv_buff)
 matching receive for packed buffer bcast across an analysis communicator
 
void bcast_mi (MPIUnpackBuffer &recv_buff, size_t index=_NPOS)
 matching receive for packed buffer bcast across a metaiterator-iterator intra communicator
 
void barrier_w ()
 enforce MPI_Barrier on MPI_COMM_WORLD
 
void barrier_i (size_t index=_NPOS)
 enforce MPI_Barrier on an iterator communicator
 
void barrier_e ()
 enforce MPI_Barrier on an evaluation communicator
 
void barrier_a ()
 enforce MPI_Barrier on an analysis communicator
 
void reduce_sum_ea (double *local_vals, double *sum_vals, int num_vals)
 compute a sum over an eval-analysis intra-communicator using MPI_Reduce
 
void reduce_sum_a (double *local_vals, double *sum_vals, int num_vals)
 compute a sum over an analysis communicator using MPI_Reduce
 
void test (MPI_Request &request, int &test_flag, MPI_Status &status)
 test a nonblocking send/receive request for completion
 
void wait (MPI_Request &request, MPI_Status &status)
 wait for a nonblocking send/receive request to complete
 
void waitall (int num_recvs, MPI_Request *&recv_reqs)
 wait for all messages from a series of nonblocking receives
 
void waitsome (int num_sends, MPI_Request *&recv_requests, int &num_recvs, int *&index_array, MPI_Status *&status_array)
 wait for at least one message from a series of nonblocking receives but complete all that are available
 
void free (MPI_Request &request)
 free an MPI_Request
 
int world_size () const
 return MPIManager::worldSize
 
int world_rank () const
 return MPIManager::worldRank
 
bool mpirun_flag () const
 return MPIManager::mpirunFlag
 
bool is_null () const
 return dummyFlag
 
Real parallel_time () const
 returns current MPI wall clock time
 
void parallel_configuration_iterator (ParConfigLIter pc_iter)
 set the current ParallelConfiguration node
 
ParConfigLIter parallel_configuration_iterator () const
 return the current ParallelConfiguration node
 
const ParallelConfiguration & parallel_configuration () const
 return the current ParallelConfiguration instance
 
size_t num_parallel_configurations () const
 returns the number of entries in parallelConfigurations
 
bool parallel_configuration_is_complete ()
 identifies if the current ParallelConfiguration has been fully populated
 
void increment_parallel_configuration (ParLevLIter mi_pl_iter)
 add a new node to parallelConfigurations and increment currPCIter; limit miPLIters within new configuration to mi_pl_iter level
 
void increment_parallel_configuration ()
 add a new node to parallelConfigurations and increment currPCIter; copy all of miPLIters into new configuration
 
bool w_parallel_level_defined () const
 test current parallel configuration for definition of world parallel level
 
bool mi_parallel_level_defined (size_t index=_NPOS) const
 test current parallel configuration for definition of meta-iterator-iterator parallel level
 
bool ie_parallel_level_defined () const
 test current parallel configuration for definition of iterator-evaluation parallel level
 
bool ea_parallel_level_defined () const
 test current parallel configuration for definition of evaluation-analysis parallel level
 
ParLevLIter w_parallel_level_iterator ()
 for this level, access through ParallelConfiguration is not necessary
 
size_t parallel_level_index (ParLevLIter pl_iter)
 return the index within parallelLevels corresponding to pl_iter
 
std::vector< MPI_Comm > analysis_intra_communicators ()
 return the set of analysis intra communicators for all parallel configurations (used for setting up direct simulation interfaces prior to execution time).
 

Private Member Functions

void init_mpi_comm ()
 convenience function for initializing DAKOTA's top-level MPI communicators, based on dakotaMPIComm
 
void initialize_timers ()
 initialize DAKOTA and UTILIB timers
 
void output_timers ()
 conditionally output timers in destructor
 
void init_communicators (const ParallelLevel &parent_pl, int num_servers, int procs_per_server, int min_procs_per_server, int max_procs_per_server, int max_concurrency, int asynch_local_concurrency, short default_config, short scheduling_override, bool peer_dynamic_avail)
 split a parent communicator into child server communicators
 
void split_communicator_dedicated_master (const ParallelLevel &parent_pl, ParallelLevel &child_pl)
 split a parent communicator into a dedicated master processor and num_servers child communicators
 
void split_communicator_peer_partition (const ParallelLevel &parent_pl, ParallelLevel &child_pl)
 split a parent communicator into num_servers peer child communicators (no dedicated master processor)
 
void resolve_inputs (ParallelLevel &child_pl, int avail_procs, int min_procs_per_server, int max_procs_per_server, int max_concurrency, int capacity_multiplier, short default_config, short scheduling_override, bool peer_dynamic_avail, bool print_rank)
 resolve user inputs into a sensible partitioning scheme
 
void bcast (int &data, const MPI_Comm &comm)
 broadcast an integer across a communicator
 
void bcast (short &data, const MPI_Comm &comm)
 broadcast a short integer across a communicator
 
void bcast (MPIPackBuffer &send_buff, const MPI_Comm &comm)
 send a packed buffer across a communicator using a broadcast
 
void bcast (MPIUnpackBuffer &recv_buff, const MPI_Comm &comm)
 matching receive for a packed buffer broadcast
 
void barrier (const MPI_Comm &comm)
 enforce MPI_Barrier on comm
 
void reduce_sum (double *local_vals, double *sum_vals, int num_vals, const MPI_Comm &comm)
 compute a sum over comm using MPI_Reduce
 
void check_error (const String &err_source, int err_code)
 check the MPI return code and abort if error
 
void alias_as_server_comm (const ParallelLevel &parent_pl, ParallelLevel &child_pl)
 convenience function for updating child serverIntraComm from parent serverIntraComm (shallow Comm copy)
 
void copy_as_server_comm (const ParallelLevel &parent_pl, ParallelLevel &child_pl)
 convenience function for updating child serverIntraComm from parent serverIntraComm (deep Comm copy)
 
void alias_as_hub_server_comm (const ParallelLevel &parent_pl, ParallelLevel &child_pl)
 convenience function for updating child hubServerIntraComm from parent serverIntraComm (shallow Comm copy)
 
void copy_as_hub_server_comm (const ParallelLevel &parent_pl, ParallelLevel &child_pl)
 convenience function for updating child hubServerIntraComm from parent serverIntraComm (deep Comm copy)
 

Private Attributes

const MPIManager & mpiManager
 reference to the MPI manager with Dakota's MPI options
 
ProgramOptions & programOptions
 programOptions is non-const due to updates from broadcast
 
OutputManager & outputManager
 Non-const output handler to help with file redirection.
 
bool dummyFlag
 prevents multiple MPI_Finalize calls due to dummy_lib
 
bool outputTimings
 timing info only beyond help/version/check
 
Real startCPUTime
 start reference for UTILIB CPU timer
 
Real startWCTime
 start reference for UTILIB wall clock timer
 
Real startMPITime
 start reference for MPI wall clock timer
 
long startClock
 start reference for local clock() timer measuring parent+child CPU
 
std::list< ParallelLevel > parallelLevels
 the complete set of parallelism levels for managing multilevel parallelism among one or more configurations
 
std::list< ParallelConfiguration > parallelConfigurations
 the set of parallel configurations which manage list iterators for indexing into parallelLevels
 
ParConfigLIter currPCIter
 list iterator identifying the current node in parallelConfigurations
 

Detailed Description

Class for partitioning multiple levels of parallelism and managing message passing within these levels.

The ParallelLibrary class encapsulates all of the details of performing message passing within multiple levels of parallelism. It provides functions for partitioning of levels according to user configuration input and functions for passing messages within and across MPI communicators for each of the parallelism levels. If support for other message-passing libraries beyond MPI becomes needed (PVM, ...), then ParallelLibrary would be promoted to a base class with virtual functions to encapsulate the library-specific syntax.
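
The following minimal sketch illustrates the intended call sequence, assuming the Dakota headers and pre-constructed manager objects are available; the short config/scheduling arguments are placeholder values, not the real Dakota enumerators:

    #include "ParallelLibrary.hpp"

    void partition_example(const MPIManager& mpi_mgr, ProgramOptions& prog_opts,
                           OutputManager& output_mgr)
    {
      ParallelLibrary parallel_lib(mpi_mgr, prog_opts, output_mgr);

      short default_config = 0, scheduling = 0;  // placeholders, not real enum values

      // Level 1: split MPI_COMM_WORLD among concurrent iterator servers.
      const ParallelLevel& mi_pl = parallel_lib.init_iterator_communicators(
        2,     // iterator_servers
        0,     // procs_per_iterator (0 = auto-configure)
        1, 8,  // min/max procs per iterator
        4,     // max_iterator_concurrency
        default_config, scheduling,
        false  /* peer_dynamic_avail */);

      // Tag output streams per iterator server to keep output separated.
      parallel_lib.push_output_tag(mi_pl);

      // Levels 2 and 3 follow the same pattern through
      // init_evaluation_communicators() and init_analysis_communicators().
      parallel_lib.print_configuration();
    }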

Constructor & Destructor Documentation

◆ ParallelLibrary() [1/2]

default constructor (used for dummy_lib)

This constructor is used for creation of the global dummy_lib object, which is used to satisfy initialization requirements when the real ParallelLibrary object is not available.

◆ ParallelLibrary() [2/2]

ParallelLibrary ( const MPIManager &  mpi_mgr,
ProgramOptions &  prog_opts,
OutputManager &  output_mgr 
)

stand-alone and default library mode constructor; options are not required

The same constructor is used for executable and library environments; because the sequencing of object construction is ordered, there is no need to separately pull updates off the command line (programOptions).

References ParallelLibrary::init_mpi_comm(), and ParallelLibrary::initialize_timers().

Member Function Documentation

◆ push_output_tag()

void push_output_tag ( const ParallelLevel &  pl)

conditionally append an iterator server id tag to the hierarchical output tag, manage restart, and rebind cout/cerr

If the user has specified the use of files for DAKOTA standard output and/or standard error, then bind these filenames to the Cout/Cerr macros. In addition, if concurrent iterators are to be used, create and tag multiple output streams in order to prevent jumbled output. Manage restart file(s) by processing any incoming evaluations from an old restart file and by setting up the binary output stream for new evaluations. Only master iterator processor(s) read and write restart information. This function must follow init_iterator_communicators so that restart can be managed properly for concurrent iterator strategies. In the case of concurrent iterators, each iterator has its own restart file tagged with its iterator number.
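
As a self-contained illustration of the stream-rebinding idea only (the real logic lives in OutputManager and uses Dakota's Cout/Cerr macros; the file-naming convention here is hypothetical):

    #include <fstream>
    #include <iostream>
    #include <string>

    int main() {
      int server_id = 2;  // hypothetical iterator server id
      std::ofstream tagged("dakota.out." + std::to_string(server_id));
      std::streambuf* old_buf = std::cout.rdbuf(tagged.rdbuf());  // rebind cout
      std::cout << "output from iterator server " << server_id << '\n';
      std::cout.rdbuf(old_buf);  // restore on the matching pop
      return 0;
    }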

References ParallelLibrary::bcast(), ParallelLevel::dedicatedMasterFlag, OutputManager::graph2DFlag, ParallelLevel::hubServerCommRank, ParallelLevel::hubServerCommSize, ParallelLevel::hubServerIntraComm, ParallelLevel::numServers, ParallelLibrary::outputManager, ParallelLibrary::programOptions, OutputManager::push_output_tag(), OutputManager::resultsOutputFile, OutputManager::resultsOutputFlag, ParallelLevel::serverCommRank, ParallelLevel::serverId, MPIPackBuffer::size(), OutputManager::tabularDataFile, and OutputManager::tabularDataFlag.

Referenced by Environment::construct(), and IteratorScheduler::partition().

◆ terminate_modelcenter()

void terminate_modelcenter ( )

terminate ModelCenter if running

Close streams associated with manage_outputs and manage_restart and terminate any additional services that may be active.

References Dakota::abort_handler(), Dakota::dc_ptr_int, and Dakota::mc_ptr_int.

Referenced by ParallelLibrary::~ParallelLibrary().

◆ increment_parallel_configuration()

void increment_parallel_configuration ( ParLevLIter  mi_pl_iter)
inline

add a new node to parallelConfigurations and increment currPCIter; limit miPLIters within new configuration to mi_pl_iter level

Called from the ParallelLibrary ctor and from Model::init_communicators(). An increment is performed for each Model initialization except the first (which inherits the world level from the first partial configuration).
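
A plausible rationale for the std::list container is iterator stability: saved ParConfigLIter handles remain valid as new configuration nodes are appended. A minimal sketch of that pattern, with illustrative names:

    #include <iterator>
    #include <list>

    struct Config { /* iterators into parallelLevels, counts, ... */ };

    std::list<Config> configs;         // cf. parallelConfigurations
    std::list<Config>::iterator curr;  // cf. currPCIter

    void increment() {
      configs.push_back(Config());        // add a new node ...
      curr = std::prev(configs.end());    // ... and point at it; earlier
    }                                     // list iterators stay valid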

References ParallelLibrary::currPCIter, ParallelConfiguration::eaPLIter, ParallelConfiguration::endPLIter, ParallelConfiguration::iePLIter, ParallelConfiguration::miPLIters, ParallelConfiguration::numParallelLevels, ParallelLibrary::parallelConfigurations, and ParallelLibrary::parallelLevels.

Referenced by Iterator::init_communicators(), and Model::init_communicators().

◆ init_mpi_comm()

void init_mpi_comm ( )
private

◆ init_communicators()

void init_communicators ( const ParallelLevel &  parent_pl,
int  num_servers,
int  procs_per_server,
int  min_procs_per_server,
int  max_procs_per_server,
int  max_concurrency,
int  asynch_local_concurrency,
short  default_config,
short  scheduling_override,
bool  peer_dynamic_avail 
)
private

split a parent communicator into child server communicators

Split parent communicator into concurrent child server partitions as specified by the passed parameters. This constructs new child intra-communicators and parent-child inter-communicators. This function is called from MetaIterators and NestedModel for the concurrent iterator level and from ApplicationInterface::init_communicators() for the concurrent evaluation and concurrent analysis levels.
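
A hedged sketch of the dedicated-master flavor of this split (Dakota's actual split_communicator_dedicated_master() also builds the parent-child inter-communicators and handles remainders; the round-robin assignment below is illustrative only):

    #include <mpi.h>

    // Rank 0 of the parent acts as dedicated master; remaining ranks are
    // grouped into num_servers child intra-communicators.
    void split_with_master(MPI_Comm parent, int num_servers, MPI_Comm& child) {
      int rank;
      MPI_Comm_rank(parent, &rank);
      int color = (rank == 0) ? MPI_UNDEFINED           // master joins no server
                              : (rank - 1) % num_servers;
      MPI_Comm_split(parent, color, rank, &child);      // child == MPI_COMM_NULL on master
    }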

References ParallelLibrary::currPCIter, ParallelLevel::dedicatedMasterFlag, ParallelLevel::messagePass, ParallelLevel::numServers, ParallelLibrary::parallelLevels, ParallelLevel::procsPerServer, ParallelLibrary::resolve_inputs(), ParallelLevel::serverCommRank, ParallelLevel::serverCommSize, ParallelLibrary::split_communicator_dedicated_master(), and ParallelLibrary::split_communicator_peer_partition().

Referenced by ParallelLibrary::init_analysis_communicators(), ParallelLibrary::init_evaluation_communicators(), and ParallelLibrary::init_iterator_communicators().

◆ resolve_inputs()

void resolve_inputs ( ParallelLevel &  child_pl,
int  avail_procs,
int  min_procs_per_server,
int  max_procs_per_server,
int  max_concurrency,
int  capacity_multiplier,
short  default_config,
short  scheduling_override,
bool  peer_dynamic_avail,
bool  print_rank 
)
private

resolve user inputs into a sensible partitioning scheme

This function is responsible for the "auto-configure" intelligence of DAKOTA. It resolves a variety of inputs and overrides into a sensible partitioning configuration for a particular parallelism level. It also handles the general case in which a user's specification does not divide evenly into the number of processors available at that level. If num_servers and procs_per_server are both nondefault, then the former takes precedence.
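
Illustrative arithmetic only, not Dakota's actual algorithm: one plausible resolution honoring the documented precedence (num_servers wins when both are nondefault) and the uneven-division case:

    // Hypothetical helper; Dakota's resolve_inputs() additionally weighs
    // min/max bounds, scheduling overrides, and master vs. peer layouts.
    void resolve(int avail_procs, int req_servers, int req_pps,
                 int& num_servers, int& procs_per_server, int& remainder)
    {
      if (req_servers > 0) {                 // num_servers takes precedence
        num_servers      = req_servers;
        procs_per_server = avail_procs / num_servers;
        remainder        = avail_procs % num_servers;    // cf. procRemainder
      }
      else if (req_pps > 0) {                // honor procs_per_server request
        procs_per_server = req_pps;
        num_servers      = avail_procs / procs_per_server;
        remainder        = avail_procs % procs_per_server;
      }
      else {                                 // neither given: single server
        num_servers = 1; procs_per_server = avail_procs; remainder = 0;
      }
    }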

References Dakota::abort_handler(), ParallelLevel::dedicatedMasterFlag, ParallelLevel::numServers, ParallelLevel::procRemainder, and ParallelLevel::procsPerServer.

Referenced by ParallelLibrary::init_communicators().

