
Resource Size Selection Guidance (Neo)

When running models in Cosmic Frog, users can choose the size of the resource the model’s scenario(s) will be run on, in terms of available memory (RAM in GB) and number of CPU cores. Depending on its complexity and the number of elements, policies, and constraints it contains, a model needs a certain amount of memory to run to completion successfully. Bigger, more complex models typically need to be run on a resource with more memory (RAM) available than smaller, less complex models. The bigger the resource, the higher its billing factor, and hence the more of the customer’s available cloud compute hours a run uses (the total amount of cloud compute time available to the customer is part of the customer’s Master License Agreement with Optilogic). Ideally, users choose a resource size that is just big enough to run their scenario(s) without the resource running out of memory, while minimizing the amount of cloud compute time used. This document guides users in choosing an initial resource size and periodically re-evaluating it to ensure optimal usage of the customer’s available cloud compute time.

Resource Size and its Components

Once a model has been built and the user is ready to run 1 or multiple scenarios, they can click on the green Run button at the top right in Cosmic Frog, which opens the Run screen:

[Screenshot: the Run screen in Cosmic Frog, showing the Resource Size drop-down]

  1. User is in the Run screen that opened after clicking on the Run button at the top right in Cosmic Frog.
  2. On the left, user can configure the model run options, the first of which is selecting a Resource Size.
  3. The Resource Size drop-down shows all resource sizes available for selection, from Mini (smallest) to Overkill (biggest). A resource size is a combination of the number of CPU cores and the amount of memory (RAM) that is available when using that resource. Using more cores leads to shorter runtimes, as certain tasks can be handled simultaneously instead of sequentially. In the above screenshot, Resource Size S has been selected. This resource has 4 CPU cores available and up to 16 GB (gigabytes) of RAM. The resource size drop-down also lists the Billing Factor of the resource. For example, the 3XS resource size has a Billing Factor of 0.5. This means that if a scenario run takes 2 minutes, 1 minute is billed to the user and subtracted from the total cloud compute time still available. The 3XS resource is a small resource (1 CPU core and up to 2 GB of RAM), and therefore cheaper to run on. As another example, the 2XL resource size has a billing factor of 2, meaning that a run taking 15 minutes will be billed as 30 minutes of cloud compute time used. This resource has 8 CPU cores, so it will be able to perform quite a few tasks in parallel, and up to 128 GB of RAM, which is why it is more expensive to run on.
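To make the billing arithmetic explicit, here is a minimal sketch. The billing factors shown are only the two examples mentioned above, and the dictionary and function are illustrative, not part of any Optilogic API:

    # Minimal sketch: billed cloud compute time = run time * billing factor.
    # Only the two billing factors mentioned above are included (3XS = 0.5, 2XL = 2);
    # check the Resource Size drop-down for the actual factors of each size.

    EXAMPLE_BILLING_FACTORS = {"3XS": 0.5, "2XL": 2.0}

    def billed_minutes(run_minutes: float, resource_size: str) -> float:
        """Return the cloud compute minutes billed for a run on the given resource size."""
        return run_minutes * EXAMPLE_BILLING_FACTORS[resource_size]

    print(billed_minutes(2, "3XS"))   # 1.0 minute billed for a 2-minute run
    print(billed_minutes(15, "2XL"))  # 30.0 minutes billed for a 15-minute run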

Choosing an Initial Resource Size

There are quite a few model factors that influence how much memory a scenario needs to solve. These include the number of model elements, policies, periods, and constraints. The type(s) of constraints used may play a role too. The main factors, in order of impact on memory usage, are:

  1. The number of lanes. This is the number of source-destination-product-period combinations that are considered in the model.
  2. The number of demand records.
  3. The number of constraints.

These numbers are those after expansion of any grouped records and application of scenario items, if any.

The number of lanes can depend on the Lane Creation Rule setting in the Neo (Optimization) Parameters:

[Screenshot: the Neo (Optimization) Parameters section of the Run screen, showing the Lane Creation Rule options]

  1. After clicking on the Run button at the top right in Cosmic Frog, the Run screen opens.
  2. When Neo is selected as the Engine, the Neo (Optimization) Parameters will be applied. To view these, expand this section if it is collapsed.
  3. One of the parameters users can select is the Lane Creation Rule, which impacts the total number of lanes that the scenario(s) will contain. There are 4 options here (illustrated in the sketch after this list):
    1. Transportation Policy Lanes Only: the lanes that are set up on the Transportation Policies table will be part of the scenario(s); any records on the Sourcing Policies tables will not be used for lane creation.
    2. Sourcing Policy Lanes Only: the lanes that are set up on the Sourcing Policies tables will be part of the scenario(s); any records on the Transportation Policies tables will not be used for lane creation.
    3. Intersection: lanes that exist both in the Transportation Policies and Sourcing Policies tables will be part of the scenario(s); any lanes that only exist in either the Transportation Policies table or a Sourcing Policies table will not be used (this is like an inner join).
    4. Union: all lanes that exist in the Transportation Policies table and all lanes that exist in the Sourcing Policies tables will be used for lane creation (any duplicates will be removed).

Note that for lane creation, expansion of grouped records and application of scenario item(s) also need to be taken into account to arrive at the number of lanes considered in the scenario run.
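To make the four options more concrete, the following is a small illustrative sketch that treats lanes as (source, destination, product, period) combinations drawn from the two policy tables. The sample lanes and the create_lanes function are hypothetical and only mirror the descriptions above:

    # Hypothetical sketch of the four Lane Creation Rule options, treating lanes as
    # (source, destination, product, period) tuples taken from the two policy tables.

    transportation_lanes = {("DC1", "CustA", "ProdX", "2024"),
                            ("DC1", "CustB", "ProdX", "2024")}
    sourcing_lanes       = {("DC1", "CustA", "ProdX", "2024"),
                            ("DC2", "CustA", "ProdX", "2024")}

    def create_lanes(rule: str) -> set:
        if rule == "Transportation Policy Lanes Only":
            return transportation_lanes
        if rule == "Sourcing Policy Lanes Only":
            return sourcing_lanes
        if rule == "Intersection":                    # like an inner join
            return transportation_lanes & sourcing_lanes
        if rule == "Union":                           # duplicates removed automatically
            return transportation_lanes | sourcing_lanes
        raise ValueError(f"Unknown lane creation rule: {rule}")

    print(len(create_lanes("Intersection")))  # 1 lane
    print(len(create_lanes("Union")))         # 3 lanes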

It is not possible to predict exactly how much memory a scenario needs to run to completion successfully. For example, scenarios with a similar footprint in terms of the number of model elements, policies, periods, and constraints can end up having differently sized branch-and-bound trees due to the type or combination of constraints, which can lead to quite different memory footprints. However, an educated estimate can be made based on test models run by Optilogic, and users can use the following table to choose an initial resource size. This is intended to be a starting point from which users can fine-tune the resource size further by taking the steps described in the next section, “Evaluating the Resource Size”. First, calculate the number of demand records multiplied by the number of lanes in your model (after expansion of grouped records and application of scenario items). Next, find the range in the first column of the table, and look up the recommended initial resource size in the second column:

# demand records × # lanes        Recommended Initial Resource Size
< 1×10³                           Mini
~1×10³ – 1×10⁸                    4XS – 3XS
~1×10⁸ – 1×10¹⁰                   2XS – S – M
~1×10¹⁰ – 1×10¹¹                  L – XL
~1×10¹¹ – 5×10¹⁴                  2XL – 3XL
~5×10¹⁴ – 5×10¹⁵                  4XL
> 5×10¹⁵                          Overkill

Note that one can more quickly home in on the most appropriate resource size for a scenario by starting with one that is too large, rather than too small. A scenario run will end in an error if not enough memory is available, with no indication of how much memory is required. In the logs of a successful run, the maximum memory usage will be listed, which users can use to select the most appropriate resource size for their next run(s). See the next section for more on where to find this information.
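For convenience, the lookup in the table above can be restated as a small helper. The thresholds simply repeat the table’s ranges; the function itself is illustrative, not an Optilogic tool:

    # Illustrative helper restating the table above: compute
    # (# demand records * # lanes) and return the recommended initial resource size range.

    def recommended_initial_size(demand_records: int, lanes: int) -> str:
        metric = demand_records * lanes
        if metric < 1e3:
            return "Mini"
        if metric < 1e8:
            return "4XS - 3XS"
        if metric < 1e10:
            return "2XS - S - M"
        if metric < 1e11:
            return "L - XL"
        if metric < 5e14:
            return "2XL - 3XL"
        if metric < 5e15:
            return "4XL"
        return "Overkill"

    # Example: 50,000 demand records and 2,000,000 lanes give a metric of 1e11 -> "2XL - 3XL"
    print(recommended_initial_size(50_000, 2_000_000))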

Evaluating the Resource Size

After running a scenario with the initially selected resource size, users can evaluate if it is the best resource size to use or if a smaller or larger one is more appropriate. The Run Manager application on Optilogic’s platform can be used to assess resource size:

[Screenshot: the Run Manager application, showing the Job Info panel for a selected scenario]

  1. Go to the Run Manager application by clicking on its icon on the left-hand side when logged in to the Optilogic platform on cosmicfrog.com. Should you not see the Run Manager here, then click on the icon with 3 dots to show all applications; the Run Manager application should now be visible too.
  2. Under Type at the top of the screen, filter for Scenario.
  3. Find the scenario you want to evaluate the resource size of in the list of jobs and click on it to select it.
  4. The Job Info for this scenario will be displayed on the right-hand side.
  5. If not already selected, click on the i-icon to show the Job Info; other details of the job can be viewed by clicking on the icons to the right of the i-icon.
  6. Part of the job information listed is the resource size the job was run on; in this case resource size 2XS was used, which has 2 CPU cores and up to 4 GB of RAM.
  7. Next, click on the bar chart icon to see the Job Usage Metrics:

[Screenshot: Job Usage Metrics for the scenario run on resource size 2XS]

  1. The legend tells us that the red bar represents the percentage of CPU that was used at peak usage and the yellow bar the percentage of memory used at peak usage.
  2. 26.6% of the available memory was used at peak usage. This is with 4 GB of RAM available, so about 0.266 * 4 GB = 1.064 GB of RAM was used.
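This peak-usage arithmetic, and the choice of a smaller resource size discussed below, can be sketched as follows. The RAM figures are only the sizes mentioned in this document, and the helper function is illustrative:

    # Illustrative sketch: estimate peak RAM used from the Job Usage Metrics and pick
    # the smallest resource size that still covers it. RAM figures are only the sizes
    # mentioned in this document (4XS, 3XS, 2XS, S), not the complete list.

    RAM_GB = {"4XS": 1, "3XS": 2, "2XS": 4, "S": 16}

    def smallest_sufficient_size(used_size: str, peak_memory_pct: float) -> str:
        peak_gb = RAM_GB[used_size] * peak_memory_pct / 100   # e.g. 4 GB * 26.6% = 1.064 GB
        for size, ram in sorted(RAM_GB.items(), key=lambda kv: kv[1]):
            if ram >= peak_gb:
                return size
        return "larger than any size listed here"

    print(smallest_sufficient_size("2XS", 26.6))  # -> "3XS" (1.064 GB fits in 2 GB, not in 1 GB)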

Using this knowledge that the RAM required at peak usage is just over 1 GB, we can conclude that going down to resource size 3XS, which has 2 GB of RAM available, should still work for this scenario. The expectation is that going further down to 4XS, which has 1 GB of RAM available, will not work, as the scenario will likely run out of memory. We can test this with 2 additional runs. These are the Job Usage Metrics after running with resource size 3XS:

[Screenshot: Job Usage Metrics for the scenario run on resource size 3XS]

As expected, the scenario runs fine, and the memory usage is now at about 54% (of 2 GB) at peak usage.

Trying with resource size 4XS results in an error:

[Screenshot: the Job Error Log in the Run Manager for the scenario run on resource size 4XS]

  1. The State of the job now says “error” instead of “done”.
  2. View the Job Error Log by clicking on the 6th icon at the top of the right panel.
  3. The Job Error Log contains a warning that the job ran out of memory, which is what we expected, as the scenario requires a bit over 1 GB of RAM at peak usage and the 4XS resource size has up to 1 GB of RAM available.

Note that when a scenario runs out of memory like this one here, there are no results for it in the output tables in Cosmic Frog if it is the first time the scenario is run. If the scenario has been run successfully before, then the previous results will still be in the output tables. To verify within Cosmic Frog that a scenario has run successfully, users can check the timestamp of the outputs in the Optimization Network Summary output table, or review the number of error jobs versus done jobs at the top of Cosmic Frog (see the next screenshot). If either of these 2 indicates that the scenario may not have run, then double-check in the Run Manager and review the logs there to find the cause.

[Screenshot: the status bar at the top of Cosmic Frog, showing the counts of error and done jobs]

In the status bar at the top of Cosmic Frog, users can see that there were 2 error jobs and 13 done jobs within the last 24 hours.

In conclusion, for this scenario we started with a 2XS resource size. Using the Run Manager, we reviewed the percentage of memory used at peak usage in the Job Usage Metrics and concluded that a smaller 3XS resource size with 2 GB of RAM should still work fine for this scenario, but an even smaller 4XS resource size with 1 GB of RAM would be too small. Test runs using the 3XS and 4XS resource sizes confirmed this.

Summary of Resource Size Selection Guidelines

  1. When building out your model and its scenarios, be cognizant of the size of your model, especially the number of demand records and lanes you are creating. Keep these in mind when you first run your model, and base your initial resource size on the table in the section “Choosing an Initial Resource Size” above. Remember that:
    1. If you choose a resource size that is too large, the scenario will run to completion fine and you can determine the more appropriate smaller resource size using the job usage metrics of this run.
    2. If you choose a resource size that is too small, the scenario will end in an error that will indicate that there is not enough memory available, but it will not tell you how much more it needs. In this case a good approach can be to select a resource 2 or 3 sizes bigger and then scale back again from there if the scenario runs fine and the job usage metrics indicate a smaller resource size should suffice.
    3. By using groups, it is easy to, for example, create all-to-all policies by populating just 1 record in the sourcing and/or transportation policies tables. However, these can make a model unnecessarily big and in need of more RAM to solve. Try to be aware of the number of enumerated records a grouped record will result in, and ensure no more options than those realistically under consideration are included in your model (see the short example after this list).
  2. Use the Job Usage Metrics that can be found in the Run Manager to assess if the resource size used is appropriate. Based on the peak RAM usage, you can determine if a smaller resource size could be used and, if so, which is the smallest one that still has sufficient RAM.
    1. You can do this for each scenario that may be re-run frequently in future to see if it is worthwhile to run different scenarios with different resource sizes.
  3. You can periodically re-assess if the resource size(s) used are still appropriate; this is especially important after making changes to your model.
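As an illustration of point 1.3 above, the numbers below are made up, but they show how quickly a single grouped policy record can expand:

    # Made-up illustration of how one grouped policy record expands into many lanes:
    # a single record pairing a group of 10 DCs with a group of 500 customers for a
    # group of 20 products expands to 10 * 500 * 20 = 100,000 enumerated lanes.
    dcs, customers, products = 10, 500, 20
    enumerated_lanes = dcs * customers * products
    print(enumerated_lanes)  # 100000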
