Several input columns used by Throg (Simulation) accept distributions as inputs. The following distributions, along with their syntax, are currently supported by Throg:
A user-defined Histogram is also supported as an input. Histograms are defined in the Histograms input table; to reference a histogram in an allowed input column, simply enter the Histogram Name.
The data for our visualizations comes from our Cosmic Frog model database, which is stored in the cloud and referenced by a unique identifier.

We can decide whether we want to use Optilogic servers for our data computations or have the computations performed locally.

Next, we must decide which table we want to use for our visualization. Generally, the results of running our model are stored in the “optimization” tables, so it can be helpful to search for that term to find output data. However, we can also use tables containing input data if desired.
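If you are comfortable with SQL, you can also preview candidate tables directly in the SQL Editor before building a visualization. A minimal sketch (the table name is illustrative; table names are lower case in the database):

    SELECT * FROM optimizationflowsummary LIMIT 100;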

We can also preview the data in a table using the magnifying glass button:


When downloading the related files for any of the sample Excel Apps posted to our Resource Library, you will likely encounter macro-enabled Excel workbooks (.xlsm) whose macros may be disabled after download.

To enable the macros, right-click on the Excel file in File Explorer and select Properties. In the Properties menu, check the box in the Security section to Unblock the file and then hit Apply.

You can then re-open the document and should see that the macros are now enabled.
Some of the Excel templates also make use of add-ins to help expand the workbook’s capabilities. An example of this is the ArcGIS add-in, which allows map visuals to be created directly in the workbook. These add-ins might be disabled by default in the Microsoft Office settings; if that is the case, you will see an error message like the following:

This can be resolved by updating your Privacy Settings under your Microsoft Office Account page. To access this menu, click File > Account > Account Privacy > Manage Settings. Then make sure that under Optional Connected Experiences, the option to Turn on optional connected experiences is enabled.

Updating this value will require a restart of Microsoft Office. Once the restart is completed, you should see that the add-ins are now enabled and working as expected.
If you run into other issues with any Resource Library templates, please do not hesitate to reach out to support@optilogic.com.
Below is a list of model errors that might appear in the Job Error Log, along with their associated resolutions.
Resolution – Please check whether a scenario item was built where an action does not reference a column name. Scenario Actions should be of the form ColumnName = …..
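For example (column name illustrative):

    Invalid:  = 'Exclude'              (no column name referenced)
    Valid:    Status = 'Exclude'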


Resolution – Please check whether there is a string in the scenario condition that is missing an apostrophe to open or close the string.
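For example (column name and value illustrative):

    Invalid:  FacilityName = 'DC_Detroit      (missing the closing apostrophe)
    Valid:    FacilityName = 'DC_Detroit'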


Resolution – This indicates an intermittent drop in connection when reading from or writing to the database. Re-running the scenario should resolve this error. If the error persists, please reach out to support@optilogic.com.
If you are comfortable with traditional linear programming techniques, you can select the “Write Input Solver Files” and “Write LP File” parameters to get useful output files.

After running, these files can be found in your File Explorer.

The “input_solver” folder has a list of all the tables that are entered into the optimization solver. This is useful for:

The “model.lp” file shows the model in a more traditional MIP optimization format, including the objective function and model constraints.

When using the Infeasibility Diagnostic engine, the LP file is different from that of a traditional Neo run. In this case, the cost coefficients in the objective function are set to 0. Instead, a positive cost coefficient is applied to the slack variable added to each constraint, and the goal of the model is to minimize the total slack. This allows us to find the “edge” of infeasibility.
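As a schematic sketch of the idea (not the engine’s exact formulation): if the original model minimizes cᵀx subject to Ax ≥ b, the diagnostic instead solves

    minimize    Σᵢ wᵢ·sᵢ          with weights wᵢ > 0
    subject to  A·x + s ≥ b,  s ≥ 0

so each constraint can be violated by a slack amount sᵢ, and minimizing the total weighted slack reveals which constraints are driving the infeasibility.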

The validation error report table is populated for all model solves and contains a lot of helpful information for both reviewing and troubleshooting models. A set of results is saved for each scenario, and every record reports the table and column where the issue was identified, a description of the error, and the value that was identified as the problem, along with the action taken by the solver. Where multiple records are generated for the same type of error, an example record is populated as the identified value and a count of the errors is displayed; detailed records can be printed using debug mode. Each record is also rated on the severity of its impact on a scale from 0 to 3, with 3 being the highest.
The validation error report can serve as a great tool to help troubleshoot an infeasible model. The best approach for utilizing this table is to first sort on the Severity column so that the highest-severity issues are displayed. If severity 3 issues are present, they must be addressed. If no severity 3 issues exist, the next step is to review any severity 2 issues to consider their policy impact. It is also helpful to search for any instances where rows are being dropped, as these dropped rows will likely influence other errors. To do this, filter the Action field for ‘Row Dropped’.
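The same triage can be done in the SQL Editor, using the Severity and Action columns described above (a sketch – note that table and column names are lower case in the database):

    -- Highest-severity issues first
    SELECT * FROM optimizationvalidationerrorreport ORDER BY severity DESC;

    -- Rows that were dropped by the solver
    SELECT * FROM optimizationvalidationerrorreport WHERE action = 'Row Dropped';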
While the report can be helpful in trying to proactively diagnose infeasible models, it won’t always have all of the answers. To learn more about the dedicated engine for infeasibility diagnosis please check out this article: Troubleshooting With The Infeasibility Diagnostic Engine
Severity 3 records capture instances of syntax issues that are preventing the model from being built properly. This can be presented as 2 types:
Severity 3 records can also capture instances where model infeasibility is detected before the model solve begins; when a severity 3 error is detected, the model solve will be cancelled immediately. There are 3 common instances of these Severity 3 errors:
Severity 2 records capture sources of potential infeasibility; while not always a definitive reason for a problem being infeasible, they will highlight potential issues with the policy structure of a model. There are 2 common instances of Severity 2 errors:
These severity 2 errors don’t necessarily indicate a problem; they can often be caused by group-based policies whose members overlap from one group to another. It is still a good idea to review these gaps in policies and make sure that all of the lanes which should be considered in your solve are in fact being read into the solver.
Severity 1 records capture issues in model data that can cause rows from input tables to be dropped when writing the solver files for a model. These can be violations of allowed numerical ranges, such as a negative quantity for customer demand. Detailed policy linkages that do not align can also cause errors, for example a process which uses a specific work center that doesn’t exist at the assigned facility. There are other causes of severity 1 errors, and it is important to review these errors whenever changes are made to your input data. Be sure to check the Action column to see whether a default value was applied (noting the new value that was used) or whether the row was dropped.
Severity 1 records will also note other issues that have been identified in the model input data. These can be unused model elements, such as a customer which is included but has no demand, or transportation modes that exist in the model but aren’t assigned to any lanes. Records can also be generated for policies that have no cost charged against them, to let you know that a cost is missing. Duplicate data records will also be flagged as severity 1 errors – the last row read in by the solver will be persisted. If you have duplicated data with different detailed information, these duplicates should be addressed in your input tables so that you can make sure the correct information is read in by the solver.
One point to look out for is any severity 1 issue related to an invalid UOM entry – these will cause records to be dropped, and while the dropped records don’t always directly cause model infeasibilities themselves, they can often be at the root of other issues identified at higher severity or through infeasibility diagnosis runs.
Severity 0 records are for informational purposes only and are generated when records have been excluded upstream, either in the input tables directly or by other corrective actions that the solver has taken. These records are letting you know that the solver read in the input records but they did not correspond to anything that was in the current solve. An example of this would be if you only included a subset of your facilities for a scenario, but still had included records for an out-of-scope MFG location. If the MFG is excluded in the facilities table but its production policies are still included, a record will be written to the table to let you know that these production policy rows were skipped.
After every Cosmic Frog run, it’s a great habit to check the OptimizationValidationErrorReport table. Even for models that run successfully, this table can have useful information on how the input data is being processed by the solver.

Key columns in the validation error report can tell you:
Validation errors that have “high” or “moderate” severity are likely to cause an infeasible model. In fact, Cosmic Frog has an “infeasibility checker” that looks for these kinds of errors and flags them before running the model to save you run time.

In this example, there are no transportation lanes that can deliver beds to CUST_Phoenix, so demand cannot be fulfilled. The infeasibility checker finds this and stops the model run before it even tries to optimize.


A couple of examples of how we could fix this error: we could add a transportation policy creating a lane that can deliver to CUST_Phoenix, or we could remove (or zero out) the demand record that cannot be fulfilled.
The OptimizationValidationErrorReport table is often very useful, even if a model “successfully” runs. This table will catch potential errors that do not cause infeasibility but could change the model results. In this example, there is a typo in the CustomerDemand table for “CUST_Montgomery”. Here, the model drops that row in the demand table. This will not cause an infeasible model, as it just removes that demand constraint. However, it means that no product will be sent to this customer in our optimized result.

Sometimes, the cause of infeasibility is not immediately clear. Especially as we build more and more complex models with many constraints, infeasibility can come from several different sources.
One great tool is the Infeasibility Diagnostic engine. In the “Run” menu, you can select this tool under the Neo engine dropdown.

Running the Infeasibility Diagnostic engine can allow the model to optimize even if the current constraints cause infeasibility. Generally, this engine adds slack variables to the constraints in our model, which allow it to solve.
Adding slack variables means that the infeasibility diagnostic is solving an optimization problem focused on feasibility, not cost. The goal of this augmented model is to minimize the slack variables. By minimizing the slack variables needed to solve the model, this diagnostic tool can find:
If constraint relaxation is enough to allow the model to solve, the model will show as “Done” in the Run Manager.

The “slack” results will show in the OptimizationConstraintSummary table.


In this example, there are no transportation lanes to CUST_Phoenix, so we cannot fulfill that demand constraint. The model is telling us that one way to make this feasible is to change the demand constraint to 0.
In general, infeasibility is a very complex topic, so the infeasibility diagnostic tool cannot catch or relax all potential infeasibility causes, but it is a very useful place to start.
The Anura data schema enables design in the Optilogic platform. It sits above the design algorithms to create a consistent modeling paradigm across them:

Anura is composed of the modeling categories required to build a model. Each modeling category contains tables to add pertinent detail and/or complexity to your model, as well as to identify structure, policies, costs, and constraints.

A complete list of the tables and fields within each table can be downloaded locally here.
‘Connection Info’ is required when connecting third-party tools such as Alteryx, Tableau, Power BI, etc. to Cosmic Frog.
Anura version 2.7 makes use of Next-Gen database infrastructure. As a result, connection strings must be updated to maintain connectivity.
Severed connections will produce an error when connecting your 3rd party tool to Cosmic Frog.
Steps to restore connection:
The Cosmic Frog ‘HOST’ value can be copied from the Cloud Storage browser.
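For reference, a Postgres connection string generally takes the following form (a sketch – the exact values come from your Connection Info, and 5432 is simply the default Postgres port):

    postgresql://<username>:<password>@<HOST>:5432/<database>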

Watch the video to learn how to build dashboards to analyze scenarios and tell the stories behind your Cosmic Frog models:
Feel free to download the Cosmic Frog Python Library PDF file. Please note that this library requires Python 3.11.
You can also reference the video below, which provides an overview of scripting within Cosmic Frog.
The following instructions begin with the assumption that you have data that you wish to upload to your Cosmic Frog model.
The instructions below assume the user is adding information to the Suppliers table in the model database. This will use the same connection that was previously configured to download data from the Customers table.
Drag the “Output Data” action into the Workflow and click to select “Write to File or Database”

Select the relevant ODBC connection established earlier; in this example we called the connection “Alteryx Demo Model.”

You will be prompted to enter a valid table name to which the data will be written in the model database. In this example, enter suppliers (all in lower case, to match the table name in Postgres, which is case-sensitive).
Click OK

Within the Options menu – edit “Output Options” to Append Existing in the drop-down list.
Within the Options menu – edit “Append Field Map” by clicking the three dots to see more options.
Select the “Custom Mapping” option and then use the drop-down lists to map each field in the Destination column to the Source column. Fields will generally share the same names, but note that the mapping is case-sensitive since the database is Postgres.

Click OK
Now you can Run the Workflow to upload the data to your model. Once it has completed, check the Suppliers table in Cosmic Frog to see the data.
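If you would like to double-check the upload outside of Alteryx, a quick query in the SQL Editor can confirm that the rows landed (a sketch):

    SELECT * FROM suppliers LIMIT 10;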
By default, the Geocoding option in Cosmic Frog will use Mapbox as the geodata provider. Other supported providers are Bing, Google, PTV, and PC Miler. If you have an access key for an alternate provider and would like to configure it as the default option, follow these two steps:
1. Set up your geocoding key under your Account > Geoprovider Keys page

2. Once a new key has been created, set the key as the Default Provider to be used by clicking the Star icon next to the key. This key will now be used for Geocoding in Cosmic Frog

Adding model constraints often requires us to take advantage of Cosmic Frog’s Groups feature. Using the Groups table, we can define groups of model elements. This allows us to reference several individual elements using just the group name.
In the example below, we have added the Harlingen, Ashland, and Memphis production sites to a group named “ProductionSites”.
Now, if we want to write a constraint that applies to all production sites, we only need to write a single constraint referencing the group.

When we reference a group in a constraint, we can choose to either aggregate or enumerate the constraint across the group.
If we aggregate the constraint, that means we want the constraint to apply to some aggregate statistic representing the entire group. For example, if we wanted the total number of pens produced across all production sites to be less than a certain value, we would aggregate this constraint across the group.

Enumerating a constraint across a group applies the constraint to each individual member of the group. This is useful if we want to avoid writing repetitive constraints. For example, if each production site had a maximum production limit of 400 pens, we could either write a constraint for each site or write a single constraint for the group. Compare the two images below to see how enumeration can help simplify our constraint tables.
Without enumeration:

With enumeration:

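Conceptually, the two options produce different solver constraints. As a sketch with illustrative variable names (x_Site = pens produced at a site) and an illustrative group total:

    Aggregated:  x_Harlingen + x_Ashland + x_Memphis ≤ GroupTotal
    Enumerated:  x_Harlingen ≤ 400,  x_Ashland ≤ 400,  x_Memphis ≤ 400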
Thank you for using the most powerful Supply Chain Design software in the galaxy (I mean, as far as we know).
To see the highlights of the software please watch the following video.
The team at Optilogic has built a powerful tool to take existing Supply Chain Guru models from either a .scgm or .mdf format and convert them to the Optilogic Anura and .frog schema.
To try this feature, click the three dots on the right side of your Home Screen and select the SCG Converter icon.

This will take you to the converter page:

From here you will need to name your model and select the model you wish to convert. Then you will be able to click the “Convert to Anura” button. Please do not click away from the browser tab while the model is uploading. After the upload is complete you will be redirected to another page while the model is converted to the Anura schema. At this point it is safe to click away from the page, as your conversion will remain in process.

Once the conversion is complete you will be redirected to the SQL Editor page where you can start executing queries on your new Anura database! You will notice that there are two schemas in the database: _original_scg and anura_2_8.
You can review the data from the original database under the _original_scg tables and can see the new transformed data under the anura_2_8 tables. You will also find a new table in the anura schema called ‘scg_conversion_log’ – this table will provide logging information from the conversion process to track where data transformations outside of the default schema took place, and will also note any warnings or issues that the converter ran into. A full list of column mappings can be found here.
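For example, the conversion log can be reviewed directly in the SQL Editor:

    SELECT * FROM anura_2_8.scg_conversion_log;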
For a review of Conditions related to Scenarios, please refer to the article here. In Scenarios, Conditions take the form of a comparison that creates a filter on the rows of a table.
If you are ever unsure of the name of a column that needs to be used in a condition, you can press CTRL+SPACE to display the full list of available columns. Alternatively, you can start typing and the valid column names will auto-complete.
Standard comparison operators include: =, >, <, >=, <=, !=. You can also use the LIKE or NOT LIKE operators with the % or * values as wildcards for string comparisons. The % and * wildcard operators are equivalent within Cosmic Frog (though best practice is to use the % wildcard symbol, since it is valid in the database code). If you have multiple conditions, the AND / OR operators are supported. The IN operator is also supported for comparing strings.
Cosmic Frog will color code syntax with the following logic:
Following are some examples of valid condition syntax:
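For instance (column names and values are illustrative):

    Quantity >= 100
    FacilityName LIKE 'DC%'
    Status != 'Exclude'
    CustomerName IN ('CUST_Phoenix', 'CUST_Montgomery')
    Quantity > 0 AND Status = 'Include'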
To describe the action and conditions of our scenario, there are specific syntax rules we need to follow.
In this example, we are editing the Facilities table to define a scenario without a distribution center in Detroit.

Actions describe how we want to change a table. Actions have 2 components:
Writing actions takes the form of an equation:
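    ColumnName = NewValue        (general form; for example, Status = 'Exclude' – column name illustrative)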
Further detail on action syntax can be found here.

Conditions describe what we want to change. They are Boolean (i.e. true/false) statements describing which rows to edit. Conditions have 3 components:
Conditions take the form of a comparison, such as:
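    ColumnName = 'Value'        (for example, FacilityName = 'DC_Detroit' – names illustrative)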
Further detail on Conditions syntax can be found here.

The Multi Time Period (MTP) tables allow you to enter time-period-specific data across many input tables. These MTP tables act as overrides in the relevant periods for the records in their standard tables. If a record with the same key structure does not exist in the standard table, the MTP record won’t be recognized by the solver and will be dropped.
For example, we have a manufacturing facility with a throughput capacity of 1000 units. When this information is entered into the Facilities table, the throughput capacity will be used for each period in the model:


Let’s say that the manufacturing facility is expanding operations over the year and wants to show that the throughput capacity gradually increases over each quarter. This adjustment in throughput capacity can be defined in the Facilities Multi Time Period table:
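For illustration (layout simplified, facility name and capacity values hypothetical), the override rows might look like:

    Facility: MFG    Period: Q2    Throughput Capacity: 1,250
    Facility: MFG    Period: Q3    Throughput Capacity: 1,500
    Facility: MFG    Period: Q4    Throughput Capacity: 1,750

In any period without an override (Q1 here), the capacity of 1000 from the Facilities table still applies.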


We can see that the throughput capacity increases over time, and given the constant outbound quantity, the throughput utilization goes down as the model progresses.
In all of the policy multi-time period tables, there are two fields which appear to be similar – Status and Policy Status.
For example, we have 2 manufacturing locations that can produce the same product at very different costs. We’ll see that the MFG location is used as the sole production option as it is far cheaper.


Now let’s say that we have to shut production down at the MFG location for maintenance in Q3; we can do this through the Production Policies Multi Time Period table:
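For illustration (layout simplified, field values assumed), the override row might look like:

    Facility: MFG    Period: Q3    Policy Status: Exclude

This removes the production policy from the solve for Q3 only; in all other periods, the policy from the standard table still applies.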


This can be a quick way to adjust policies over different times and is an alternative to using constraints. The use of Policy Status in place of constraints will avoid the creation of extra solver variables as well as reduce the number of constraints placed on the solution. This can help contribute to better model performance.
Similar to how Status and Policy Status work, you have the ability to change the operating status of a Facility and a Work Center over time. This would allow you to Open, Close, or Consider the use of a Facility or Work Center in any given period.
Using the same example as above, we can model the maintenance at MFG during Q3 by closing the entire location:
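For illustration (layout simplified), the override row in the Facilities Multi Time Period table would use one of the statuses noted above:

    Facility: MFG    Period: Q3    Status: Close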


Closing the entire facility will do more than just limit production; the facility is completely removed from the network for that period, and no other activities can take place.
For a review of Actions related to Scenarios, please refer to the article here. Writing actions takes the form of an equation:
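    ColumnName = NewValue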
If you are ever unsure of the name of a column that needs to be used in an action, you can press CTRL+SPACE to display the full list of available columns. Alternatively, you can start typing and the valid column names will auto-complete.
When assigning numerical values to a column, we can use mathematical operators such as +, -, *, and /. Likewise, you can use parentheses to establish a preferred order of operations, as shown in the following examples.
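For example (column names and factors are illustrative):

    UnitCost = UnitCost * 1.1
    UnitCost = (UnitCost + 2) / 2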
When assigning string values to a column, you must use single quotation marks around any string that is not a column name. Some examples of assigning string values are shown below:
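For example (column names and values are illustrative):

    Status = 'Exclude'
    ModeName = 'LTL'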
Scenarios allow you to run edited versions of your model in specific, controlled ways.
All scenarios have 3 components:

For an overview of the scenario features, please watch the following video:
Once you have run a model, you can visualize your results using the Analytics tab.

In Analytics, a dashboard is a collection of visualizations. Visualizations can take on many forms, such as charts, tables or maps.
In Cosmic Frog, there are default dashboards available to help you analyze your model results.


The default dashboards highlight some common analytics and metrics. They are designed to be interacted with through a set of filters.

We can hover over visualization elements to get more information. This floating card of information is called a “Tooltip”.

We can customize existing dashboards to fit our needs. For more information see Editing Dashboards and Visualizations.

This video guides you through creating your free account and the features of the Optilogic Cosmic Frog supply chain design platform.
If you are running into issues receiving your account confirmation email, please see the troubleshooting article linked here.
For a detailed walk-through of all map features, please see the "Getting Started with Maps" Help Center article.
You can edit and filter the information shown in a map layer by utilizing the Layer Menu on the right hand side of your map screen. This menu opens when a layer is selected and contains the following tabs:
From the Layer Style menu, we can modify how the layer will appear on the map.

From the Layer Label menu, we can add text descriptors to the map layer.

The “Labels” dropdown menu allows you to add a text descriptor next to a layer item. Only one item may be selected in the label dropdown.

The “Tooltip” is a floating box that appears when hovering over a layer element. We can add or remove the items displayed in the tooltip by selecting them in the “Tooltips” menu.

The Condition Builder menu allows you to edit the data included in your layer. The most important step is to use the drop-down menu to select the Table Name that you wish to show on the map. This can include point items like Customers or Facilities, or transportation arcs like OptimizationFlowSummary.

Within the Condition Builder you can use conditions to filter the table further to a subset of the data in the table. The filter syntax is similar to the syntax used when creating scenarios.
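For example (column name and threshold are illustrative), to show only larger flows from OptimizationFlowSummary you could apply a condition such as:

    FlowQuantity > 1000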