When modeling supply chains, stakeholders are often interested in understanding the cost to serve specific customers and/or products for segmentation and potential repositioning purposes. In order to calculate the cost to serve, variable costs incurred all along the path through the supply chain need to be aggregated, while fixed costs need to be apportioned to the correct customer/product. This is not always straightforward; think, for example, of multi-layer BOMs.
When running a network optimization (using the Neo engine) in Cosmic Frog, these cost to serve calculations are done automatically, and the outputs are written into three output tables. In this help center article, we will cover these output tables and how the underlying calculations that populate them work.
First, we will briefly cover the example model used for screenshots for most of this help center article, then we will cover the 3 cost to serve output tables in some detail, and finally we will discuss a more complex example that uses detailed production elements too.
There are 3 network optimization output tables which contain cost to serve results; they can be found in the Output Summary Tables section of the output tables list:

We will cover the contents and calculations used for these tables by using a relatively simple US Distribution model, which uses quite a few different cost types. This model consists of:
The following screenshot shows the locations and flows of one of the scenarios on a map:

One additional important input to mention is the Cost To Serve Unit Basis field on the Model Settings input table:

Users can select Quantity, Volume, or Weight. This basis is used when costs need to be allocated based on amounts of product (e.g. the amount produced or moved). For example, say we need to allocate $100,000 of fixed operating cost at DC_Raleigh to 3 outbound flows. The result differs depending on which cost to serve unit basis is used:
Note that in this documentation the Quantity cost to serve unit basis is used everywhere, which is the default.
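To make the DC_Raleigh allocation example above concrete, here is a minimal sketch (illustrative numbers, not actual engine code) of how the $100,000 fixed operating cost could be apportioned across the 3 outbound flows under the Quantity versus Weight basis:

```python
# Illustrative sketch only: apportioning a fixed cost over outbound flows
# proportionally to the chosen cost to serve unit basis.
fixed_cost = 100_000.0
flows = [  # hypothetical outbound flows from DC_Raleigh
    {"dest": "CZ_A", "quantity": 500, "weight": 2_000},
    {"dest": "CZ_B", "quantity": 300, "weight": 600},
    {"dest": "CZ_C", "quantity": 200, "weight": 400},
]

def allocate(basis):
    total = sum(f[basis] for f in flows)
    return {f["dest"]: fixed_cost * f[basis] / total for f in flows}

print(allocate("quantity"))  # CZ_A: 50,000; CZ_B: 30,000; CZ_C: 20,000
print(allocate("weight"))    # CZ_A: ~66,667; CZ_B: 20,000; CZ_C: ~13,333
```

The same $100,000 lands on the same 3 flows either way; only the proportions change with the chosen basis.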
We will cover the 3 output tables related to Cost To Serve using the model described above as the example in the screenshots. Further below in the document, we will also show an example of a model that uses some detailed production inputs and its cost to serve outputs. Let us start by examining the most detailed cost to serve output table, the Cost To Serve Path Segment Details table. This output table contains each segment of each path, from the most upstream product source location to customer fulfillment, the most downstream location. Each path segment represents an activity along the supply chain: this can be production when product is made/supplied, inventory that is held, a flow where product is moved from one location to another, or the use of a bill of materials when raw materials are used to create intermediates or finished goods.
Note that these output tables are very wide due to the number of columns they contain. In the screenshots, columns are often re-ordered, and some may be hidden if they do not contain any values or are not the topic of discussion, so they may not look exactly the same as what you see in your Cosmic Frog. Also, grids are often sorted to show records in increasing order of path ID and/or segment sequence.

Please note that several columns are not shown in the above screenshot; these include:
Removing some additional columns and scrolling right, we see a few more columns for these same paths:

We will stay with the example of the path from MFG_Detroit to DC_Jacksonville to CZ_Hartford for 707 units of P1_Bullfrog to explain the costs (Path ID = 1 in the above). Here is a visual of this path, shown on the map:

The following 4 screenshots show the 3 segments of this path in the Optimization Cost To Serve Path Segment Details output table, and the different costs that are applied to each segment. In this first screenshot, the leftmost column is the Segment Sequence, and a couple of fields that were not present in the screenshots above are now shown:




Let us put these on a map again, together with the calculations:

There are 2 types of costs associated with the production of 707 units of P1_Bullfrog at MFG_Detroit:
For the flow of 707 units from MFG_Detroit to DC_Jacksonville, there are 4 costs that apply:
There are 7 different costs that apply to the flow of the 707 units of P1_Bullfrog from DC_Jacksonville to CZ_Hartford where it fulfills the customer’s demand of 707 units:
There are cost fields in the Optimization Cost To Serve Path Segment Details output table that are not shown in the above screenshots, as these are all blank in the example model used. These fields, and how they would be calculated if the inputs to do so were present, are as follows:
In the above examples we have seen Segment Types of production and flow. When inventory is modelled, we will also start seeing segments with Segment Type = inventories, as shown in this next screenshot (Cosmic Frog outputs of the Optimization Cost To Serve Path Segment Details table were copied into Excel from which the screenshot was then taken):

Here, 75.41 units of product P4_Polliwog were produced at MFG_Detroit in period 2025 (segment 1), then moved to DC_Jacksonville in the same period (segment 2), where they have been put into inventory (segment 3). These units are then used to fulfill customer demand at CZ_Columbia in the next period, 2026 (segment 4).
The Optimization Cost To Serve Path Segment Details table can also contain records which have Segment Type = no_activity. On these records there is no Segment Origin – Segment Destination pair, just one location which is not being used during the period (0 throughput). This record is then used to allocate fixed operating, startup, and/or closing costs to that location.
There are still a few fields in the Optimization Cost To Serve Path Segment Details table that have not been covered so far, these are:
The Optimization Cost To Serve Path Summary table is an output table that aggregates the Optimization Cost To Serve Path Segment Details table by Path ID. In other words, the table contains 1 record for each path, in which all costs of the individual segments of the path are rolled up. Therefore, this table contains many of the same fields as the Path Segment Details table, just not the segment-specific ones. The next screenshot shows the record for Path ID = 1 of the Optimization Cost To Serve Path Summary table:

This output table summarizes the cost to serve at the customer-product-period level; it can be used to create reports at the customer or product level by aggregating further.

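Both roll-ups just described follow the same pattern. Here is a hedged sketch using pandas (column names are illustrative, not the exact database column names of the output tables):

```python
import pandas as pd

# Hypothetical export of the Path Segment Details output table.
segments = pd.read_csv("cost_to_serve_path_segment_details.csv")
cost_cols = [c for c in segments.columns if c.endswith("cost")]  # assumption on naming

# Path Summary equivalent: one record per path, all segment costs summed.
path_summary = segments.groupby("pathid", as_index=False)[cost_cols].sum()

# Summary equivalent: one record per customer-product-period, aggregating further.
cts_summary = segments.groupby(["customername", "productname", "periodname"],
                               as_index=False)[cost_cols].sum()
```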
Note that all 3 of these output tables can also be used in the Analytics module of Cosmic Frog to create cost to serve dashboards. For example, the Optimization Cost To Serve Summary output table is used in 2 standard analytics dashboards that are part of any new Cosmic Frog model and will be populated automatically when this table is populated after a network optimization (Neo) run: the “CTS Unprofitable Demand” and “CTS Customer Profitability” dashboards. Here is a screenshot of the charts included in the second dashboard:

To learn more about the Analytics module in Cosmic Frog, please see the help center articles on this page.
In the above discussion of cost to serve outputs, the example model used did not contain any detailed production inputs, such as Bills Of Materials (BOMs), Work Centers, or Processes. We will now look at an example where these are used in the model and specifically focus on how to interpret the Optimization Cost To Serve Path Segment Details output table when bills of materials are included.
Consider the following records in the Bills of Materials input table for making the finished good FG_BLU (blue cheese):

In the model, these BOMs are associated through Production Policies for the BULK_BLU and FG_BLU products.
Next, we will have a look at the Optimization Cost To Serve Path Segment Details output table to understand how costs are allocated to raw material vs finished good segments:

As we have seen before, there are many more cost columns in this table, so for each segment these can be reviewed by scrolling right. Finally, let’s look at these 3 paths in the Optimization Cost To Serve Path Summary output table:

Note that the Path Product Name is FG_BLU for these 3 paths and there are no details to indicate which raw materials each of the paths pertains to. If required, this can be looked up using the Path IDs from this table in the Optimization Cost To Serve Path Segment Details output table.
Being able to assess the risk associated with your supply chain has become increasingly important in a quickly changing world with high levels of volatility. Not only does Cosmic Frog calculate an overall supply chain risk score for each scenario that is run, but it also gives you details about the risk at the location and flow level, so you can easily identify the highest and lowest risk components of your supply chain and use that knowledge to quickly set up new scenarios to reduce the risk in your network.
By default, any Neo optimization, Triad greenfield, or Throg simulation model run will have the default risk settings, called OptiRisk, applied using the DART risk engine. See also the Getting Started with the Optilogic Risk Engine documentation. Here we will cover how a Cosmic Frog user can set up their own risk profile(s) to rate the risk of the locations and flows in the network, as well as that of the overall network. Inputs and outputs are covered, and in the last section notes, tips, and additional resources are listed.
The following diagram shows the Cosmic Frog risk categories, their components, and subcomponents.

A description of these risk components and subcomponents follows here:
Custom risk profiles are set up and configured using the following 9 tables in the Risk Inputs section of Cosmic Frog’s input tables. These 9 input tables can be divided into 5 categories:

The following is a summary of these table categories; more details on individual tables will be discussed below:
We will cover some of the individual Risk Input tables in more detail now, starting with the Risk Rating Configurations table:

In the Risk Summary Configurations table, we can set the weights for the 4 different risk components that will be used to calculate the overall Risk Score of the supply chain. The 4 components are: customers, facilities, suppliers, and network. In the screenshot below, customer and supplier risk are contributing 20% each to the overall risk score while facility and network risk are contributing 30% each to the overall risk score.

These 4 weights should add up to 1 (=100%). If they do not add up to 1, Cosmic Frog will still run and automatically scale the Risk Score up or down as needed. For example, if the weights add up to 0.9, the final Risk Score that is calculated based on these 4 risk categories and their weights will be divided by 0.9 to scale it up to 100%. In other words, the weight of each risk category is multiplied by 1/0.9 = 1.11 so that the weights then add up to 100% instead of 90%. If you do not want to use a certain risk category in the Risk Score calculation, you can set its weight to 0. Note that you cannot leave a weight field blank. These rules around automatically scaling weights up or down to add up to 1 and setting a weight to 0 if you do not want to use that specific risk component or subcomponent also apply to the other “… Risk Configurations” tables.
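As a quick illustration of this scaling rule (the weights and risk scores below are illustrative):

```python
# Sketch of the automatic weight scaling: weights that sum to 0.9 are scaled by 1/0.9.
weights = {"customers": 0.2, "facilities": 0.3, "suppliers": 0.1, "network": 0.3}  # sums to 0.9
scores  = {"customers": 5.0, "facilities": 6.0, "suppliers": 4.0, "network": 7.0}  # illustrative

raw = sum(scores[k] * weights[k] for k in weights)
risk_score = raw / sum(weights.values())  # divide by 0.9, i.e. scale weights up to 100%
print(round(risk_score, 2))               # 5.89
```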
Following are 2 screenshots of the Facility Risk Configurations table on which the weights and risk bands to calculate the Risk Score for individual facility locations are specified. A subset of these same risk components is also used for customers (Customer Risk Configurations table) and suppliers (Supplier Risk Configurations table). We will not discuss those 2 tables in detail in this documentation since they work in the same way as described here for facilities.



The first band is from 0.0 (first Band Lower Value) to 0.2 (the next Band Lower Value), meaning between 0% and 20% of total network throughput at an individual facility. The risk score for 0% of total network throughput is 1.0 and goes up to 2.0 when total network throughput at the facility goes up to 20%. For facilities with a concentration (= % of total network throughput) between 0 and 20%, the Risk Score will be linearly interpolated from the lower risk score of 1.0 to the higher risk score of 2.0. For example, a facility that has 5% of total network throughput will have a concentration risk score of 1.25. The next band is for 20%-30% of total network throughput, with an associated risk between 2.0 and 3.5, etc. Finally, if all network throughput is at only 1 location (Band Lower Value = 1.0), the risk score for that facility is 10.0. The risk scores for any band run from 1, lowest risk, to 10, highest risk.
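The interpolation within a band can be sketched as follows (band values taken from the example above; the band between 0.3 and 1.0 is simplified for illustration):

```python
# Sketch: piecewise-linear interpolation of a concentration risk band.
# Each tuple is (Band Lower Value, risk score at that value).
bands = [(0.0, 1.0), (0.2, 2.0), (0.3, 3.5), (1.0, 10.0)]  # simplified band set

def band_risk(concentration):
    for (lo, r_lo), (hi, r_hi) in zip(bands, bands[1:]):
        if lo <= concentration <= hi:
            # Linear interpolation between the band's lower and upper risk scores.
            return r_lo + (concentration - lo) / (hi - lo) * (r_hi - r_lo)
    return bands[-1][1]

print(band_risk(0.05))  # 1.25: a facility with 5% of total network throughput
```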
The following screenshot shows the Geographic Risk Configurations table with 2 of its risk subcomponents, biolab distance and economic:

As an example, in the red outline the Biolab Distance Risk is specified by setting its weight to 0.05 (5%) and specifying which band definition on the Risk Band Definitions table should be used; here it is “BioLab and Nuclear Distance Band Template”. The Definition of this band template, when looked up in the Risk Band Definitions table, is as follows:

This band definition says that if a location is within 0-10 miles of a Biolab of Safety Level 4, the Risk Score is 10.0. A distance of 10-20 miles has an associated Risk Score between 10 and 7.75, etc. If a location is 130 miles or farther from a Biolab of Safety Level 4, the Risk Score is 1.0.
The other 5 subcomponents of Geographic Risk are defined in a similar manner on this table: with a Risk Weight field and a Risk Band field that specifies which Band Definition on the Risk Band Definitions table is to be used for that risk subcomponent. The following table summarizes the names of the Band Definitions used for these geographic risk subcomponents and what the unit of measure is for the Band Values with an example:
Similar to the Geographic Risk component, the Utilization Risk component also has its own table, Utilization Risk Configurations, where its 3 risk subcomponents are configured. Again, each of the subcomponents has a Risk Weight field and a Risk Band field associated with it. The following table summarizes the names of the Band Definitions used for these utilization risk subcomponents and what the unit of measure is for the Band Values with an example:
Lastly on the Risk Inputs side, the Network Risk Configurations table specifies the components of Network Risk in a similar manner: with a Risk Weight and a Risk Band field for each risk component. The following table summarizes the names of the Band Definitions used for these network risk subcomponents and what the unit of measure is for the Band Values with an example:
Risk outputs can be found in some of the standard Output Summary Tables and in the risk specific Output Risk Tables:

The following screenshot shows the Optimization Risk Metrics Summary output table for a scenario called “Include Opt Risk Profile”. It shows both the OptiRisk and Risk Rating Template Optimization risk score outputs:

On the Optimization Customer Risk Metrics, Optimization Facility Risk Metrics, and Optimization Supplier Risk Metrics tables, the overall risk score for each customer, facility, and supplier can be found, respectively. They also show the risk scores of each risk component; e.g. for customers these components are Concentration Risk, Source Count Risk, and Geographic Risk. The subcomponents of Geographic Risk are further detailed in the Optimization Geographic Risk Metrics output table, where for each customer, facility, and supplier the overall geographic risk score and the risk scores of each of the geographic risk subcomponents are listed. Similarly, on the Facility Risk Metrics and Supplier Risk Metrics tables, the Utilization Risk score is listed for each location, whilst the Optimization Utilization Risk Metrics table details the risk scores of the subcomponents of this risk (throughput utilization, storage utilization, and work center utilization).
Let’s walk through an example of how the risk score for the facility Plant_France_Paris_9904000 was calculated using a few screenshots of input and output tables. This first screenshot shows the Optimization Geographic Risk Metrics output table for this facility:

The geographic risk score of this facility is calculated as 4.0 and the values for all the geographic risk subcomponents are listed here too, for example 8.2 for biolab distance risk and 3.6 for political risk. The overall geographic risk of 4.0 was calculated using the risk score of each geographic risk subcomponent and the weights that are set on the Geographic Risk Configurations input table:

Geographic Risk Score = (biolab distance risk * biolab distance risk weight + economic risk * economic risk weight + natural disaster risk * natural disaster risk weight + nuclear distance risk * nuclear distance risk weight + political risk * political risk weight + epidemic risk * epidemic risk weight) / 0.8 = (8.2 * 0.05 + 5.3 * 0.2 + 2.8 * 0.3 + 2.3 * 0.05 + 3.6 * 0.1 + 4.5 * 0.1) / 0.8 = 4.0. (We need to divide by 0.8 since the weights do not add up to 1, but only to 0.8).
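The same calculation as a small Python sketch (scores and weights copied from the tables above), which reproduces the 4.0:

```python
# Weighted average of geographic risk subcomponents, rescaled because the
# weights only add up to 0.8 (values from the screenshots above).
scores  = {"biolab": 8.2, "economic": 5.3, "natural_disaster": 2.8,
           "nuclear": 2.3, "political": 3.6, "epidemic": 4.5}
weights = {"biolab": 0.05, "economic": 0.2, "natural_disaster": 0.3,
           "nuclear": 0.05, "political": 0.1, "epidemic": 0.1}

geo_risk = sum(scores[k] * weights[k] for k in scores) / sum(weights.values())
print(round(geo_risk, 1))  # 4.0
```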
Next, in the Optimization Facility Risk Metrics output table we can see that the overall facility risk score is calculated as 6.8, which is the result of combining the concentration risk score, source count risk score, and geographic risk score using the weights set on the Facility Risk Configurations input table.

This screenshot shows the Facility Risk Configurations input table and the weights for the different risk components:

Facility Risk Score = (geographic risk * geographic risk weight + concentration risk * concentration risk weight + source count risk * source count risk weight) / 0.6 = (4.0 * 0.3 + 9.3 * 0.2 + 10.0 * 0.1) / 0.6 = 6.8. (We need to divide by 0.6 since the weights do not add up to 1, but only to 0.6.)
A few things about Risk in Cosmic Frog that are good to keep in mind:
This documentation covers which geo providers one can use with Cosmic Frog, and how they can be used for geocoding and distance and time calculations.
Currently, there are 5 geo providers that can be used for geocoding locations in Cosmic Frog: MapBox, Bing, Google, PTV, and PC*Miler. MapBox is the default provider and comes free of cost with Cosmic Frog. To use any of the other 4 providers, you will need to obtain a license key from the company and add this to Cosmic Frog through your Account. The steps to do so are described in this help article “Using Alternate Geocoding Providers”.
Different geocoding providers may specialize in different geographies; refer to your provider for guidelines.
Geocoding a location (e.g. a customer, facility or supplier) means finding the latitude and longitude for it. Once a location is geocoded it can be shown on a map in the correct location which helps with visualizing the network itself and building a visual story using model inputs and outputs that are shown on maps.

To geocode a location:

For costs and capacities to be calculated correctly, it may be necessary to add transport distances and transport times to Cosmic Frog models. There are defaults that will be used if nothing is entered into the model, or users can populate these fields, either themselves or by using a Distance Lookup Utility. Here we will explain the tables where distances and times can be entered, what happens if nothing has been entered, and how to use the Distance Lookup Utility.
There are multiple Cosmic Frog input tables that have input fields related to Transport Distance and Transport Time, including Speed, which can be used to calculate transport time from a transport distance (time = distance / speed). These all have their own accompanying UOM (unit of measure) field. Here is an overview of the tables which contain Distance, Time and/or Speed fields:
For Optimization (Neo), this is the order of precedence that is applied when multiple tables and fields are used:

For Transportation (Hopper) models, this is the order of precedence when multiple tables and fields are being used:
To populate these input tables and their pertinent fields, users have the following options:
Cosmic Frog users can find multiple handy utilities in the Utilities section of Cosmic Frog; here we will cover the Distance Lookup utility. This utility looks up transportation distances and times for origin-destination pairs and populates the Transit Matrix table. Bing, PC*Miler, and Azure can be used as Geo Providers if the user has a license key for these. In addition, there is a new free PC*Miler-UltraFast option which can look up accurate road distances within Europe and North America without needing a license key; this is also a very fast way to look up distances. Lastly, the Great Circle Geo Provider option calculates the straight-line distance for origin-destination pairs based on latitudes & longitudes. We will look at the configuration options of the utility using the next 2 screenshots:


Note that when using the Great Circle geo provider for Distance calculations, only the Transport Distance field in the Transit Matrix table will be populated. The Transport Time will be calculated at run time using the Average Speed on the Model Settings table.
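As an illustration of that run-time calculation, here is a minimal sketch of a great-circle (haversine) distance and the resulting transport time (the coordinates and average speed are illustrative values, not taken from a real model):

```python
from math import radians, sin, cos, asin, sqrt

def great_circle_miles(lat1, lon1, lat2, lon2):
    # Haversine formula on a sphere with Earth radius ~3959 miles.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 3959 * asin(sqrt(a))

distance = great_circle_miles(35.78, -78.64, 41.76, -72.69)  # e.g. Raleigh to Hartford
average_speed = 50.0                        # illustrative Model Settings value, mph
transport_time = distance / average_speed   # time = distance / speed
```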
To finish up, we will walk through an example of using the Distance Lookup utility on a simple model with 3 customers (CZs) and 2 distribution centers (DCs), which are shown in the following 2 screenshots of the Customers and Facilities tables:


We can use Groups and/or Named Table Filters in the Transportation Policies table if we want to make 1 policy that represents all possible lanes from the DCs to the customers:

Next, we run the Distance Lookup utility with the following settings:
This results in the following 6 records being added to the Transit Matrix table - 1 for each possible DC-CZ origin-destination pair:

Cosmic Frog for Excel Applications provide alternative interfaces for specific use cases as companion applications to the full Cosmic Frog supply chain design product. For example, they can be used to access a subset of the Cosmic Frog functionality in a simplified manner, or to give specific users who are not experienced in working with Cosmic Frog models access to a subset of inputs and/or outputs of a full-blown Cosmic Frog model that are relevant to their position.
Several example use cases are:
It is recommended to review the Cosmic Frog for Excel App Builder before diving into this documentation, as basic applications can quickly and easily be built with it rather than having to edit/write code, which is what will be explained in this help article. The Cosmic Frog for Excel App Builder can be found in the Resource Library and is also explained in the “Getting Started with the Cosmic Frog for Excel App Builder” help article.
Here we will discuss how one can set up and use a Cosmic Frog for Excel Application, which will include steps that use VBA (Visual Basic for Applications) in Excel and scripting using the programming language Python. This may sound daunting at first if you have little or no experience using these. However, by following along with this resource and the ones referenced in this document, most users will be able to set up their own App in about a day or two by copy-pasting from these resources and updating the parts that are specific to their use case. Generative AI engines like ChatGPT and Perplexity can be very helpful as well to get a start on VBA and Python code. Cosmic Frog functionality will not be explained much in this documentation; the assumption is that users are familiar with the basics of building, running, and analyzing outputs of Cosmic Frog models.
In this documentation we are mainly following along with the Greenfield App that is part of the Resource Library resource “Building a Cosmic Frog for Excel Application”. Once we have gone through this Greenfield app in detail, we will discuss how other common functionality that the Greenfield App does not use can be added to your own Apps.
There are several Cosmic Frog for Excel Applications that have been developed by Optilogic available in the Resource Library. Links to these and a short description of each of them can be found in the penultimate section “Apps Available in the Resource Library” of this documentation.
Throughout the documentation links to other resources are included; in the last section “List of All Resources” a complete list of all resources mentioned is provided.
The following screenshot shows at a high-level what happens when a typical Cosmic Frog for Excel App is used. The left side represents what happens in Excel, and on the right side what happens on the Optilogic platform.

A typical Cosmic Frog for Excel Application will contain at least several worksheets that each serve a specific purpose. As mentioned before, we are using the MicroAPP_Greenfield_v3.xlsm App from the Building a Cosmic Frog for Excel Application resource as an example. The screenshots in this section are of this .xlsm file. Depending on the purpose of the App, users will name and organize worksheets differently, and add/remove worksheets as needed too:




To set up and configure Cosmic Frog for Excel Applications, we mostly use .xlsm Excel files, which are macro-enabled Excel workbooks. When opening an .xlsm file that for example has been shared with you by someone else or has been downloaded from the Optilogic Resource Library (Help Article on How To Use the Resource Library), you may find that you see either a message about a Protected View where editing needs to be enabled or a Security Warning that Macros have been disabled. Please see the Troubleshooting section towards the end of this documentation on how to resolve these warnings.
To set up Macros using Visual Basic for Applications (VBA), go to the Developer tab of the Excel ribbon:

If the Developer option is not available in the ribbon, then go to File > Options > Customize Ribbon, select Developer from the list on the left and click on the Add >> button, then click on OK. Should you not see Options when clicking on File, then click on “More…” instead, which will then show you Options too.
Now that you are set up to start building Macros using VBA: go to the Developer tab, enable Design Mode and add controls to your sheets by clicking on Insert, and selecting any controls to insert from the drop-down menu. For example, add a button and assign a Macro to it by right clicking on the button and selecting Assign Macro from the right-click menu:



To learn more about Visual Basic for Applications, see the Microsoft help article Getting started with VBA in Office; it also has an entire section on VBA in Excel.
It is possible to add custom modules to VBA in which Sub procedures (“Subs”) and functions to perform specific tasks have been pre-defined and can be called in the rest of the VBA code used in the workbook where the module has been imported into. Optilogic has created such a module, called Optilogic.bas. This module provides 8 standard functions for integration into the Optilogic platform.
You can download Optilogic.bas from the Building a Cosmic Frog for Excel Application resource in the Resource Library:

You can then import it into the workbook you want to use it in:

Right click on Modules in the VBA Project of the workbook you are working in and then select Import File…. Browse to where you have saved Optilogic.bas and select it. Once done, it will appear in the Modules section, and you can double click on it to open it up:


These Optilogic specific Sub procedures and the standard VBA for Excel functionality enable users to create the Macros they require for their Cosmic Frog for Excel Applications.
App Keys are used to authenticate the user from the Excel App on the Optilogic platform. To get an App Key that you can enter into your Excel Apps, see this Help Center Article on Generating App and API Keys. During the first run of an App, the App Key will be copied from the cell it is entered into to an app.key file in the same folder as the Excel .xlsm file, and it will be removed from the worksheet. This is done by using the Manage_App_Key Sub procedure described in the “Optilogic.bas VBA Module” section above. User can then keep running the App without having to enter the App Key again unless the workbook or app.key file is moved elsewhere.
It is important to emphasize that App Keys should not be saved into Excel Apps as they can easily be accidentally shared when the Excel App itself is shared. Individual users need to authenticate with their own App Key.
When sharing an App with someone else, one easy way to do so is to share all contents of the folder where the Excel App is saved (optionally, zipped up). However, one needs to make sure to remove the app.key file from this folder before doing so.
A Python Job file in the context of Cosmic Frog for Excel Applications is the file that contains the instructions (in Python script format) for the operations of the App that take place on the Optilogic Platform.
Notes on Job files:
For Cosmic Frog for Excel Apps, a .job file is typically created and saved in the same folder as the Macro-enabled Excel workbook. As part of the Run Macro in that Excel workbook, the .job file will be uploaded to the Optilogic platform too (together with any input & settings data). Once uploaded, the Python code in the .job file will be executed, which may do things like loading the data from any uploaded CSV files into a Cosmic Frog model, running that Cosmic Frog model (a Greenfield run in our example), and retrieving certain outputs of interest from the Cosmic Frog model once the run is done.
For a Python job that uses functionality from the cosmicfrog library to run, a requirements.txt file that just contains the text “cosmicfrog” (without the quotes) needs to be placed in the same folder as the .job file. Therefore, this file is typically created by the Excel Macro and uploaded together with any exported data & settings worksheets, the app.key file, and the .job file itself so they all land in the same working folder on the Optilogic platform. Note that the Optilogic platform will soon be updated so that using a requirements.txt file will not be needed anymore and the cosmicfrog library will be available by default.
As with VBA, users and creators of Cosmic Frog for Excel Apps do not need to be experts in Python code, and will mostly be able to do what they want by copy-pasting from existing Apps and updating only the parts that are different for their App. In the greenfield.job section further below, we will go through the code of the Python Job for the Greenfield App in more detail, which can be a starting point for users to make changes to for their own Apps. Next, we will provide some more details and references to quickly equip you with some basic knowledge, including what you can do with the cosmicfrog Python library.
There are a lot of helpful resources and communities online where users can learn everything there is to know about using & writing Python code. A great place to start is on the Python for Beginners page on python.org. This page also mentions how more experienced coders can get started with Python.
Working locally on any Python scripts/Jobs has the advantage that you can make use of code completion features, which help with things like auto-completion, showing what arguments functions need, catching incorrect syntax/names, etc. An example setup to achieve this is one where Python, Visual Studio Code, and an IntelliSense extension package for Python for Visual Studio Code are installed locally:
Once you are set up locally and are starting to work with Python files in Visual Studio Code, you will need to install the pandas and cosmicfrog libraries to have access to their functionality. You do this by typing the following in a terminal in Visual Studio Code:
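For example (the exact commands may vary with your Python setup, e.g. pip3 or a virtual environment):

```
pip install pandas
pip install cosmicfrog
```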
More experienced users may start using additional Python libraries in their scripts and will need to similarly install them when working locally to have access to their functionality.
If you want to access items on the Optilogic platform (like Cosmic Frog models) while working locally, you will likely need to whitelist your IP address on the platform, so the connections are not blocked by a firewall. You can do this yourself on the Optilogic platform:

A great resource on how to write Python scripts for Cosmic Frog models is this “Scripting with Cosmic Frog” video. In this video, the cosmicfrog Python library, which adds specific functionality to the existing Python features to work with Cosmic Frog models, is covered in some detail already. The next set of screenshots will show an example using a Python script named testing123.py on our local set-up. The first screenshot shows a list of functions available from the cosmicfrog Python library:

When you continue typing after you have typed “model.”, the code completion feature will auto-generate a list of functions you may be getting at; in the next screenshot these are ones that start with or contain a “g”, as only a “g” has been typed so far. This list will auto-update the more you type. You can select from the list with your cursor or the arrow up/down keys and hit the Tab key to auto-complete:

When you have completed typing the function name and next type a parenthesis ‘(‘ to start entering arguments, a pop-up will come up which contains information about the function and its arguments:

As you type the arguments for the function, the argument that you are on and the expected format (e.g. bool for a Boolean, str for string, etc.) will be in blue font and a description of this specific argument appears above the function description (e.g. above box 1 in the above screenshot). In the screenshot above we are on the first argument input_only which requires a Boolean as input and will be set to False by default if the argument is not specified. In the screenshot below we are on the fourth argument (original_names) which is now in blue font; its default is also False, and the argument description above the function description has changed now to reflect the fourth argument:

The next screenshot shows 2 examples of using the get_tablelist function of the FrogModel module:

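For reference, usage looks approximately like this (a hedged reconstruction; the model name is hypothetical, and the input_only argument is the one described above):

```python
from cosmicfrog import FrogModel

model = FrogModel("US Distribution")      # hypothetical model name

all_tables = model.get_tablelist()        # default arguments: all tables
input_tables = model.get_tablelist(True)  # input_only=True: input tables only
print(input_tables)
```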
As mentioned above, you can also use Atlas on the Optilogic platform to create and run Python scripts. One drawback here is that it currently does not have code completion features like IntelliSense in Visual Studio Code.
The following simple test.py Python script on Atlas will print the first Hopper output table name and its column names:


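A hedged reconstruction of what such a script might look like (it assumes Hopper output table names start with “transportation” and that read_table returns a pandas dataframe; the model name is hypothetical):

```python
# test.py - sketch: print the first Hopper output table name and its columns.
from cosmicfrog import FrogModel

model = FrogModel("My Transportation Model")

hopper_outputs = [t for t in model.get_tablelist(False)   # input_only=False
                  if t.lower().startswith("transportation")]
first_table = hopper_outputs[0]
print(first_table)
print(list(model.read_table(first_table).columns))
```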
After running the Greenfield App, we can see the following files together in the same folder on our local machine:

On the Optilogic platform, a working folder is created by the Run Greenfield Macro. This folder is called “z Working Folder for Excel Greenfield App”. After running the Greenfield App, we can see the following files in here:

Parts of the Excel Macro and Python .job file will differ from App to App based on the App’s purpose, but a lot of the content will be the same or similar. In this section we will step through the Macro behind the Run Greenfield button in the Cosmic Frog for Excel Greenfield App that is included in the “Building a Cosmic Frog for Excel Application” resource. We will explain at a high level what is happening each step of the way, and mention whether each part is likely to be different and in need of editing for other Apps or would typically stay the same across most Apps. After stepping through the Excel Macro in this section, we will do the same for the Greenfield.job file in the next section.
The next screenshot shows the first part of the VBA code of the Run Greenfield Macro:

Note that throughout the Macro you will see text in green font. These are comments to describe what the code is doing and are not code that is executed when running the Macro. You can add comments by simply starting the line with a single quote and then typing your comment. Comments can be very helpful for less experienced users to understand what the VBA code is doing.
Next, the file path to the workbook is retrieved:

This piece of code uses the Get_Workbook_File_Path function of the Optilogic.bas VBA module to get the file path of the current workbook. This function first tries to get the path without user input. If it finds that the path looks like the Excel workbook is stored online, for example in a Cloud folder, it will use user input in cell B3 on the Admin worksheet to get the file path instead. Note that specifying the file path is not necessary if the App runs fine without it, which means it could get the path without the user input. Only if the user gets the message “Local file path to this Excel workbook is invalid. It is possible the Excel workbook is in a cloud drive, or you have provided an invalid local path. Please review setup step 4 on Admin sheet.” should the local file path be entered into cell B3 on the Admin worksheet.
This code can be left as is for other Apps if there is an Admin worksheet (the pathsheetName variable, indicated with 1 in the screenshot above) where the file path can be specified in cell B3 (the pathCell variable, indicated with 2 in the screenshot above). Of course, the worksheet name and cell can be updated if these are located elsewhere in the App. The message the user gets in this case (set as pathusrMsg, indicated with 3 in the screenshot above) may need to be edited accordingly too.
The following code takes care of the App Key management:

The Manage_App_Key function from the Optilogic.bas VBA module is used here to retrieve the App Key from cell B2 on the Admin worksheet and put it into a file named app.key which is saved in the same location as the workbook when the App is run for the first time. The key is then removed from cell B2 and replaced with the text “app key has been saved; you can keep running the App”. As long as the app.key file and the workbook are kept together in the same location, the App will keep working.
Like the previous code on getting the local file path of the workbook, this code can be left as is for other Apps. Only if the location of where the App Key needs to be entered before the first run is different from cell B2 on the worksheet named Admin, the keysheetName and keyCell variables (indicated with 1 and 2 in the screenshot above) need to be updated accordingly.
This App has a greenfield.job file associated with it that contains the Python script which will be run on the Optilogic platform when the App is run. The next piece of code checks that this greenfield.job file is saved in the same location as the Excel App, and it also sets the name of the folder to be created on the Optilogic platform where files will get uploaded to:

This code can be left as is for other Cosmic Frog for Excel Apps, except the following will likely need updating:
The Greenfield settings are set in the next step. The ones the user can set on the Settings worksheet are taken from there and others are set to a default value:

Next, the Greenfield Settings and the other input data are written into .csv files:


The firstSpaceIndex variable is set to the location of the first space in the resource size string.
Looking in the Greenfield App on the Customers worksheet, we see that this means that the Customer Name (column A), Latitude (column B), Longitude (column C), and Quantity (column D) columns will be exported. The Customers.csv file will contain the column names on the first row, plus 96 rows with data, as the last populated row is row 97. Here is a screenshot showing the Customers worksheet in the Excel App (rows 6-93 hidden) and the first 11 lines in the Customers.csv file that was exported while running the Greenfield App:

Other Cosmic Frog for Excel Applications will often contain data to be exported and uploaded to the Optilogic platform to refresh model data; the Export_CSV_File function can be used in the same way to export similar and other tabular data.
As mentioned in the “Python Job File and requirements.txt” section earlier, a requirements.txt file placed in the same folder as the .job file that contains the Python script is needed so the Python script can run using functionality from the cosmicfrog Python library. The next code snippet checks if this file already exists in the same location as the Excel App; if not, it creates it there and writes the text cosmicfrog into it.

This code can be used as is by other Excel Apps.
The next step is to upload all the files needed to the Optilogic platform:

Besides updating the local/platform file names and paths as appropriate, the Upload_File_To_Optilogic Sub procedure will be used by most if not all Excel Apps: even if the App is only looking at outputs from model runs and not modifying any input data or settings, the function is still required to upload the .job, app.key, and requirements.txt files.
The next bit of code uses 2 more of the Optilogic.bas VBA module functions to run and monitor the Python job on the Optilogic platform:

This piece of code can stay as is for most Apps, just make sure to update the following if needed:
The last piece of code before some error handling downloads the results (2 .csv files) from the Optilogic platform using the Download_File_From_Optilogic function from the Optilogic.bas VBA module:

This piece of code can be used as is with the appropriate updates for worksheet names, cell references, file names, path names, and text of status updates and user messages. Depending on the number of files to be downloaded, the part of the code setting the names of the output files and doing the actual download (bullet 2 above) can be copy-pasted and updated as needed.
The last piece of VBA code of the Macro shown in the screenshot below has some error handling. Specifically, when the Macro tries to retrieve the local path of the Macro-enabled .xlsm workbook and it finds it looks like it is online, an error will pop up and the user will be requested to put the file path name in cell B3 on the Admin worksheet. If the Macro hits any other errors, a message saying “An unexpected error occurred: <error number> <error description>” will pop up. This piece of code can be left as is for other Cosmic Frog for Excel Applications.

We have used version 3 of the Greenfield App which is part of the Building a Cosmic Frog for Excel Application resource in the above. There is also a stand-alone newer version (v6) of the Cosmic Frog for Excel – Greenfield application available in the Resource Library. In addition to all of the above, this App also:
This functionality is likely helpful for a lot of other Cosmic Frog for Excel Apps and will be discussed in section “Additional Common App Functionality” further below. We especially recommend using the functionality to prevent Excel from locking up in all your Apps.
Now we will go through the greenfield.job file that contains the Python script to be run on the Optilogic platform in detail.

This first piece of code takes care of importing several Python libraries and modules (optilogic, pandas, time; lines 1, 2, and 5). There is another library, cosmicfrog, that is imported through the requirements.txt file discussed before in the section titled “Python Job File and requirements.txt”. Modules from these libraries are imported here as well (FrogModel from cosmicfrog on line 3 and pioneer.API from optilogic on line 4). Now the functionality of these libraries and their modules can be used throughout the code of the script that follows. The optilogic and cosmicfrog libraries are developed by Optilogic and contain specific functionality to work with Cosmic Frog models (e.g. the functions discussed in the section titled “Working with Python Locally” above) and with the Optilogic platform.
For reference:
This first piece of code can be left as is in the script files (.job files locally, .py files on the Optilogic platform) for most Cosmic Frog for Excel Applications. More advanced users may import different libraries and modules to use functionality beyond what the standard Python functionality plus the optilogic, cosmicfrog, pandas, and time libraries & modules together offer.
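For reference, the import block looks approximately like this (reconstructed from the description above; treat it as a sketch rather than the exact file contents):

```python
import optilogic                    # line 1: the optilogic library
import pandas as pd                 # line 2: pandas for dataframes
from cosmicfrog import FrogModel    # line 3: Cosmic Frog model functionality
from optilogic import pioneer       # line 4: the pioneer API of the optilogic library
import time                         # line 5: used for waiting between job status checks
```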
Next, a check_job_status function is defined that will keep checking a job until it is completed. This will be used when running a job to know if the job is done and ready to move on to the next step, which will often be downloading the results of the run. This piece of code can be kept as is for other Cosmic Frog for Excel Applications.

The following screenshot shows the next snippet of code, which defines a function called wait_for_jobs_to_complete. It uses check_job_status to periodically check if the job is done and, once it is, moves on to the next piece of code. Again, this can be kept as is for other Apps.

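The two helpers follow a common polling pattern. A hedged sketch (the job-status call and the set of terminal statuses are assumptions for illustration, not the exact optilogic library API):

```python
import time

def check_job_status(api, job_key):
    # Ask the platform for the current status of the job (hypothetical call/response shape).
    response = api.job_status(job_key)
    return response.get("status", "unknown")

def wait_for_jobs_to_complete(api, job_key, poll_seconds=10):
    # Keep checking until the job reaches a terminal state, then return that state.
    while True:
        status = check_job_status(api, job_key)
        if status in ("done", "error", "cancelled", "stopped"):  # assumed terminal states
            return status
        time.sleep(poll_seconds)  # wait before polling again
```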
Now it is time to create and/or connect to the Cosmic Frog model we want to use in our App:

Note that, like the VBA code in the Excel Macro, we can add comments describing what the code is doing to our Python script too. In Python, comments need to start with the hash sign #, and the font of comments automatically becomes green in the editor being used here (Visual Studio Code using the default Dark Modern color theme).
After clearing the tables, we will now populate them with the data from the Excel workbook. First, the uploaded Customers.csv file that contains the columns Customer Name, Latitude, Longitude, and Quantity is used to update both the Customers and the CustomerDemand tables:

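A hedged sketch of that pattern (column and table names follow the documentation, but the model name is hypothetical and the exact write_table usage is an assumption about the cosmicfrog library):

```python
import pandas as pd
from cosmicfrog import FrogModel

model = FrogModel("Excel Greenfield App")  # hypothetical model name

df = pd.read_csv("Customers.csv")

# Customers table: name and coordinates only.
customers = df.rename(columns={"Customer Name": "customername",
                               "Latitude": "latitude",
                               "Longitude": "longitude"}).drop(columns=["Quantity"])
model.write_table("customers", customers)

# CustomerDemand table: name and quantity, plus a product column.
demand = df.rename(columns={"Customer Name": "customername",
                            "Quantity": "quantity"})[["customername", "quantity"]].copy()
demand["productname"] = "Product"  # hypothetical single-product model
model.write_table("customerdemand", demand)
```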
How much of the above code you can use as is depends heavily on the App you are building, but the concepts of reading csv files, renaming and dropping columns as needed, and writing tables into the Cosmic Frog model will be frequently used. The following piece of code also writes the Facilities and Suppliers data into the Cosmic Frog tables. Again, the concepts used here will be useful for other Apps too; the code may just not be exactly the same depending on the App and the tables that are being written to:

Next up, the Settings.csv file is used to populate the Greenfield Settings table in Cosmic Frog and to set 2 variables for resource size and scenario name:

Now that the Greenfield App Cosmic Frog model is populated with all the data needed, it is time to kick off the model and run a Greenfield analysis:

Besides updating any tags as desired (bullet 2b above), this code can be kept exactly as is for other Excel Apps.
Lastly, once the model is done running, the results are retrieved from the model and written into .csv files, which will then be downloaded by the Excel Macro:

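A hedged sketch of this final step (table names are assumptions based on the Greenfield output tables mentioned in this documentation; read_table is assumed to return a pandas dataframe):

```python
from cosmicfrog import FrogModel

model = FrogModel("Excel Greenfield App")  # hypothetical model name

# Read each result table out of the model and save it as a CSV file
# for the Excel Macro to download.
for table in ("optimizationgreenfieldfacilitysummary",
              "optimizationgreenfieldflowsummary"):
    model.read_table(table).to_csv(f"{table}.csv", index=False)
```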
When the greenfield_job.py file starts running on the Optilogic platform, we can monitor and see the progress of the job in the Run Manager App:

The Greenfield App (version 3) that is part of the Building a Cosmic Frog for Excel Application resource covers a lot of common features users will want to use in their own Apps. In this section we will discuss some additional functionality users may also wish to add to their own Apps. This includes:
A newer version of the Greenfield App (version 6) can be found here in the Resource Library. This App has all the functionality version 3 has, plus: 1) it has an updated look with some worksheets renamed and some items moved around, 2) has the option to cancel a Run after it has been kicked off and has not completed yet, 3) it prevents locking up of Excel while the App is running, 4) reads a few CSV output files back into worksheets in the same workbook, and 5) uses a Python library called folium to create Maps that a user can open from the Excel workbook, which will then open the map in the user’s default browser. Please download this newer Greenfield App if you want to follow along with the screenshots in this section. First, we will cover how a user can prevent locking of Excel during a run and how to add a cancel button which can stop a run that has not yet completed.
The screenshots call out what is different as compared to version 3 of the App discussed above. VBA code that is the same is not covered here. The first screenshot is of the beginning of the RunGreenfield_Click Macro that runs when the user hits the Run Greenfield button in the App:

The next screenshot shows the addition of code to enable the Cancel button once the Job has been uploaded to the Optilogic platform:

If everything completes successfully, a user message pops up, and the same 3 lines of code are added here too to enable the Run Greenfield buttons, disable the Cancel button, and keep other applications accessible:

Finally, a new Sub procedure CancelRun is added that is assigned to the Cancel button and will be executed when the Cancel button is clicked on:

This code gets the Job Key (unique identifier of the Job) from cell C9 on the Start worksheet and then uses a new function added to the Optilogic.bas VBA module that is named Cancel_Job_On_Optilogic. This function takes 2 arguments: the Job Key to identify the run that needs to be cancelled and the App Key to authenticate the user on the Optilogic platform.
Version 6 of the Greenfield App reads results from the Facility Summary, Customer Summary, and Flow Summary back into 3 worksheets in the workbook. A new Sub procedure named ImportCSVDataToExistingSheet (which can be found at the bottom of the RunGreenfield Macro code) is used to do this:

The function is used 3 times: to import 1 csv file into 1 worksheet at a time. The function takes 3 arguments:
We will discuss a few possible options on how to visualize your supply chain and model outputs on maps when using/building Cosmic Frog for Excel Applications.
This table summarizes 3 of the mapping options: their pros, cons, and example use cases:
There is standard functionality in Excel to create 3D Maps. You can find this on the Insert tab, in the Tours group (next to Charts):

Documentation on how to get started with 3D Maps in Excel can be found here. Should your 3D Maps icon be greyed out in your Excel workbook, then this thread on the Microsoft Community forum may help troubleshoot this.
How to create an Excel 3D Map in a nutshell:
With Excel 3D Maps you can visualize locations on the map and for example base their size on characteristics like demand quantity. You can also create heat maps and show how location data changes over time. Flow maps that show lines between source and destination locations cannot be created with Excel 3D Maps. Refer to the Microsoft documentation to get a deeper understanding of what is possible with Excel 3D Maps.
The Cosmic Frog for Excel – Geocoding App in the Resource Library uses Excel 3D Maps to visualize customer locations that the App has geocoded on a map:

Here, the geocoded customers are shown as purple circles which are sized based on their total demand.
A good option to visualize, for example, Hopper (= transportation optimization) routes on a map is the ArcGIS Excel Add-in. If you do not have the add-in, you can get it from within Excel as follows:

You may be asked to log into your Microsoft account when adding the Add-in and/or when starting to use it. Should you experience any issues while trying to get the Add-in added to Excel, we recommend closing all Office applications and then opening only the one Excel workbook through which you add the Add-in.
To start using the add-in and create ArcGIS maps in Excel:

Excel will automatically select all data in the worksheet that you are on. You can ensure the mapping of the data is correct or otherwise edit it:

After adding a layer, you can further configure it through the other icons at the top of the Layers window:

The other configuration options for the Map are found on the left-hand side of the Map configuration pane:

As an example, consider the following map showing the stops on routes created by the Hopper engine (Cosmic Frog’s transportation optimization technology). The data in this worksheet is from the Transportation Stop Summary Hopper output table:

As a next step we can add another layer to the map based on the Transportation Segment Summary Hopper output table to connect the source-destination pairs with each other using flow lines. For this we need to use the Esri JSON Geometry Location types mentioned earlier. An example Excel file containing the format needed for drawing polylines can be found in the last answer of this thread on the Esri community website: https://community.esri.com/t5/arcgis-for-office-questions/json-formatting-in-arcgis-for-excel/td-p/1130208, on the PolylinesExample1 worksheet. From this Excel file we can see the format needed to draw a line connecting 2 locations:
{"paths": [[<point1_longitude>,<point1_latitude>],[<point2_longitude>,<point2_latitude>]], "spatialReference": {"wkid": 4326}}
Where wkid indicates the well-known ID of the spatial reference to be used on the map (see above for a brief explanation and a link to a more elaborate explanation of spatial references). Here it is set to 4326, which is WGS 1984.
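A small sketch that builds this geometry string for one origin-destination pair, following the format shown above (the function and argument names are hypothetical, for illustration only):

```python
import json

def polyline_geometry(o_lon, o_lat, d_lon, d_lat):
    # Mirrors the format from the PolylinesExample1 worksheet referenced above.
    return json.dumps({
        "paths": [[o_lon, o_lat], [d_lon, d_lat]],
        "spatialReference": {"wkid": 4326},  # WGS 1984
    })

print(polyline_geometry(-78.64, 35.78, -72.69, 41.76))
```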
The next 2 screenshots show the data from a Segments Summary and an added layer to the map to show the lines from the stops on the route:


Note that for Hopper outputs with multiple routes, we now need to filter both the worksheet with the Stops information and the worksheet with the Segments information for the same route(s) to synchronize them. A better solution is to bring the stopID and Delivered Quantity information from the Stops output into the Segments output, so we only have 1 worksheet with all the information needed and both layers are generated from the same data. Then filtering this set of data will update both layers simultaneously.
Here, we will discuss a Python library called folium, which gives users the ability to create maps that can show flows and tooltips, with options to customize/auto-size location shapes and flow lines. We will use the example of the Cosmic Frog for Excel – Greenfield App (version 6) again, where maps are created as .html files as part of the greenfield_job.py Python script that runs on the Optilogic platform. They are then downloaded as part of the results, and from within Excel users can click on buttons to show flows or customers, which opens the .html files in the user’s default browser. We will focus on the differences with version 3 of the Greenfield App that are related to maps and folium, covering both the changes/additions to the VBA code in the Excel Run Greenfield Macro and the additions to greenfield_job.py. First up in the VBA code, we need to add folium to the requirements.txt file so that the Python script can make use of the library once it is uploaded to the Optilogic platform:

To do so, a line is added to the VBA code to write “folium” into requirements.txt.
As part of downloading all the results from the Optilogic platform after the Greenfield run has completed, we need to add downloading the .html map files that were created:

In this version of the Greenfield app, there is a new Results Summary worksheet that has 3 buttons at the top:

Each of these buttons has a Sub procedure assigned to it, let’s look at the one for the “Show Flows with Customers” button:

The map that is opened will look something like this, where a tooltip comes up when hovering over a flow line. (How to create and configure the map using folium will be discussed next.)

The additions to the greenfield.job file to make use of folium and create the 3 maps will now be covered:

First, at the beginning of the script, we need to add “import folium” (line 6), so that the library’s functionality can be used throughout the script. Next, the 3 Greenfield output tables that are used to create the 3 maps are read in, and a few data type changes are made to get the data ready for mapping:

This is repeated twice, once for the Optimization Greenfield Customer Summary output table and once for the Optimization Greenfield Flow Summary output table.
The next screenshot shows the code where the map to show Facilities is created and the Markers of them are configured based on if the facility is an Existing Facility or a Greenfield Facility:

In the next bit of code, the df_Res_Flows dataframe is used to draw lines on the map between origin and destination locations:

Lastly, the customers from the Optimization Greenfield Customer Summary output table are added to the map that already contains facilities and flow lines, and the map is saved as greenfield_flows_customers_map.html:

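And a sketch of this final step (the df_Res_Customers dataframe and its column names are again assumptions for illustration):

```python
# Add the customers as small circle markers on the same map object
for _, row in df_Res_Customers.iterrows():
    folium.CircleMarker(
        location=[row["Latitude"], row["Longitude"]],
        radius=3,
        color="orange",
        fill=True,
        tooltip=row["CustomerName"],
    ).add_to(m)

# Save the combined facilities + flows + customers map to an .html file
m.save("greenfield_flows_customers_map.html")
```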
Here are some additional pointers that may be useful when building your own Cosmic Frog for Excel applications:


You may run into issues where Macros or scripts are not running as expected. Here we cover some common problems you may come across and their solutions.
When opening an Excel .xlsm file, you may see the following message about the view being protected; you can click on Enable Editing if you trust the source:

Enabling editing is not necessarily sufficient to also be able to run any Macros contained in the .xlsm file, and you may see the following message after clicking on the Enable Editing button:

Closing this message box and then trying to run a Macro will result in the following message:

To resolve this, it is not always sufficient to just close and reopen the workbook and enable macros as the message suggests. Rather, go to the folder where the .xlsm file is saved in File Explorer, right click on it, and select Properties:

At the bottom in the General tab, check the Unblock checkbox and then click on OK.

Now, when you open the .xlsm file again, you will have the option to Enable Macros; do so by clicking on the button. From now on, you will not need to repeat any of these steps when closing and reopening the .xlsm file; Macros will work fine.

It is also possible that instead of the Enable Editing warning and warnings around Macros not running discussed above, you will see a message that Macros have been disabled, as in the following screenshot. In this case, please click on the Enable Content button:

Depending on your anti-virus software and its settings, it is possible that the Macros in your Cosmic Frog for Excel Apps will not run as they are blocked by the anti-virus software. If you get “An unexpected error occurred: 13 Type mismatch”, this may be indicative of the anti-virus software blocking the Macro. Work with your IT department to allow the running of Macros.
If you are running Python scripts locally (say from Visual Studio Code) that are connecting to Cosmic Frog models and/or uploading files to the Optilogic platform, you may be unsuccessful and get warnings with the text “WARNING – create_engine_with_retry: Database not ready, retrying”. In this case, the likely cause is that your IP address needs to be added to the list of firewall exceptions within the Optilogic platform, see the instructions on how to do this in the “Working with Python Locally” section further above.
You will find that if you export cells that contain formulas from Excel to CSV, these are exported as 0’s and not as the calculated values. Possible solutions for this are 1) to export to a format other than CSV, such as .xlsx, or 2) to create an extra column in your data into which the results of the formula cells are copy-pasted as values, and to export this column instead of the one with the formulas (this way the formulas stay intact for the next run of the App). You could use the Record Macro option to get a start on the VBA code for copy-pasting values from one column into another, so that you do not have to do this manually each time you run the App; instead, it becomes part of the Macro that runs when the App runs. An example of VBA code that copy-pastes values can be seen in this screenshot:

When running an App that has been run previously, there are likely output files in the folder where the App is located, for example CSV files that are opened by the user to view the results or are read back into a worksheet in the App. When running the App again, it is important to not have these output files open, otherwise an error will be thrown when the App gets to the stage of downloading the output files since open files cannot be overwritten.
There are currently several Cosmic Frog for Excel Applications available in the Resource Library, with more being added over time. Check back frequently and search for “Cosmic Frog for Excel” in the search bar to find all available Apps. A short description for each App that is available follows here:
As this documentation contains many links to references and resources, we will list them all here in one place:
Hopper is the Transportation Optimization algorithm within Cosmic Frog. It designs optimal multi-stop routes to deliver/pickup a given set of shipments to/from customer locations at the lowest cost. Fleet sizing and balancing weekly demand can be achieved with Hopper too. Example business questions Hopper can answer are:
Hopper’s transportation optimization capabilities can be used in combination with network design to test out what a new network design means in terms of the last-mile delivery configuration. For example, questions that can be looked at are:
With ever increasing transportation costs, getting the last-mile delivery part of your supply chain right can make a big impact on the overall supply chain costs!
It is recommended to watch this short Getting Started with Hopper video before diving into the details of this documentation. The video gives a nice, concise overview of the basic inputs, process, and outputs of a Hopper model.
In this documentation, we will first cover some general Cosmic Frog functionality that is used extensively in Hopper. Next, we go through how to build a Hopper model, discussing both required and optional inputs; we then explain how to run a Hopper model; after that, Hopper outputs in tables, on maps, and in analytics are covered; and finally, references to a few additional Hopper resources are listed. Note that the use of user-defined variables, costs, and constraints for Hopper models is covered in a separate help article.
To avoid making this document too repetitive, we will cover here some general Cosmic Frog functionality that applies to all Cosmic Frog technologies and is used extensively for Hopper too.
To only show tables and fields in them that can be used by the Hopper transportation optimization algorithm, disable all icons except the 4th (“Transportation”) in the Technologies Selector from the toolbar at the top in Cosmic Frog. This will hide any tables and fields that are not used by Hopper and therefore simplifies the user interface:

Many Hopper related fields in the input and output tables will be discussed in this document. Keep in mind however that a lot of this information can also be found in the tooltips that are shown when you hover over the column name in a table, see following screenshot for an example. The column name, technology/technologies that use this field, a description of how this field is used by those algorithm(s), its default value, and whether it is part of the table’s primary key are listed in the tooltip.

There are a lot of fields with names that end in “…UOM” throughout the input tables. How they work is explained here so that individual UOM fields across the tables do not need to be explained further in this documentation, as they all work similarly. These UOM fields are unit of measure fields and often appear immediately to the right of the field they apply to, like for example Distance Cost and Distance Cost UOM in the screenshot above. In these UOM fields you can type the Symbol of a unit of measure of the required Type, from the ones specified in the Units Of Measure table. For example, in the screenshot above, the unit of measure Type for the Distance Cost UOM field is Distance. Looking in the Units Of Measure table, we see there are multiple units of this Type specified, for example Mile (Symbol = MI), Yard (Symbol = YD) and Kilometer (Symbol = KM), so we can use any of these in this UOM field. If we leave a UOM field blank, the Primary UOM for that UOM Type specified in the Model Settings table will be used. For example, for the Distance Cost UOM field in the screenshot above, the tooltip says Default Value = {Primary Distance UOM}; looking this up in the Model Settings table shows that it is set to MI (= mile) in our current model. Let’s illustrate this with the following screenshots of 1) the tooltip for the Distance Cost UOM field (located on the Transportation Assets table), 2) units of measure of Type = Distance in the Units Of Measure table, and 3) checking what the Primary Distance UOM is set to in the Model Settings table, respectively:



Note that only hours (Symbol = HR) is currently allowed as the Primary Time UOM in the Model Settings table. This means that if another Time UOM, like for example minutes (MIN) or days (DAY), is to be used, the individual UOM fields need to be used to set these. Leaving them blank would mean HR is used by default.
With few exceptions, all tables in Cosmic Frog contain both a Status field and a Notes field. These are often used extensively to add elements to a model that are not currently part of the supply chain (commonly referred to as the “Baseline”), but are to be included in scenarios, either because they will definitely become part of the future supply chain or to see whether there are benefits to optionally including them going forward. In these cases, the Status in the input table is set to Exclude and the Notes field often contains a description along the lines of ‘New Market’, ‘New Product’, ‘Box truck for Scenarios 2-4’, ‘Depot for scenario 5’, ‘Include S6’, etc. When creating scenario items for setting up scenarios, the table can then be filtered for Notes = ‘New Market’ while setting Status = ‘Include’ for those filtered records. We will not call out these Status and Notes fields in each individual input table in the remainder of this document, but we definitely encourage users to use them extensively as they make creating scenarios very easy. When exploring any Cosmic Frog models in the Resource Library, you will notice the extensive use of these fields too. The following 2 screenshots illustrate the use of the Status and Notes fields for scenario creation: 1) shows several customers on the Customers table where CZ_Secondary_1 and CZ_Secondary_2 are not currently being served, but we want to explore what it takes to serve them in future; their Status is set to Exclude and the Notes field contains ‘New Market’; 2) a scenario item called ‘Include New Market’ shows that the Status of Customers where Notes = ‘New Market’ is changed to ‘Include’.


The Status and Notes fields are also often used for the opposite where existing elements of the current supply chain are excluded in scenarios in cases where for example locations, products or assets are going to go offline in the future. To learn more about scenario creation, please see this short Scenarios Overview video, this Scenario Creation and Maps and Analytics training session video, this Creating Scenarios in Cosmic Frog help article, and this Writing Scenario Syntax help article.
A subset of Cosmic Frog’s input tables needs to be populated in order to run Transportation Optimization, whereas several other tables can be used optionally based on the type of network that is being modelled, and the questions the model needs to answer. The required tables are indicated with a green check mark in the screenshot below, whereas the optional tables have an orange circle in front of them. The Units Of Measure and Model Settings tables are general Cosmic Frog tables, not only used by Hopper and will always be populated with default settings already; these can be added to and changed as needed.

We will first discuss the tables that are required to be populated to set up a basic Hopper model and then cover what can be achieved by also using the optional tables and fields. Note that the screenshots of all input and output tables mostly contain the fields in the order they appear in the Cosmic Frog user interface; on occasion, however, the order of the fields was rearranged manually. So, if you do not see a specific field in the same location as in a screenshot, please scroll through the table to find it.
The Customers table contains what for purposes of modelling are considered the customers: the locations that we need to deliver a certain amount of certain product(s) to or pick a certain amount of product(s) up from. The customers need to have their latitudes and longitudes specified so that distances and transport times of route segments can be calculated, and routes can be visualized on a map. Alternatively, users can enter location information like address, city, state, postal code, country and use Cosmic Frog’s built in geocoding tool to populate the latitude and longitude fields. If the customer’s business hours are important to take into account in the Hopper run, its operating schedule can be specified here too, along with customer specific variable and fixed pickup & delivery times. Following screenshot shows an example of several populated records in the Customers table:

The pickup & delivery time input fields can be seen when scrolling right in the Customers table (the accompanying UOM fields are omitted in this screenshot):

Finally, scrolling even more right, there are 3 additional Hopper-specific fields in the Customers table:

The Facilities table needs to be populated with the location(s) the transportation routes start from and end at; they are the domicile locations for vehicles (assets). The table is otherwise identical to the Customers table, where location information can again be used by the geocoding tool to populate the latitude and longitude fields if they are not yet specified. And like other tables, the Status and Notes fields are often used to set up scenarios. This screenshot shows the Facilities table populated with 2 depots, 1 current one in Atlanta, GA, and 1 new one in Jacksonville, FL:

Scrolling further right in the Facilities table shows almost all the same fields as those to the right on the Customers table: Operating Schedule, Operating Calendar, and Fixed & Unit Pickup & Delivery Times plus their UOM fields. These all work the same as those on the Customers table, please refer to the descriptions of them in the previous section.
The item(s) that are to be delivered to the customers from the facilities are entered into the Products table. It contains the Product Name, and again Status and Notes fields for ease of scenario creation. Details around the Volume and Weight of the product are entered here too, which are further explained below this screenshot of the Products table, where just one product “PRODUCT” has been specified:

On the Transportation Assets table, the vehicles to be used in the Hopper baseline and any scenario runs are specified. There are a lot of fields around capacities, route and stop details, delivery & pickup times, and driver breaks that can be used on this table, but there is no requirement to use all of them. Use only those that are relevant to your network and the questions you are trying to answer with your model. We will discuss most of them through multiple screenshots. Note that the UOM fields have been omitted in these screenshots. Let’s start with this screenshot showing basic asset details like name, number of units, domicile locations, and rate information:

The following screenshot shows the fields where the operating schedule of the asset, any fixed costs, and capacity of the vehicles can be entered:

Note that if all 3 of these capacities are specified, the most restrictive one will be used. If you for example know that a certain type of vehicle always cubes out, then you could just populate the Volume Capacity and Volume Capacity UOM fields and leave the other capacity fields blank.
If you scroll further right, you will see the following fields that can be used to set limits on route distance and time when using this type of vehicle. Where applicable, you will notice their UOM fields too (omitted in the screenshot):

Limits on the number of stops per route can be set too:

A tour is defined as all the routes a specific unit of a vehicle is used on during the model horizon. Limits around routes, time, and distance for tours can be added if required:

Scrolling still further right you will see the following fields that can be used to add details around how long pickup and delivery take when using this type of vehicle. These all have their own UOM fields too (omitted in the screenshot):

The next 2 screenshots show the fields on the Transportation Assets table where rules around driver duty, shift, and break times can be entered. Note that these fields each have a UOM field that is not shown in the screenshot:


Limits around out of route distance can be set too, plus details regarding the weight of the asset itself and the level of CO2 emissions:


Lastly, a default cost, fixed times for admin, and an operating calendar can be specified for a vehicle in the following fields on the Transportation Assets table:

As a reference, these are the Department of Transportation driver regulations in the US and the EU. They have been somewhat simplified from these sources: US DoT Regulations and EU DoT Regulations:
Consider this route that starts from the DC, then goes to CZ1 & CZ2, and then returns to the DC:

The activities on this route can be thought of as follows, where the start of the Rest is the end of Shift 1 and Shift 2 starts at the end of the Rest:

Notes on Driver Breaks:
Except for asset fixed costs, which are set on the Transportation Assets table, and any Direct Costs which are set on the Shipments table, all costs that can be associated with a multi-stop route can be specified in the Transportation Rates table. The following screenshot shows how a transportation rate is set up with a name, a destination name and the first several cost fields. Note that UOM fields have been omitted in this screenshot, but that each cost field has its own UOM field to specify how the costs should be applied:

Scrolling further right in the Transportation Rates table we see the remaining cost fields:

Finally, a minimum charge and fuel surcharge can be specified as part of a transportation rate too:

The amount of product that needs to be delivered from a source facility/supplier to a destination customer, or picked up from a customer, is specified on the Shipments table. Optionally, details around pickup and delivery times, direct costs, and fixed template routes can be set on this table too. Note that the Shipments table is Transportation Asset agnostic, meaning that the Hopper transportation optimization algorithm will choose the optimal vehicle to use from those domiciled at the source location. This first screenshot of the Shipments table shows the basic shipment details:

Here is an example of a subset of Shipments for a model that will route both pickups and deliveries:

To the right in the Shipments table we find the fields where details around shipment windows can be entered:

Still further right on the Shipments table are the fields where details around pickup and delivery times can be specified:

Finally, furthest right on the Shipments table are fields where Direct Costs, details around Template Routes and decompositions can be configured:

Note that there are multiple ways to switch between forcing Shipments and the order of stops onto a template route, and letting Hopper optimize which shipments are put on a route together and in which order. Two example approaches are:
The tables and their input fields that can optionally be populated for their inputs to be used by Hopper will now be covered. Where applicable, it will also be mentioned how Hopper will behave when these are not populated.
In the Transit Matrix table, the transport distance and time for any source-destination-asset combination that could be considered as a segment of a route by Hopper can be specified. Note that the UOM fields in this table are omitted in following screenshot:

The transport distances for any source-destination pairs that are not specified in this table will be calculated based on the latitudes and longitudes of the source and destination and the Circuity Factor that is set in the Model Settings table. Transport times for these pairs will be calculated based on the transport distance and the vehicle’s Speed as set on the Transportation Assets table or, if Speed is not defined on the Transportation Assets table, the Average Speed in the Model Settings table.
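For intuition, this fallback logic can be sketched as follows. This is a simplified illustration of the calculation described above, not Hopper's exact internal code; the coordinates, circuity factor, and speed are hypothetical values:

```python
import math

def great_circle_miles(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance in miles."""
    r = 3958.8  # Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

circuity_factor = 1.2  # Circuity Factor from the Model Settings table (hypothetical)
speed_mph = 55         # asset Speed, or Average Speed from Model Settings if not set

distance_mi = great_circle_miles(33.75, -84.39, 30.33, -81.66) * circuity_factor
transit_time_hr = distance_mi / speed_mph
```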
Costs that need to be applied on a stop basis can be specified in the Transportation Stop Rates table:

If Template Routes are specified on the Shipments table by using the Template Route Name and Template Route Stop Sequence fields, then the Template Routes table can be used to specify if and how insertions of other Shipments can be made into these template routes:

If a template route is set up by using the Template Route Name and Template Route Stop Sequence fields in the Shipments table and this route is not specified in the Template Routes table, it means that no insertions can be made into this template route.
In addition to routing shipments with a fixed amount of product to be delivered to a customer location, Hopper can also solve problems where routes throughout a week need to be designed to balance out weekly demand while achieving the lowest overall routing costs. The Load Balancing Demand and Load Balancing Schedules tables can be used to set this up. If both the Shipments table and the Load Balancing Demand/Schedules tables are populated, by default the Shipments table will be used and the Load Balancing Demand/Schedules tables will be ignored. To switch to using the Load Balancing Demand/Schedules tables (and ignoring the Shipments table), the Run Load Balancing toggle in the Hopper (Transportation Optimization) Parameters section on the Run screen needs to be switched to on (toggle to the left and grey is off; to the right and blue is on):

The weekly demand, the number of deliveries per week, and, optionally, a balancing schedule can be specified in the Load Balancing Demand table:

To balance demand over a week according to a schedule, these schedules can be specified in the Load Balancing Schedules table:


In the screenshots above, the 3 load balancing schedules that have been set up will spread the demand out as follows:
In the Relationship Constraints table, we can tell Hopper what combinations of entities are not allowed on the same route. For example, in the screenshot below we are saying that customers that make up the Primary Market cannot be served on the same route as customers from the Secondary Market:

A few examples of common Relationship Constraints are shown in the following screenshot where the Notes field explains what the constraint does:

To set the availability of customers, facilities, and assets to certain start and end times by day of the week, the Business Hours table can be used. The Schedule Name specified on this table can then be used in the Operating Schedule fields on the Customers, Facilities and Transportation Assets tables. Note that the Wednesday – Saturday Open Time and Close Time fields are omitted in the following screenshot:

To schedule closure of customers, facilities, and assets on certain days, the Business Calendars table can be used. The Calendar Name specified on this table can then be used in the Operating Calendar fields on the Customers, Facilities and Transportation Assets tables:

Groups are a general Cosmic Frog feature to make modelling quicker and easier. By grouping elements that behave the same into a group, we can reduce the number of records we need to populate in certain tables, since we can use the Group name to populate a field instead of setting up a record for each individual element that would otherwise all contain the same information. Under the hood, when a model that uses Groups is run, these Groups are enumerated into the individual members of the group. We have, for example, already seen that groups of Type = Customers were used in the Relationship Constraints table in the previous section to prevent customers in the Primary Market being served on the same route as customers in the Secondary Market. Looking in the Groups table, we can see which customers are part (‘members’) of each of these groups:

Examples of other Hopper input tables where use of Groups can be convenient are:
Note that in addition to Groups, Named Filters can be used in these instances too. Learn more about Named Filters in this help center article.
The Step Costs table is a general table in Cosmic Frog used by multiple technologies. It is used to specify costs that change based on the throughput level. For Hopper, all cost fields on the Transportation Rates table, the Transportation Stop Rates table, and the Fixed Cost on the Transportation Assets table can be set up to use Step Costs. We will go through an example of how Step Costs are set up, associated with the correct cost field, and how to understand outputs using the following 3 screenshots of the Step Costs table, Transportation Rates table and Transportation Route Summary output table, respectively. The latter will also be discussed in more detail in the next section on Hopper outputs.

In this example, the per unit cost is $1 for units 0 through 20, $0.90 for units 21 through 40, and $0.85 for all units over 40. Had the Step Cost Behavior field been set to All Item, then the per unit cost for all items would be $1 if the throughput is between 0 and 20 units, $0.90 if the throughput is between 21 and 40 units, and $0.85 if the throughput is over 40 units.
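The difference between the two behaviors can be captured in a small sketch (the helper functions are hypothetical, using the tier structure just described):

```python
def incremental_cost(qty, tiers):
    """Each tier's rate applies only to the units that fall within that tier."""
    total, prev_upper = 0.0, 0
    for upper, rate in tiers:
        units_in_tier = min(qty, upper) - prev_upper
        if units_in_tier <= 0:
            break
        total += units_in_tier * rate
        prev_upper = upper
    return total

def all_item_cost(qty, tiers):
    """The rate of the tier the total throughput falls in applies to all units."""
    for upper, rate in tiers:
        if qty <= upper:
            return qty * rate

tiers = [(20, 1.00), (40, 0.90), (float("inf"), 0.85)]
print(incremental_cost(78, tiers))  # 20*1 + 20*0.9 + 38*0.85 = 70.30
print(all_item_cost(78, tiers))     # 78 * 0.85 = 66.30
```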
In this screenshot of the Transportation Rates table, it is shown that the Unit Cost field is set to UnitCost_1 which is the stepped cost with 3 throughput levels that we just discussed in the screenshot above:

Lastly, this is a screenshot of the Transportation Route Summary output table where we see that the Delivered Quantity on Route 1 is 78. With the stepped cost structure as explained above for UnitCost_1, the Unit Cost in the output is calculated as follows: 20 * $1 (for units 1-20) + 20 * $0.9 (for units 21-40) + 38 * $0.85 (for units 41-78) = $20 + $18 + $32.30 = $70.30.

When the input tables have been populated and scenarios are created (several resources explaining how to set up and configure scenarios are listed in the “2.4 Status and Notes fields” section further above), one can start a Hopper run by clicking on the Run button at the top right in Cosmic Frog:

The Run screen will come up:

Once a Hopper run is completed, the Hopper output tables will contain the outputs of the run.
As with other Cosmic Frog algorithms, we can look at Hopper outputs in output tables, on maps and analytics dashboards. We will discuss each of these in the next 3 sections. Often scenarios will be compared to each other in the outputs to determine which changes need to be made to the last-mile delivery part of the supply chain.
In the Output Summary Tables section of the Output Tables are 8 Hopper specific tables, they start with “Transportation…”. Plus, there is also the Hopper specific detailed Transportation Activity Report table in the Output Report Tables section:

Switch from viewing Input Tables to Output Tables by clicking on the round grid at the top right of the tables list. The Transportation Summary table gives a high-level summary of each Hopper scenario that has been run and the next 6 Summary output tables contain the detailed outputs at the route, asset, shipment, stop, segment, and tour level. The Transportation Load Balancing Summary output table is populated when a Load Balancing scenario has been run, and summarizes outputs at the daily level. The Transportation Activity Report is especially useful to understand when Rests and Breaks are required on a route. All these output tables will be covered individually in the following sections.
The Transportation Summary table contains outputs for each scenario run that include Hopper run details, cost details, how much product was delivered and how, total distance and time, and how many routes, stops and shipments there were in total.

The Hopper run details that are listed for each scenario include:
The next 2 screenshots show the Hopper cost outputs, summarized by scenario:


Scrolling further right in the Transportation Summary table shows the details around how much product was delivered in each scenario:

For the Quantity UOM that is shown in the farthest right column in this screenshot (eaches here), the Total Delivered Quantity, Total Direct Quantity and Total Undelivered Quantity are listed in these columns. If the Total Direct Quantity is greater than 0, details around which shipments were delivered directly to the customer can be found in the Transportation Shipment Summary output table where the Shipment Status = Direct Shipping. Similarly, if the total undelivered quantity is greater than 0, then more details on which shipments were not delivered and why are detailed in the Unrouted Reason field of the Transportation Shipment Summary output table where the Shipment Status = Unrouted.
The next set of output columns when scrolling further right repeat these delivered, direct and undelivered amounts by scenario, but in terms of volume and weight.
Still further to the right we find the outputs that summarize the total distance and time by scenario:


Lastly, the fields furthest right on the Transportation Summary output table contain details around the number of routes, assets and shipments, and CO2 emissions:

A few columns contained in this table are not shown in any of the above screenshots, these are:
The Transportation Route Summary table contains details for each route in each scenario that include cost, distance & time, number of stops & shipments, and the amount of product delivered on the route.

The costs that together make up the total Route Cost are listed in the next 11 fields shown in the next 2 screenshots:


The next set of output fields show the distance and time for each route:


Finally, the fields furthest right in the Transportation Route Summary table list the amount of product that was delivered on the routes, and the number of stops and delivered shipments on each route.

The Transportation Asset Summary output table contains the details of each type of asset used in each scenario. These details include costs, amount of product delivered, distance & time, and the number of delivered shipments.

The costs that together make up the Total Cost are listed in the next 12 fields:


The next set of fields in the Transportation Asset Summary summarize the distances and times by asset type for the scenario:


Furthest to the right on the Transportation Asset Summary output table we find the outputs that list the total amount of product that was delivered, the number of delivered shipments, and the total CO2 emissions:

The Transportation Shipment Summary output table lists for each included Shipment of the scenario the details of which asset type it is served by, which stop on which route it is, the amount of product delivered, the allocated cost, and its status.

The next set of fields in the Transportation Shipment Summary table list the total amount of product that was delivered to this stop.

The next screenshot of the Transportation Shipment Summary shows the outputs that detail the status of the shipment, costs, and a reason in case the shipment was unrouted.

Lastly, the outputs furthest to the right on the Transportation Shipment Summary output table list the pickup and delivery time and dates, the allocation of CO2 emissions and associated costs, and the Decomposition Name if used:

The Transportation Stop Summary output table lists for each route all the individual stops and their details around amount of product delivered, allocated cost, service time, and stop location information.
This first screenshot shows the basic details of the stops in terms of route name, stop ID, location, stop type, and how much product was delivered:

Somewhat further right on the Transportation Stop Summary table we find the outputs that detail the route cost allocation and the different types of time spent at the stop:

Lastly, farthest right on the Transportation Stop Summary table, arrival, service, and departure dates are listed, along with the stop’s latitude and longitude:

The Transportation Segment Summary output table contains distance, time, and source and destination location details for each segment (or “leg”) of each route.
The basic details of each segment are shown in the following screenshot of the Transportation Segment Summary table:

Further right on the Transportation Segment Summary output table, the time details of each segment are shown:

Next on the Transportation Segment Summary table are the latitudes and longitudes of the segment’s origin and destination locations:

And farthest right on the Transportation Segment Summary output table details around the start and end date and time of the segment are listed, plus CO2 emissions and the associated CO2 cost:

For each Tour (= asset schedule) the Transportation Tour Summary output table summarizes the costs, distances, times, and CO2 details.
The next 3 screenshots show the basic tour details and all costs associated with a tour:



The next screenshot shows the distance outputs available for each tour on the Transportation Tour Summary output table:

Scrolling further right on the Transportation Tour Summary table, the outputs available for tour times are listed:


If a load balancing scenario has been run (see the Load Balancing Demand input table further above for more details on how to run this), then the Transportation Load Balancing Summary output table will be populated too. Details on amount of product delivered, plus the number of routes, assets and delivered shipments by day of the week can be found in this output table; see the following 2 screenshots:


For each route, the Transportation Activity Report details all the activities that happen, in chronological order, with details around distance and time. It also breaks down how far along the duty and drive times are at each point in the route, which is very helpful for understanding when rests and short breaks happen.
This first screenshot of the Transportation Activity Report shows the basic details of the activities:

Next, the distance, time, and delivered amount of product are detailed on the Transportation Activity Report:

Finally, the last several fields on the Transportation Activity Report detail cost and the duty and drive times accumulated thus far:

As with other algorithms within Cosmic Frog, Maps are very helpful in visualizing baseline and scenario outputs. Here, we will do a step by step walk through of setting up a Hopper specific Map and not cover all the ins and outs of maps. If desired, you can review these resources on Maps in general first:
We will first cover the basics of what we need to know to set up a Hopper specific map:


Click on the Map drop-down to view all options in the list:
After adding a new Map or when selecting an existing Map in the Maps list, the following view will be shown on the right-hand side of the map:

After adding a new Layer to a Map or when selecting an existing Layer in a Map, the following view will be shown on the right-hand side of the map:

By default, the Condition Builder view is shown:
There is also a Conditions text field, which is not shown in the screenshot as it is covered by the Table Name drop-down. A filter (“condition”) can be typed into the Conditions text field to only show the records of the table that match the filter. For example, typing “CustomerName like ‘%Secondary%’” in the Conditions field will only show customers where the Customer Name contains the text ‘Secondary’ anywhere in the name. You can learn more about building conditions in this Writing Syntax for Conditions help article.
Switching from Condition Builder to Layer Style shows the following:

Here, the following is shown and configurable:
Switching from Layer Style to Layer Labels shows the following:

Using what we have discussed above, we can create the following map quite easily and quickly (the model used here is one from the Resource Library, named Transportation Optimization):

The steps taken to create this map are:
Let’s also cover 2 maps of a model where both pickups and deliveries are being made, from “backhaul” customers and to “linehaul” customers, respectively. When the LIFO (Is Last In First Out) field on the Transportation Assets table is set to True, this leads to routes that contain both pickup and delivery stops, but all the pickups are made at the end (i.e. modeling backhaul):

Two example routes are being shown in the screenshot above and we can see that all deliveries are first made to the linehaul customers which have blue icons. Then, pickups are made at the backhaul customers which have orange icons. If we want to design interleaved routes where pickups and deliveries can be mixed, we need to set the LIFO field to False. The following screenshot shows 2 of these interleaved routes:

In the Analytics module of Cosmic Frog, dashboards that show graphs of scenario outputs, sliced and diced to the user’s preferences, can quickly be configured. Like Maps, this functionality is not Hopper specific and other Cosmic Frog technologies use these extensively too. We will cover setting up a Hopper specific visualization, but not all the details of configuring dashboards. Please review these resources on Analytics in Cosmic Frog first if you are not yet familiar with these:
We will do a quick step by step walk through of how to set up a visualization of comparing scenario costs by cost type in a new dashboard:

The steps to set this up are detailed here, note that the first 4 bullet points are not shown in the screenshot above:
There are several models in the Resource Library that transportation optimization users may find helpful to review. How to use resources in the Resource Library is described in the help center article “How to Use the Resource Library”.
Several input columns used by Throg (Simulation) will accept distributions as inputs. The following distributions, along with their syntax, are currently supported by Throg:
A user-defined Histogram is also supported as an acceptable input. These are defined in the Histograms input table; to reference a histogram in an allowed input column, simply enter the Histogram Name.
Here we will cover the options a Cosmic Frog user has for modeling transportation costs when using the Neo Optimization engine. The different fields that can be populated and how the calculations under the hood work will be explained in detail.
There are many ways in which transportation can be costed in real life supply chains. The Transportation Policies table contains 4 cost fields to help users model costs as close as possible to reality. These fields are: Unit Cost, Fixed Cost, Duty Rate and Inventory Carrying Cost Percentage. Not all these costs need to be used: the one(s) that are applicable should be populated and the others can be left blank. The way some of these costs work depends on additional information specified in other fields, which will be explained as well.
Note that in the screenshots throughout this documentation some fields in the Cosmic Frog tables have been moved so they could be shown together in a screenshot. You may need to scroll right to see the same fields in your Cosmic Frog model tables and they may be in a different order.
We will first discuss the input fields with the calculations and some examples; at the end of the document an overview is given of how the cost inputs translate to outputs in the optimization output tables.
This field is used for transportation costs that increase when the amount of product being transported increases and/or the transportation distance or time increases. As there are quite a few different measures based on which costs can depend on the amount of product that is transported (e.g. $2 per each, or $0.01 per each per mile, or $10 per mile for a whole shipment of 1000 units, etc.) there is a Unit Cost UOM field that specifies how the cost specified in the Unit Cost field should be applied. In a couple of cases, the Average Shipment Size and Average Shipment Size UOM fields must be specified too as we need to know the total number of shipments for the total Unit Cost calculation. The following table provides an overview of the Unit Cost UOM options and explains how the total Unit Costs are calculated for each UOM:

With the settings as in the screenshot above, total Unit Costs will be calculated as follows for beds, pillows, and alarm clocks going from DC_Reno to CUST_Phoenix:
The Unit Cost field can contain a single numeric value (as in the examples above), a step cost specified in the Step Costs table, a rate specified in the Transportation Rates table, or a custom cost function.
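As a quick, hedged illustration of how a few of these UOM options turn into total costs, using the example rates mentioned earlier ($2 per each, $0.01 per each per mile, and $10 per mile per shipment); the quantity and distance are hypothetical:

```python
qty = 1000                # flow quantity (hypothetical)
distance_mi = 250         # lane distance (hypothetical)
avg_shipment_size = 1000  # Average Shipment Size, needed for shipment-based UOMs

# $2 per each: cost scales with quantity only
cost_per_unit = 2.00 * qty                                    # $2,000

# $0.01 per each per mile: cost scales with quantity and distance
cost_per_unit_mile = 0.01 * qty * distance_mi                 # $2,500

# $10 per mile per shipment: cost scales with number of shipments and distance
num_shipments = qty / avg_shipment_size                       # 1 shipment
cost_per_shipment_mile = 10.00 * num_shipments * distance_mi  # $2,500
```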
If stepped costs are used as the Unit Cost for Transportation Policies that use Groups in the Product Name field, then the Product Name Group Behavior field determines how these stepped costs are applied:
See following screenshots for an example of using stepped costs in the Unit Cost field and the difference in cost calculations for when Product Name Group Behavior is set to Enumerate vs Aggregate:

On the Step Costs table (screenshot above), the stepped costs we will be using in the Unit Cost field on the Transportation Policies table are specified. All records with the same Step Cost Name (TransportUnitCost_2 here) make up 1 set of stepped costs. The Step Cost Behavior is set to Incremental here, meaning that discounted costs apply from the specified throughput level onwards only, not to all items once throughput crosses a threshold. So, in this example, the per unit cost is $1.75 for units 0 through 10,000, $1.68 for units 10,001 through 25,000, $1.57 for units 25,001 through 50,000, and $1.40 for all units over 50,000.
The configuration in the Transportation Policies table looks as follows:

The following screenshot shows the outputs on the Optimization Flow Summary table of 2 scenarios that were run with these stepped costs: 1 scenario used the Enumerate option for the Product Name Group Behavior and the other used the Aggregate option. The cost calculations are explained below the screenshot.

The Fixed Cost field can be used to apply a fixed cost to each shipment for the specified origin-destination-product-mode combination. An average shipment size needs to be specified to be able to calculate the number of shipments from the amount of product that is being transported. When calculating the number of shipments, the result can contain fractions of shipments, e.g. 2.8 or 5.2. If desirable, these can be rounded up to the next integer (e.g. 3 and 6 respectively) by setting the Fixed Cost Rule field to Treat As Full. Note however that using this setting can increase model runtimes, and using the default Prorate setting is recommended in most cases.
In summary, the Fixed Cost field works together with the Fixed Cost Rule, Average Shipment Size, and Average Shipment Size UOM fields. The following table shows how the calculations work:
The Fixed Cost field can contain a single numeric value, or a step cost specified in the Step Costs table.
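A minimal sketch of the Fixed Cost calculation under the two Fixed Cost Rule settings (the helper function is hypothetical; the 2.8-shipment case mirrors the fractional-shipments example above):

```python
import math

def fixed_cost(flow_qty, avg_shipment_size, cost_per_shipment, rule="Prorate"):
    shipments = flow_qty / avg_shipment_size   # can be fractional, e.g. 2.8
    if rule == "Treat As Full":
        shipments = math.ceil(shipments)       # round up to whole shipments
    return shipments * cost_per_shipment

print(fixed_cost(2800, 1000, 100))                        # Prorate: 2.8 * $100 = $280
print(fixed_cost(2800, 1000, 100, rule="Treat As Full"))  # 3 * $100 = $300
```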
The following example shows how Fixed Costs are calculated on the DC_Scranton – CUST_Augusta lane and illustrates the difference between setting the Fixed Cost Rule to Prorate vs Treat As Full:

This setup in the Transportation Policies table means that the cost for 1 shipment with on average 1,000 units on it is $100. 2 scenarios were run with this cost setup, 1 where Fixed Cost Rule was set to Prorate and 1 where it was set to Treat As Full. Following screenshot shows the outputs of these 2 scenarios:

For Fixed Costs on Transportation Policies that use Groups in the Product Name field, the Product Name Group Behavior field determines how these fixed costs are applied:
See following screenshots for an example of using Fixed Costs where the Fixed Cost Rule is set to Treat As Full and the difference in cost calculations for when Product Name Group Behavior is set to Enumerate vs Aggregate:

The transportation policy from DC_Birmingham to CUST_Baton Rouge uses the AllProducts group as the ProductName. This Group contains all 3 products being modelled: beds, pillows, and alarm clocks. The costs on this policy are a fixed cost of $100 per shipment, where an average shipment contains 1,000 units. The Fixed Cost Rule is set to Treat As Full meaning that the number of shipments will be rounded up to the next integer. Depending on the Product Name Group Behavior field this is done for the flow of each product individually (when set to Enumerate) or done for the flow of all 3 products together (when set to Aggregate):

When products are imported or exported from/to different countries, there may be cases where duties need to be paid. Cosmic Frog enables you to capture these costs by using the Duty Rate field on the Transportation Policies table. In this field you can specify the percentage of the Product Value (as specified on the Products table) that will be incurred as duty. If this percentage is for example 9%, you need to enter a value of 9 into the Duty Rate field. The calculation of total duties on a lane is as follows: Flow Quantity * Product Value * Duty Rate.
The following screenshots show the Product Value of beds, pillows and alarm clocks in the Products table, the Duty Rate set to 10% on the DC_Birmingham to CUST_Nashville lane in the Transportation Policies table, and the resulting Duty Costs in the Optimization Flow Summary table, respectively.



Alarm clocks have a Product Value of $30. With a Duty Rate of 10% and 24,049 units moving from DC_Birmingham to CUST_Nashville, the resulting Duty Cost = 24,049 * $30 * 0.1 = $72,147.
If in transit inventory holding costs need to be calculated, the Inventory Carrying Cost Percentage field on the Transportation Policies table can be used. The value entered here will be used as the percentage of product value (specified on the Products table) to incur the in transit holding costs. If the Inventory Carrying Cost Percentage is 13%, then enter a value of 13 into this field. This percentage is interpreted as an annual percentage, so the in transit holding cost is then prorated based on transit time. The calculation of the in transit holding costs becomes: Flow Quantity * Product Value * Inventory Carrying Cost Percentage * Transit Time (in days) / 365.
Note that there is also an Inventory Carrying Cost Percentage field in the Model Settings table. If this is set to a value greater than 0 and there is no value specified in the Transportation Policies table, the value from the Model Settings table is automatically used for inventory carrying cost calculations, including in transit holding costs. If there are values specified in both tables, the one(s) in the Transportation Policies table take precedence for In Transit Holding Cost calculations.
The following screenshots show the Inventory Carrying Cost Percentage set to 20% on the DC_Birmingham to CUST_Nashville lane in the Transportation Policies table, and the resulting In Transit Holding Costs in the Optimization Flow Summary table, respectively. The Product Values are as shown in the screenshot of the Products table in the previous section on Duty Rates.


For Pillows, the Product Value set on the Products Table is $100. When 120,245 units are moved from DC_Birmingham to CUST_Nashville, which takes 3.8909 hours (214 MI / 55 MPH), the In Transit Holding Costs are calculated as follows: 120,245 (units) * $100 (product value) * 0.2 (Carrying Cost Percentage) * (3.8909 HR (transport time) / 24 (HRs in a day)) / 365 (days in a year) = $1,068.18.
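Both worked examples above can be verified with a couple of lines, using the formulas given in the text (small rounding differences are expected):

```python
# Duty Cost: Flow Quantity * Product Value * Duty Rate
duty_cost = 24049 * 30 * 0.10              # = 72,147.0

# In Transit Holding Cost:
# Flow Quantity * Product Value * Carrying Cost % * Transit Time (days) / 365
transit_days = (214 / 55) / 24             # 214 MI at 55 MPH, converted to days
holding_cost = 120245 * 100 * 0.20 * transit_days / 365
print(round(holding_cost, 2))              # ≈ 1068.18
```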
The following table gives an overview of how the inputs into the 4 cost fields on the Transportation Policies table translate to outputs in multiple optimization output tables. The table contains the field names in the output tables and shows from which input field they result.
Note that the 4 different types of transportation costs are also included in the Landed Cost (Optimization Demand Summary table) and Parent Node Cost (Optimization Cost To Serve Parent Information Report table) calculations.
For a review of Conditions related to Scenarios, please refer to the article here. Conditions in Scenarios take the form of a comparison that creates a filter on your table structure.
If you are ever doubting the name of a column that needs to be used in a condition, you can type CTRL+SPACE to have the full list of available columns displayed. Alternatively, you can start typing and the valid column names will auto-complete.
Standard comparison operators include: =, >, <, >=, <=, !=. You can also use the LIKE or NOT LIKE operators and the % or * values as wildcards for string comparisons. The % and * wildcard operators are equivalent within Cosmic Frog (though best practice is to use the % wildcard symbol since this is valid in the database code). If you have multiple conditions, the use of AND / OR operators is supported. The IN operator is also supported for comparing strings.
Cosmic Frog will color code syntax with the following logic:
Following are some examples of valid condition syntax:
Business Goal: Condition Syntax
For a review of Actions related to Scenarios, please refer to the article here. Actions take the form of an equation:
If you are ever doubting the name of a column that needs to be used in an action, you can type CTRL+SPACE to have the full list of columns available to you displayed. Alternatively, you can start typing and the valid column names will auto-complete.
When assigning numerical values to a column we can use mathematical operators such as +, -, *, and /. Likewise you can use parentheses to establish a preferred order of operations as shown in the following examples.
Business Goal: Action Syntax
When assigning string values to a column you must use single quotation marks to reference any string that is not a column name. Some examples of changing values with a string are included in the examples below:
Business Goal: Action Syntax
When running models in Cosmic Frog, users can choose the size of the resource the model’s scenario(s) will be run on, in terms of available memory (RAM in Gb) and number of CPU cores. Depending on the complexity of the model and the number of elements, policies, and constraints in it, the model will need a certain amount of memory to run to completion successfully. Bigger, more complex models typically need to be run using a resource that has more memory (RAM) available than smaller, less complex models. The bigger the resource, the higher the billing factor, which leads to using more of the customer’s available cloud compute hours (the total amount of cloud compute time available to a customer is part of their Master License Agreement with Optilogic). Ideally, users choose a resource size that is just big enough to run their scenario(s) without the resource running out of memory, while minimizing the amount of cloud compute time used. This document guides users in choosing an initial resource size and periodically re-evaluating it to ensure optimal usage of the customer’s available cloud compute time.
Once a model has been built and the user is ready to run 1 or multiple scenarios, they can click on the green Run button at the right top in Cosmic Frog which opens the Run Settings screen. The Run Settings screen is documented in the Running Models & Scenarios in Cosmic Frog Help Center article. On the right-hand side of the Run Settings screen, user can select the Resource Size that will be used for the scenario(s) that are being kicked off to run:


In this section, we will guide users on choosing an initial resource size for the different engines in Cosmic Frog, based on some model properties. Before diving in, please keep following in mind:
There are quite a few model factors that influence how much memory a scenario needs to solve a Neo run. These include the number of model elements, policies, periods, and constraints. The type(s) of constraints used may play a role too. The main factors, in order of impact on memory usage, are:
These numbers are those after expansion of any grouped records and application of scenario items, if any.
The number of lanes can depend on the Lane Creation Rule setting in the Neo (Optimization) Parameters:

Note that for lane creation, expansion of grouped records and application of scenario item(s) need to be taken into account too to get at the number of lanes considered in the scenario run.
Users can use the following list to choose an initial resource size for Neo runs. First, calculate the number of demand records multiplied with the number of lanes in your model (after expansion of grouped records and application of scenario items). Next, find the range in the list, and use the associated recommended initial resource size:
# demand records * # lanes: Recommended Initial Resource Size
A good indicator for Throg and Dendro runs to base the initial resource size selection on is the order of magnitude of the total number of policies in the model. To estimate the total number of policies in the model, add up the number of policies contained in all policies tables. There are 5 policies tables in the Sourcing category (Customer Fulfillment Policies, Replenishment Policies, Production Policies, Procurement Policies, and Return Policies), 4 in the Inventory category (Inventory Policies, Warehousing Policies, Order Fulfillment Policies, and Inventory Policies Advanced), and the Transportation Policies table in the Transportation category. The policy counts of each table should be those after expansion of any grouped records and application of scenario items, if any. The list below shows the minimum recommended initial resource size based on the total number of policies in the model to solve models using the Throg or Dendro engine.
Number of Policies: Minimum Resource
For Hopper runs, memory is the most important factor in choosing the right resource, and the main driver of memory requirements is the number of origin-destination (OD) pairs in the model. OD pairs are determined primarily by all possible facility-to-customer, facility-to-facility, and customer-to-customer lane combinations.
Most Hopper models have many more customers than facilities, so we can often use the number of customers in a model as a guide for resource size. The list below shows the minimum recommended initial resource size to solve models using Hopper.
Customers: Minimum Resource Size
Most Triad models should solve very quickly, typically under 10 minutes. Still, choosing the right resource size will ensure your Triad model solves successfully, without paying for unneeded compute resources.
As with Hopper, memory is the most important factor in resource selection. In Triad, the main driver of memory requirements is the number of customers, with a smaller secondary effect from the number of greenfield facilities.
The list below shows the minimum recommended initial resource size to solve models using Triad where the number of facilities is assumed to be between 1 and 10:
Customers: Minimum Resource Size
Please note:
After running a scenario with the initially selected resource size, users can evaluate if it is the best resource size to use or if a smaller or larger one is more appropriate. The Run Manager application on Optilogic’s platform can be used to assess resource size:


Using this knowledge that the RAM required at peak usage is just over 1 Gb, we can conclude that going down to resource size 3XS, which has 2 Gb of RAM available, should still work OK for this scenario. The expectation is that going further down to 4XS, which has 1 Gb of RAM available, will not work, as the scenario will likely run out of memory. We can test this with 2 additional runs. These are the Job Usage Metrics after running with resource size 3XS:

As expected, the scenario runs fine, and the memory usage is now at about 54% (of 2Gb) at peak usage.
Trying with resource size 4XS results in an error:

Note that when a scenario runs out of memory like this, there are no results for it in the output tables in Cosmic Frog if it is the first time the scenario is run; if the scenario has been run successfully before, the previous results will still be in the output tables. To ensure that a scenario has run successfully within Cosmic Frog, user can check the timestamp of the outputs in the Optimization Network Summary (Neo), Transportation Summary (Hopper), or Optimization Greenfield Output Summary (Triad) output tables, or review the number of error jobs versus done jobs at the top of Cosmic Frog (see next screenshot). If either of these 2 indicates that the scenario may not have run, double-check in the Run Manager and review the logs there to find the cause.

In the status bar at the top of Cosmic Frog, user can see that there were 2 error jobs and 13 done jobs within the last 24 hours.
In conclusion, for this scenario we started with a 2XS resource size. Using the Run Manager, we reviewed the percentage of memory used at peak usage in the Job Usage Metrics and concluded that a smaller 3XS resource size with 2 GB of RAM should still work fine for this scenario, but an even smaller 4XS resource size with 1 GB of RAM would be too small. Test runs using the 3XS and 4XS resource sizes confirmed this.
You can create a free account on the Optilogic platform, which includes Cosmic Frog, in just a few clicks. This document shows you two ways in which you can do this. Use the first option if you have Single Sign On (SSO) enabled for your Google (Gmail) or Microsoft account and you want to use this to log into the Optilogic platform.
You can check here to see what is included in your free account (lots!).
This video posted on the Optilogic Training website also covers account creation and then goes into how to navigate Cosmic Frog, a good starting point for new users.
To create your free account, go to signup.optilogic.app. This will automatically re-direct you to a Cosmic Frog Log In page:

First, we will walk through the steps of continuing with Microsoft where the user has Single Sign On enabled for their Microsoft account and has clicked on “Continue with Microsoft”. In the next section we will similarly go through the steps for using SSO with a Google account.
After the user has clicked on “Continue with Microsoft”, the following page will be brought up. Click on Accept to continue if the information displayed is correct.

You will see the following message about linking your Microsoft account to your Optilogic account:

Go into your email and find the email with subject “Link Microsoft”, and click on the Link Account button at the bottom of this email:

Should you not have received this email, you can click on “Resend Email”. If you did receive it and you have clicked on the Link Account button, you will be immediately logged into www.optilogic.com and will see the Home screen within the platform, which will look similar to the below screenshot:

From now on, you have 2 options when logging into the Optilogic platform via cosmicfrog.com (see the first screenshot in this documentation): you can log in by clicking on the “Continue with Microsoft” option, which will immediately log you in, or you can type your credentials into the username / email and password fields to log in manually.
After the user has clicked on “Continue with Google”, the following page will be brought up. If you have multiple Google email addresses, click on the one you want to use for logging into the Optilogic platform. If the email you want to use is not listed, you can click on “Use another account” and then enter the email address.

If the email you choose to use is not signed in on the device you are on currently, you will be asked for your password next. Please provide it and continue. If it is the first time you are using the email address to log into the Optilogic platform, you will be asked to verify it in the next step:

The default verification method associated with the Google account will be suggested, which in the example screenshot above is to send the verification code to a phone number. If other ways to verify the Google account have been set up, you can click on “More ways to verify” to change the verification method. If you are happy with the suggested method, click on Send. Once you have hit Send, the following form will come up:

You can again switch to another verification method in this screen by clicking on “More ways to verify”, or, if you have received the verification code, you can just enter it into the “Enter the code” field and click on Next. This will log you into the Optilogic platform and you will now see the Home screen within the platform, which will look similar to the last screenshot in the previous section (“Steps for the “Continue with Microsoft” Option”).
From now on, you have 2 options when logging into the Optilogic platform via cosmicfrog.com (see the first screenshot in this documentation): you can log in by clicking on the “Continue with Google” option which will immediately log you in after you have selected the Google email address to use, or you can type your credentials into the username / email and password fields to manually log in.
To create your free account, go to www.optilogic.com and click on the yellow “Create a Free Account” button.

The following form will be brought up, please fill out your First Name, Last Name, Email Address, and Phone Number. Then click on Next Step.

Your entered information will be shown back to you, and you can just click on Next Step again. Next, a form where you can set your Username and Password will come up. Click on Next Step again once this form is filled out.

In the final step you will be asked to fill out your Company Name, Role, Industry, and Company Size. Click on Submit after you have filled out these details.

A submission confirmation will pop up with instructions to verify your email address. Once you have verified your email address you can immediately start using your free account!
To help familiarize you with our product we have created the following video.
If you find that the standard constraints or costs in a model don’t quite capture your specific needs, you can create and define your own variables to use with costs and constraints.
To help in framing this discussion, let’s start with a simple example that fits into the standard input tables.
Objectives:
We wouldn’t need to do anything special in this instance: just create policies as normal and attach a Unit Cost of 5 to the MFG > CZ transportation policy. To apply the constraint, we would create a Flow Constraint that sets a Max flow of 1000 units. While the input requirements are straightforward in this instance, let’s define both objectives in terms of variables as the solver would see them.
Flow Variable: MFG_CZ_Product_1_Flow
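In solver terms (our notation, purely illustrative), the two objectives would look roughly like this:

Cost term: 5 × MFG_CZ_Product_1_Flow
Constraint: MFG_CZ_Product_1_Flow <= 1000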
This example is simple, but it is important to think about costs and constraints in terms of the variables that they are applied over. This becomes even more important when we want to craft our own variables.
Let’s modify the constraint in the example above so that it now restricts the flow of Product_1 between MFG and CZ to be no more than the flow of Product_2. Again, we will represent this in terms of variables as the solver will see them.
Flow Variables: MFG_CZ_Product_1_Flow, MFG_CZ_Product_2_Flow
We no longer have a constant on the right-hand side of our constraint. This is an issue, as we have no way to input this type of constraint requirement into the Flow Constraints table. Whenever we find ourselves expressing constraints or costs in terms of other variables whose values will be determined by the model solve, we will need to make use of User Defined Variables.
Continuing with the constraint above, let’s modify the inequality statement so that we do in fact have a constant on the right-hand side. We can do this by subtracting one of the variables from both sides of the statement – this will then leave the right-hand side as 0.
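Written out, the rearrangement looks like this:

MFG_CZ_Product_1_Flow <= MFG_CZ_Product_2_Flow
MFG_CZ_Product_1_Flow – MFG_CZ_Product_2_Flow <= 0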
We now have a constraint that can be modelled, but we need to be able to define the left-hand side through the User Defined Variables table. User Defined Variables are defined as a series of Terms that are all linked to the same Variable Name. Each Term can be thought of as a solver variable, as we have defined them in the examples above. For each Term, we will also need to enter a Coefficient, the Type of behavior we want to capture, and the required information in the columns that follow, depending on the Type that was just selected. All of these columns are based on the individual constraint tables, so it is helpful to think about the data as if you were entering a row in the specific constraint table.
Here is how the inputs for our example would look set up as a User Defined Variable:

We can see that by using the coefficients of 1 and -1, we have now accurately built the left-hand side of our inequality statement. All that’s left is to link this to a User Defined Constraint.
User Defined Constraints can be used to add restrictions to the values captured by the User Defined Variables. All that is needed is to enter the corresponding Variable Name and then select the appropriate constraint type and value.
Revisiting our inequality statement once more, we can see how the User Defined Constraint should be built:
MFG_CZ_Product_1_Flow – MFG_CZ_Product_2_Flow <= 0

Thank you for using the most powerful supply chain design software in the galaxy (I mean, as far as we know).
To see the highlights of the software please watch the following video.
Optimization (NEO) will read from all 5 input tables in the Sourcing section of Cosmic Frog.
We are able to use these tables to define the sourcing logic that describes costs and where a product can be introduced into the network through production at a Facility (Production Policies) or by way of a Supplier (Supplier Capabilities). We can also define additional rules around how a product must be sourced using the Max Sourcing Range and Optimization Policy fields in the Customer Fulfillment, Replenishment, and Procurement Policies tables.
The Max Sourcing Range field can be used to specify the maximum flow distance allowed for a listed location / product combination. If flow distances are not specified in the Distance field of the Transportation Policies table, a straight-line distance will be calculated based on the Origin / Destination geocoordinates and multiplied by the Circuity Factor specified in the Model Settings to estimate real road distances. Any transportation distances that exceed the Max Sourcing Range will result in those arcs being dropped from consideration.
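To illustrate how such a check works, here is a minimal Python sketch; the haversine formula stands in for the straight-line calculation, and the circuity factor, coordinates, and range values are all hypothetical, not Cosmic Frog internals:

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Straight-line (great-circle) distance between two points, in miles."""
    r = 3959.0  # Earth radius in miles
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical values: a circuity factor of 1.2 and a 500-mile Max Sourcing Range
circuity_factor = 1.2
max_sourcing_range = 500.0

estimated_road_distance = haversine_miles(42.33, -83.05, 41.77, -72.67) * circuity_factor
lane_kept = estimated_road_distance <= max_sourcing_range  # arc dropped when False
print(f"{estimated_road_distance:.0f} miles -> lane kept: {lane_kept}")
```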
There are 4 allowable entries for Optimization Policy. For any given Destination / Product combination, only a single Optimization Policy entry is supported, meaning you cannot have one source listed with a policy of Single Source and another as By Ratio (Auto Scale).
This is the default entry that will be used if nothing is specified. To Optimize places no additional logic onto the sourcing requirement and will use the least cost option available.
For the listed destination / product combination, only one of the possible sources can be selected.
This option allows for sources to be split by the defined ratios that are entered into the Optimization Policy Value field. All of the entries into this Policy Value field will be automatically scaled, and the flow ratios will be followed for all inbound flow to the listed destination / product combination.
For example, suppose there are 3 potential sources for a single Customer location, and a flow split of 50-30-20 from DC_1, DC_2, and DC_3, respectively, is to be enforced. This can be entered as Policy Values of 50, 30, and 20:

The same sourcing logic could be achieved by entering values of 5, 3, 2 or even 15, 9, 6. All values will be automatically scaled for each valid source that has been defined for a destination / product combination.
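The scaling logic itself is simple normalization; here is a minimal sketch (our own illustration, not engine code):

```python
# By Ratio (Auto Scale): any set of Policy Values is normalized to flow fractions
policy_values = {"DC_1": 50, "DC_2": 30, "DC_3": 20}  # 5/3/2 or 15/9/6 yield the same result
total = sum(policy_values.values())
flow_shares = {source: value / total for source, value in policy_values.items()}
print(flow_shares)  # {'DC_1': 0.5, 'DC_2': 0.3, 'DC_3': 0.2}
```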
Similar to the Auto Scale option, By Ratio (No Scale) allows for sources to be split by the defined ratios entered into the Optimization Policy Value field. However, no scaling will be performed and the Optimization Policy Value fields will be treated as absolute sourcing percentages where an entry of 50 means that exactly 50% of the inbound flow will come from the listed source.
For example, there are 3 possible sources for a single Customer location and we want to enforce that DC_1 will account for exactly 50% of the flow while the remainder can come from any valid location. We can specify that DC_1 will have a Policy Value of 50 while leaving our other options open for the model to optimize.

If Policy Values add up to less than 100 for a listed destination / product combination, another sourcing option must be available to fulfill the remaining percentage.
If Policy Values add up to more than 100 for a listed destination / product combination, the percentages will be scaled to 100 and used as the only possible sources.
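As a quick illustration with our own numbers: Policy Values of 60 and 90 for two sources sum to 150, so they would be scaled to 40% and 60%, and those two sources would be the only ones used.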
As you continue to work with the tool, you might find yourself asking: what do these engine names mean, and how the heck did they come up with them?
Anura, the name of our data schema, has a biological root: Anura is the order that all frogs and toads fall into.

NEO, our optimization engine, takes its name from a suborder of Anura – Neobatrachia. Frogs in this suborder are considered the most advanced of all suborders.

THROG, our simulation engine, takes its name from a superhero hybrid that crosses Thor with a frog (yes, this actually exists).

Triad, our greenfield engine, takes its name from the oldest known species of frogs – Triadobatrachus. You can think of it as the starting point for the evolution of all frogs, and it serves as a great starting point for modeling projects too!

Dart, our risk engine, takes its name from the infamous poison dart frog. Just as you would want to be aware of a poisonous frog’s presence, you’ll also want to stay alert to any opportunities for risks to present themselves.

Dendro, our simulation evolutionary algorithm, takes its name from what is known to be the smartest species of frog – Dendrobates auratus. These frogs have the ability to create complex mental maps to evaluate and better navigate their surroundings.

Hopper, our transportation routing engine, doesn’t have a name rooted in frog-related biology but rather a visual of a frog hopping along from stop to stop just as you might see with a multi-drop transportation route.

Tax systems can be complex; those in Greece, Colombia, Italy, Turkey, and Brazil, for example, are considered to be among the most complex ones. It can, however, be important to include taxes, whether as a cost, a benefit, or both, in supply chain modeling, as they can have a big impact on sourcing decisions and therefore on overall costs. Here we will showcase an example of how Cosmic Frog’s User Defined Variables and User Defined Costs can be used to model Brazilian ICMS tax benefits and take these into account when optimizing a supply chain.
The model that is covered in this documentation is the “Brazil Tax Model Example”, which was put together by Optilogic’s partner 7D Analytics. It can be downloaded from the Resource Library. Besides the Cosmic Frog model, the Resource Library content also links to this “Cosmic Frog – BR Tax Model Video”, which was also put together by 7D Analytics.
A helpful additional resource for those unfamiliar with Cosmic Frog’s user defined variables, costs, and constraints is this “How to use user defined variables” help article.
In this documentation the setup of the example model will first be briefly explained. Next, the ICMS tax in Brazil will be discussed at a high level, including a simplified example calculation. In the third section, we will cover how ICMS tax benefits can be modelled in Cosmic Frog. And finally, we will look at the impact of including these ICMS tax benefits on the flows and overall network costs.
One quick note upfront: the screenshots of Cosmic Frog tables used throughout this help article may look different from the same model in the user’s account after taking it from the Resource Library. This is due to columns having been moved or hidden and grids being filtered/sorted in specific ways to show only the most relevant information in these screenshots.
In this example model, 2 products are included: Prod_National to represent products that are made within Brazil at the MK_PousoAlegre_MG factory, and Prod_Imported to represent products that are imported, which is supplied from SUP_Itajai_SC within the model, representing the seaport where imported products would arrive. There are 6 customer locations, which are in the biggest cities in Brazil; their names start with CLI_. There are also 3 distribution centers (DCs): DC_Barueri_SP, DC_Contagem_MG, and DC_FeiraDeSantana_BA. Note that the 2-letter suffixes in the location names are the abbreviations of the states these locations are in. Please see the next screenshot where all model locations are shown on a map of Brazil:

The model’s horizon is all of 2024 and the 6 customers each have demand for both products, ranging from 100 to 600 units. The SUP_ location (for Prod_Imported) and MK_ location (for Prod_National) replenish the DCs with the products. Between the DCs, some transfers are allowed too. The demand at the customer locations can be fulfilled by 1, 2 or all 3 DCs, depending on the customer. The next screenshot of the Transportation Policies table (filtered for Prod_National) shows which procurement, replenishment, and customer fulfillment flows are allowed:


For the other product modelled, Prod_Imported, the same customer fulfillment, DC-DC transfer, and supply options are available, except:
In Brazil, the ICMS tax (Imposto sobre Circulação de Mercadorias e Serviços, or Tax on the Circulation of Goods and Services) is levied by the states. It applies to the movement of goods, transportation services between states or municipalities, and telecommunication services. The rate varies and depends on the state and product.
When a company sells a product, the sales price includes ICMS, which results in an ICMS debit for the company (the company owes this to the state). Likewise, when purchasing or transferring product, the ICMS is included in what the company pays the supplier. This creates ICMS credit for the company. The difference between the ICMS debits and credits is what the company will pay as ICMS tax.
The next diagram shows an ICMS tax calculation example, where the company also has a 55% tax benefit, i.e. a discount on the ICMS it needs to pay.

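As a purely illustrative calculation (our numbers, not the model’s): if sales generate R$ 100,000 of ICMS debit and purchases/transfers generate R$ 60,000 of ICMS credit, the balance owed is R$ 40,000. A 55% benefit on that balance is worth R$ 22,000, so the company would pay R$ 18,000.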
In order to include ICMS tax benefits in a model, we need to be able to calculate ICMS debits and credits based on the amount of flow between locations in different states for both national and imported products. As different states and different products can have different ICMS rates, we need to define these individual flow lanes as variables and apply the appropriate rate to each. This can be done by utilizing the User Defined Variables and User Defined Costs input tables, which can be found in the “Constraints” section of the Cosmic Frog input tables, shown in the below screenshot (here a search term of “userdef” was entered to filter the list down to these 2 tables):

In the User Defined Variables table, we will define 3 variables related to DC_Contagem_MG: one that represents the ICMS Debit, one that represents the ICMS Credit, and one that represents the ICMS Balance (= ICMS Debit – ICMS Credit) for this DC. The ICMS Debit and ICMS Credit variables have multiple terms, each of which represents a flow out of or a flow into the Contagem DC, respectively. Let us first look at the ICMS Debit variable:

Still looking at the same top records that define the DC_Contagem_MG|ICMS_Debit variable, but freezing the Variable Name and Term Name columns and scrolling right, we can see more of the columns in the User Defined Variables table:

Note that there are quite a few custom columns in this table (not shown in the screenshots; can be added through Grid > Table > Create Custom Column), which were used to calculate the ICMS rates outside of the model. These are helpful to keep in the model, should changes need to be made to the calculations.
Next, we will have a look at the ICMS Credit variable, which is made up of 3 terms, where each term represents a possible supply/replenishment flow into the Contagem DC:

The last step in the User Defined Variables table is to combine the ICMS Credit and ICMS Debit variables to calculate the ICMS Balance:

To finalize the setup, we need to add 1 record to the User Defined Costs table, where we will specify that the company has a 55% discount (tax incentive) for the ICMS it pays relating to the Contagem DC:

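In effect, this single record applies: User Defined Cost = -0.55 × ICMS_Balance = -0.55 × (ICMS_Debit – ICMS_Credit). The larger the positive ICMS balance the optimizer creates at this DC, the larger the benefit (negative cost) it earns, which is how the tax incentive is able to influence sourcing decisions.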
As mentioned in the previous section, all records in the User Defined Variables and User Defined Costs tables have their Status set to Exclude. This way, when the Baseline scenario is run, the ICMS tax incentive is not included, and the network will be optimized just based on the costs included in the model (in this case only transportation costs). We want to include the ICMS tax incentive in a scenario and then compare the outputs with the Baseline scenario. This “IncludeDCMGTaxBenefit” scenario is set up as follows:

Next, we have a look at the second scenario item that is part of this scenario:

With the scenario set up, we run a network optimization (using the Neo engine) on both scenarios and then first look in the Optimization Network Summary output table:

Notice that, as expected, the Baseline scenario only contains transportation costs, while the IncludeDCMGTaxBenefit scenario also contains user defined costs, which represent the calculated ICMS tax benefit and have a negative value. So, overall, the IncludeDCMGTaxBenefit scenario has about R$ 331k lower total cost than the Baseline scenario, even though its transportation costs are close to R$ 47k higher. Since the transportation costs differ between the 2 scenarios, we expect some of the flows to have changed.
There are 3 network optimization output tables that contain the outputs related to User Defined Variables and Costs:

We will first discuss the Optimization User Defined Variable Term Summary output table:

The Optimization User Defined Variable Summary output table contains the outputs at the variable level (i.e. the individual terms of each variable have been aggregated):

Finally, the Optimization User Defined Cost Summary output table shows the cost based on the 55% benefit that was set:

The DC_Contagem_MG_TaxIncentive benefit is calculated from the DC_Contagem_MG|ICMS_Balance variable, where the Variable Value of R$ 686,980 is multiplied by -0.55 to arrive at the Cost value of R$ -377,839.
Now that we understand at a high level the cost impact of the ICMS tax incentive and the details of how this was calculated, let us look at more granular outputs, starting with looking at the flows between locations. Navigate to the Maps module within Cosmic Frog and open the maps named Baseline and Include DC MG Tax Benefit, which show outputs from the Baseline and IncludeDCMGTaxBenefit scenarios, respectively. The next 2 screenshots show the flows from DCs to customer locations: Baseline flows in the top screenshot and scenario “Include DC MG Tax Benefit” flows in the bottom screenshot:


We see that in the Baseline the customer in Rio de Janeiro is served by the DC in Sao Paulo. This changes in the scenario where the tax benefit is included: now the Rio de Janeiro customer is served by the Contagem DC (located close to Belo Horizonte). The other customer fulfillment flows are the same between the 2 scenarios.
This model also has 2 custom dashboards set up in the Analytics module; the 1. Scenarios Overview dashboard contains 2 graphs:

This Summary graph shows the cost buckets for each scenario as a bar chart. As discussed when looking at the Optimization Network Summary output table, the IncludeDCMGTaxBenefit scenario has an overall lower cost due to the tax benefit, which offsets the increased transportation costs as compared to the Baseline scenario.

This Site Summary bar chart shows the total outbound quantity for each DC / Factory / Supplier by scenario. We see that the outbound flow for the DC in Barueri is reduced by 500 units in the IncludeDCMGTaxBenefit scenario as compared to the Baseline scenario, whereas the Contagem DC has an increased outbound flow, from 1,000 to 2,500 units. We can examine these shifts in further detail in the second custom dashboard named 2. Outbound Flows by Site, as shown in the next 2 screenshots:

This first screenshot of the dashboard shows the amount of flow from the 3 DCs and the factory to the 6 customer locations. As we already noticed on the map, the only shift here is that the Rio De Janeiro customer is served by the Barueri DC in the Baseline scenario and this changes to it being served by the Contagem DC in the IncludeDCMGTaxBenefit scenario.

Scrolling further right in this table, we see the replenishment flows from the 3 DCs and the Factory to the 3 DCs. There are some more changes here where we see that the flow from the factory to the Barueri DC is reduced by 500 units in the scenario, whereas the flow from the factory to the Contagem DC is increased by 500 units. In the Baseline, the Barueri DC transferred a total of 1,000 units to the other 2 DCs (500 each to the Contagem and Feira de Santana DCs), and the other 2 DCs did not make DC transfers. In the Tax Benefit scenario, the Barueri DC only transfers to the Contagem DC, but now for 1,500 units. We also see that the Contagem DC now transfers 500 units to the Feira de Santana DC, whereas it did not make any transfers in the Baseline scenario.
We hope this gives you a good idea of how taxes and tax incentives can be considered in Cosmic Frog models. Give it a go and let us know of any feedback and/or questions!
The use of non-alphanumeric characters can lead to data issues when running scenarios. The only special characters that are officially supported are periods, dashes, parentheses, and underscores.
Please note that while other special characters in input data or scenario names may still function as expected, we cannot guarantee that they will always work. If you encounter any issues or have questions about the data being used, please feel free to contact support at support@optilogic.com.
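If you want to screen names up front, a small check along these lines can flag unsupported characters; the pattern below is our own reading of the rule above (spaces are assumed acceptable, so adjust as needed):

```python
import re

# Letters, digits, periods, dashes, parentheses, underscores (and spaces, assumed)
SUPPORTED = re.compile(r"^[A-Za-z0-9.\-()_ ]+$")

def unsupported_names(names):
    """Return the names containing characters outside the supported set."""
    return [name for name in names if not SUPPORTED.match(name)]

print(unsupported_names(["DC_Raleigh", "CZ-1", "Plant (East)", "Depot#7"]))
# ['Depot#7']
```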
The data for our visualizations comes from our Cosmic Frog model database. This is often stored in the cloud using a unique identifier.

We can decide if we want to use Optilogic servers to do our data computations, or if we want the computations to be performed locally.

Next, we must decide which table we want to use for our visualization. Generally, the results of running our model are stored in the “optimization” tables, so it can be helpful to search for this term to see output data. However, we can also use tables containing input data if desired.

We can also preview the data in a table using the magnifying glass button:


If you are comfortable with traditional linear programming techniques, you can select the “Write Input Solver Files” and “Write LP File” parameters to get useful output files.

After the run completes, these files can be found in your file explorer.

The “input_solver” folder has a list of all the tables that are entered into the optimization solver. This is useful for:

The “model.lp” file shows the model in a more traditional MIP optimization format, including the objective function and model constraints.

When using the Infeasibility Diagnostic engine, the LP file is different from that of a traditional Neo run. In this case, the cost coefficients in the objective function are set to 0. Instead, a positive cost coefficient is added to each slack variable, and the goal of the model is to minimize the slack values. This allows us to find the “edge” of infeasibility.

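To make the slack-minimization idea concrete, a diagnostic objective might look roughly like this (hypothetical variable names, not an actual model.lp excerpt):

Minimize: Slack_Demand_CUST_Phoenix + Slack_Capacity_DC_1 + …
Subject to: Flow_DC_1_CUST_Phoenix + Slack_Demand_CUST_Phoenix >= Demand_CUST_Phoenix

Any slack that is strictly positive in the solution then points at a constraint that cannot be met with the given inputs.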
After every Cosmic Frog run, it’s a great habit to check the OptimizationValidationErrorReport table. Even for models that run successfully, this table can have useful information on how the input data is being processed by the solver.

Key columns in the validation error report can tell you:
Validation errors that have “high” or “moderate” severity are likely to cause an infeasible model. In fact, Cosmic Frog has an “infeasibility checker” that looks for these kinds of errors and flags them before running the model to save you run time.

In this example, there are no transportation lanes that can deliver beds to CUST_Phoenix, so demand cannot be fulfilled. The infeasibility checker finds this and stops the model run before it even tries to optimize.


A couple of examples of how we could fix this error:
The OutputValidationErrorReport table is often very useful, even if a model “successfully” runs. This table will catch potential errors that do not cause infeasibility but could change the model results. In this example, there is a typo in the CustomerDemand table for “CUST_Montgomery”, so the model drops that row in the demand table. This will not cause an infeasible model, as it just removes that demand constraint. However, it means that no product will be sent to this customer in our optimized result.

The Resource Library is the application within the Optilogic platform where you can find files that will help facilitate, accelerate, and customize your model building and running. These include Cosmic Frog template models, Python scripts, Reference Data and more.

There are several ways to search, sort and filter the list of resources:

When a file in the list is selected by clicking on it, a Preview is shown on the right-hand side of the screen. This preview contains a short description of the resource, may contain a Video Introduction explaining for example the business problem or Cosmic Frog features covered in the resource, lists any Related (= included) Files, and lists any Tags that are associated with the resource so users can quickly understand what materials are covered by this resource.

You may want to add a new resource to the Resource Library if you think other users may be interested in the contents, for example to showcase a certain business problem modelled in Cosmic Frog, to add Reference Data that can be helpful for others, or to share the automation of a common model building/analysis step. On other occasions, you may want to replace a resource in the library with an updated version. To add or replace a resource, click on the Contribute button (found in the top-left corner of the Resource Library) and follow the steps explained in the next sections.
After clicking the Contribute button the following submission form will be shown:



The steps to contribute a Python Module to the Resource Library are very similar to those described above for adding a Cosmic Frog database:
Watch the video for an introduction to the features and functions of the Optilogic Intelligent Greenfield (Triad) engine.
The nature of LTL freight rating is complex and multi-faceted. The RateWare® XL LTL rating engine of SMC3 enables customers to manage parcel pricing and LTL rating complexity, for both class and density rates, with comprehensive rating solutions.
Cosmic Frog users who hold a license to the RateWare XL LTL Rating Engine of SMC3 can easily use it within Cosmic Frog to look up LTL rates and use them in their models. In this documentation we will explain where to find the SMC3 RateWare utility within Cosmic Frog and how to use it. First, we will cover the basic inputs needed in Cosmic Frog models, then how to configure and run the SMC3 RateWare Utility to look up LTL rates, and finally how to add those rates for usage in the model.
Before running the SMC3 RateWare Utility to retrieve LTL rates, we need to set up our model, including the origin-destination pairs for which we want to look up the LTL rates. Here, we will cover the inputs of a basic demo model showing how to use the SMC3 RateWare Utility. Of course, models using this utility may be much more complex in setup as compared to what is shown here.
The next 3 screenshots show:



To facilitate model building, this model uses groups for Customers and DCs, as shown in the next screenshot. All DCs are members of the DCs group and all CZs are members of the Customers group:

Using these groups, the Transportation Policies table is easily set up with 1 record from the DCs group to the CZs group as shown in the next screenshot. At runtime this 1 record is expanded into all possible OriginName-DestinationName combinations of the group members. So, this is an all DCs to all Customers policy that covers 9 possible origin-destination combinations.

Besides these tables, this simple model also has several other tables that are populated:
The SMC3 RateWare Utility is available by default from the Utilities module in Cosmic Frog, see next screenshot. Click on the Module Menu icon with 3 horizontal bars at the top left in Cosmic Frog, then click on Utilities in the menu that pops up:

You will now see a list of Utilities, similar to the one shown in this next screenshot (your Utilities are likely all expanded, whereas most are collapsed in this screenshot):

The rest of the inputs that can be configured are specific to the RateWare XL LTL Rating Engine, and we refer users to SMC3’s documentation for the configuration of these settings.

When the utility is finished running, we can have a look at the smc3_inputs and smc3_outputs tables (if the option of All was used for Select Operation). First, here is a screenshot of the smc3_inputs table:

The next screenshot shows the smc3_outputs table in Optilogic’s SQL Editor. This is also a table with many columns as it contains origin and destination information, repeats the inputs of the utility, and contains details of the retrieved rates. Here, we are only showing the 3 most relevant columns: originname (the source DC), destinationname (the customer), and detailrate_1 which is the retrieved LTL rate:

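If you prefer to pull these columns programmatically rather than through the SQL Editor, a short sketch like the following works; the connection string is hypothetical:

```python
import pandas as pd
from sqlalchemy import create_engine

# Hypothetical connection string to the model database holding smc3_outputs
engine = create_engine("postgresql://user:password@host:5432/my_model")

# One retrieved LTL rate per origin-destination pair
rates = pd.read_sql(
    "SELECT originname, destinationname, detailrate_1 FROM smc3_outputs",
    engine,
)
print(rates)
```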
Now that we have used the SMC3 RateWare Utility to retrieve the LTL rates for our 9 origin-destination pairs, we need to configure the model to use them as they are currently only listed in the smc3_outputs custom table. We use the Lookups table (an Input table in the Functional Tables section) to create a lookup link between the smc3_outputs custom table and the Transportation Policies input table, as follows:

To finalize setting up the Lookup, we now apply it to the UnitCost field on the TransportationPolicies table, where we set the field to the name of the Lookup, see screenshot below. Now, when the model is run, the 1 transportation policy is expanded into the 9 origin-destination pairs it represents and the Unit Cost field is populated with the detailrate_1 value of the smc3_outputs table based on matching origin name, destination name, and product name between the 2 tables.

Lastly, we run a network optimization (NEO engine) on our small example model and look at the Optimization Flow Summary output table:

The optimization has correctly used DC2 as the source for CZ1, as it has the lowest rate to CZ1 of the 3 DCs (see the screenshot of the smc3_outputs table further above). The rate is 23.18, and for 10 units moved (FlowQuantity) this results in a TransportationCost of 231.80. Similarly, we can double-check that DC2 indeed has the cheapest rate to CZ3 as well, that DC3 has the cheapest rate to CZ2, and that the Transportation Costs are correctly calculated as the LTL rate * flow quantity.