Optilogic has developed the Cosmic Frog for Excel Application Builder (also referred to as App Builder in this documentation) to enable users to build basic Cosmic Frog for Excel Applications that interact directly with Cosmic Frog from within Excel, without needing to write any code. In the App Builder, users can build their own workflows using common actions like creating a new model, connecting to an existing model, importing & exporting data, creating & running scenarios, and reviewing outputs. Once a workflow has been established, the App can be deployed so it can be shared with other users. These other users do not need to build the workflow of the App again; they can just use the App as is. In this documentation we will take a user through the steps of a complete workflow build, including App deployment.
You can download the Cosmic Frog for Excel – App Builder from the Resource Library. A video showing in a nutshell how the App Builder is used is included; watching this video before reading further is recommended. After downloading the .zip file from the Resource Library and unzipping it on your local computer, you will find that it contains 2 folders: 1) Cosmic_Frog_For_Excel_App_Builder, which contains the App Builder itself and is what this documentation will focus on, and 2) Cosmic_Frog_For_Excel_Examples, which contains 3 examples of how the App Builder can be used. This documentation will not discuss these examples in detail; users are however encouraged to browse through them to get an idea of the types of workflows one can build with the App Builder.
The Cosmic_Frog_For_Excel_App_Builder folder contains 1 subfolder and 1 Macro-enabled Excel file (.xlsm):

When ready to start building your own first basic App, open the Cosmic_Frog_For_Excel_App_Builder_v1.xlsm file; the next section will describe the steps a user needs to take to start building.
When you open the Cosmic_Frog_For_Excel_App_Builder_v1.xlsm file in Excel, you will find there are 2 worksheets present in the workbook, Start and Workflow. The top of the Start worksheet looks like this:

Going to the Workflow worksheet and clicking on the Cosmic Frog tab in the ribbon, we can see the actions that are available to us to create our basic Cosmic Frog for Excel Applications:

We will now walk through building and deploying a simple App to illustrate the different Actions and their configurations. This workflow will: connect to a Greenfield model in our Optilogic account, add records to the Customers and CustomerDemand tables, create a new scenario with 2 new scenario items in it, run this new scenario, and then export the Greenfield Facility Summary output table from the Cosmic Frog model into a worksheet of the App. As a last step, we will deploy the App.
On the Workflow worksheet, we will start building the workflow by first connecting to an existing model in our Optilogic account:

The following screenshot shows the Help tab of the “Connect To Or Create Model Action”:

In the remainder of the documentation, we will not show the Help tab of each action. Users are however encouraged to use these to understand what the action does and how to configure it.
After creating an action, its details will be added to 2 columns in the Workflow worksheet, see screenshot below. The first action of the workflow will use columns A & B, the next action C & D, etc. When adding actions, the placement on the Workflow worksheet is automatic and the user does not need to do or change anything. Blue fields contain data that cannot be changed; white fields are user inputs when setting up the action and can also be changed in the worksheet itself.

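This column placement pattern (action n occupies columns 2n-1 and 2n) can be sketched as follows. This is an illustration of the pattern only, not code from the App Builder itself:

```python
# Illustrative sketch only: the n-th action in the workflow occupies
# worksheet columns 2n-1 and 2n (A & B for action 1, C & D for action 2, ...).

def action_columns(n: int):
    """Return the two worksheet column letters used by the n-th action (1-based)."""
    def letter(i):  # 1 -> A, 2 -> B, ..., 27 -> AA
        s = ""
        while i:
            i, r = divmod(i - 1, 26)
            s = chr(65 + r) + s
        return s
    return letter(2 * n - 1), letter(2 * n)

print(action_columns(1))  # ('A', 'B')
print(action_columns(2))  # ('C', 'D')
print(action_columns(4))  # ('G', 'H')
```

For example, the 4th action added in this walkthrough lands in columns G & H, consistent with the screenshots further below.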
The United States Greenfield Facility Selection model we are connecting to contains about 1.3k customer locations in the US which have demand for 3 products: Rockets, Space Suits, and Consumables. As part of this workflow, we will add 10 customers located in the province of Ontario in Canada to the Customers table and add demand for each of these customers for each product to the CustomerDemand table. The next 2 screenshots show the customer and customer demand data that will be added to this existing model.


First, we will use an Import Data action to append the new customers to the Customers table in the model we are connecting to:

Next, use the Import Data action again to upsert the data contained in the New_CustomerDemand worksheet into the CustomerDemand table in the Cosmic Frog model; this action will be added to columns E & F. After these 2 Import Data actions have been added, our workflow now looks like this:

Now that the new customers and their demand have been imported into the model, we will add several actions to create a new scenario where the new customers will be included. In this scenario, we will also remove the Max Number of New Facilities value, so the Greenfield algorithm can optimize the number of new facilities just based on the costs specified in the model. After setting up the scenario, an action will be added to run it.
Use the Create Scenario action to add a new scenario to the model:

Then, use 2 Create Item Actions to 1) include the Ontario customers and 2) remove the Max Number Of New Facilities value:


After setting up the scenario and its 2 items, the next step of the workflow will be to run it. We add a Run Scenario action to the workflow to do so:

The configuration of this action takes the following inputs:
We now have a workflow that connects to an existing US Greenfield model, adds Ontario customers and their demand to this model, then creates and runs a new scenario with 2 items in this Cosmic Frog model. After running the scenario, we want to export the Optimization Greenfield Facility Summary output table from the Cosmic Frog model and load it into a new worksheet in the App. We do so by adding an Export Data Action to the workflow:

After adding the above actions to the workflow, the Workflow worksheet now looks like the following 2 screenshots from column G onwards (columns A-F contain the first 3 actions as shown in a screenshot further above):

Columns G-H contain the details of the action that created the new ON Customers Cost Optimized scenario, and columns I-J & K-L contain the details of the actions that added the 2 scenario items to this scenario.

Columns M-N contain the details of the action that will run the scenario that was added and columns O-P those of the action that will export the selected output table (Optimization Greenfield Facility Summary) into the GF_Facility_Summary worksheet of the App.
To run the completed Workflow, all we need to do is click on the Run Workflow action and confirm we want to run it:

After kicking off the workflow, if we switch to the Start worksheet, details of the run and its progress are shown in rows 9-11:

Looking on the Optilogic Platform, we can also check the progress of the App run and the Cosmic Frog model changes:

Once the run is done, all 3 jobs will have their State changed to Done, unless an error occurred, in which case the State will say Error.
Checking the United States Greenfield Facility Selection model itself in the Cosmic Frog application on cosmicfrog.com:

Once the App is finished running, we see that a worksheet named GF_Facility_Summary was added to the App Builder:

There are several other actions that users of the App Builder can incorporate into a workflow or use to facilitate workflow building. We will cover these now. Feel free to skip ahead to the “Deploying the App” section if your workflow is complete at this stage.
Additional actions that can be incorporated into workflows are the Run Utility, Upload File, and Download File actions. The Run Utility action can be used to run a Cosmic Frog Utility (a Python script), which currently can be a Utility downloaded from the Resource Library or a Utility specifically built for the App.
There are currently 4 Utilities available in the Resource Library:

After downloading the Python file of the Utility you want to use in your workflow, you need to copy it into the working_files_do_not_change folder that is located in the same folder where you saved the App Builder. Now you can use it as part of the Run Utility action. In the example below, we will use the Python script from the Copy Map to a Model Resource Library Utility to copy a map and all its settings from one model (“United States Greenfield Facility Selection”, the model connected to in a previous action) to another (“European Greenfield Facility Selection”):

The parameters of the Copy Dashboard to a Model Utility are the same as those of the Copy Map to a Model Utility:
The Orders to Demand and Delete SaS Scenarios utilities do not have any parameters that need to be set, so the Utility Params part of the Run Utility action can be left blank when using these utilities.
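As a sketch of what parameter handling inside such a utility script can look like, the hypothetical snippet below parses two model-name parameters. The argument names (--source-model, --target-model) are invented for illustration only; check each utility's own documentation for its actual parameters:

```python
# Hypothetical sketch only: parameter parsing for a utility script.
# The argument names below are invented; real Resource Library utilities
# define their own parameters (or, like Orders to Demand, none at all).
import argparse

def parse_params(argv):
    parser = argparse.ArgumentParser(description="Copy Map to a Model (sketch)")
    parser.add_argument("--source-model", required=True)
    parser.add_argument("--target-model", required=True)
    return parser.parse_args(argv)

args = parse_params([
    "--source-model", "United States Greenfield Facility Selection",
    "--target-model", "European Greenfield Facility Selection",
])
print(args.source_model, "->", args.target_model)
```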
The Upload File action can be used to take a worksheet in the App Builder and upload it as a .csv file to the Optilogic platform:

Files that get uploaded to the Optilogic platform are placed in a specific working folder related to the App Builder, the name and location of which are shown in this screenshot:

The Download File action can be used to download a .txt file from the Optilogic platform and load it into a worksheet in the App:

Other actions that facilitate workflow building are the Move an Action, Delete an Action, and Run Actions actions, which will be discussed now. If the order of some actions needs to be changed, you do not need to remove and re-add them, you can use the Move an Action action to move them around:

It is also possible that an action needs to be removed from a Workflow. For this, the “Delete an Action” action can be used, rather than manually deleting it from the Workflow worksheet and trying to move other actions in its place:

Instead of running a complete workflow, it is also possible to only run a subset of the actions that are part of the workflow:

Once a workflow has been completed in the Cosmic Frog for Excel App Builder, it can be deployed so other users can run the same workflow without having to build it first. This section covers the Deployment steps.

The following message will come up after the App has been deployed:

Looking in the folder mentioned in this message, we see the following contents:


Congratulations on building & deploying your own Cosmic Frog for Excel App!
If you want to build Apps that go beyond what can be done using the App Builder, you can do so too. This may require some coding using Excel VBA, Python, and/or SQL. Detailed documentation walking through this can be found in this Getting Started with Cosmic Frog for Excel Applications article on Optilogic’s Help Center.
Teams is an exciting new feature set designed to enhance collaboration within Supply Chain Design, enabling companies to foster a more connected and efficient working environment. With Teams, users can join a shared workspace where all team members have seamless access to collective models and files. This ensures that every piece of work remains synchronized, providing a single source of truth for your data. When one team member updates a file, those changes instantly reflect for all other members, eliminating inconsistencies and ensuring that everyone stays aligned.
Beyond simply improving collaboration, Teams offers a structured and flexible way to organize your projects. Instead of keeping all your files and models confined to a personal account, you can now create distinct teams tailored to different projects, departments, or business functions. This means greater clarity and easier navigation between workspaces, ensuring that the right content is always at your fingertips.
Consider the possibilities:
Teams introduces a more intuitive and structured way to collaborate, organize, and access your work—ensuring that your team members always have the latest updates and a streamlined experience. Get started today and transform the way you work together!
This documentation contains a high-level overview of the Teams feature set, details the steps to get started, gives examples of how Teams can be structured, and covers best practices. More detailed documentation for Organization Administrators and Teams Users is available in the following help center articles:
The diagram below highlights the main building blocks of the Teams feature set:

At a high-level, these are the steps to start using the Teams feature set:
Here follow 5 examples of how teams can be structured, each with an explanation of why such a setup works well.
Please keep the following best practices in mind to ensure optimal use of the Teams feature set:
Once you have set up your teams and added content, you are ready to start collaborating and unlocking the full potential of Teams within Optilogic!
Let us know if you need help along the way—our support team (support@optilogic.com) has your back.
Depending on the type of supply chain one is modelling in Cosmic Frog and the questions being asked of it, it may be necessary to utilize some or all of the features that enable detailed production modelling. A few business case examples that will often include some level of detailed production modelling are:
In comparison, modelling a retailer who buys all its products from suppliers as finished goods does not require any production details to be added to its Cosmic Frog model. Hybrid models are also possible; think for example of a supermarket chain which manufactures its own branded products and buys other brands from its suppliers. Depending on the modelling scope, the production of the own-branded products may require using some of the detailed production features.
The following diagram shows a generalized example of production related activities at a manufacturing plant, all of which can be modelled in Cosmic Frog:

In this help article we will cover the inputs & outputs of Cosmic Frog’s production modelling features, while also giving some examples of how to model certain business questions. The model in Optilogic’s Resource Library that is mainly used for the screenshots in this article is the Multi-Year Capacity Planning model. There is a 20-minute video available with this model in the Resource Library, which covers the business case that is modelled and some detail of the production setup too.
To avoid making this document too repetitive, we will cover here some general Cosmic Frog functionality that applies to all Cosmic Frog technologies and is also used extensively for production modelling in Neo.
To only show tables and fields in them that can be used by the Neo network optimization algorithm, select Optimization in the Technologies Filter from the toolbar at the top in Cosmic Frog. This will hide any tables and fields that are not used by Neo and therefore simplifies the user interface.

Quite a few Neo related fields in the input and output tables will be discussed in this document. Keep in mind however that a lot of this information can also be found in the tooltips that are shown when you hover over the column name in a table, see following screenshot for an example. The column name, technology/technologies that use this field, a description of how this field is used by those algorithm(s), its default value, and whether it is part of the table’s primary key are listed in the tooltip.

There are a lot of fields with names that end in “…UOM” throughout the input tables. These are unit of measure fields and they all work similarly, so how they work is explained once here rather than for each individual table. A UOM field often appears to the immediate right of the field that it applies to, like for example Unit Value and Unit Value UOM in the screenshot above. In a UOM field you can type the Symbol of a unit of measure of the required Type, from the ones specified in the Units Of Measure input table. For example, in the screenshot above, the unit of measure Type for the Unit Value UOM field is Quantity. Looking in the Units Of Measure input table, we see there are 2 of these specified: Each and Pallet, with Symbol = EA and PLT, respectively. We can use either of these in this UOM field.
If we leave a UOM field blank, then the Primary UOM for that UOM Type, as specified in the Model Settings input table, will be used. For example, for the Unit Value UOM field in the screenshot above the tooltip says Default Value = {Primary Quantity UOM}. Looking this up in the Model Settings table shows us that this is set to EA (= each) in our current model. Let’s illustrate this with the following screenshots of 1) the tooltip for the Unit Value UOM field (located on the Products input table), 2) units of measure of Type = Quantity in the Units Of Measure input table, and 3) checking what the Primary Quantity UOM is set to in the Model Settings input table, respectively:



Note that only hours (Symbol = HR) is currently allowed as the Primary Time UOM in the Model Settings table. This means that if another Time UOM, like for example minutes (MIN) or days (DAY), is to be used, the individual UOM fields need to be utilized to set these. Leaving these blank would mean HR is used by default.
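The fallback behavior just described can be summarized in a small sketch. The dictionaries below are simplified stand-ins for the Units Of Measure and Model Settings tables, shown only to make the lookup order explicit; they are not the actual Cosmic Frog schema:

```python
# Illustrative sketch only; not the actual Cosmic Frog schema.
MODEL_SETTINGS = {
    "Primary Quantity UOM": "EA",  # from the Model Settings input table
    "Primary Time UOM": "HR",      # hours; currently the only allowed value
}

UNITS_OF_MEASURE = {  # Symbol -> Type, as in the Units Of Measure table
    "EA": "Quantity", "PLT": "Quantity",
    "HR": "Time", "MIN": "Time", "DAY": "Time",
}

def resolve_uom(field_value, uom_type):
    """Return the UOM symbol to use for a field of the given UOM Type."""
    if field_value:  # a symbol was typed into the UOM field
        if UNITS_OF_MEASURE.get(field_value) != uom_type:
            raise ValueError(f"{field_value} is not a {uom_type} UOM")
        return field_value
    # blank field: fall back to the primary UOM from Model Settings
    return MODEL_SETTINGS[f"Primary {uom_type} UOM"]

print(resolve_uom("PLT", "Quantity"))  # PLT (explicit symbol wins)
print(resolve_uom(None, "Quantity"))   # EA  (primary quantity UOM)
print(resolve_uom(None, "Time"))       # HR  (primary time UOM)
```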
With few exceptions, all tables in Cosmic Frog contain both a Status field and a Notes field. These are often used extensively to add elements to a model that are not currently part of the supply chain (commonly referred to as the “Baseline”), but are to be included in scenarios, either because they will definitely become part of the future supply chain or to see whether there are benefits to optionally including them going forward. In these cases, the Status in the input table is set to Exclude and the Notes field often contains a description along the lines of ‘New Market’, ‘New Line 2026’, ‘Alternative Recipe Scenario 3’, ‘Faster Bottling Plant5 China’, ‘Include S6’, etc. When creating scenario items for setting up scenarios, the table can then be filtered for Notes = ‘New Market’ while setting Status = ‘Include’ for those filtered records. We will not call out these Status and Notes fields in each individual input table in the remainder of this document, but we do encourage users to use them extensively as they make creating scenarios very easy. When exploring any Cosmic Frog models in the Resource Library, you will notice the extensive use of these fields too.
The following 2 screenshots illustrate the use of the Status and Notes fields for scenario creation: 1) shows several customers on the Customers table, where CZ_Secondary_1 and CZ_Secondary_2 are not currently customers that are being served, but we want to explore what it takes to serve them in future; their Status is set to Exclude and the Notes field contains ‘New Market’; 2) a scenario item called ‘Include New Market’ shows that the Status of Customers where Notes = ‘New Market’ is changed to ‘Include’.


The Status and Notes fields are also often used for the opposite where existing elements of the current supply chain are excluded in scenarios in cases where for example manufacturing locations, products or lines are going to go offline in the future. To learn more about scenario creation, please see this short Scenarios Overview video, this Scenario Creation and Maps and Analytics training session video, this Creating Scenarios in Cosmic Frog help article, and this Writing Scenario Syntax help article.
The model that is mostly used for screenshots throughout this help article is, as mentioned above, the Multi-Year Capacity Planning model that can be found here in the Resource Library. This model represents a European cheese supply chain which is used to make investment decisions around the growth of a non-mature market in Eastern Europe over a 5-year modelling horizon. New candidate DCs are considered to serve the growing demand in Eastern Europe; the model optimizes which ones are optimal to open and during which of the 5 years of the modelling horizon. The production setup in the model uses quite a few of the detailed modelling features, which will be discussed in detail in this document:
Note that in the screenshots of this model, the columns have sometimes been re-ordered, so you may see a different order in your Cosmic Frog UI when opening the same tables of this model.
The 2 screenshots below show the Products and Facilities input tables of this model in Cosmic Frog:

Note that the naming convention of the products lends itself to easy filtering of the table for the raw materials, bulk materials, and finished goods due to the RAW_, BULK_, and FG_ prefixes. This makes the creation of groups and setting up of scenarios quick and easy.

Note that similar to the naming convention of the products, the facilities are also named with prefixes that facilitate filtering of the facilities so groups and scenarios can quickly be created.
Here is a visual representation of the model with all facilities and customers on the map:

The specific features in Cosmic Frog that allow users to model and optimize production processes of varying levels of complexity while using the network optimization engine (Neo) include the following input tables:

We will cover all these production related input tables to some extent in this article, starting with a short description of each of the basic single-period input tables:
These 4 tables feed into each other as follows:

A couple of notes on how these tables work together:
For all products that are explicitly modelled in a Cosmic Frog model, there needs to be at least 1 policy specified on the Production Policies table or the Supplier Capabilities table, so there is at least 1 origin location for each. This applies to, for example, raw materials, intermediates, bulk materials, and finished goods. The only exception is by-products: these can have Production Policies associated with them, but do not necessarily need to (more on this when discussing Bills of Materials further below). From the 2 screenshots below of the Production Policies table, it becomes clear that, depending on the type of product and the level of detail that is needed for the production elements of the supply chain, production policies can be set up quite differently: some use only a few of the fields, while others use more/different fields.

A couple of notes:
Next, we will look at a few other records on the Production Policies input table:

We will take a closer look at the BOMs and Processes specified on these records when discussing the Bills of Materials and Processes tables further below.
Note that the above screenshot was just for PLT_1 and mozzarella, there are similar records in this model for the other 4 cheeses which can also be made at PLT_1, plus similar records for all 5 cheeses at PLT_2, which includes a new potential production line for future expansion too.
Other fields on the Production Policies table that are not shown in the above 2 screenshots are:
The recipes of how materials/products of different stages convert into each other are specified on the Bills of Materials (BOMs) table. Here the BOMs for the blue cheese (_BLU) products are shown:

Note that the above specified BOMs are both location and end-product agnostic. Their names suggest that they are specific to making the BULK_BLU and FG_BLU products, but only associating these BOMs with a Production Policy which has Product Name set to these makes this connection. We can use these BOMs at any location to which they apply. Filtering the Production Policies table for the BULK_BLU and FG_BLU products, we can see that 1) BOM_BULK_BLU is indeed used to make BULK_BLU and BOM_FG_BLU to make FG_BLU, and 2) the same BOMs are used at PLT_1 and PLT_2:

It is of course possible that the same product uses a different BOM at a different location. In this case, users can set up multiple BOMs for this product on the BOMs table and associate the correct one at the correct location in the Production Policies table. Choosing a naming convention for the BOM Names that includes the location name (or a code to indicate it) is recommended.
The screenshot above of the Bills of Materials table only shows records with Product Type = Component. Components are input into a BOM and are consumed by it when producing the end-product. Besides Component, Product Type can also be set to End Product or Byproduct. We will explain these 2 product types through the examples in this following screenshot:

Notes:
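As a minimal sketch of how these product types interact, the snippet below “explodes” a BOM for a production quantity: Components are consumed and Byproducts are produced alongside the End Product. The quantities and the WHEY byproduct are invented for illustration; the real values live in the Bills Of Materials table:

```python
# Illustrative sketch only: BOM records as (product, product type, quantity
# per 1 unit of end product). Quantities and the byproduct are invented.
BOM_FG_BLU = [
    ("BULK_BLU", "Component",   1.05),  # consumed to make 1 unit of FG_BLU
    ("FG_BLU",   "End Product", 1.0),   # the product this BOM results in
    ("WHEY",     "Byproduct",   0.2),   # produced as a side effect
]

def explode(bom, end_product_qty):
    """Return (components consumed, byproducts produced) for a production run."""
    consumed, produced = {}, {}
    for product, ptype, qty in bom:
        if ptype == "Component":
            consumed[product] = qty * end_product_qty
        elif ptype == "Byproduct":
            produced[product] = qty * end_product_qty
    return consumed, produced

consumed, produced = explode(BOM_FG_BLU, 500)
print(consumed)  # {'BULK_BLU': 525.0}
print(produced)  # {'WHEY': 100.0}
```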
On the Processes table, production processes of varying levels of complexity can be set up, from simple 1-step processes without any work centers, to multi-step ones that specify costs and processing rates and use different work centers for each step. The processes specified in the Multi-Year Capacity Planning model are relatively straightforward:

Let us also look at an example in a different model, which contains somewhat more complex processes for a car manufacturer, where the production process can roughly be divided into 3 steps:

Note that, like BOMs, Processes can in theory be both location and end-product agnostic. However:
Other fields on the Processes table that are not shown in the above 2 screenshots are:
If it is important to capture costs and/or capacities of equipment like production lines, tools, machines that are used in the production process, these can be modelled by using work centers to represent the equipment:

In the above screenshot, 2 work centers are set up at each plant: 1 existing work center and 1 new potential work center. The new work centers (PLT_1_NewLine and PLT_2_NewLine) have Work Center Status set to Closed, so they will not be considered for inclusion in the network when running the Baseline scenario. In some of the scenarios in the model, the Work Center Status of these 2 lines is changed to Consider and in these scenarios one of the new lines or both can be opened and used if it is optimal to do so. The scenario item that makes this change looks like this:

Next, we will also look at a few other fields on the Work Centers table that the Multi-Year Capacity Planning model utilizes:

In theory, it can be optimal for a model to open a considered potential work center in one period of the model (say 2024 in this model), close it again in a later period (e.g. 2025), for it then to open again later (e.g. 2026), etc. In this case, Fixed Startup or Fixed Closing Costs would be applied each time the work center was opened or closed, respectively. This type of behavior can be undesirable and is by default prevented by a Neo Run Parameter called “Open Close At Most Once”, as shown in this screenshot:

After clicking on the Run button, the Run screen comes up. The “Open Close At Most Once” parameter can be found in the Neo (Optimization) Parameters section. By default, it is turned on, meaning that a work center or facility is only allowed to change state once during the model’s horizon, i.e. once from closed to open if the Initial State = Potential or once from open to closed if the Initial State = Existing. There may however be situations where opening and/or closing of work centers and facilities multiple times during the model horizon is allowable. In that case, the Open Close At Most Once parameter can be turned off.
Other fields on the Work Centers table that are not shown in the above screenshots are:
Fixed Operating, Fixed Startup, and Fixed Closing Costs can be stepped costs. These can be entered into the fields on the Work Centers input table directly, or can be specified on the Step Costs input table and then used in those cost fields on the Work Centers table. An example of stepped costs in the Step Costs input table is shown in the screenshot below, where the costs capture the weekly shift cost for 1 person (note that these stepped costs are not in the Multi-Year Capacity Planning model in the Resource Library; they are shown here as an additional example):

To set for example the Fixed Operating Cost to use this stepped cost, type “WC_Shifts” into the Fixed Operating Cost field on the Work Centers input table.
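A shift-based stepped cost like this can be sketched as a simple lookup. The thresholds and cost values below are invented for illustration and are not taken from any Resource Library model:

```python
# Illustrative sketch only: a stepped cost as (upper bound, cost) pairs,
# e.g. each additional weekly shift adds a fixed cost once throughput
# passes the previous step's capacity. All numbers are invented.
WC_SHIFTS = [
    (40_000, 1_000.0),   # 1 shift covers up to 40k units
    (80_000, 2_000.0),   # 2 shifts cover up to 80k units
    (120_000, 3_000.0),  # 3 shifts cover up to 120k units
]

def step_cost(quantity):
    """Return the stepped fixed cost incurred for a given throughput."""
    for upper, cost in WC_SHIFTS:
        if quantity <= upper:
            return cost
    raise ValueError("quantity exceeds the largest defined step")

print(step_cost(35_000))  # 1000.0
print(step_cost(75_000))  # 2000.0
```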
Many of the input tables in Cosmic Frog have a Multi-Time Period equivalent, which can be used in models that have more than 1 period. These tables enable users to make changes that only apply to specific periods of the model. For example, to:
The multi-time period tables are copies of their single-period equivalents, with a few columns added and removed (we will see examples of these in screenshots further below):
Notes on switching status of records through the multi-period tables and updating records partially:

Three of the 4 production specific input tables that have been discussed above have a multi-time period equivalent: Production Policies, Processes, and Work Centers. There is no equivalent for the Bills Of Materials input table, as BOMs are only used when they are associated with records in the Production Policies table. Using different BOMs during different periods can be achieved by associating those BOMs on the single-period Production Policies table and setting their Status to Include for the ones to be used in most of the periods, and to Exclude for the ones only to be included in certain periods / scenarios. Then add the records for which the Status needs to be switched to the Production Policies Multi-Time Period input table (we will walk through an example of this using screenshots in the next section).
The 3 production specific multi-time period input tables have all the same fields as their single-period equivalents, with the addition of the Period Name field and an additional Status field. We will not discuss each multi-time period table and all its fields in detail here, but rather give a few examples of how each can be used.
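The partial-update behavior can be sketched as follows: for each period, a field takes the multi-time period value when a matching record exists, and otherwise falls back to the single-period record. Record and field names below are simplified stand-ins, not the actual table schema:

```python
# Illustrative sketch only: a single-period record plus per-period overrides.
single_period = {"status": "Include", "unitcost": 0.01}

multi_period = {  # Period Name -> fields overridden for that period
    "YEAR4": {"status": "Exclude"},
    "YEAR5": {"status": "Exclude"},
}

def effective(period):
    """Return the effective record for the given period (sketch)."""
    record = dict(single_period)                 # start from single-period values
    record.update(multi_period.get(period, {}))  # apply any per-period override
    return record

print(effective("YEAR1"))  # {'status': 'Include', 'unitcost': 0.01}
print(effective("YEAR4"))  # {'status': 'Exclude', 'unitcost': 0.01}
```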
Note that from this point onwards the Multi-Year Capacity Planning model was modified and added to for purposes of this help article, the version in the Resource Library does not contain the same data in the Multi-Time Period input tables and production specific Constraint tables that is shown in the screenshots below.
This first example on the Production Policies Multi-Time Period input table shows how the production of the cheddar finished good (FG_CHE) is prevented at plant 1 (PLT_1) in years 4 and 5 of the model:

In the following example, an alternative BOM to make feta (FG_FET) is added and set to be used at Plant 2 (PLT_2) during all periods instead of the original BOM. This is set up to be used in a scenario, so the original records need to be kept intact for the Baseline and other scenarios. To set this up, we need to update the Bills Of Materials, Production Policies, and Production Policies Multi-Time Period table, see the following screenshots and explanations:

On the Bills Of Materials input table, all we need to do is add the records for the new BOM that results in FG_FET. It has 2 records, both named ALTBOM_FG_FET, and instead of using only BULK_FET as the component which is what the original BOM uses, it uses a mix of BULK_FET and BULK_BLU as its components.
Next, we first need to associate this new BOM through the Production Policies table:

Lastly, the records that need to be added to the Production Policies Multi-Time Period table are the following 4, which have the same values for the key columns as the 4 records in the above screenshot of the single-period Production Policies input table, which contain all the possible ways to produce FG_FET at PLT_2:


In the following example, we want to change the unit cost on 2 of the processes: at Plant 1 (PLT_1), the cost on the new potential line needs to be decreased to 0.005 for cheddar cheese (CHE) and increased to 0.015 for Swiss cheese (SWI). This can be done by using the Processes Multi-Time Period input table:

Note that there is also a Work Center Name field on the Processes Multi-Time Period table (not shown in the screenshot). As this is not a key field on the Processes input tables, it can be left blank here on the multi-time period table. This field will not be changed, and the value from the single-period table's Work Center Name field will be used for these 2 records.
In the following example, we want to evaluate whether upgrading the existing production lines at both plants from the 3rd year of the modelling horizon onwards (giving them a higher throughput capacity at a somewhat higher fixed operating cost) is a good alternative to opening one of the potential new lines at either plant. First, we add a new periods group to the model to set this up:

On the Groups table, we set up a new group named YEARS3-5 (Group Name) that is of Group Type = Periods and has 3 members: YEAR3, YEAR4 and YEAR5 (Member Name).

Cosmic Frog contains multiple tables through which different types of constraints can be added to network optimization (Neo) models. A constraint limits what the model can do in a certain part of the network. These limits can, for example, be lower or upper limits on the amount of flow between certain locations or certain echelons, the amount of inventory of a certain product or product group at a specific location or network wide, the amount of production of a certain product or product group at a specific location or network wide, etc. In this section the 3 production-specific constraints tables will be covered: Production Constraints, Production Count Constraints, and Work Center Count Constraints.
A couple of general notes on all constraints tables:
In this example, we want to add constraints to the model that limit the production of all 5 finished goods together to 90,000 units. Both plants have this same upper production limit across the finished goods, and the limit applies to each year of the modelling horizon (5 yearly periods).

Note that there are more fields on the Production Constraints input table which are not shown in the above screenshot. These are:
In this example, we want to limit the number of products that are produced at PLT_1 to a maximum of 3 (out of the 5 finished goods). This limit applies over the whole 5-year modelling period, meaning that in total PLT_1 can produce no more than 3 finished goods:

Again, note there are more fields on the Production Count Constraints input table which are not shown in the above screenshot. These are:
Next, we will show an example of how to require that at least 3, but no more than 5, out of 8 candidate work centers are opened. These limits apply across all 5 yearly periods in the model together and across all facilities present in the model.

Again, there are more fields on the Work Center Count Constraints table that are not shown in the above screenshot:
After running a network optimization using Cosmic Frog’s Neo technology, production specific outputs can be found in several of the more general output tables, like the Optimization Network Summary, and the Optimization Constraints Summary (if any constraints were applied). Outputs more focused on just production can be found in 4 production specific output tables: the Optimization Production Summary, the Optimization Bills Of Material Summary, the Optimization Process Summary, and the Optimization Work Center Summary. We will cover these tables here, starting with the Optimization Network Summary.
The following screenshot shows the production specific outputs that are contained in the Optimization Network Summary output table:

Other production related fields on this table which are not shown in the screenshot above are:
The Optimization Production Summary output table has a record with the production details for each product that was produced as part of the model run:

Other fields on this output table which are not shown in the screenshot are:
The details of how many components were used and how much by-product was produced as a result of any bills of materials used in the production process can be found on the Optimization Bills Of Material Summary output table:

Note that, aside from what can possibly be inferred from the BOM Name, the Optimization Bills Of Material Summary output table does not list what the end product of a BOM is or how much of it is produced. Those details are contained in the Optimization Production Summary output table discussed above.
Other fields on this output table which are not shown in the screenshot are:
The details of all the steps of any processes used as part of the production in the Neo network optimization run can be found in the Optimization Process Summary, see these next 2 screenshots:


Other fields on this output table which are not shown in the screenshots are:
For each Work Center that has its Status set to Include or Consider, a record for each period of the model can be found in the Optimization Work Center Summary output table. It summarizes if the Work Center was used during that period, and, if so, how much and at what cost:

The following screenshot shows a few more output fields on the Optimization Work Center Summary output table that have non-zero values in this model:

Other fields on this output table which are not shown in the screenshots are:
For all constraints in the model, the Optimization Constraint Summary can be a very handy table to check if any constraints are close to their maximum (or minimum, etc.) value, to understand where the current and future bottlenecks are or likely will be. The screenshot below shows the outputs on this table for a production constraint that is applied at each of the 3 suppliers, where none of them can produce more than 1 million units of RAW_MILK in any 1 year. In the screenshot we specifically look at the Supplier named SUP_3:

Other fields on this output table which are not shown in the screenshots are:
There are a few other output tables whose main outputs are not related to production, but which still contain several fields that result from production. These are:
In this help article we have covered how to set up alternative Work Centers at existing locations and use the Work Center Status and Initial State fields to evaluate if including these, and if so from what period onwards, will be optimal. We have also covered how Work Center Count Constraints can be used to pick a certain number of Work Centers to be opened/used from a set of multiple candidates, either at 1 location or multiple. Here we also want to mention that Facility Count Constraints can be used when making decisions at the plant level. Say that based on market growth in a certain region, a manufacturer decides a new plant needs to be built. There are 3 candidate locations for the plant, from which the optimal one needs to be picked. This can be set up as follows in Cosmic Frog:
A couple of alternative approaches to this are:
As mentioned above in the section on the Bills Of Materials input table, it is possible to set up a model where there is demand for a product that is the by-product resulting from a BOM. This does require some additional setup, which the following walks through, while also showcasing how the model can be used to determine how much of any flexible demand for this by-product to fulfill. The screenshots show the set-up of a very simple example model built for this specific purpose.

On the Products table, besides the component (for which there also is demand in this model) that goes into any BOM, we also specify:

The demand for the 3 products is set up on the Customer Demand table, and we notice that 1) there is demand for the Component, the End Product, and the By-Product, and 2) the Demand Status for ByProduct_1 is set to Consider, which means it does not need to be fulfilled; it will be (partially) fulfilled if it is optimal to do so. (For Component_1 and EndProduct_1 the Demand Status field is left blank, which means the default value of Include will be used.)

EndProduct_1 is made through a BOM which consumes Component_1 and also makes ByProduct_1 as a by-product. For this we need to set up a BOM:

Next, on the Production Policies table, we see that Component_1 can be created without a BOM, and:
In reality, these 2 production policies result in the same consumption of Component_1 and the same production amounts of EndProduct_1 and ByProduct_1. Both need to be present, however, in order to also be able to have demand for ByProduct_1 in the model.
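The material balance that results from running a BOM like this can be sketched in a few lines of Python. This is an illustrative sketch only, not Cosmic Frog's implementation, and the 1:1:1 quantities are assumed for this simple example:

```python
# Illustrative sketch of the BOM material balance in this example model:
# producing EndProduct_1 consumes Component_1 and yields ByProduct_1.
# The per-unit quantities of 1.0 are assumptions for illustration.

def run_bom(qty_end_product, component_per_unit=1.0, byproduct_per_unit=1.0):
    """Return (units of Component_1 consumed, units of ByProduct_1 produced)."""
    return (qty_end_product * component_per_unit,
            qty_end_product * byproduct_per_unit)

consumed, byproduct = run_bom(200)
print(consumed)   # 200.0 units of Component_1 consumed
print(byproduct)  # 200.0 units of ByProduct_1 produced
```

Either production policy above would drive this same balance; the duplicate policy only exists so ByProduct_1 can also appear as a demanded product.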
Other model elements that need to be set up are:
Three scenarios were run for this simple example model with the only difference between them the Unit Price for ByProduct_1: Baseline (price of ByProduct_1 = 3), PriceByproduct1 (Unit Price of ByProduct_1 = 1), PriceByproduct2 (Unit Price of ByProduct_1 = 2). Let’s review some of the outputs to understand how this Unit Price affects the fulfillment of the flexible demand for ByProduct_1:

The high-level costs, revenues, profit and served/unserved demand outputs by scenario can be found on the Optimization Network Summary output table:

On the Optimization Production Summary output table, we see that all 3 scenarios used BYP_BOM for the production of EndProduct_1 and ByProduct_1; the other BOM (FG_BOM) could also have been picked, and the overall results would have been the same.
As the Optimization Production Summary only shows the production of the end products, we will also have a look at the Optimization Bills Of Material Summary output table:

Lastly, we will have a look at the Optimization Inventory Summary output table:

Note that had the demand for ByProduct_1 been set to Include rather than Consider in this example model, all 3 scenarios would have produced 100 units of it to fulfill the demand, and as a result have produced 200 units of EndProduct_1. 100 of those would have been used to fulfill the demand for EndProduct_1 and the other 100 would have stayed in inventory, as we saw in the Baseline scenario above.
Finding problems with any Cosmic Frog model’s data has just become easier with the release of the Integrity Checker. This tool scans all tables or a selected table in a model and flags any records with potential issues. Field level checks to ensure fields contain the right type of data or a valid value from a drop-down list are included, as are referential integrity checks to ensure the consistency and validity of data relationships across the model’s input tables.
In this documentation we will first cover the Integrity Checker tool’s scope, how to run it, and how to review its results. Next, we will compare the Integrity Checker to other Cosmic Frog data validation tools, and we will wrap up with several tips & tricks to help users make optimal use of the tool.
The Integrity Checker extends cell validation and data entry helper capabilities to help users identify a range of issues relating to referential integrity and data types before running a model. The following types of data and referential integrity issues are checked for when the Integrity Checker is run:

Here, we provide a high-level description for each of these 4 categories; in the appendix at the end of this help center article more details and examples for each type of check are given. From left to right:
The Integrity Checker can be accessed in two ways while in Cosmic Frog’s Data module: from the pane on the right-hand side that also contains Model Assistant and Scenario Errors or from the Grid drop-down menu. The latter is shown in the next screenshot:

*Please note that in this first version of the Integrity Checker, the Inventory Policies and Inventory Policies Multi-Time Period tables are not included in any checks the Integrity Checker performs. All other tables are.
The second way to access the Integrity Checker is, as mentioned above, from the pane on the right-hand side in Cosmic Frog:

If the Integrity Checker has been run previously on a model, opening it again will show the previous results and gives users the option to re-run it by clicking the “Rerun Check” button, which we will see in screenshots further below.
After starting the Integrity Checker in one of the 2 ways described above, a message indicating it is starting will appear in the Integrity Checker pane on the right-hand side:

While the Integrity Checker is running, the status of the run will be continuously updated, while results will be added underneath as checks on individual tables complete. Only tables which have errors in them will be listed in the results.

Once the Integrity Checker run is finished, its status changes to Completed:

Users can see the errors identified by the Integrity Checker by clicking on one of the table cards which will open the table and the Integrity Checker Errors table beneath it:

Clicking on a record in the Integrity Checker Errors table will filter the table above (here the Transportation Policies table) down to the record(s) with that error:

Users can go through each record in the Integrity Checker Errors table at the bottom and filter the table above down to the associated records to review the errors and possibly fix them. In the next screenshot, the user has moved on to the second record in the Integrity Checker Errors table:

We will look at one more error, the one that was found on the Products table:

Finally, the following screenshot shows what it looks like when the Integrity Checker is run on an individual table and no errors are found:

There are additional tools in Cosmic Frog which can help with finding problems in the model’s data and overall construction, the table below gives an overview of how these tools compare to each other to help users choose the most suitable one for their situation:
Please take note of the following so you can make optimal use of the Integrity Checker capabilities:


We saw the following diagram earlier, in the Integrity Checker Scope section. Here we will expand on each of these categories and provide examples.

From left to right:
Note that the numeric and data type checks sound similar, but they are different: a value in a field can pass the data type check (e.g. a double field contains the value -2000), but not the numeric check (a latitude field can only contain values between -90 and 90, so -2000 would be invalid).
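The distinction between these two checks can be made concrete with a small sketch. This is purely illustrative and is not the Integrity Checker's actual code; the function names are made up for this example:

```python
# Hypothetical sketch of the difference between a data type check and a
# numeric (range) check, using a latitude field as the example. This is
# NOT the Integrity Checker's implementation; names are illustrative.

def data_type_check(value) -> bool:
    """Passes if the value can be interpreted as a double."""
    try:
        float(value)
        return True
    except (TypeError, ValueError):
        return False

def numeric_check_latitude(value) -> bool:
    """Passes only if the value is a valid latitude (between -90 and 90)."""
    return data_type_check(value) and -90 <= float(value) <= 90

print(data_type_check(-2000))         # True: -2000 is a valid double
print(numeric_check_latitude(-2000))  # False: outside the latitude range
print(numeric_check_latitude(45.5))   # True: valid double AND in range
```

So a record can be flagged by the numeric check even though its field passed the data type check.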
We hope you will find the Integrity Checker to be a helpful additional tool to facilitate your model building in Cosmic Frog! For any questions, please contact Optilogic support on support@optilogic.com.
In a supply chain model, sourcing policies describe how network components create and order necessary materials. In Cosmic Frog, sourcing rules & policies appear in two different table categories:


In this section, we will discuss how to use these Sourcing policy tables to incorporate real-world behavior. In the sourcing policy tables we define 4 different types of sourcing relationships:
First we will discuss the options users have for the simulation policy logic used in these 4 tables; the last section covers the other simulation-specific fields that can be found on these sourcing policy tables.
Customer fulfillment policies describe which supply chain elements fulfill customer demand. For a Throg (Simulation) run, there are 3 different policy types that we can select in the “Simulation Policy” column:
If “By Preference” is selected, we can provide a ranking describing which sites we want to serve customers for different products. We can describe our preference using the “Simulation Policy Value” column.
In the following example we are describing how to serve customer CZ_CA’s demand. For Product_1, we prefer that demand is fulfilled by DC_AZ. If that is not possible, then we prefer DC_IL to fulfill demand. We can provide rankings for each customer and product combination.
Under this policy, the model will source material from the highest ranked site that can completely fill an order. If no sites can completely fill an order, and if partial fulfillment is allowed, the model will partially fill orders from multiple sources in order of their preference.

If “Single Source” is selected, the customer must receive the given product from 1 specific source, 1 of the 3 DCs in this example.
The “Allocation” policy is similar to the “By Preference” policy, in that it sources from sites in order of a preference ranking. The “Allocation” policy, however, does not look to see whether any sites can completely fill an order before doing partial fulfillment. Instead, it will source as much as possible from source 1, followed by source 2, etc. Note that the “Allocation” and “By Preference” policies will only be distinct if partial fulfillment is allowed for the customer/product combination.

Consider the following example: customer CZ_MA can source the 3 products it places orders for from 3 DCs using the By Preference simulation policy. For each product the order of preference is set the same: DC_VA is the top choice, then DC_IL, and DC_AZ is the third (last) choice. Also note that in the Customers table, CZ_MA has been configured so that partially filling orders and line items is allowed for this customer.

The first order of the simulation is one that CZ_MA places (screenshot from the Customer Orders table), it orders 20 units of Product_1, 600 units of Product_2, and 160 units of Product_3:

The inventory at the DCs for the products at the time this order comes in is the same as the initial inventory, as this customer order is the first event of the simulation:

When the simulation policy is set to By Preference, we will look to fill the entire order from the highest priority source possible. The first choice is DC_VA, so we check its inventory: it has enough inventory to fill the 20 units of Product_1 (375 units in stock) and the 160 units of Product_3 (500 units in stock), but not enough to fill the 600 units of Product_2 (150 units in stock). Since the By Preference policy prefers to single source, it looks at the next priority source, DC_IL. DC_IL does have enough inventory to fulfill the whole order, as it has 750 units of Product_1, 1000 units of Product_2, and 300 units of Product_3 in stock.
Now, if we change all the By Preference simulation policies to Allocation via a scenario and run this scenario, the outcomes are different. In this case, as many units as possible are sourced from the first choice DC, DC_VA in this case. This means sourcing 20 units of Product_1, 150 units of Product_2 (all that are in stock), and 160 units of Product_3 from DC_VA. Next, we look at the second choice source, DC_IL, to see if it can fill the rest of the order that DC_VA cannot: the 450 units left of Product_2, which DC_IL does have enough inventory to fill. These differences in sourcing decisions between these 2 scenarios can, for example, be seen in the Simulation Shipment Report output table:
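The By Preference versus Allocation behaviors from this worked example can be sketched in Python. This is an illustrative sketch only, not Throg's actual implementation; the DC_AZ inventory numbers are assumed, as they are not given in the example:

```python
# Illustrative sketch of By Preference vs Allocation sourcing, using the
# inventory numbers from the example above (DC_AZ quantities assumed).
# This is NOT Throg's implementation.

inventory = {  # units in stock per (DC, product)
    "DC_VA": {"Product_1": 375, "Product_2": 150,  "Product_3": 500},
    "DC_IL": {"Product_1": 750, "Product_2": 1000, "Product_3": 300},
    "DC_AZ": {"Product_1": 200, "Product_2": 400,  "Product_3": 100},  # assumed
}
preference = ["DC_VA", "DC_IL", "DC_AZ"]  # most preferred first
order = {"Product_1": 20, "Product_2": 600, "Product_3": 160}

def by_preference(order, inventory, preference):
    # Fill the whole order from the highest-ranked source that can cover it.
    for dc in preference:
        if all(inventory[dc][p] >= q for p, q in order.items()):
            return {dc: dict(order)}
    return None  # would fall back to partial fulfillment if allowed

def allocation(order, inventory, preference):
    # Take as much as possible from each source, in order of preference.
    remaining = dict(order)
    plan = {}
    for dc in preference:
        for p, q in list(remaining.items()):
            take = min(q, inventory[dc][p])
            if take > 0:
                plan.setdefault(dc, {})[p] = take
                remaining[p] -= take
            if remaining[p] == 0:
                del remaining[p]
    return plan

print(by_preference(order, inventory, preference))  # whole order from DC_IL
print(allocation(order, inventory, preference))     # split: DC_VA + DC_IL
```

Running this reproduces the narrative above: By Preference sends the whole order to DC_IL, while Allocation drains DC_VA first and backfills the remaining 450 units of Product_2 from DC_IL.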

Replenishment policies describe how internal (i.e. non-customer) supply chain elements source material from other internal sources. For example, they might describe how a distribution center gets material from a manufacturing site. They are analogous to customer fulfillment policies, except instead of requiring a customer name, they require a facility name.

Procurement policies describe how internal (i.e. non-customer) supply chain elements source material from external suppliers. They are analogous to replenishment policies, except instead of using internal sources (e.g. manufacturing sites), they use external suppliers in the Source Name field.

Production policies allow us to describe how material is generated within our supply chain.

There are 4 simulation policies regarding production:
Besides setting the Simulation Policy on each of these Sourcing Policies tables, each has several other fields that the Throg Simulation engine uses as well, if populated. All 4 Sourcing Policies tables contain a Unit Cost and a Lot Size field, plus their UOM fields. The following screenshot shows these fields on the Replenishment Policies table:

The Customer Fulfillment Policies and Replenishment Policies tables both also have an Only Source From Surplus field which can be set to False (default behavior when not set) or True. When set to True, only sources which have available surplus inventory are considered as the source for the customer/facility – product combination. What is considered surplus inventory can be configured using the Surplus fields on the Inventory Policies input table.
Finally, the Production Policies table also has following additional fields:
Inventory policies describe how inventory is managed across facilities in our supply chain. These policies can include how and when to replenish, how stock is picked out of inventory, and many other important rules.
In general, we add inventory policies using the Inventory Policies table in Cosmic Frog.

In this documentation we will cover the types of inventory simulation policies available and also other settings contained in the Inventory Policies table.
An (R,Q) policy is a commonly used inventory management approach. Here, when inventory drops below a value of R units, the policy is to order Q units. In Cosmic Frog, when an (R,Q) policy is selected, we can define R and Q in “SimulationPolicyValue1” and “SimulationPolicyValue2”, respectively. We can define the unit of measure (e.g. pallets, volume, individual units, etc.) for both parameters in their corresponding simulation policy value UOM column.
In the following example, MFG_STL has an (R,Q) inventory policy of (100,1900) for Product_2, measured in terms of individual units (i.e. “each”).

(s,S) policies are like (R,Q) policies in that they define a reorder point and how much to reorder. In an (s,S) policy, when inventory is below s units, the policy is to “order up to” S units. In other words, if x is the current inventory level and x < s, the policy is to order (S - x) units of inventory.
In the example below, DC_VA has an (s,S) inventory policy of (150,750) for Product_1. If inventory dips below 150, the policy is to order so that inventory would replenish to 750 units.
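The difference between the two reorder rules can be sketched with the parameter values from these examples, the (100,1900) policy at MFG_STL and the (150,750) policy at DC_VA. This is an illustrative sketch, not the simulation engine's code:

```python
# Minimal sketch of (R,Q) vs (s,S) reorder decisions, using the parameter
# values from the documentation's examples. Not Throg's implementation.

def rq_order(x, R=100, Q=1900):
    """(R,Q): when inventory x drops below R, order a fixed quantity Q."""
    return Q if x < R else 0

def ss_order(x, s=150, S=750):
    """(s,S): when inventory x drops below s, order up to S, i.e. S - x."""
    return S - x if x < s else 0

print(rq_order(90))   # 1900: below R=100, so order the fixed lot of 1900
print(ss_order(120))  # 630: below s=150, so order 750 - 120 to top up to S
print(ss_order(400))  # 0: at or above s, no order is placed
```

Note that the (R,Q) order quantity is always Q, while the (s,S) quantity varies with the current inventory level.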

(s,S) policies may also be referred to as (Min,Max) policies; both policy names are accepted in the Anura schema and both behave as described above.
A (T,S) inventory policy is like an (s,S) inventory policy in that whenever inventory is replenished, it is replenished up to level S. Under an (s,S) inventory policy, we check the inventory level in each period when making reorder decisions. In contrast, under a (T,S) inventory policy, the current inventory level is only checked every T periods. During one of these checks, if the inventory level is below S, then inventory is replenished up to level S.
In the example below, DC_VA manages Product_1 using a (T,S) inventory policy. The DC checks the inventory level every 5 days. If inventory is below 750 units during any of these checks, inventory is replenished up to 750 units.
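The periodic-review behavior of (T,S) can be sketched as follows, using T = 5 days and S = 750 from this example. This is assumed, simplified logic for illustration, not the simulation engine's code:

```python
# Sketch of a (T,S) periodic-review policy: inventory is only checked
# every T periods and topped up to S when below it. Illustrative only.

def ts_order(day, x, T=5, S=750):
    """Order up to S, but only on review days (every T days)."""
    if day % T != 0:
        return 0                 # not a review day: no reorder decision
    return S - x if x < S else 0

print(ts_order(day=3, x=200))   # 0: day 3 is not a review day
print(ts_order(day=5, x=200))   # 550: review day, top up to 750
print(ts_order(day=10, x=750))  # 0: review day, but already at S
```

The contrast with (s,S) is that a low inventory level on a non-review day triggers nothing; the order only happens at the next check.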

As the name suggests, a Do Nothing inventory policy does not trigger any replenishment orders. This policy can, for example, be used for products that are being phased out, or at manufacturing locations where production occurs based on a schedule.
In the example below, MFG_STL uses the Do Nothing inventory policy for the 3 products it manufactures.

On the Inventory Policies table, other fields available to the user to model inventory include those to set initial inventory, how often inventory is reviewed, and the inventory carrying cost percentage:

When Only Source From Surplus is set to True on a customer fulfillment or a replenishment policy, the Surplus fields on the Inventory Policies table can be used to specify what is considered surplus inventory for a facility – product combination:

Note that if all inventory needs to be pushed out of a location, Push replenishment policies need to be set up for that location (where the location is the Source), and Surplus Level needs to be set to 0.
Inventory Policy Value fields can also be expressed in terms of the number of days of supply to enable the modelling of inventory where the levels go up or down when (forecasted) demand goes up or down. Please see the help center article “Inventory – Days of Supply (Simulation)” to learn more about how this can be set up and the underlying calculations.
Transportation policies describe how material flows throughout a supply chain. In Cosmic Frog, we can define our transportation policies using the Transportation Policies (required) and Transportation Modes (optional) tables. The Transportation Policies table will be covered in this documentation. In general, we can have a unique transportation policy for each combination of origin, destination, product, and transport mode.

Typically in simulation models, transportation policies are defined over the group of all products (which can be done by leaving Product Name blank as is done in the screenshot above), unless some products need to be prevented from being combined into shipments together on the same mode. If Transportation Policies list products explicitly, these products will not be combined in shipments.
Here, we will first cover the available transportation policies; other transportation characteristics that can be specified in the Transportation Policies table will be discussed in the sections after.
Currently supported transportation simulation policies are:
Selecting “On Volume”, “On Weight”, or “On Quantity” as a simulation policy means that the volume, weight, or quantity of the shipment, respectively, will determine which transportation mode is selected. In this case, the “Simulation Policy Value” defines the lowest volume (or weight, or quantity) that will go by that mode. We can use multiple lines to define multiple breakpoints for this policy.
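The breakpoint logic can be sketched as follows. The mode names and volume thresholds here are assumptions for illustration, and this is not the simulation engine's actual selection code:

```python
# Hypothetical sketch of "On Volume" mode selection with breakpoints: each
# mode's Simulation Policy Value is the lowest shipment volume that goes
# by that mode. Mode names and thresholds are assumed for illustration.

breakpoints = [  # (lowest volume for this mode, mode name)
    (0, "Parcel"),
    (100, "LTL"),
    (1000, "Truckload"),
]

def select_mode(volume, breakpoints):
    # Pick the mode with the highest breakpoint not exceeding the volume.
    chosen = None
    for threshold, mode in sorted(breakpoints):
        if volume >= threshold:
            chosen = mode
    return chosen

print(select_mode(50, breakpoints))    # Parcel
print(select_mode(250, breakpoints))   # LTL
print(select_mode(5000, breakpoints))  # Truckload
```

The same pattern applies when the policy is On Weight or On Quantity, with the thresholds interpreted in that measure instead.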

Please note that:
If “By Preference” is selected, we can provide a ranking describing which transportation mode we want to use for different origin-destination-product combinations. We can describe our preference using the “Simulation Policy Value” column.

This screenshot shows that all MFG to DC transportation lanes have only 1 Mode, Container, and that the Simulation Policy is set to By Preference for all of them. If multiple Modes are available, the By Preference policy will select among them, pending availability, in the order of preference specified by the Simulation Policy Value field, with the lowest value being the most preferred mode. For example, if 2 modes are available and the policy is set to By Preference, where 1 mode has a simulation policy value of 1 and the other of 2, the mode with simulation policy value = 1 will be used if available; if it is not available, the mode with simulation policy value = 2 will be used.
In the following example, the “Container” mode is preferred over the “Truck” mode for the MFG_CA to DC_IL route. Note that since the “Product Name” column is left blank, this policy applies to all products using this route.

Selecting “By Due Date” is like “By Preference” in that different modes can be ranked via the “Simulation Policy Value”. However, selecting “By Due Date” adds the additional component of demand timing into its selection. This policy selects the highest preference option that can meet the due date of the shipment. The following screenshot shows that the By Due Date simulation policy is used on certain DC to CZ lanes where 2 Modes are used, Truck and Parcel:

Costs associated with transportation can be entered in the Transportation Policies table Fixed Cost and Unit Cost fields. Additionally, the distance and time travelled using a certain Mode can be specified too:

Maximum flow on Lanes (origin-destination-product combinations) and/or Modes (origin-destination-product-mode combinations) can also be specified in the Transportation Policies table:

The Lane Capacity field and its UOM field specify the maximum flow on the Lane, while the Lane Capacity Period and its UOM field are used to indicate over what period of time this capacity applies. In this example, the MFG_CA to DC_AZ lane (first record) has a maximum capacity of 30 shipments every 13 weeks. Once 30 shipments have been shipped on this lane in a 13-week period, this lane cannot be used anymore during those 13 weeks; it is available for shipping again from the first day of the next 13-week period. What happens when a lane’s capacity is reached depends on how the simulation logic is set up. It can, for example, lead to the simulation making different sourcing decisions: if By Preference sourcing is used and the lane capacity on the lane from the preferred source to the destination has been reached for the period, this source is no longer considered available and the next preferred source will be checked for availability, etc.
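The per-period capacity bucket described above can be sketched as follows, using the 30 shipments per 13 weeks from the example. This is assumed, simplified logic for illustration, not Throg's implementation:

```python
# Illustrative sketch of a lane capacity applied per period: 30 shipments
# per 13-week bucket, with the counter resetting at the start of each new
# bucket. Assumed logic for illustration only, not Throg's implementation.

LANE_CAPACITY = 30          # max shipments per capacity period
CAPACITY_PERIOD_WEEKS = 13  # length of each capacity period

def lane_available(shipments_by_week, current_week):
    """True if the lane can still ship in the current 13-week bucket."""
    bucket = current_week // CAPACITY_PERIOD_WEEKS
    used = sum(n for week, n in shipments_by_week.items()
               if week // CAPACITY_PERIOD_WEEKS == bucket)
    return used < LANE_CAPACITY

history = {w: 3 for w in range(10)}  # 30 shipments over weeks 0-9
print(lane_available(history, 12))   # False: bucket 0 is at capacity
print(lane_available(history, 13))   # True: week 13 starts a new bucket
```

A By Preference sourcing check could consult a function like this to decide whether the preferred source's lane is still "available" for the current period.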
Analogous to the 4 fields to set Lane Capacity shown and discussed above, there are also 4 fields in the Transportation Policies table to set the Lane Mode Capacity where the capacity is specifically applied to a mode and not the whole lane in case multiple Modes exist on the lane: Lane Mode Capacity and its UOM field, and Lane Mode Capacity Period and its UOM field.
There are a few other fields on the Transportation Policies table that the Throg simulation engine will take into account if populated:
In a supply chain model, sourcing policies describe how network components create and order necessary materials. In Cosmic Frog, sourcing policies and rules appear in two different table categories:


In this section, we describe how to use the model elements tables to define sourcing rules for customers and facilities. Specifically, we can decide if each element is single sourced, allows backorders, and/or allows partial fulfillment.
Single source policies can be defined on either the order level or the line-item level. Setting “Single Source Orders” to “True” for a location means that for each order placed by that location, every item in that order must come from a single source. Setting this value to “False” does not prohibit single sourcing, it just removes the requirement.

Setting “Single Source Line Items” to “True” only requires that each individual line item come from a single source. In other words, even if this is “True”, an individual order can have multiple sources, as long as each line item is single sourced.
If “Single Source Orders” is set to “True” and “Single Source Line Items” is set to “False”, the “Single Source Orders” value takes precedence.
In case an order cannot be fulfilled by the due date (as set on the Customer Orders table in the case of Customers), it is possible to allow backorders by setting the “Allow Backorders” value to “True”: the order will still be filled, but it will be late. A time limit can be set on this using the “Backorder Time Limit” field and its UOM field, set to 7 days in the below screenshot. This means that orders are allowed to be backordered, but if after 7 days an order still is not filled, it is cancelled. Leaving Backorder Time Limit blank means there is no time limit, and the order can be filled late indefinitely.

We can also decide to allow partial fulfillment of orders or individual line items. If “Allow Partial Fill Orders” is set to “False”, orders need to be filled in full. If set to “True”, then filling only part of an order on time (by the due date) is allowed. What happens with the unfulfilled part of the order depends on whether backorders are allowed. If so (“Allow Backorders” = “True”), then the remaining quantity of a partially filled order can be satisfied in the future with additional shipments. If a time limit on backorders is set and is reached on a partially filled order, the remaining quantity will be cancelled. “Partial Fill Orders” and “Partial Fill Line Items” behave similarly to the single sourcing policies, where it is possible to, for example, allow partially filling orders but not partially filling line items. If “Partial Fill Orders” is set to “True”, then “Partial Fill Line Items” will also be forced to “True”.
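One way the interaction between partial fill, backorders, and the backorder time limit could play out for a single order line is sketched below. This is assumed, heavily simplified logic for illustration only, not the simulation engine's actual behavior:

```python
# Simplified sketch of how Allow Partial Fill, Allow Backorders, and
# Backorder Time Limit might interact for one order line. Assumed logic
# for illustration; not Throg's implementation.

def resolve_line(ordered, available, allow_partial, allow_backorder,
                 days_late, backorder_limit_days):
    """Return (shipped_on_time, backordered, cancelled) quantities."""
    if not allow_partial and available < ordered:
        shipped = 0                      # must fill in full or not at all
    else:
        shipped = min(ordered, available)
    short = ordered - shipped
    if short == 0:
        return shipped, 0, 0
    if allow_backorder and (backorder_limit_days is None
                            or days_late <= backorder_limit_days):
        return shipped, short, 0         # remainder fills late
    return shipped, 0, short             # remainder is cancelled

# Partial fill allowed, backorders allowed with a 7-day limit:
print(resolve_line(100, 60, True, True, days_late=5,
                   backorder_limit_days=7))   # (60, 40, 0)
# Past the 7-day limit, the remaining quantity is cancelled:
print(resolve_line(100, 60, True, True, days_late=8,
                   backorder_limit_days=7))   # (60, 0, 40)
```

Passing `backorder_limit_days=None` corresponds to leaving the Backorder Time Limit blank, so the remainder can be filled late indefinitely.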

The Transportation Modes table is an optional input table that is often used when running a simulation. Mode attributes like fill levels and capacities are specified in this table to control the size of shipments; this will be explained first in this documentation. Rules of precedence when using multiple fill level / capacity fields and when using On Volume / Weight / Quantity transportation simulation policies will also be covered.

The same capacity and fill level fields as for Volume are also available in this table for Quantity and Weight (not shown in the screenshot above).
When utilizing more than 1 of the Fill Level fields, the one that is reached first is applied. For example, if a shipment’s weight has reached the weight fill level, but its volume has not yet reached the volume fill level, the shipment is allowed to be dispatched.
Similarly, if more than 1 Capacity field has been populated, the one that is reached first is applied. For example, if a shipment’s volume has reached the volume capacity but not yet the weight capacity, it cannot be filled up further and will be dispatched.
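These precedence rules can be sketched as follows (a simplified illustration with made-up field names; the simulation engine's actual logic also handles units of measure, partial loads, etc.):

```python
MEASURES = ("quantity", "weight", "volume")

def ready_to_dispatch(shipment, mode):
    """A shipment may be dispatched once ANY populated fill level is reached,
    and can no longer be filled (so is dispatched) once ANY populated
    capacity is reached."""
    def reached(suffix):
        return any(
            mode.get(m + suffix) is not None and shipment[m] >= mode[m + suffix]
            for m in MEASURES
        )
    return reached("_fill_level") or reached("_capacity")

# Example from the text: the weight fill level is reached, the volume fill
# level is not, so the shipment is allowed to be dispatched.
mode = {"weight_fill_level": 40_000, "volume_fill_level": 3_000}
shipment = {"quantity": 500, "weight": 40_000, "volume": 2_000}
ready_to_dispatch(shipment, mode)  # → True
```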
As mentioned above, when transportation simulation policies of On Quantity / Weight / Volume are being used, the fill levels and capacities of these Modes are specified in the simulation policy value field on the Transportation Policies table. If the Transportation Modes table is also used to set any fill level and/or capacity for these modes, users need to take note of the effects this may have:
Simulations are generally driven by demand specified as customer orders. These orders can be entered in the Customer Orders and/or the Customer Order Profiles input tables. The Customer Orders table typically contains historical transactional demand records to simulate a historical baseline. The Customer Order Profiles table on the other hand contains descriptions of customer order behaviors from which the simulation engine (Throg) generates orders that follow these profiles.
In this documentation we cover both of these input tables: the Customer Orders table and the Customer Order Profiles table.
To achieve the level of granularity needed and the time-based events to mimic reality as best as possible, every customer order to be simulated is explicitly defined in the customer orders table; this includes line items, order and due dates, and order quantities:

Users can utilize the following additional fields available on the Customer Orders table if required. The single sourcing, allow partial fill, and allow backorder settings behave the same as those that can be set on the Customers table (see this help article), except these here apply to individual orders/individual line items rather than to all orders at the customer over the whole simulation horizon. Note that if these are set here on the Customer Orders table, these values take precedence over any values set for the particular customer in the Customers table:
Rather than requiring individual orders and line items to be specified, the Customer Order Profiles table generates individual orders from profiles. These profiles can, for example, disaggregate monthly demand forecasts into assumed or inferred order patterns, using variability to randomize characteristics like quantities and time between orders.


Note that by using start and end dates for profiles, users can control the portion of the simulation horizon in which a profile is used. This enables users to for example capture seasonal demand behaviors by defining a profile for Customer A/Product X in winter, and another profile for the same customer-product combination in summer.
Two scenarios were run: one named “CZ_CO P4 profile a”, which includes customer order profile a to generate orders at CZ_CO for Product_4, and one named “CZ_CO P4 profile b”, which includes customer order profile b for the same customer-product combination. These are the profiles shown in the 2 screenshots above. In the Simulation Order Report output table one can see the individual orders generated by these profiles during the simulation runs of these 2 scenarios:

When running models in Cosmic Frog, users can choose the size of the resource the model’s scenario(s) will be run on, in terms of available memory (RAM in GB) and number of CPU cores. Depending on the complexity of the model and the number of elements, policies, and constraints in it, the model will need a certain amount of memory to run to completion successfully. Bigger, more complex models typically need to be run on a resource with more memory (RAM) than smaller, less complex models. The bigger the resource, the higher the billing factor, which uses more of the cloud compute hours available to the customer (the total amount of cloud compute time available to the customer is part of the customer’s Master License Agreement with Optilogic). Ideally, users choose a resource size that is just big enough to run their scenario(s) without the resource running out of memory, while minimizing the amount of cloud compute time used. This document guides users in choosing an initial resource size and periodically re-evaluating it to ensure optimal usage of the customer’s available cloud compute time.
Once a model has been built and the user is ready to run 1 or multiple scenarios, they can click on the green Run button at the right top in Cosmic Frog, which opens the Run Settings screen. The Run Settings screen is documented in the Running Models & Scenarios in Cosmic Frog Help Center article. On the right-hand side of the Run Settings screen, users can select the Resource Size that will be used for the scenario(s) being kicked off:


In this section, we will guide users on choosing an initial resource size for the different engines in Cosmic Frog, based on some model properties. Before diving in, please keep the following in mind:
There are quite a few model factors that influence how much memory a scenario needs for a Neo run. These include the number of model elements, policies, periods, and constraints. The type(s) of constraints used may play a role too. The main factors, in order of impact on memory usage, are:
These numbers are those after expansion of any grouped records and application of scenario items, if any.
The number of lanes can depend on the Lane Creation Rule setting in the Neo (Optimization) Parameters:

Note that for lane creation, expansion of grouped records and application of scenario item(s) need to be taken into account too to get at the number of lanes considered in the scenario run.
Users can use the following list to choose an initial resource size for Neo runs. First, calculate the number of demand records multiplied by the number of lanes in your model (after expansion of grouped records and application of scenario items). Next, find the range in the list, and use the associated recommended initial resource size:
# demand records * # lanes: Recommended Initial Resource Size
A good indicator for Throg and Dendro runs to base the initial resource size selection on is the order of magnitude of the total number of policies in the model. To estimate the total number of policies in the model, add up the number of policies contained in all policies tables. There are 5 policies tables in the Sourcing category (Customer Fulfillment Policies, Replenishment Policies, Production Policies, Procurement Policies, and Return Policies), 4 in the Inventory category (Inventory Policies, Warehousing Policies, Order Fulfillment Policies, and Inventory Policies Advanced), and the Transportation Policies table in the Transportation category. The policy counts of each table should be those after expansion of any grouped records and application of scenario items, if any. The list below shows the minimum recommended initial resource size based on the total number of policies in the model to solve models using the Throg or Dendro engine.
Number of Policies: Minimum Resource
For Hopper runs, memory is the most important factor in choosing the right resource, and the main driver of memory requirements is the number of origin-destination (OD) pairs in the model. OD pairs are determined primarily by all possible facility-to-customer, facility-to-facility, and customer-to-customer lane combinations.
Most Hopper models have many more customers than facilities, so we can often use the number of customers in a model as a guide for resource size. The list below shows the minimum recommended initial resource size to solve models using Hopper.
Customers: Minimum Resource Size
Most Triad models should solve very quickly, typically under 10 minutes. Still, choosing the right resource size will ensure your Triad model solves successfully, without paying for unneeded compute resources.
As with Hopper, memory is the most important factor in resource selection. In Triad, the main driver of memory requirements is the number of customers, with a smaller secondary effect from the number of greenfield facilities.
The list below shows the minimum recommended initial resource size to solve models using Triad where the number of facilities is assumed to be between 1 and 10:
Customers: Minimum Resource Size
Please note:
After running a scenario with the initially selected resource size, users can evaluate if it is the best resource size to use or if a smaller or larger one is more appropriate. The Run Manager application on Optilogic’s platform can be used to assess resource size:


Using this knowledge that the RAM required at peak usage is just over 1 GB, we can conclude that going down to resource size 3XS, which has 2 GB of RAM available, should still work for this scenario. The expectation is that going further down to 4XS, which has 1 GB of RAM available, will not work, as the scenario will likely run out of memory. We can test this with 2 additional runs. These are the Job Usage Metrics after running with resource size 3XS:

As expected, the scenario runs fine, and the memory usage is now at about 54% (of 2 GB) at peak usage.
Trying with resource size 4XS results in an error:

Note that when a scenario runs out of memory like this one here, there are no results for it in the output tables in Cosmic Frog if it is the first time the scenario is run. If the scenario has been run successfully before, then the previous results will still be in the output tables. To verify that a scenario has run successfully within Cosmic Frog, users can check the timestamp of the outputs in the Optimization Network Summary (Neo), Transportation Summary (Hopper), or Optimization Greenfield Output Summary (Triad) output tables, or review the number of error jobs versus done jobs at the top of Cosmic Frog (see next screenshot). If either of these 2 indicates that the scenario may not have run, then double-check in the Run Manager and review the logs there to find the cause.

In the status bar at the top of Cosmic Frog, users can see that there were 2 error jobs and 13 done jobs within the last 24 hours.
In conclusion, for this scenario we started with a 2XS resource size. Using the Run Manager, we reviewed the percentage of memory used at peak usage in the Job Usage Metrics and concluded that a smaller 3XS resource size with 2 GB of RAM should still work fine for this scenario, but an even smaller 4XS resource size with 1 GB of RAM would be too small. Test runs using the 3XS and 4XS resource sizes confirmed this.
Transportation lanes are a necessary part of any supply chain. These lanes represent how product flows throughout our supply chain. In network optimization, transportation lanes are often referred to as arcs or edges.
In general, lanes in our supply chain are generated from the transportation policies and sourcing policies provided in our data tables.

Transportation policies are stored in the TransportationPolicies table. Sourcing policies are stored in the following tables:
From the data in these tables, the software automatically generates the lanes (i.e. arcs or edges) in our network before sending it to the optimization solver. We can control how these lanes are generated as a parameter of our Neo model.
Neo models can follow 4 different lane creation policies:

If the “Transportation Policy Lanes Only” rule is selected, Cosmic Frog will only generate transportation lanes based on data in the TransportationPolicies table. If a lane between two sites is not explicitly defined here, product will not be able to directly flow between those sites. Note that any additional information specified in a Sourcing Policy table (unit cost, policy rule etc.) will still be respected for the lane so long as it exists in the Transportation Policies table.

If the “Sourcing Policy Lanes Only” rule is selected, Cosmic Frog will only generate transportation lanes based on data in the Sourcing tables. Even if an origin-destination path is defined in the TransportationPolicies table, product will not be able to flow via this lane unless there is a specific sourcing policy defining how the destination site gets product from the origin site. Note that any additional information specified in a Transportation Policies table (cost, policy rule, multiple modes etc.) will still be respected for the lane so long as it exists in a Sourcing Policy table.

If the “Intersection” rule is selected, Cosmic Frog will only generate transportation lanes if they are defined in both the transportation policy table and one of the sourcing policy tables.
For users converting models from Supply Chain Guru©, the default SCG© lane creation rule is “Intersection”.

If the “Union” rule is selected, Cosmic Frog will generate transportation lanes if they are defined in either the transportation policy table or one of the sourcing policy tables.
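Conceptually, the 4 rules can be sketched as set operations on origin-destination pairs (a simplified illustration with hypothetical lane data; actual lane generation also accounts for products, modes, group expansion, and scenario items):

```python
# Hypothetical origin-destination pairs defined in each group of tables;
# real lanes also carry product and mode detail.
transportation_lanes = {("DC_Reno", "CUST_Phoenix"), ("DC_Scranton", "CUST_Augusta")}
sourcing_lanes = {("DC_Reno", "CUST_Phoenix"), ("DC_Birmingham", "CUST_Nashville")}

def generate_lanes(rule, transport, sourcing):
    if rule == "Transportation Policy Lanes Only":
        return set(transport)
    if rule == "Sourcing Policy Lanes Only":
        return set(sourcing)
    if rule == "Intersection":
        return transport & sourcing   # must be defined in both
    if rule == "Union":
        return transport | sourcing   # may be defined in either
    raise ValueError(f"Unknown lane creation rule: {rule}")

generate_lanes("Intersection", transportation_lanes, sourcing_lanes)
# → {("DC_Reno", "CUST_Phoenix")}
```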

Here we will cover the options a Cosmic Frog user has for modeling transportation costs when using the Neo Optimization engine. The different fields that can be populated and how the calculations under the hood work will be explained in detail.
There are many ways in which transportation can be costed in real-life supply chains. The Transportation Policies table contains 4 cost fields to help users model costs as closely as possible to reality. These fields are: Unit Cost, Fixed Cost, Duty Rate, and Inventory Carrying Cost Percentage. Not all these costs need to be used: the one(s) that are applicable should be populated and the others can be left blank. The way some of these costs work depends on additional information specified in other fields, which will be explained as well.
Note that in the screenshots throughout this documentation some fields in the Cosmic Frog tables have been moved so they could be shown together in a screenshot. You may need to scroll right to see the same fields in your Cosmic Frog model tables and they may be in a different order.
We will first discuss the input fields with the calculations and some examples; at the end of the document an overview is given of how the cost inputs translate to outputs in the optimization output tables.
This field is used for transportation costs that increase as the amount of product being transported increases and/or as the transportation distance or time increases. Since costs can depend on the amount of product transported in quite a few different ways (e.g. $2 per each, $0.01 per each per mile, or $10 per mile for a whole shipment of 1,000 units), there is a Unit Cost UOM field that specifies how the cost in the Unit Cost field should be applied. In a couple of cases, the Average Shipment Size and Average Shipment Size UOM fields must be specified too, as the total number of shipments is needed for the total Unit Cost calculation. The following table provides an overview of the Unit Cost UOM options and explains how the total Unit Costs are calculated for each UOM:

With the settings as in the screenshot above, total Unit Costs will be calculated as follows for beds, pillows, and alarm clocks going from DC_Reno to CUST_Phoenix:
The Unit Cost field can contain a single numeric value (as in the examples above), a step cost specified in the Step Costs table, a rate specified in the Transportation Rates table, or a custom cost function.
If stepped costs are used as the Unit Cost for Transportation Policies that use Groups in the Product Name field, then the Product Name Group Behavior field determines how these stepped costs are applied:
See following screenshots for an example of using stepped costs in the Unit Cost field and the difference in cost calculations for when Product Name Group Behavior is set to Enumerate vs Aggregate:

On the Step Costs table (screenshot above), the stepped costs we will be using in the Unit Cost field on the Transportation policies table are specified. All records with the same Step Cost Name (TransportUnitCost_2 here) make up 1 set of stepped costs. The Step Cost Behavior is set to Incremental here, meaning that discounted costs apply from the specified throughput level only, not to all items once we go over a certain throughput. So, in this example, the per unit cost for units 0 through 10,000 is $1.75, $1.68 for units 10,001 through 25,000, $1.57 for units 25,001 through 50,000, and $1.40 for all units over 50,000.
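The incremental step cost logic from this example can be sketched as follows (an illustrative sketch, not Cosmic Frog's actual implementation; the bands and rates are those from the example above):

```python
# Incremental step costs: each rate applies only to the units within its band.
STEPS = [           # (upper bound of band, cost per unit); None = no upper bound
    (10_000, 1.75),
    (25_000, 1.68),
    (50_000, 1.57),
    (None, 1.40),
]

def incremental_step_cost(quantity):
    """Total unit cost when Step Cost Behavior is set to Incremental."""
    total, lower = 0.0, 0
    for upper, rate in STEPS:
        if upper is None or quantity <= upper:
            return total + (quantity - lower) * rate
        total += (upper - lower) * rate
        lower = upper
    return total

incremental_step_cost(60_000)
# 10,000 @ $1.75 + 15,000 @ $1.68 + 25,000 @ $1.57 + 10,000 @ $1.40 = $95,950
```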
The configuration in the Transportation Policies table looks as follows:

The following screenshot shows the outputs on the Optimization Flow Summary table of 2 scenarios that were run with these stepped costs, 1 scenario used the Enumerate option for the Product Name Group Behavior and the other 1 used the Aggregate option. The cost calculations are explained below the screenshot.

The Fixed Cost field can be used to apply a fixed cost to each shipment for the specified origin-destination-product-mode combination. An average shipment size needs to be specified to be able to calculate the number of shipments from the amount of product that is being transported. When calculating the number of shipments, the result can contain fractions of shipments, e.g. 2.8 or 5.2. If desirable, these can be rounded up to the next integer (e.g. 3 and 6 respectively) by setting the Fixed Cost Rule field to Treat As Full. Note however that using this setting can increase model runtimes, and using the default Prorate setting is recommended in most cases.
In summary, the Fixed Cost field works together with the Fixed Cost Rule, Average Shipment Size, and Average Shipment Size UOM fields. The following table shows how the calculations work:
The Fixed Cost field can contain a single numeric value, or a step cost specified in the Step Costs table.
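As a quick sketch of the Prorate vs Treat As Full calculations described above (illustrative only; the example quantity is made up):

```python
import math

def fixed_transport_cost(quantity, avg_shipment_size, fixed_cost, rule="Prorate"):
    """Fixed Cost is applied per shipment; the number of shipments is the
    quantity divided by the average shipment size."""
    shipments = quantity / avg_shipment_size
    if rule == "Treat As Full":
        shipments = math.ceil(shipments)  # e.g. 2.8 shipments -> 3, 5.2 -> 6
    return shipments * fixed_cost

# 2,800 units with an average shipment size of 1,000 units and $100 per shipment:
fixed_transport_cost(2_800, 1_000, 100)                        # Prorate: 2.8 shipments, $280
fixed_transport_cost(2_800, 1_000, 100, rule="Treat As Full")  # 3 shipments, $300
```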
The following example shows how Fixed Costs are calculated on the DC_Scranton – CUST_Augusta lane and illustrates the difference between setting the Fixed Cost Rule to Prorate vs Treat As Full:

This setup in the Transportation Policies table means that the cost for 1 shipment with on average 1,000 units on it is $100. 2 scenarios were run with this cost setup, 1 where Fixed Cost Rule was set to Prorate and 1 where it was set to Treat As Full. Following screenshot shows the outputs of these 2 scenarios:

For Fixed Costs on Transportation Policies that use Groups in the Product Name field, the Product Name Group Behavior field determines how these fixed costs are applied:
See following screenshots for an example of using Fixed Costs where the Fixed Cost Rule is set to Treat As Full and the difference in cost calculations for when Product Name Group Behavior is set to Enumerate vs Aggregate:

The transportation policy from DC_Birmingham to CUST_Baton Rouge uses the AllProducts group as the ProductName. This Group contains all 3 products being modelled: beds, pillows, and alarm clocks. The costs on this policy are a fixed cost of $100 per shipment, where an average shipment contains 1,000 units. The Fixed Cost Rule is set to Treat As Full meaning that the number of shipments will be rounded up to the next integer. Depending on the Product Name Group Behavior field this is done for the flow of each product individually (when set to Enumerate) or done for the flow of all 3 products together (when set to Aggregate):
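The Enumerate vs Aggregate rounding described above can be sketched as follows (the fixed cost and average shipment size are those from the example; the flow quantities are hypothetical):

```python
import math

FIXED_COST = 100.0         # $ per shipment (from the policy above)
AVG_SHIPMENT_SIZE = 1_000  # average units per shipment

# Hypothetical flow quantities per product; real values come from the solve.
flows = {"Bed": 2_500, "Pillow": 12_300, "AlarmClock": 700}

# Enumerate: round the shipment count up for each product individually.
cost_enumerate = sum(
    math.ceil(q / AVG_SHIPMENT_SIZE) * FIXED_COST for q in flows.values()
)  # 3 + 13 + 1 = 17 shipments -> $1,700

# Aggregate: round the shipment count up once, on the combined flow of the group.
cost_aggregate = math.ceil(sum(flows.values()) / AVG_SHIPMENT_SIZE) * FIXED_COST
# ceil(15.5) = 16 shipments -> $1,600
```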

When products are imported or exported from/to different countries, there may be cases where duties need to be paid. Cosmic Frog enables you to capture these costs by using the Duty Rate field on the Transportation Policies table. In this field you can specify the percentage of the Product Value (as specified on the Products table) that will be incurred as duty. If this percentage is for example 9%, you need to enter a value of 9 into the Duty Rate field. The calculation of total duties on a lane is as follows: Flow Quantity * Product Value * Duty Rate.
The following screenshots show the Product Value of beds, pillows and alarm clocks in the Products table, the Duty Rate set to 10% on the DC_Birmingham to CUST_Nashville lane in the Transportation Policies table, and the resulting Duty Costs in the Optimization Flow Summary table, respectively.



Alarm clocks have a Product Value of $30. With a Duty Rate of 10% and 24,049 units moving from DC_Birmingham to CUST_Nashville, the resulting Duty Cost = 24,049 * $30 * 0.1 = $72,147.
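The duty calculation can be sketched as follows (illustrative only):

```python
def duty_cost(flow_quantity, product_value, duty_rate_pct):
    """Total duties on a lane: Flow Quantity * Product Value * Duty Rate."""
    return flow_quantity * product_value * (duty_rate_pct / 100)

# Alarm clocks from the example above: 24,049 units, $30 Product Value, 10% Duty Rate.
duty_cost(24_049, 30, 10)  # → 72147.0
```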
If in transit inventory holding costs need to be calculated, the Inventory Carrying Cost Percentage field on the Transportation Policies table can be used. The value entered here is the percentage of the product value (specified on the Products table) used to incur in transit holding costs. If the Inventory Carrying Cost Percentage is 13%, then enter a value of 13 into this field. This percentage is interpreted as an annual percentage, so the in transit holding cost is prorated based on transit time. The calculation of the in transit holding costs becomes: Flow Quantity * Product Value * Inventory Carrying Cost Percentage * Transit Time (in days) / 365.
Note that there is also an Inventory Carrying Cost Percentage field in the Model Settings table. If this is set to a value greater than 0 and there is no value specified in the Transportation Policies table, the value from the Model Settings table is automatically used for inventory carrying cost calculations, including in transit holding costs. If there are values specified in both tables, the one(s) in the Transportation Policies table take precedence for In Transit Holding Cost calculations.
The following screenshots show the Inventory Carrying Cost Percentage set to 20% on the DC_Birmingham to CUST_Nashville lane in the Transportation Policies table, and the resulting In Transit Holding Costs in the Optimization Flow Summary table, respectively. The Product Values are as shown in the screenshot of the Products table in the previous section on Duty Rates.


For Pillows, the Product Value set on the Products Table is $100. When 120,245 units are moved from DC_Birmingham to CUST_Nashville, which takes 3.8909 hours (214 MI / 55 MPH), the In Transit Holding Costs are calculated as follows: 120,245 (units) * $100 (product value) * 0.2 (Carrying Cost Percentage) * (3.8909 HR (transport time) / 24 (HRs in a day)) / 365 (days in a year) = $1,068.18.
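The in transit holding cost calculation can be sketched as follows (illustrative only):

```python
def in_transit_holding_cost(flow_qty, product_value, carrying_pct, transit_time_hr):
    """Flow Quantity * Product Value * Carrying Cost % * Transit Time (days) / 365."""
    transit_days = transit_time_hr / 24
    return flow_qty * product_value * (carrying_pct / 100) * transit_days / 365

# Pillows from the example above: 120,245 units, $100 Product Value,
# 20% carrying cost, 3.8909 hours of transit time (214 MI / 55 MPH).
round(in_transit_holding_cost(120_245, 100, 20, 3.8909), 2)  # → 1068.18
```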
The following table gives an overview of how the inputs into the 4 cost fields on the Transportation Policies table translate to outputs in multiple optimization output tables. The table contains the field names in the output tables and shows from which input field they result.
Note that the 4 different types of transportation costs are also included in the Landed Cost (Optimization Demand Summary table) and Parent Node Cost (Optimization Cost To Serve Parent Information Report table) calculations.
The SQL Editor helps users write, edit, and execute SQL (Structured Query Language) queries within Optilogic’s platform. It provides direct access to database objects such as tables and views stored within the platform. In this documentation, the Anura Supply Chain Model Database (Cosmic Frog’s database) will be used as the database example.
Anura model exploration and editing are enabled through the three windows of the SQL Editor:

The Anura database is stored in PostgreSQL and exclusively supports PostgreSQL query statements to ensure optimized performance. Visit https://www.postgresql.org/ for more detailed information.
To enable the SQL Editor, select a table or view from a database. Once selected, the SQL Editor will prepopulate a Select query, and the Metadata Explorer displays the table schema to enable initial data exploration.

The Database Browser offers several tools to explore your databases and display key information.

The Query Editor enables users to create and execute custom SQL queries and view the results. Reserved words are highlighted in blue to assist in SQL editing. This window is not enabled until a model table or view has been selected from the database browser; once selected, the user is able to customize this query to run in the context of the selected database.

The Metadata Explorer provides a set of tools to efficiently create and store SQL queries.

SQL is a powerful language that allows you to manipulate and transform tabular data. The query basics overview will help guide you through creating basic SQL queries.

Example 1: Filter Criteria - Geocoded customers with status set to Include
SELECT A.CustomerName, A.Status, A.Region
FROM customers A
Where A.Latitude IS NOT NULL and A.Status = 'Include'
Example 2: Summarizing Records - Regions with 2 or more geocoded customers
SELECT A.Region, A.Status, Count(*) AS Cnt
FROM customers A
Where A.Latitude IS NOT NULL
Group By A.Region, A.Status
Having Count(*) > 1
Order by Cnt Desc
Often, your model analysis will require you to use data stored in more than one table. To include multiple tables in a single SQL query, you will have to use table joins to list the tables and their relationships.
If you are unsure if all joined values are present in both tables, leverage a Left or Right join to ensure you don’t unintentionally exclude records.

Example 1: Inner Join - Join Customer Demand and Customers to add Region to Demand
SELECT A.CustomerName, A.ProductName, B.Region, A.Quantity
FROM customerdemand A INNER JOIN Customers B
on A.CustomerName = B.CustomerName
Example 2: Left Join - Find Customer Demand records missing Customer record
SELECT A.CustomerName, A.ProductName, B.Region, A.Quantity
FROM customerdemand A Left JOIN Customers B
on A.CustomerName = B.CustomerName
Where B.CustomerName is Null
Example 3: Inner Join & Aggregation – Summarize Demand by Region
SELECT B.Region, A.ProductName, SUM(Cast (A.Quantity as Int)) Quantity
FROM customerdemand A INNER JOIN Customers B
on A.CustomerName = B.CustomerName
Group By B.Region, A.ProductName
When data is separated into two or more tables due to categorical differences, a join won’t work because the tables share a common structure rather than a relationship. A UNION allows you to merge the results of two separate table queries into a single unified output. Ensure each query has the same number of columns in the same order.
Example 1: UNION – Create a unified view of all customers and facilities that are geocoded
SELECT A.CustomerName as SiteName, A.City, A.Region, A.Country, A.Latitude, A.Longitude, 'Cust' as Type
FROM customers A
UNION
SELECT B.FacilityName as SiteName, B.City, B.Region, B.Country,B.Latitude, B.Longitude, 'Facility' as Type
FROM Facilities B
As queries grow in complexity, it is often easiest to reset the table references by creating a sub-query. A sub-query allows you to create a new virtual table and reference this abbreviated name and structure as you build out a query in phases.
Example 1: Subquery +UNION – Create a unified view of all customers and facilities that are geocoded
SELECT C.SiteName, C.city, C.Region, C.Country, C.Latitude, C.Longitude, C.Type
FROM (
SELECT A.CustomerName as SiteName, A.City, A.Region, A.Country, A.Latitude, A.Longitude, 'Cust' as Type
FROM customers A
UNION
SELECT B.FacilityName as SiteName, B.City, B.Region, B.Country,B.Latitude, B.Longitude, 'Facility' as Type
FROM Facilities B
) C
WHERE C.Latitude IS NOT NULL
As data tables grow, it is often more efficient to use a table filter to find missing values than a left join and null filter criteria.
Example 1: Table Search Filter – CustomerDemand without a Customer match
SELECT * FROM customerdemand A
WHERE NOT EXISTS (SELECT B.CustomerName FROM Customers B WHERE A.CustomerName = B.CustomerName)
The Analytics module in Cosmic Frog allows you to display data from tables and views. Custom queries can be stored as views, enabling the Analytics module to reference this virtual table to display results. Creating a view follows a very similar query construct as a sub-query, but rather than layering in a select statement, you add CREATE VIEW viewname AS (query).
Once created, a view can be selected with the Analytics module of Cosmic Frog.

Example 1: Create View – Creating an all-site view
CREATE VIEW V_All_Sites as
(
SELECT C.SiteName, C.city, C.Region, C.Country, C.Latitude, C.Longitude, C.Type
FROM (
SELECT A.CustomerName as SiteName, A.City, A.Region, A.Country, A.Latitude, A.Longitude, 'Cst' as Type
FROM customers A
UNION
SELECT B.FacilityName as SiteName, B.City, B.Region, B.Country,B.Latitude, B.Longitude, 'Fac' as Type
FROM Facilities B
) C
)
Example 2: Delete View – Delete V_all_sites view
Drop VIEW v_all_sites
SQL queries can also modify the contents and structure of data tables. This is a powerful capability, and the results, if improperly applied, cannot be undone.
Table updates & modifications can be completed within Cosmic Frog, with the added benefit of the context of allowed column values. This can also be done within the SQL editor by executing UPDATE and ALTER TABLE SQL statements.
Example 1: Modifying Tables – Adding Additional Notes Columns
ALTER TABLE Customers
ADD Notes_1 character varying (250)
Example 2: Modifying Values – Updating Notes Columns
UPDATE Customers
SET Notes_1 = CONCAT(Country , '-' , Region)
Example 3: Modifying Tables – Delete New Notes Columns
ALTER TABLE Customers
DROP COLUMN Notes_1
Example 4: Copying Tables – Copy Customers Table
SELECT *
INTO Customers_1
FROM Customers
Example 5: Deleting Tables – Delete the Customers_1 Copy
DROP TABLE Customers_1
Visit https://www.postgresqltutorial.com/ for more information on PostgreSQL query syntax.
A confirmation email is sent following account creation; however, this email could be blocked by an organization’s IT policies. If you are not receiving your confirmation email, please make sure that www.optilogic.com is whitelisted, as well as the following email address: support=www.optilogic.com@mail.www.optilogic.com.
If possible, please request that a wildcard whitelist be established for all URLs that end in *.optilogic.app.
After confirming that these have been whitelisted, try sending another confirmation email. If the problem persists, please send a note to support@optilogic.com.
One of Cosmic Frog’s great competitive features is the ability to quickly run many sensitivity analysis scenarios in parallel on Optilogic’s cloud-based platform. This built-in Sensitivity at Scale (S@S) functionality lets a user run sensitivity on demand quantity and transportation costs with 1 click of a button, on any scenario using any of Cosmic Frog’s engines. In this documentation, we will walk through how to kick off a S@S run and where to track the status of the scenarios, and show some example outputs of S@S scenarios once they have completed running.
Kicking off a S@S analysis is simply done by clicking on the green S@S button on the right-hand side in the toolbar at the top of Cosmic Frog:

After clicking on the S@S button, the Run Sensitivity at Scale screen comes up:

Please note that the parameters that are configured on the Run Settings screen (which comes up when clicking on the Run button at the right top of Cosmic Frog) are used for the Sensitivity at Scale scenario runs.
The scenarios are then created in the model, and we can review their setup by switching to the Scenarios module within Cosmic Frog:

As an example of the sensitivity scenario items that are being created and assigned to the sensitivity scenarios as part of the S@S process, let us have a look at one of these newly created scenario items:

Once the sensitivity scenarios have been created, they are kicked off to all be run simultaneously. Users can have a look in the Run Manager application on the Optilogic platform to track their progress:

Once a S@S scenario finishes, its outputs are available for review in Cosmic Frog. As with other models and scenarios, users can review outputs through output tables, maps, and graphs/charts/dashboards in the Analytics module. Here we will just show the Optimization Network Summary output table and a cost comparison chart as example outputs. Depending on the model and technology run, users may want to look at different outputs to best understand them.

To understand how the costs are divided over the different cost types and how they compare by scenario, we can look at the following Supply Chain Cost Detail graph in the Analytics module:

Optimization (NEO) will read from all 5 input tables in the Sourcing section of Cosmic Frog.
We are able to use these tables to define the sourcing logic that describes costs and where a product can be introduced into the network through production at a Facility (Production Policies) or by way of a Supplier (Supplier Capabilities). We can also define additional rules around how a product must be sourced using the Max Sourcing Range and Optimization Policy fields in the Customer Fulfillment, Replenishment, and Procurement Policies tables.
The Max Sourcing Range field can be used to specify the maximum flow distance allowed for a listed location / product combination. If flow distances are not specified in the Distance field of the Transportation Policies table, a straight-line distance will be calculated based on the Origin / Destination geocoordinates. This will take into account the Circuity Factor specified in the Model Settings as a multiplication factor to estimate real road distances. Any transportation distances that exceed the Max Sourcing Range will result in the arcs being dropped from consideration.
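The distance check described above can be sketched in Python. This is an illustration of the logic, not Optilogic’s actual code; the haversine formula approximates the straight-line distance, and the 1.2 circuity factor and mileage units are assumed example values:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_MILES = 3958.8

def estimated_road_distance(orig, dest, circuity_factor=1.2):
    """Great-circle (straight-line) distance between two (lat, lon) points,
    scaled by a circuity factor to approximate real road distance."""
    lat1, lon1 = map(radians, orig)
    lat2, lon2 = map(radians, dest)
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_MILES * asin(sqrt(a)) * circuity_factor

def lane_allowed(orig, dest, max_sourcing_range, circuity_factor=1.2):
    """The arc stays in consideration only if it is within the Max Sourcing Range."""
    return estimated_road_distance(orig, dest, circuity_factor) <= max_sourcing_range
```

For example, a lane whose straight-line distance is about 710 miles estimates to roughly 850 road miles with a 1.2 circuity factor, so it would be dropped if the Max Sourcing Range were 800 but kept at 900.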
There are 4 allowable entries for Optimization Policy. For any given Destination / Product combination, only a single Optimization Policy entry is supported, meaning you cannot have one source listed with a policy of Single Source and another as By Ratio (Auto Scale).
This is the default entry that will be used if nothing is specified. To Optimize places no additional logic on the sourcing requirement and will use the least-cost option available.
For the listed destination / product combination, only one of the possible sources can be selected.
This option allows for sources to be split by the defined ratios that are entered into the Optimization Policy Value field. All of the entries into this Policy Value field will be automatically scaled, and the flow ratios will be followed for all inbound flow to the listed destination / product combination.
For example, there are 3 potential sources for a single Customer location. There is a flow split enforced of 50-30-20 from DC_1, DC_2, DC_3 respectively. This can be entered as Policy Values of 50, 30, and 20:

The same sourcing logic could be achieved by entering values of 5, 3, 2 or even 15, 9, 6. All values will be automatically scaled for each valid source that has been defined for a destination / product combination.
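The auto-scaling behavior is simply a normalization of the entered Policy Values so they sum to 100%, as this small sketch (not Optilogic’s code) illustrates:

```python
def auto_scale(policy_values):
    """Normalize By Ratio (Auto Scale) Policy Values to percentages summing to 100."""
    total = sum(policy_values)
    return [100 * v / total for v in policy_values]

# 50/30/20, 5/3/2, and 15/9/6 all resolve to the same 50-30-20 flow split.
```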
Similar to the Auto Scale option, By Ratio (No Scale) allows for sources to be split by the defined ratios entered into the Optimization Policy Value field. However, no scaling will be performed and the Optimization Policy Value fields will be treated as absolute sourcing percentages where an entry of 50 means that exactly 50% of the inbound flow will come from the listed source.
For example, there are 3 possible sources for a single Customer location and we want to enforce that DC_1 will account for exactly 50% of the flow while the remainder can come from any valid location. We can specify that DC_1 will have a Policy Value of 50 while leaving our other options open for the model to optimize.

If Policy Values add up to less than 100 for a listed destination / product combination, another sourcing option must be available to fulfill the remaining percentage.
If Policy Values add up to more than 100 for a listed destination / product combination, the percentages will be scaled to 100 and used as the only possible sources.
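The two No Scale rules above can be sketched as follows (an illustration of the logic, not Optilogic’s implementation): under-allocation leaves a remainder that other valid sources must cover, while over-allocation is scaled back down to 100% and treated as the only possible sources.

```python
def resolve_no_scale(fixed_shares):
    """Apply By Ratio (No Scale) rules to {source: percentage} entries.
    Returns the resolved shares and the remaining percentage left open
    for other valid sources."""
    total = sum(fixed_shares.values())
    if total > 100:
        # Over 100: scale down to 100 and use these as the only sources.
        return {src: 100 * v / total for src, v in fixed_shares.items()}, 0.0
    # At or under 100: shares are absolute; the remainder must be
    # fulfilled by another available sourcing option.
    return dict(fixed_shares), 100 - total
```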
You can create a free account on the Optilogic platform, which includes Cosmic Frog, in just a few clicks. This document shows you two ways in which you can do this. Use the first option if you have Single Sign On (SSO) enabled for your Google (Gmail) or Microsoft account and you want to use this to log into the Optilogic platform.
This video posted on the Optilogic Training website also covers account creation and then goes into how to navigate Cosmic Frog, a good starting point for new users.
To create your free account, go to signup.optilogic.app. This will automatically re-direct you to a Cosmic Frog Log In page:

First, we will walk through the steps of continuing with Microsoft where the user has Single Sign On enabled for their Microsoft account and has clicked on “Continue with Microsoft”. In the next section we will similarly go through the steps for using SSO with a Google account.
After the user has clicked on “Continue with Microsoft”, the following page will be brought up. Click on Accept to continue if the information displayed is correct.

You will see the following message about linking your Microsoft account to your Optilogic account:

Go into your email and find the email with subject “Link Microsoft”, and click on the Link Account button at the bottom of this email:

Should you not have received this email, you can click on “Resend Email”. If you did receive it and you have clicked on the Link Account button, you will be immediately logged into www.optilogic.com and will see the Home screen within the platform, which will look similar to the below screenshot:

From now on, you have 2 options when logging into the Optilogic platform via cosmicfrog.com (see the first screenshot in this documentation): you can log in by clicking on the “Continue with Microsoft” option which will immediately log you in or you can type your credentials into the username / email and password fields to manually log in.
After the user has clicked on “Continue with Google”, the following page will be brought up. If you have multiple Google email addresses, click on the one you want to use for logging into the Optilogic platform. If the email you want to use is not listed, you can click on “Use another account” and then enter the email address.

If the email you choose to use is not signed in on the device you are on currently, you will be asked for your password next. Please provide it and continue. If it is the first time you are using the email address to log into the Optilogic platform, you will be asked to verify it in the next step:

The default verification method associated with the Google account will be suggested, which in the example screenshot above is to send the verification code to a phone number. If other ways to verify the Google account have been set up, you can click on “More ways to verify” to change the verification method. If you are happy with the suggested method, click on Send. Once you have hit Send, the following form will come up:

You can again switch to another verification method in this screen by clicking on “More ways to verify”, or, if you have received the verification code, you can just enter it into the “Enter the code” field and click on Next. This will log you into the Optilogic platform and you will now see the Home screen within the platform, which will look similar to the last screenshot in the previous section (“Steps for the “Continue with Microsoft” Option”).
From now on, you have 2 options when logging into the Optilogic platform via cosmicfrog.com (see the first screenshot in this documentation): you can log in by clicking on the “Continue with Google” option which will immediately log you in after you have selected the Google email address to use, or you can type your credentials into the username / email and password fields to manually log in.
To create your free account, go to www.optilogic.com and click on the yellow “Create a Free Account” button.

The following form will be brought up; please fill out your First Name, Last Name, Email Address, and Phone Number. Then click on Next Step.

Your entered information will be shown back to you, and you can just click on Next Step again. Next, a form where you can set your Username and Password will come up. Click on Next Step again once this form is filled out.

In the final step you will be asked to fill out your Company Name, Role, Industry, and Company Size. Click on Submit after you have filled out these details.

A submission confirmation will pop up with instructions to verify your email address. Once you have verified your email address you can immediately start using your free account!
Cosmic Frog for Excel Applications provide alternative interfaces for specific use cases as companion applications to the full Cosmic Frog supply chain design product. For example, they can be used to access a subset of the Cosmic Frog functionality in a simplified manner, or to provide specific users who are not experienced in working with Cosmic Frog models access to just the subset of inputs and/or outputs of a full-blown Cosmic Frog model that is relevant to their position.
Several example use cases are:
It is recommended to review the Cosmic Frog for Excel App Builder before diving into this documentation, as basic applications can quickly and easily be built with it rather than having to edit/write code, which is what will be explained in this help article. The Cosmic Frog for Excel App Builder can be found in the Resource Library and is also explained in the “Getting Started with the Cosmic Frog for Excel App Builder” help article.
Here we will discuss how one can set up and use a Cosmic Frog for Excel Application, which will include steps that use VBA (Visual Basic for Applications) in Excel and scripting using the programming language Python. This may sound daunting at first if you have little or no experience using these. However, by following along with this resource and the ones referenced in this document, most users will be able to set up their own App in about a day or 2 by copy-pasting from these resources and updating the parts that are specific to their use case. Generative AI engines like ChatGPT and Perplexity can be very helpful as well to get a start on VBA and Python code. Cosmic Frog functionality will not be explained much in this documentation; the assumption is that users are familiar with the basics of building, running, and analyzing outputs of Cosmic Frog models.
In this documentation we are mainly following along with the Greenfield App that is part of the Resource Library resource “Building a Cosmic Frog for Excel Application”. Once we have gone through this Greenfield app in detail, we will discuss how other common functionality that the Greenfield App does not use can be added to your own Apps.
There are several Cosmic Frog for Excel Applications that have been developed by Optilogic available in the Resource Library. Links to these and a short description of each of them can be found in the penultimate section “Apps Available in the Resource Library” of this documentation.
Throughout the documentation links to other resources are included; in the last section “List of All Resources” a complete list of all resources mentioned is provided.
The following screenshot shows at a high-level what happens when a typical Cosmic Frog for Excel App is used. The left side represents what happens in Excel, and on the right side what happens on the Optilogic platform.

A typical Cosmic Frog for Excel Application will contain at least several worksheets that each serve a specific purpose. As mentioned before, we are using the MicroAPP_Greenfield_v3.xlsm App from the Building a Cosmic Frog for Excel Application resource as an example. The screenshots in this section are of this .xlsm file. Depending on the purpose of the App, users will name and organize worksheets differently, and add/remove worksheets as needed too:




To set up and configure Cosmic Frog for Excel Applications, we mostly use .xlsm Excel files, which are macro-enabled Excel workbooks. When opening an .xlsm file that for example has been shared with you by someone else or has been downloaded from the Optilogic Resource Library (Help Article on How To Use the Resource Library), you may find that you see either a message about a Protected View where editing needs to be enabled or a Security Warning that Macros have been disabled. Please see the Troubleshooting section towards the end of this documentation on how to resolve these warnings.
To set up Macros using Visual Basic for Applications (VBA), go to the Developer tab of the Excel ribbon:

If the Developer option is not available in the ribbon, then go to File > Options > Customize Ribbon, select Developer from the list on the left and click on the Add >> button, then click on OK. Should you not see Options when clicking on File, then click on “More…” instead, which will then show you Options too.
Now that you are set up to start building Macros using VBA: go to the Developer tab, enable Design Mode and add controls to your sheets by clicking on Insert, and selecting any controls to insert from the drop-down menu. For example, add a button and assign a Macro to it by right clicking on the button and selecting Assign Macro from the right-click menu:



To learn more about Visual Basic for Applications, see this Microsoft help article Getting started with VBA in Office, it also has an entire section on VBA in Excel.
It is possible to add custom modules to VBA in which Sub procedures (“Subs”) and functions to perform specific tasks have been pre-defined and can be called in the rest of the VBA code used in the workbook where the module has been imported into. Optilogic has created such a module, called Optilogic.bas. This module provides 8 standard functions for integration into the Optilogic platform.
You can download Optilogic.bas from the Building a Cosmic Frog for Excel Application resource in the Resource Library:

You can then import it into the workbook you want to use it in:

Right click on Modules in the VBA Project of the workbook you are working in and then select Import File…. Browse to where you have saved Optilogic.bas and select it. Once done, it will appear in the Modules section, and you can double click on it to open it up:


These Optilogic specific Sub procedures and the standard VBA for Excel functionality enable users to create the Macros they require for their Cosmic Frog for Excel Applications.
App Keys are used to authenticate the user from the Excel App on the Optilogic platform. To get an App Key that you can enter into your Excel Apps, see this Help Center Article on Generating App and API Keys. During the first run of an App, the App Key will be copied from the cell it is entered into to an app.key file in the same folder as the Excel .xlsm file, and it will be removed from the worksheet. This is done by using the Manage_App_Key Sub procedure described in the “Optilogic.bas VBA Module” section above. Users can then keep running the App without having to enter the App Key again, unless the workbook or app.key file is moved elsewhere.
It is important to emphasize that App Keys should not be saved into Excel Apps as they can easily be accidentally shared when the Excel App itself is shared. Individual users need to authenticate with their own App Key.
When sharing an App with someone else, one easy way to do so is to share all contents of the folder where the Excel App is saved (optionally, zipped up). However, one needs to make sure to remove the app.key file from this folder before doing so.
A Python Job file in the context of Cosmic Frog for Excel Applications is the file that contains the instructions (in Python script format) for the operations of the App that take place on the Optilogic Platform.
Notes on Job files:
For Cosmic Frog for Excel Apps, a .job file is typically created and saved in the same folder as the Macro-enabled Excel workbook. As part of the Run Macro in that Excel workbook, the .job file will be uploaded to the Optilogic platform too (together with any input & settings data). Once uploaded, the Python code in the .job file will be executed, which may do things like loading the data from any uploaded CSV files into a Cosmic Frog model, run that Cosmic Frog model (a Greenfield run in our example), and retrieve certain outputs of interest from the Cosmic Frog model once the run is done.
For a Python job that uses functionality from the cosmicfrog library to run, a requirements.txt file that just contains the text “cosmicfrog” (without the quotes) needs to be placed in the same folder as the .job file. Therefore, this file is typically created by the Excel Macro and uploaded together with any exported data & settings worksheets, the app.key file, and the .job file itself so they all land in the same working folder on the Optilogic platform. Note that the Optilogic platform will soon be updated so that using a requirements.txt file will not be needed anymore and the cosmicfrog library will be available by default.
Like VBA, users and creators of Cosmic Frog for Excel Apps do not need to be experts in Python code, and will mostly be able to do the things they want by copy-pasting from existing Apps and updating only the parts that are different for their App. In the greenfield.job section further below we will go through the code of the Python Job for the Greenfield App in more detail, which can be a starting point for users to start making changes to for their own Apps. Next, we will provide some more details and references to quickly equip you with some basic knowledge, including what you can do with the cosmicfrog Python library.
There are a lot of helpful resources and communities online where users can learn everything there is to know about using & writing Python code. A great place to start is on the Python for Beginners page on python.org. This page also mentions how more experienced coders can get started with Python.
Working locally on any Python scripts/Jobs has the advantage that you can make use of code completion features, which help with things like auto-completion, showing what arguments functions need, catching incorrect syntax/names, etc. An example setup to achieve this is one where Python, Visual Studio Code, and an IntelliSense extension package for Python for Visual Studio Code are installed locally:
Once you are set up locally and are starting to work with Python files in Visual Studio Code, you will need to install the pandas and cosmicfrog libraries to have access to their functionality. You do this by typing the following in a terminal in Visual Studio Code: pip install pandas cosmicfrog.
More experienced users may start using additional Python libraries in their scripts and will need to similarly install them when working locally to have access to their functionality.
If you want to access items on the Optilogic platform (like Cosmic Frog models) while working locally, you will likely need to whitelist your IP address on the platform, so the connections are not blocked by a firewall. You can do this yourself on the Optilogic platform:

A great resource on how to write Python scripts for Cosmic Frog models is this “Scripting with Cosmic Frog” video. In this video, the cosmicfrog Python library, which adds specific functionality to the existing Python features to work with Cosmic Frog models, is covered in some detail already. The next set of screenshots will show an example using a Python script named testing123.py on our local set-up. The first screenshot shows a list of functions available from the cosmicfrog Python library:

When you continue typing after you have typed “model.”, the code completion feature will auto-generate a list of functions you may be getting at. In the next screenshot, these are the ones that start with or contain a “g”, as only a “g” has been typed so far. This list will auto-update the more you type. You can select from the list with your cursor or arrow up/down keys and hit the Tab key to auto-complete:

When you have completed typing the function name and next type a parenthesis ‘(‘ to start entering arguments, a pop-up will come up which contains information about the function and its arguments:

As you type the arguments for the function, the argument that you are on and the expected format (e.g. bool for a Boolean, str for string, etc.) will be in blue font and a description of this specific argument appears above the function description (e.g. above box 1 in the above screenshot). In the screenshot above we are on the first argument input_only which requires a Boolean as input and will be set to False by default if the argument is not specified. In the screenshot below we are on the fourth argument (original_names) which is now in blue font; its default is also False, and the argument description above the function description has changed now to reflect the fourth argument:

The next screenshot shows 2 examples of using the get_tablelist function of the FrogModel module:

As mentioned above, you can also use Atlas on the Optilogic platform to create and run Python scripts. One drawback here is that it currently does not have code completion features like IntelliSense in Visual Studio Code.
The following simple test.py Python script on Atlas will print the first Hopper output table name and its column names:


After running the Greenfield App, we can see the following files together in the same folder on our local machine:

On the Optilogic platform, a working folder is created by the Run Greenfield Macro. This folder is called “z Working Folder for Excel Greenfield App”. After running the Greenfield App, we can see the following files here:

Parts of the Excel Macro and Python .job file will be different from App to App based on the App’s purpose, but a lot of the content will be the same or similar. In this section we will step through the Macro behind the Run Greenfield button in the Cosmic Frog for Excel Greenfield App that is included in the “Building a Cosmic Frog for Excel Application” resource. We will explain at a high level what is happening at each step and note whether that part is likely to be different and in need of editing for other Apps, or would typically stay the same across most Apps. After stepping through this Excel Macro in this section, we will do the same for the Greenfield.job file in the next section.
The next screenshot shows the first part of the VBA code of the Run Greenfield Macro:

Note that throughout the Macro you will see text in green font. These are comments to describe what the code is doing and are not code that is executed when running the Macro. You can add comments by simply starting the line with a single quote and then typing your comment. Comments can be very helpful for less experienced users to understand what the VBA code is doing.
Next, the file path to the workbook is retrieved:

This piece of code uses the Get_Workbook_File_Path function of the Optilogic.bas VBA module to get the file path of the current workbook. This function first tries to get the path without user input. If it finds that the path looks like the Excel workbook is stored online, for example in a Cloud folder, it will use user input in cell B3 on the Admin worksheet to get the file path instead. Note that specifying the file path is not necessary if the App runs fine without it, which means it could get the path without the user input. Only if the user gets the message “Local file path to this Excel workbook is invalid. It is possible the Excel workbook is in a cloud drive, or you have provided an invalid local path. Please review setup step 4 on Admin sheet.” should the local file path be entered into cell B3 on the Admin worksheet.
This code can be left as is for other Apps if there is an Admin worksheet (the variable pathsheetName indicated with 1 in screenshot above) where in cell B3 the file path (the variable pathCell indicated with 2 in screenshot above) can be specified. Of course, the worksheet name and cell can be updated if these are located elsewhere in the App. The message the user gets in this case (set as pathusrMsg indicated with 3 in the screenshot above) may need to be edited accordingly too.
The following code takes care of the App Key management:

The Manage_App_Key function from the Optilogic.bas VBA module is used here to retrieve the App Key from cell B2 on the Admin worksheet and put it into a file named app.key which is saved in the same location as the workbook when the App is run for the first time. The key is then removed from cell B2 and replaced with the text “app key has been saved; you can keep running the App”. As long as the app.key file and the workbook are kept together in the same location, the App will keep working.
Like the previous code on getting the local file path of the workbook, this code can be left as is for other Apps. Only if the location of where the App Key needs to be entered before the first run is different from cell B2 on the worksheet named Admin, the keysheetName and keyCell variables (indicated with 1 and 2 in the screenshot above) need to be updated accordingly.
This App has a greenfield.job file associated with it that contains the Python script which will be run on the Optilogic platform when the App is run. The next piece of code checks that this greenfield.job file is saved in the same location as the Excel App, and it also sets the name of the folder to be created on the Optilogic platform where files will get uploaded to:

This code can be left as is for other Cosmic Frog for Excel Apps, except that the following will likely need updating:
The Greenfield settings are set in the next step. The ones the user can set on the Settings worksheet are taken from there and others are set to a default value:

Next, the Greenfield Settings and the other input data are written into .csv files:


The firstSpaceIndex variable is set to the location of the first space in the resource size string.
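In Python, the same first-space lookup could be written as follows; the “4XS (1 vCPU, 2 GB)” format is a hypothetical illustration of a resource size string, used here only to show the idea of taking the token before the first space:

```python
def resource_size_token(resource_size):
    """Return the part of the resource size string before the first space,
    or the whole string if it contains no space."""
    first_space_index = resource_size.find(" ")
    return resource_size if first_space_index == -1 else resource_size[:first_space_index]
```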
Looking in the Greenfield App on the Customers worksheet we see that this means that the Customer Name (column A), Latitude (column B), Longitude (column C), and Quantity (column D) columns will be exported. The Customers.csv file will contain the column names on the first row, plus 96 rows with data as the last populated row is row 97. Here follows a screenshot showing the Customers worksheet in the Excel App (rows 6-93 hidden) and the first 11 lines in the Customers.csv file that was exported while running the Greenfield App:

Other Cosmic Frog for Excel Applications will often contain data to be exported and uploaded to the Optilogic platform to refresh model data; the Export_CSV_File function can be used in the same way to export similar and other tabular data.
As mentioned in the “Python Job File and requirements.txt” section earlier, a requirements.txt file placed in the same folder as the .job file that contains the Python script is needed so the Python script can run using functionality from the cosmicfrog Python library. The next code snippet checks if this file already exists in the same location as the Excel App; if not, it creates it there and writes the text cosmicfrog into it.

This code can be used as is by other Excel Apps.
The next step is to upload all the files needed to the Optilogic platform:

Besides updating the local/platform file names and paths as appropriate, the Upload_File_To_Optilogic Sub procedure will be used by most if not all Excel Apps: even if the App is only looking at outputs from model runs and not modifying any input data or settings, the function is still required to upload the .job, app.key, and requirements.txt files.
The next bit of code uses 2 more of the Optilogic.bas VBA module functions to run and monitor the Python job on the Optilogic platform:

This piece of code can stay as is for most Apps, just make sure to update the following if needed:
The last piece of code before some error handling downloads the results (2 .csv files) from the Optilogic platform using the Download_File_From_Optilogic function from the Optilogic.bas VBA module:

This piece of code can be used as is with the appropriate updates for worksheet names, cell references, file names, path names, and text of status updates and user messages. Depending on the number of files to be downloaded, the part of the code setting the names of the output files and doing the actual download (bullet 2 above) can be copy-pasted and updated as needed.
The last piece of VBA code of the Macro shown in the screenshot below has some error handling. Specifically, when the Macro tries to retrieve the local path of the Macro-enabled .xlsm workbook and it finds it looks like it is online, an error will pop up and the user will be requested to put the file path name in cell B3 on the Admin worksheet. If the Macro hits any other errors, a message saying “An unexpected error occurred: <error number> <error description>” will pop up. This piece of code can be left as is for other Cosmic Frog for Excel Applications.

We have used version 3 of the Greenfield App which is part of the Building a Cosmic Frog for Excel Application resource in the above. There is also a stand-alone newer version (v6) of the Cosmic Frog for Excel – Greenfield application available in the Resource Library. In addition to all of the above, this App also:
This functionality is likely helpful for a lot of other Cosmic Frog for Excel Apps and will be discussed in section “Additional Common App Functionality” further below. We especially recommend using the functionality to prevent Excel from locking up in all your Apps.
Now we will go through the greenfield.job file that contains the Python script to be run on the Optilogic platform in detail.

This first piece of code takes care of importing several Python libraries and modules (optilogic, pandas, time; lines 1, 2, and 5). There is another library, cosmicfrog, that is imported through the requirements.txt file that has been discussed before in the section titled “Python Job File and requirements.txt”. Modules from these libraries are imported here as well (FrogModel from cosmicfrog on line 3 and pioneer.API from optilogic on line 4). Now the functionality of these libraries and their modules can be used throughout the code of the script that follows. The optilogic and cosmicfrog libraries are developed by Optilogic and contain specific functionality to work with Cosmic Frog models and the Optilogic platform (e.g. the functions discussed in the section titled “Working with Python Locally” above).
For reference:
This first piece of code can be left as is in the script files (.job files locally, .py files on the Optilogic platform) for most Cosmic Frog for Excel Applications. More advanced users may import different libraries and modules to use functionality beyond what the standard Python functionality plus the optilogic, cosmicfrog, pandas, and time libraries & modules together offer.
Next, a check_job_status function is defined that will keep checking a job until it is completed. This will be used when running a job to know if the job is done and ready to move onto the next step, which will often be downloading the results of the run. This piece of code can be kept as is for other Cosmic Frog for Excel Applications.

The following screenshot shows the next snippet of code, which defines a function called wait_for_jobs_to_complete. It uses the check_job_status function to periodically check whether the job is done and, once it is, moves on to the next piece of code. Again, this can be kept as is for other Apps.

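The two helper functions boil down to a poll-until-terminal loop. Here is a generic sketch of that pattern; the status strings and the injected get_status callable are illustrative, not the exact values or calls the Optilogic API uses:

```python
import time

# Illustrative terminal states; the real platform statuses may differ.
TERMINAL_STATES = {"done", "error", "cancelled"}

def wait_for_job(get_status, poll_seconds=5, max_polls=1000):
    """Poll get_status() until the job reaches a terminal state,
    sleeping between checks, and return the final status."""
    for _ in range(max_polls):
        status = get_status()
        if status in TERMINAL_STATES:
            return status
        time.sleep(poll_seconds)
    raise TimeoutError("job did not finish within the polling window")
```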
Now it is time to create and/or connect to the Cosmic Frog model we want to use in our App:

Note that, like the VBA code in the Excel Macro, we can add comments describing what the code is doing to our Python script too. In Python, comments need to start with the hash sign (#); comments are automatically colored green in the editor used here (Visual Studio Code with the default Dark Modern color theme).
After clearing the tables, we will now populate them with the data from the Excel workbook. First, the uploaded Customers.csv file, which contains the columns Customer Name, Latitude, Longitude, and Quantity, is used to update both the Customers and the CustomerDemand tables:

How much of the above code you can use as is depends on the App you are building, but the concepts of reading CSV files, renaming and dropping columns as needed, and writing tables into the Cosmic Frog model will be used frequently. The following piece of code writes the Facilities and Suppliers data into the Cosmic Frog tables in the same way. Again, the concepts used here will be useful for other Apps too; the exact code may differ depending on the App and the tables being written to:

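As an illustration of the read/rename/drop pattern described above, here is a small pandas sketch. The CSV content and target column names are made up to mirror the Customers.csv described earlier; in the real script the resulting dataframe is then written into the Cosmic Frog model via the cosmicfrog FrogModel, which is not shown here:

```python
import io
import pandas as pd

# Made-up sample matching the Customers.csv layout described above
csv_text = """Customer Name,Latitude,Longitude,Quantity
CZ_Chicago,41.88,-87.63,1200
CZ_Dallas,32.78,-96.80,950
"""

df = pd.read_csv(io.StringIO(csv_text))

# Rename columns to match the (assumed) Cosmic Frog Customers table schema...
customers = df.rename(columns={"Customer Name": "customername",
                               "Latitude": "latitude",
                               "Longitude": "longitude"})

# ...and drop the column that belongs in the CustomerDemand table instead
customers = customers.drop(columns=["Quantity"])
```

The same pattern (read the CSV, rename to the model's column names, drop what the target table does not need) repeats for the Facilities and Suppliers data.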
Next up, the Settings.csv file is used to populate the Greenfield Settings table in Cosmic Frog and to set 2 variables for resource size and scenario name:

Now that the Greenfield App Cosmic Frog model is populated with all the data needed, it is time to kick off the model and run a Greenfield analysis:

Besides updating any tags as desired (bullet 2b above), this code can be kept exactly as is for other Excel Apps.
Lastly, once the model is done running, the results are retrieved from the model and written into .csv files, which will then be downloaded by the Excel Macro:

When the greenfield_job.py file starts running on the Optilogic platform, we can monitor and see the progress of the job in the Run Manager App:

The Greenfield App (version 3) that is part of the Building a Cosmic Frog for Excel Application resource covers a lot of common features users will want to use in their own Apps. In this section we will discuss some additional functionality users may also wish to add to their own Apps. This includes:
A newer version of the Greenfield App (version 6) can be found here in the Resource Library. This App has all the functionality version 3 has, plus: 1) it has an updated look, with some worksheets renamed and some items moved around, 2) it has the option to cancel a Run after it has been kicked off but has not completed yet, 3) it prevents Excel from locking up while the App is running, 4) it reads a few CSV output files back into worksheets in the same workbook, and 5) it uses a Python library called folium to create maps that a user can open from the Excel workbook, which will then open the map in the user’s default browser. Please download this newer Greenfield App if you want to follow along with the screenshots in this section. First, we will cover how a user can prevent locking of Excel during a run and how to add a Cancel button which can stop a run that has not yet completed.
The screenshots call out what is different as compared to version 3 of the App discussed above. VBA code that is the same is not covered here. The first screenshot is of the beginning of the RunGreenfield_Click Macro that runs when the user hits the Run Greenfield button in the App:

The next screenshot shows the addition of code to enable the Cancel button once the Job has been uploaded to the Optilogic platform:

If everything completes successfully, a user message pops up, and the same 3 lines of code are added here too to enable the Run Greenfield buttons, disable the Cancel button, and keep other applications accessible:

Finally, a new Sub procedure CancelRun is added that is assigned to the Cancel button and will be executed when the Cancel button is clicked on:

This code gets the Job Key (unique identifier of the Job) from cell C9 on the Start worksheet and then uses a new function added to the Optilogic.bas VBA module that is named Cancel_Job_On_Optilogic. This function takes 2 arguments: the Job Key to identify the run that needs to be cancelled and the App Key to authenticate the user on the Optilogic platform.
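The same cancel call could be made from Python as well. The sketch below only assembles the request; the endpoint path and the App-Key header name are hypothetical placeholders (check the Optilogic API documentation for the actual route), but the pattern of identifying the run by its Job Key and authenticating with the App Key matches what the VBA function does:

```python
def build_cancel_request(job_key, app_key):
    """Assemble the URL and headers for a (hypothetical) job-cancel call.

    The route and header name below are illustrative assumptions, not the
    documented Optilogic API; only the two-argument pattern (Job Key to
    identify the run, App Key to authenticate) is taken from the App.
    """
    url = f"https://api.optilogic.app/v0/jobs/{job_key}"  # hypothetical route
    headers = {"X-APP-KEY": app_key}  # hypothetical header name
    return url, headers
```

A real implementation would pass the returned URL and headers to an HTTP client such as requests, and check the response status before reporting the cancellation to the user.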
Version 6 of the Greenfield App reads results from the Facility Summary, Customer Summary, and Flow Summary back into 3 worksheets in the workbook. A new Sub procedure named ImportCSVDataToExistingSheet (which can be found at the bottom of the RunGreenfield Macro code) is used to do this:

The Sub procedure is called 3 times, each time importing 1 CSV file into 1 worksheet. It takes 3 arguments:
We will discuss a few possible options on how to visualize your supply chain and model outputs on maps when using/building Cosmic Frog for Excel Applications.
This table summarizes 3 of the mapping options: their pros, cons, and example use cases:
There is standard functionality in Excel to create 3D Maps. You can find this on the Insert tab, in the Tours group (next to Charts):

Documentation on how to get started with 3D Maps in Excel can be found here. Should your 3D Maps icon be greyed out in your Excel workbook, then this thread on the Microsoft Community forum may help troubleshoot this.
How to create an Excel 3D Map in a nutshell:
With Excel 3D Maps you can visualize locations on the map and for example base their size on characteristics like demand quantity. You can also create heat maps and show how location data changes over time. Flow maps that show lines between source and destination locations cannot be created with Excel 3D Maps. Refer to the Microsoft documentation to get a deeper understanding of what is possible with Excel 3D Maps.
The Cosmic Frog for Excel – Geocoding App in the Resource Library uses Excel 3D Maps to visualize customer locations that the App has geocoded on a map:

Here, the geocoded customers are shown as purple circles which are sized based on their total demand.
A good option for visualizing, for example, Hopper (= transportation optimization) routes on a map is the ArcGIS Excel Add-in. If you do not have the Add-in, you can get it from within Excel as follows:

You may be asked to log into your Microsoft account when adding and/or when starting to use the Add-in. Should you experience any issues while trying to get the Add-in added to Excel, we recommend closing all Office applications and then opening only one Excel workbook, through which you add the Add-in.
To start using the add-in and create ArcGIS maps in Excel:

Excel will automatically select all data in the worksheet that you are on. You can ensure the mapping of the data is correct or otherwise edit it:

After adding a layer, you can further configure it through the other icons at the top of the Layers window:

The other configuration options for the Map are found on the left-hand side of the Map configuration pane:

As an example, consider the following map showing the stops on routes created by the Hopper engine (Cosmic Frog’s transportation optimization technology). The data in this worksheet is from the Transportation Stop Summary Hopper output table:

As a next step we can add another layer to the map based on the Transportation Segment Summary Hopper output table to connect the source-destination pairs with each other using flow lines. For this we need to use the Esri JSON Geometry Location types mentioned earlier. An example Excel file containing the format needed for drawing polylines can be found in the last answer of this thread on the Esri community website: https://community.esri.com/t5/arcgis-for-office-questions/json-formatting-in-arcgis-for-excel/td-p/1130208, on the PolylinesExample1 worksheet. From this Excel file we can see that the format needed to draw a line connecting 2 locations is:
{"paths": [[[<point1_longitude>,<point1_latitude>],[<point2_longitude>,<point2_latitude>]]], "spatialReference": {"wkid": 4326}}
Where wkid indicates the well-known ID of the spatial reference to be used on the map (see above for a brief explanation and a link to a more elaborate explanation of spatial references). Here it is set to 4326, which is WGS 1984.
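A short Python helper can generate this geometry string from a list of coordinates, for example when preparing the Segments worksheet programmatically. This is an illustrative sketch assuming the Esri JSON polyline layout, where "paths" is a list of paths and each path is a list of [longitude, latitude] points:

```python
import json

def polyline_geometry(points, wkid=4326):
    """Build an Esri JSON geometry string for a line through the given
    (longitude, latitude) points, using the spatial reference wkid
    (4326 = WGS 1984)."""
    geometry = {
        "paths": [[[lon, lat] for lon, lat in points]],
        "spatialReference": {"wkid": wkid},
    }
    return json.dumps(geometry)

# Example: a line from a (made-up) Chicago point to a Dallas point
line = polyline_geometry([(-87.63, 41.88), (-96.80, 32.78)])
```

The resulting string can be placed in the worksheet column that the ArcGIS Add-in reads as the location field for the flow-line layer.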
The next 2 screenshots show the data from a Segments Summary and an added layer to the map to show the lines from the stops on the route:


Note that for Hopper outputs with multiple routes, we now need to filter both the worksheet with the Stops information and the worksheet with the Segments information for the same route(s) to keep them in sync. A better solution is to bring the Stop ID and Delivered Quantity information from the Stops output into the Segments output, so that we only have 1 worksheet with all the information needed and both layers are generated from the same data. Filtering this one set of data then updates both layers simultaneously.
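The combining step described above amounts to a join between the two output tables. Here is an illustrative pandas sketch; the column names are simplified assumptions standing in for the actual Hopper Transportation Stop Summary and Transportation Segment Summary fields:

```python
import pandas as pd

# Made-up miniature versions of the two Hopper output tables
stops = pd.DataFrame({
    "routename": ["R1", "R1"],
    "stopid": [1, 2],
    "stopname": ["DC_Chicago", "CZ_Dallas"],
    "deliveredquantity": [0, 950],
})
segments = pd.DataFrame({
    "routename": ["R1"],
    "destinationname": ["CZ_Dallas"],
})

# Bring Stop ID and Delivered Quantity onto each segment by matching the
# segment's destination to the corresponding stop on the same route; the
# merged table can then drive both map layers from one worksheet.
merged = segments.merge(
    stops[["routename", "stopname", "stopid", "deliveredquantity"]],
    left_on=["routename", "destinationname"],
    right_on=["routename", "stopname"],
    how="left",
).drop(columns=["stopname"])
```

Filtering this single merged table for a route then updates the stop markers and the flow lines together.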
Here, we will discuss a Python library called folium, which gives users the ability to create maps that can show flows and tooltips, with options to customize and auto-size location shapes and flow lines. We will again use the example of the Cosmic Frog for Excel – Greenfield App (version 6), where maps are created as .html files as part of the greenfield_job.py Python script that runs on the Optilogic platform. These files are then downloaded as part of the results, and from within Excel users can click on buttons to show flows or customers, which opens the .html files in the user’s default browser. We will focus on the map- and folium-related differences with version 3 of the Greenfield App, covering the changes/additions to both the VBA code in the Excel Run Greenfield Macro and to greenfield_job.py. First up, in the VBA code we need to add folium to the requirements.txt file so that the Python script can make use of the library once it is uploaded to the Optilogic platform:

To do so, a line is added to the VBA code that writes “folium” into requirements.txt.
As part of downloading all the results from the Optilogic platform after the Greenfield run has completed, we need to add downloading the .html map files that were created:

In this version of the Greenfield app, there is a new Results Summary worksheet that has 3 buttons at the top:

Each of these buttons has a Sub procedure assigned to it; let’s look at the one for the “Show Flows with Customers” button:

The map that is opened will look something like this, where a tooltip comes up when hovering over a flow line. (How to create and configure the map using folium will be discussed next.)

The additions to the greenfield.job file to make use of folium and create the 3 maps will now be covered:

First, at the beginning of the script, we need to add “import folium” (line 6), so that the library’s functionality can be used throughout the script. Next, the 3 Greenfield output tables that are used to create the 3 maps are read in, and a few data type changes are made to get the data ready for mapping:

This is repeated twice, once for the Optimization Greenfield Customer Summary output table and once for the Optimization Greenfield Flow Summary output table.
The next screenshot shows the code where the map to show Facilities is created and the Markers of them are configured based on if the facility is an Existing Facility or a Greenfield Facility:

In the next bit of code, the df_Res_Flows dataframe is used to draw lines on the map between origin and destination locations:

Lastly, the customers from the Optimization Greenfield Customer Summary output table are added to the map that already contains facilities and flow lines, and the map is saved as greenfield_flows_customers_map.html:

Here are some additional pointers that may be useful when building your own Cosmic Frog for Excel applications:


You may run into issues where Macros or scripts are not running as expected. Here we cover some common problems you may come across and their solutions.
When opening an Excel .xlsm file, you may see the following message about the view being protected; you can click on Enable Editing if you trust the source:

Enabling editing is not necessarily sufficient to also be able to run any Macros contained in the .xlsm file, and you may see the following message after clicking on the Enable Editing button:

Closing this message box and then trying to run a Macro will result in the following message.

To resolve this, it is not always sufficient to just close and reopen the workbook and enable macros as the message suggests. Rather, go to the folder where the .xlsm file is saved in File Explorer, right click on it, and select Properties:

At the bottom in the General tab, check the Unblock checkbox and then click on OK.

Now, when you open the .xlsm file again, you have the option to Enable Macros, do so by clicking on the button. From now on, you will not need to repeat any of these steps when closing and reopening the .xlsm file; Macros will work fine.

It is also possible that instead of the Enable Editing warning and warnings around Macros not running discussed above, you will see a message that Macros have been disabled, as in the following screenshot. In this case, please click on the Enable Content button:

Depending on your anti-virus software and its settings, it is possible that the Macros in your Cosmic Frog for Excel Apps will not run as they are blocked by the anti-virus software. If you get “An unexpected error occurred: 13 Type mismatch”, this may be indicative of the anti-virus software blocking the Macro. Work with your IT department to allow the running of Macros.
If you are running Python scripts locally (say from Visual Studio Code) that are connecting to Cosmic Frog models and/or uploading files to the Optilogic platform, you may be unsuccessful and get warnings with the text “WARNING – create_engine_with_retry: Database not ready, retrying”. In this case, the likely cause is that your IP address needs to be added to the list of firewall exceptions within the Optilogic platform, see the instructions on how to do this in the “Working with Python Locally” section further above.
You will find that if you export cells that contain formulas from Excel to CSV, they are exported as 0’s and not as the calculated values. Possible solutions are 1) to export to a format other than CSV, such as .xlsx, or 2) to create an extra column in your data into which the results of the formula cells are copy-pasted as values, and to export this column instead of the one with the formulas (this way the formulas stay intact for the next run of the App). You can use the Record Macro option to get a start on the VBA code for copy-pasting values from one column into another, so that you do not have to do this manually each time you run the App; instead it becomes part of the Macro that runs when the App runs. An example of VBA code that copy-pastes values can be seen in this screenshot:

When running an App that has been run previously, there are likely output files in the folder where the App is located, for example CSV files that are opened by the user to view the results or are read back into a worksheet in the App. When running the App again, it is important to not have these output files open, otherwise an error will be thrown when the App gets to the stage of downloading the output files since open files cannot be overwritten.
There are currently several Cosmic Frog for Excel Applications available in the Resource Library, with more being added over time. Check back frequently and search for “Cosmic Frog for Excel” in the search bar to find all available Apps. A short description for each App that is available follows here:
As this documentation contains many links to references and resources, we will list them all here in one place:
We love for our users to connect, keep up to date, learn from and share with other Cosmic Frog users & experts through the Frogger Pond Community! If you have an Optilogic account (see this page on how to create your free account if you do not have one yet), you can use that same account to log into the Frogger Pond Community.
Here, we will describe what the Frogger Pond Community consists of, how to interact with, search, sort, and contribute to Topics, and how to manage your account. Recommended reads for new users are included in the last section too.
When you login to the Frogger Pond Community, the homepage you see will look similar to the screenshot below:

Once you have clicked on a topic that you want to read and possibly interact with, you will see something similar to the following screenshot:

After you click on the “+ New Topic” button, the following window will pop-up at the bottom of your browser:

The third icon at the top right of the homepage opens a Menu that you can use to quickly navigate to different parts of the Frogger Pond Community.

When you click on your profile picture at the top right of the homepage, a small window that gives you quick access to (from left to right) Notifications (bell icon), Bookmarks (bookmark icon), Messages (envelope icon), and Preferences (person icon) opens up:

If you click on the upside-down caret at the bottom of notifications, bookmarks, or messages or click on any item in the preferences list, you will be taken to that area of your account. In the following screenshot, we have gone to the Messages section of the user account:

Using a platform you may not be familiar with can be overwhelming. To help new users get started, we recommend reading the following items. These are also mentioned in the “Welcome to Optilogic!” message all new users of the Frogger Pond Community receive.
We look forward to your questions & contributions over at the Frogger Pond!
There are two methods for establishing a secure connection to the Optilogic platform:
An App key is a code that can be linked to your account and will not expire. API keys are generated with code and only last for one hour before they expire. Both keys can be useful depending on how you wish to access the platform. Without either an App Key or an API Key you will not be able to run any API endpoints.
Log in to the Optilogic website and click on your name in the top right corner, then click on “Account.”

Click on the “App Key Management” tab; from there, name your App Key and click on the “Create Key” button.

At this point you may copy your App Key to be used for authentication purposes.
To generate an API Key you will need to leverage Python and the following instructions.
In a Python file, copy and paste this code and replace the USERNAME and PASSWORD with your own. Make sure to remove both sets of {{ }} curly brackets so that it looks like this: headers = {'X-USER-ID': 'CMorrell'}
import requests

url = 'https://api.optilogic.app/v0/r…'

headers = {
    'X-USER-ID': '{{user_id}}',
    'X-USER-PASSWORD': '{{user_password}}'
}

response = requests.request('POST', url, headers=headers)
print(response.text)

The result of this code will be an API key that can be used for authentication.
When running geocoding through the default Mapbox provider, all of the available location data from the Customers, Facilities, and Suppliers tables will be used to try and determine the latitude and longitude coordinates. Mapbox uses all of these components to perform the best mapping possible and returns latitude/longitude coordinates along with a confidence score. By default, Cosmic Frog will only accept results with a confidence score of 100. You can optionally turn this restriction off, in which case the result with the top confidence score returned by Mapbox will be used.

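The acceptance logic described above can be sketched in a few lines of Python. This is purely illustrative: the candidate list and its "confidence" field are made-up stand-ins for what a geocoding provider might return, not the actual Cosmic Frog implementation:

```python
def pick_coordinates(candidates, require_perfect=True):
    """Return (lat, lon) of the best geocoding candidate, or None.

    With require_perfect=True only a confidence score of 100 is accepted
    (mirroring Cosmic Frog's default); with it off, the top-scoring
    candidate is used regardless of its score.
    """
    if not candidates:
        return None
    best = max(candidates, key=lambda c: c["confidence"])
    if require_perfect and best["confidence"] < 100:
        return None
    return best["latitude"], best["longitude"]
```

With the default behavior, a 95-confidence match is rejected and the location stays ungeocoded; with the restriction turned off, that same match would be accepted as the best available.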
More information on how Mapbox calculates latitude and longitude coordinates can be found here: Mapbox Geocoding Documentation.
If you’d like to use an alternate provider instead of Mapbox, setup instructions can be found here: Using Alternate Geocoding Providers.
Every account holder has access to create the Global Supply Chain Strategy demo model. Following is an overview of the features of the model (and of Cosmic Frog).
If you wish to build the model instead, please follow the instructions located here: Build Your First Cosmic Frog Model
When running Cosmic Frog models and other jobs on the Optilogic platform, cloud resources are used. Usage of these resources is billed based on the billing factor of the resource used for the job. Each Optilogic customer has an amount of cloud compute hours included in their Master License Agreement (MLA). Users may want to check how many of these hours have been used up and in this documentation 2 ways to do so will be covered. In the last section we will touch on how to best track hours at the team/company level.
The first option for hours tracking that will be covered is through the Usage tab in the user’s Account:

If a user is asked by their manager to report the hours they have used on the Optilogic platform, they can go here and use the Custom Time Window Preset option to align the start and end date of the reporting period with the dates of the MLA. They can then report back the number shown as the Total Billed Compute Time (box 4 in the above screenshot).
Through the Run Manager application on the Optilogic platform, users can also analyze the jobs they have run, including retrieving the Total Billed Compute Time:

After clicking on the View Charts icon, a screen similar to the following screenshot will be shown:


If a user needs to report their hours used on the Optilogic platform, they can download this jobs.csv file and:
Currently, only tracking of usage hours at the individual user level is available as described above. To get total team or company usage, a manager can ask their users to use 1 of the above 2 methods to report their Total Billed Compute Time and the manager can then add these up to get the total used hours so far. Tracking at the team/company level is planned to be made available on the Optilogic platform later in 2024.
With Intelligent Greenfield Analysis (the Triad engine in Cosmic Frog), you have control over several different solve settings. For ease of use with scenario modeling, these have been placed in a dedicated table called Greenfield Settings. This allows for quick scenario building that leverages the column names. We will cover the settings which can be configured on the Greenfield Settings table and show an example of how scenarios can be used to change these settings.
The following screenshot shows the Greenfield Settings table:

An explanation of each setting is as follows:
Note that the “Getting Started with Intelligent Greenfield Analysis” help article contains a visual explanation of customer clustering too.
Finally, we will look at an example where a scenario item changes a Greenfield setting:
