Cosmic Frog supports importing and exporting both CSV and Excel files directly through the application. In this documentation we will cover how users can import and export data into and out of Cosmic Frog, illustrated with multiple examples.
There are 2 methods available to users for importing Excel/CSV data into Cosmic Frog’s input tables: replace and upsert.
Pointers on how the data to be imported needs to be formatted will be covered first, including some tips and call-outs of specifics to keep in mind when using the upsert import method. Next, the steps to import a CSV/Excel file will be walked through.
Data is mapped from CSV/Excel files by matching column names to the target table’s columns, and by matching the file name (CSV) or worksheet name (Excel) to the table name. For example, data in a worksheet named Customers with columns named Customer Name and Country will be imported into the matching columns of the Customers table:
Data preparation tips:

CSV vs Excel: a CSV file only has 1 “worksheet”, so it can only contain data to be imported into 1 table, whereas an Excel file can have multiple worksheets with data to be imported into different tables in Cosmic Frog.
Please take note of how existing records are treated when using the upsert import method to import to a table which already has some data in it:
We will illustrate these behaviors through several examples too.
Users can import 1 or multiple CSV or Excel files simultaneously; please take note of how the import works in the following situations:
Once ready to import the prepared CSV/Excel file(s), the user has 2 ways of accessing the import and export options: from the File menu in the toolbar and from the right-click context menu of an input table. Importing a file from the File menu looks like this:

And when using the right-click context menu the steps to import a file are as follows:

When using the replace import method, a confirmation message is shown, on which the user can click Import to continue the import or Cancel to abort.
Next, a file explorer window opens in which the user can browse to and select the CSV/Excel file(s) to import:

Once the import starts, a status message shows at the top of the active table:

The Model Activity log will also have an entry for each import action:

The user can see the results of the import by opening and inspecting the affected input table(s), and by looking at the row counts for the tables in the input tables list, outlined in green in this screenshot:

A common way to start building a new model in Cosmic Frog is to make use of the replace import method to populate multiple tables simultaneously with data from Excel or CSV files. These files have typically been prepared from ERP extracts which have been manipulated to match the Cosmic Frog table and column names. This way, users do not need to enter data manually into the Cosmic Frog input tables, which would be very laborious. Note that it can be helpful to first export empty tables from a new, empty Cosmic Frog model to have a template to start filling out (see the “Exporting to CSV/Excel Files” section further below on how to do this).
Starting with an empty new model in Cosmic Frog:

The user has prepared the following Excel (.xlsx) file:

After importing this file into Cosmic Frog, we notice that the Customers, Facilities and Products tables now have row counts that match the number of records we had in the Excel file that was used for the import, and we can open the individual tables to see the imported records:

Consider a user who is modelling a sports equipment company and has populated the Products table of a Cosmic Frog model with 8 products as follows:

After working with the model for a while, the user realizes a few things:
As item number 1 will change the product names, a column that is part of the primary key of the Products table, the user will need to use the replace import method to make these changes, as the upsert method does not change the values of columns that are part of the primary key. The following is the .xlsx file the user prepares to replace the data in the Products table:

After importing the file using the replace method, the Products table looks like this:

We see the records are exactly the same as those contained in the Products.xlsx file that was imported, and the row count for the Products table has correctly gone up to 10 with the 2 new products added.
Continuing from the Products table in the last screenshot above, the user now wants to make a few additional changes, as follows:
To make these changes to the Products table, the user prepares the following Products file to be upserted to the Products table, where the green numbers in the screenshot below match the items described in the bullet point list directly above:

After upserting this file into the Products table, it contains the following records; the changed/added ones are listed at the bottom:

In the boxes outlined in green we see that all the expected changes and the insertion of the 1 new record have been made.
Let us also illustrate what happens when files with invalid/missing data are imported. We will use the replace import method for this example, but similar results will be seen when using the upsert method. The following screenshot shows a Products table that has been prepared in Excel, where we can already see several issues: a blank Product Name, a negative value for Unit Price, etc.

After this file is imported to the Products table using the replace method, the Products table will look as follows:

The cells that are outlined in red contain invalid values. Hovering over each cell will show a tooltip message describing the problem.
For tables with many records, it may be hard to find the red-outlined fields manually. To help with this, there is a standard filter the user can apply that shows all records that have 1 or more input data errors:

In conclusion, Cosmic Frog will let a user import invalid data, and then helps the user identify the data issues with the red outlines, hover-over tooltips, and the Show Input Data Errors filter.
Consider the following Transportation Policies table:

There is now a change whereby all racket products need to be shipped from MFG_1 by Parcel for a fixed cost of $50. The user creates 2 Named Filters (see the Named Filters in Cosmic Frog help center article) in the Products table: 1 named Rackets that selects all racket products (those with a product name starting with FG_Racket) and 1 named AllExceptRackets that selects all non-racket products (those whose product name does not contain racket). Next, the user prepares the following TransportationPolicies.csv file to upsert into the Transportation Policies table, with the intention to update the first 2 records in the existing table to be specific to the AllExceptRackets products and to add 2 new ones for the Rackets products:

The result of using this file to upsert to the Transportation Policies table is as follows:

This example shows that users need to be mindful of which fields are part of the table’s primary key and remember that values of primary key fields cannot be changed by the upsert import method. An example workflow that will achieve the desired changes to the Transportation Policies table is as follows:
It is possible to export a single table or multiple tables (input and output tables) to CSV or Excel from Cosmic Frog. Similar to importing data from CSV/Excel, the user can access the export options in 2 ways: from the File menu in the toolbar and from the context menus that come up when right-clicking on tables in the input/output/custom tables lists.
Please note:
The steps to export multiple tables to an Excel file are as follows:

Once the export starts, the following message appears at the top of the active table:

Once the export is complete, the exported file can be found in the folder where the user’s downloaded files are saved:

When exporting multiple tables to Excel or CSV, the downloaded file will be a .zip file with an automatically generated name based on the model’s Cosmic Frog ID. Extracting the zip file reveals an .xlsx file of the same name, which can be opened in Excel:

These are the steps to export multiple tables to CSV:

When the export starts, the same “File is exporting…” message as shown in the previous section will show at the top of the active table. Once the export process is finished, the exported file can again be found in the folder where the user’s downloaded files are saved:

The file is again a zip file with the same name based on the model’s Cosmic Frog ID, appended with (1), as there is already a zip file of the same name in the Downloads folder from the previous export to Excel. Unzipping the file creates a new sub-folder of the same name in the Downloads folder:

Exporting a single table to Excel can also be done from the File menu, in the same way as multiple tables are exported to Excel, which was shown above in the “Export Multiple Tables to Excel” section. Now, we will show the second way of doing this by using the context menu that comes up when right-clicking on a table:

When the export starts, the same “File is exporting…” message as shown above will show at the top of the active table. Once the export process is finished, the exported file can again be found in the folder where the user’s downloaded files are saved:

The name of the exported file matches that of the table that was exported.
Exporting a single table to CSV can also be done from the File menu, in the same way as multiple tables are exported to CSV, which was shown above in the “Export Multiple Tables to CSV” section. Now, we will show the second way of doing this by using the context menu that comes up when right-clicking on a table:

When the export starts, the same “File is exporting…” message as shown above will show at the top of the active table. Once the export process is finished, the exported file can again be found in the folder where the user’s downloaded files are saved:

For single tables exported to CSV, the name of the file is the same as the name of the exported table. If the Cosmic Frog table was filtered, the file name is appended with “_filtered”, as it is here, to remind the user that only the filtered rows are contained in the exported file.
Tax systems can be complex; those in Greece, Colombia, Italy, Turkey, and Brazil, for example, are considered to be among the most complex. It can, however, be important to include taxes, whether as a cost, a benefit, or both, in supply chain modeling, as they can have a big impact on sourcing decisions and therefore on overall costs. Here we will showcase an example of how Cosmic Frog’s User Defined Variables and User Defined Costs can be used to model Brazilian ICMS tax benefits and take these into account when optimizing a supply chain.
The model covered in this documentation is the “Brazil Tax Model Example”, which was put together by Optilogic’s partner 7D Analytics. It can be downloaded from the Resource Library. Besides the Cosmic Frog model, the Resource Library content also links to the “Cosmic Frog – BR Tax Model Video”, which was also put together by 7D Analytics.
A helpful additional resource for those unfamiliar with Cosmic Frog’s user defined variables, costs, and constraints is this “How to use user defined variables” help article.
In this documentation the setup of the example model will first be briefly explained. Next, the ICMS tax in Brazil will be discussed at a high level, including a simplified example calculation. In the third section, we will cover how ICMS tax benefits can be modelled in Cosmic Frog. And finally, we will look at the impact of including these ICMS tax benefits on the flows and overall network costs.
One quick note upfront: the screenshots of Cosmic Frog tables used throughout this help article may look different from the same model in the user’s account after taking it from the Resource Library. This is because columns have been moved or hidden and grids have been filtered/sorted in specific ways to show only the most relevant information in these screenshots.
In this example model, 2 products are included: Prod_National represents products that are made within Brazil at the MK_PousoAlegre_MG factory, and Prod_Imported represents products that are imported; the latter is supplied from SUP_Itajai_SC within the model, representing the seaport where imported products arrive. There are 6 customer locations, which are in the biggest cities in Brazil; their names start with CLI_. There are also 3 distribution centers (DCs): DC_Barueri_SP, DC_Contagem_MG, and DC_FeiraDeSantana_BA. Note that the 2-letter suffixes in the location names are the abbreviations of the states these locations are in. Please see the next screenshot, where all model locations are shown on a map of Brazil:

The model’s horizon is all of 2024 and the 6 customers each have demand for both products, ranging from 100 to 600 units. The SUP_ location (for Prod_Imported) and MK_ location (for Prod_National) replenish the DCs with the products. Between the DCs, some transfers are allowed too. The demand at the customer locations can be fulfilled by 1, 2 or all 3 DCs, depending on the customer. The next screenshot of the Transportation Policies table (filtered for Prod_National) shows which procurement, replenishment, and customer fulfillment flows are allowed:


For the other product modelled, Prod_Imported, the same customer fulfillment, DC-DC transfer, and supply options are available, except:
In Brazil, the ICMS tax (Imposto sobre Circulação de Mercadorias e Serviços, or Tax on the Circulation of Goods and Services) is levied by the states. It applies to the movement of goods, transportation services between states or municipalities, and telecommunication services. The rate varies and depends on the state and the product.
When a company sells a product, the sales price includes ICMS, which results in an ICMS debit for the company (the company owes this to the state). Likewise, when purchasing or transferring product, the ICMS is included in what the company pays the supplier. This creates ICMS credit for the company. The difference between the ICMS debits and credits is what the company will pay as ICMS tax.
The next diagram shows an ICMS tax calculation example, where the company also has a 55% tax benefit, which is a discount on the ICMS it needs to pay.

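As a simplified illustration with assumed figures (the diagram uses its own numbers): if a company records R$ 180,000 of ICMS debits on its sales and R$ 100,000 of ICMS credits on its purchases and transfers, it owes the difference of R$ 80,000 in ICMS tax; with a 55% tax benefit, it would pay 80,000 × (1 − 0.55) = R$ 36,000.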
In order to include ICMS tax benefits in a model, we need to be able to calculate ICMS debits and credits based on the amount of flow between locations in different states, for both national and imported products. As different states and different products can have different ICMS rates, we need to define these individual flow lanes as variables and apply the appropriate rate to each. This can be done by utilizing the User Defined Variables and User Defined Costs input tables, which can be found in the “Constraints” section of the Cosmic Frog input tables, shown in the screenshot below (here the user entered the search term “userdef” to filter the list down to these 2 tables):

In the User Defined Variables table, we will define 3 variables related to DC_Contagem_MG: one that represents the ICMS Debits, one that represents the ICMS Credits, and one that represents the ICMS Balance (= ICMS Debits – ICMS Credits) for this DC. The ICMS Debits and ICMS Credits variables each have multiple terms, where each term represents a flow out of (debits) or into (credits) the Contagem DC. Let us first look at the ICMS Debits variable:

Still looking at the same top records that define the DC_Contagem_MG|ICMS_Debit variable, but freezing the Variable Name and Term Name columns and scrolling right, we can see more of the columns in the User Defined Variables table:

Note that there are quite a few custom columns in this table (not shown in the screenshots; can be added through Grid > Table > Create Custom Column), which were used to calculate the ICMS rates outside of the model. These are helpful to keep in the model, should changes need to be made to the calculations.
Next, we will have a look at the ICMS Credit variable, which is made up of 3 terms, where each term represents a possible supply/replenishment flow into the Contagem DC:

The last step in the User Defined Variables table is to combine the ICMS Credit and ICMS Debit variables to calculate the ICMS balance:

To finalize the setup, we need to add 1 record to the User Defined Costs table, where we will specify that the company has a 55% discount (tax incentive) for the ICMS it pays relating to the Contagem DC:

As mentioned in the previous section, all records in the User Defined Variables and User Defined Costs tables have their Status set to Exclude. This way, when the Baseline scenario is run, the ICMS tax incentive is not included, and the network will be optimized just based on the costs included in the model (in this case only transportation costs). We want to include the ICMS tax incentive in a scenario and then compare the outputs with the Baseline scenario. This “IncludeDCMGTaxBenefit” scenario is set up as follows:

Next, we have a look at the second scenario item that is part of this scenario:

With the scenario set up, we run a network optimization (using the Neo engine) on both scenarios and then first look in the Optimization Network Summary output table:

Notice that, as expected, the Baseline scenario only contains transportation costs, while the IncludeDCMGTaxBenefit scenario also contains user defined costs, which represent the calculated ICMS tax benefit and have a negative value. So, overall, the IncludeDCMGTaxBenefit scenario has about R$ 331k lower total cost compared to the Baseline scenario, even though the transportation costs are close to R$ 47k higher. Since the transportation costs differ between the 2 scenarios, we expect that some of the flows have changed.
There are 3 network optimization output tables that contain the outputs related to User Defined Variables and Costs:

We will first discuss the Optimization User Defined Variable Term Summary output table:

The Optimization User Defined Variable Summary output table contains the outputs at the variable level (i.e. the individual terms of the variables have been aggregated):

Finally, the Optimization User Defined Cost Summary output table shows the cost based on the 55% benefit that was set:

The DC_Contagem_MG_TaxIncentive benefit is calculated from the DC_Contagem_MG|ICMS_Balance variable, where the Variable Value of R$ 686,980 is multiplied by -0.55 to arrive at the Cost value of R$ -377,839.
Now that we understand at a high level the cost impact of the ICMS tax incentive and the details of how this was calculated, let us look at more granular outputs, starting with looking at the flows between locations. Navigate to the Maps module within Cosmic Frog and open the maps named Baseline and Include DC MG Tax Benefit, which show outputs from the Baseline and IncludeDCMGTaxBenefit scenarios, respectively. The next 2 screenshots show the flows from DCs to customer locations: Baseline flows in the top screenshot and scenario “Include DC MG Tax Benefit” flows in the bottom screenshot:


We see that in the Baseline the customer in Rio de Janeiro is served by the DC in Sao Paulo. This changes in the scenario where the tax benefit is included: now the Rio de Janeiro customer is served by the Contagem DC (located close to Belo Horizonte). The other customer fulfillment flows are the same between the 2 scenarios.
This model also has 2 custom dashboards set up in the Analytics module; the 1. Scenarios Overview dashboard contains 2 graphs:

This Summary graph shows the cost buckets for each scenario as a bar chart. As discussed when looking at the Optimization Network Summary output table, the IncludeDCMGTaxBenefit scenario has an overall lower cost due to the tax benefit, which offsets the increased transportation costs as compared to the Baseline scenario.

This Site Summary bar chart shows the total outbound quantity for each DC / Factory / Supplier by scenario. We see that the outbound flow for the DC in Barueri is reduced by 500 units in the IncludeDCMGTaxBenefit scenario as compared to the Baseline scenario, whereas the Contagem DC has an increased outbound flow, from 1,000 to 2,500 units. We can examine these shifts in further detail in the second custom dashboard named 2. Outbound Flows by Site, as shown in the next 2 screenshots:

This first screenshot of the dashboard shows the amount of flow from the 3 DCs and the factory to the 6 customer locations. As we already noticed on the map, the only shift here is that the Rio De Janeiro customer is served by the Barueri DC in the Baseline scenario and this changes to it being served by the Contagem DC in the IncludeDCMGTaxBenefit scenario.

Scrolling further right in this table, we see the replenishment flows from the 3 DCs and the Factory to the 3 DCs. There are some more changes here where we see that the flow from the factory to the Barueri DC is reduced by 500 units in the scenario, whereas the flow from the factory to the Contagem DC is increased by 500 units. In the Baseline, the Barueri DC transferred a total of 1,000 units to the other 2 DCs (500 each to the Contagem and Feira de Santana DCs), and the other 2 DCs did not make DC transfers. In the Tax Benefit scenario, the Barueri DC only transfers to the Contagem DC, but now for 1,500 units. We also see that the Contagem DC now transfers 500 units to the Feira de Santana DC, whereas it did not make any transfers in the Baseline scenario.
We hope this gives you a good idea of how taxes and tax incentives can be considered in Cosmic Frog models. Give it a go and let us know of any feedback and/or questions!
Utilities enable powerful modelling capabilities for use cases like integration with other services or data sources, repeatable data transformations, or anything else that can be supported by Python! System Utilities are available as a core capability in Cosmic Frog for use cases like LTL rate lookups, TransitMatrix time & distance generation, and copying items like Maps and Dashboards from one model to another. More useful System Utilities will become available in Cosmic Frog over time. Some of these System Utilities are also available in the Resource Library, where they can be downloaded, customized, and made available to modelers for specific projects or models. In this Help Article we will cover both how to use System Utilities and how to customize and deploy Custom Utilities.
The “Using and Customizing Utilities” resource in the Resource Library includes a helpful 15-minute video on Cosmic Frog Model Utilities and users are encouraged to watch this.
In this Help Article, System Utilities will be covered first, before discussing the specifics of creating one’s own Utilities. Finally, how to use and share Custom Utilities will be explained as well.
Users can access utilities within Cosmic Frog by going to the Utilities section via the Module Menu drop-down:

Once in the Utilities section, the user will see the list of available utilities:

The appendix of this Help Article contains a table of all System Utilities and their descriptions.
Utilities vary in complexity by how many input parameters a user can configure, ranging from those where no parameters need to be set at all to those where many can be set. The following screenshot shows the Orders to Demand utility, which does not require any input parameters to be set by the user:

The Copy map to a model utility shown in the next screenshot does require several parameters to be set by the user:

When the Run Utility button has been clicked, a message briefly appears beneath it:

Clicking on this message will open the Model Activity pane to the right of the tab(s) with open utilities:


Users will not only see activities related to running utilities in the Model Activity list. Other actions that are executed within Cosmic Frog will be listed here too, for example when the user has geocoded locations by using the Geocode tool on the Customers / Facilities / Suppliers tables, or when the user makes a change in a master table and chooses to cascade these changes to other tables.
Please note that the following System Utilities have separate Help Articles where they are explained in more detail:
The utilities that are available in the Resource Library can be downloaded by users and then customized to fit the user’s specific needs. Examples include changing the logic of a data transformation or applying similar logic to a different table. Users may even build their own utilities entirely. If a user updates a utility or creates a new one, they can share it back with other users so they can benefit from it as well.
Utilities are Python scripts that follow a specific structure, which will be explained in this section. They can be edited directly in the Atlas application on the Optilogic platform, or users can download the Python file that is used as a starting point and edit it using an IDE (Integrated Development Environment) installed on their computer. A text editor geared towards coding, like for example Visual Studio Code, will work fine too for most. An advantage of working locally is that the user can take advantage of code completion features (auto-completion while typing, showing what arguments functions need, catching incorrect syntax/names, etc.), known as IntelliSense in Visual Studio Code, by installing an extension like Microsoft’s Python extension. The screenshots of the Python files underlying the utilities that follow in this documentation were taken while working with them in Visual Studio Code locally, on a machine that has the Python extension installed.
A great resource on how to write Python scripts for Cosmic Frog models is this “Scripting with Cosmic Frog” video. In this video, the cosmicfrog Python library, which adds specific functionality to the existing Python features to work with Cosmic Frog models, is covered in some detail.
We will start by looking at the Python file of the very simple Hello World utility. In this first screenshot, the parts that can stay the same for all utilities are outlined in green:

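As a companion to the screenshot, here is a minimal sketch of what such a utility script can look like, based on the details/run structure described in this section. The exact function signatures and metadata format the platform expects may differ, so treat this as illustrative and refer to the downloaded Hello World script for the authoritative version:

    # Minimal utility sketch following the details/run structure described above.
    # Names, signatures, and the metadata format are illustrative assumptions.

    def details():
        # Metadata Cosmic Frog reads to list the utility; this one takes no parameters.
        return {
            "name": "Hello World",
            "description": "Writes a greeting to the utility's log output.",
            "params": [],  # an empty list: the user sets nothing before running
        }

    def run(params):
        # The actions performed when the user clicks Run Utility.
        print("Hello, World!")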
Next, onto the parts of the utility’s Python script that users will want to update when customizing / creating their own scripts:

Now, we will discuss how input parameters, which users can then set in Cosmic Frog, can be added to the details function. After that we will cover different actions that can be added to the run function.
If a utility needs to be able to take any inputs from a user before running it, these are created by adding parameters in the details function of the utility’s Python script:

We will take a closer look at a utility that uses parameters and map the arguments of the parameters back to what the user sees when the utility is open in Cosmic Frog; see the next 2 screenshots, where the numbers in the script screenshot are matched to those in the Cosmic Frog screenshot to indicate which code produces which part of the utility’s interface. These screenshots use the Copy dashboard to a model utility, of which the Python script (Copy dashboard to a model.py) was downloaded from the Resource Library.

Note that Python lists are 0-indexed, meaning that the first parameter (Destination Model in this example) is referenced by typing params[0], the second parameter (Replace or Append dashboards) by typing params[1], etc. We will see this in the code when adding actions to the run function below too.
Now let’s have a look at how the above code translates to what a user sees in the Cosmic Frog user interface for the Copy dashboard to a model System Utility (note that the numbers in this screenshot match with those in the above screenshot):

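Putting the two screenshots together, the following hedged sketch shows a details function that defines two parameters and a run function that reads them back by index; the parameter attribute names are illustrative assumptions, modeled on the Copy dashboard to a model script discussed above:

    def details():
        # Two user-facing input parameters; attribute names are assumptions.
        return {
            "name": "Copy dashboard to a model (sketch)",
            "description": "Copies a dashboard from this model to a destination model.",
            "params": [
                {"name": "Destination Model", "type": "text"},
                {"name": "Replace or Append dashboards", "type": "text"},
            ],
        }

    def run(params):
        # Python lists are 0-indexed: params[0] is the first parameter defined above.
        destination_model = params[0]   # "Destination Model"
        replace_or_append = params[1]   # "Replace or Append dashboards"
        print(f"Copying dashboards to {destination_model} ({replace_or_append})")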
The actions a utility needs to perform are added to the run function of the Python script. These will differ for different types of utilities. We will cover the actions the Copy dashboard to a model utility uses at a high level and refer to Python documentation for users interested in understanding all the details. There are a lot of helpful resources and communities online where users can learn everything there is to know about using & writing Python code. A great place to start is the Python for Beginners page on python.org, which also mentions how more experienced coders can get started with Python. Also note that text in green font following a hash sign (#) is a comment, used to add context to the code.



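At a high level, the run function of a utility that reads from and writes to model tables might look like the sketch below. It uses the FrogModel helper from the cosmicfrog library mentioned earlier; the exact names and signatures shown here (FrogModel, read_table, write_table) are assumptions to verify against that library’s documentation:

    from cosmicfrog import FrogModel  # Optilogic's library for working with Cosmic Frog models

    def run(model_name, params):
        # Connect to the model the utility is run against (signature assumed)
        model = FrogModel(model_name)

        # Read an input table into a pandas DataFrame, transform it, write it back
        customers = model.read_table("customers")
        customers["status"] = "Include"
        model.write_table("customers", customers)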
For a custom utility to be showing in the My Utilities category of the utilities list in Cosmic Frog, it needs to be saved under My Files > My Utilities in the user’s Optilogic account:

Note that if a Python utility file is already in the user’s Optilogic account, but in a different folder, the user can click on it and drag it to the My Utilities folder.
For utilities to work, a requirements.txt file that contains only the text cosmicfrog needs to be placed in the same My Files > My Utilities folder (if it is not there already):

A customized version of the Copy dashboard to a model utility was uploaded here, and a requirements.txt file is present in the same folder too.
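For reference, the entire contents of that requirements.txt file is this single line:

    cosmicfrog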
Once a Python utility file is uploaded to My Files > My Utilities, it can be accessed from within Cosmic Frog:

If users want to share a custom utility with other users, they can do so by right-clicking on it and choosing the “Send Copy of File” option:

The following form then opens:

When a custom utility has been shared with you by another user, it will be saved under the Sent To Me folder in your Optilogic account:

Should you have created a custom utility that you feel many other users can benefit from, and you are allowed to share it outside of your organization, then we encourage you to submit it to Optilogic’s Resource Library. Click on the Contribute button at the top left of the Resource Library and then follow the steps outlined in the “How can I add Python Modules to the Resource Library?” section towards the end of the “How to use the Resource Library” help article.
Utility names and descriptions by category:
Leapfrog helps Cosmic Frog users explore and use their model data via natural language. View data, make changes, create & run scenarios, analyze outputs, learn all about the Anura schema that underlies Cosmic Frog models, and a whole lot more!
Leapfrog combines an extensive knowledge of PostgreSQL with the complete knowledge of Optilogic’s Anura data schema, and all the natural language capabilities of today’s advanced general purpose LLMs.
For a high-level overview and short video introducing Leapfrog, please see the Leapfrog landing page on Optilogic’s website.
In this documentation, we will first get users oriented on where to find Leapfrog and how to interact with it. In the section after, Leapfrog’s capabilities will be listed out with examples of each. Next, the Tips & Tricks section will give users helpful pointers so they can get the most out of Leapfrog. Finally, we will step through the process of building, running, and analyzing a Cosmic Frog model start to finish by only using Leapfrog!
Dive in if you’re ready to take the leap!
Start using Leapfrog by opening the module within Cosmic Frog:

Once the Leapfrog module is open, users’ screens will look similar to the following screenshot:

The example prompts when using the Anura Help LLM are shown here:

When first starting to use Leapfrog, users will also see the Privacy and Data Security statement, which reads as follows:
“Leapfrog AI Training: Optilogic does not use your model data to train Leapfrog. We do collect and store conversational data so it can be accessed again by the user, as well as to understand usage patterns and areas of strength/weakness for the LLM. Included in this data: natural language input prompts, text and SQL responses, as well as feedback from users. This information is maintained by Optilogic, not shared with third parties, and all of the conversation data is subject to the data security and privacy terms of the Optilogic platform.”

This message will stay visible within Leapfrog whenever it is being used, unless user clicks on the grey cross button on the right to close the message. Once closed, the message will not be shown again while using Leapfrog.
Conversation history is stored on the platform at the user level - not in the model database - so it does not get shared when a model is shared. Note that if you are working in a Team rather than in your My Account (see documentation on Teams on the Optilogic platform here), the Leapfrog conversations you are creating will be available to the other team members when they are working with the same model.
As mentioned in the previous section, Leapfrog currently makes use of 2 large language models (LLMs): Text2SQL and Anura Help (also referred to as Anura Aficionado or A2). They will be explained in some more detail here. There is also an appendix to this documentation where Leapfrog questions and responses are listed for a few example personas, showcasing how some users may predominantly use one model while others may switch back and forth between them. Of course, when unsure, users can try a specific prompt using both LLMs to see which provides the most helpful response.
Please note that in the future users will not need to indicate which LLM they want to run a prompt against, as Leapfrog will recognize which one is most suitable based on the prompt.
The Text2SQL LLM combines extensive knowledge of PostgreSQL with Optilogic’s Anura data schema, and all the natural language capabilities of today’s advanced general purpose LLMs. It has been further fine-tuned on a large set of prompt-response pairs hand-crafted by supply chain modeling experts. This allows the Text2SQL model to generate SQL queries from natural language prompts.
Prompts for which it is best to use the Text2SQL model often imply an action: “Show me X”, “Add Y”, “Delete Z”, “Run scenario A”, “Create B”, etc. See also the example prompts listed when starting a new conversation and those in the Prompt Library on the Frogger Pond community.
Leapfrog responses using this model are usually actionable: run the returned SQL query to add / edit / delete data, create a scenario or model, run a scenario, geocode locations, etc.
A full list of the capabilities of both LLMs is covered in the section “Leapfrog Capabilities” further below.
Anura Help (also referred to as Anura Aficionado or A2) is a specialized assistant that leverages advanced natural language processing to help users navigate and understand the Anura schema within Optilogic's Cosmic Frog application. The Anura schema is the foundational framework powering Cosmic Frog's optimization, simulation, and risk assessment capabilities. Anura Help eliminates traditional barriers to schema understanding by providing immediate, authoritative guidance for supply chain modelers, developers, and analysts.
Anura Help’s architecture uses the Retrieval Augmented Generation (RAG) approach: based on the natural language prompt, first the most relevant documents of those in its knowledge base are retrieved (e.g. schema details or engine awareness details). Next, it uses them to generate a natural language response.
Use the Anura Help model when wanting to learn about specific fields, tables or engines in Cosmic Frog. Its core capabilities include:
Responses from Leapfrog when using the Anura Help model are text-based and generated from retrieved documents shown in the context section. This context can for example be of the category “column info” where all details for a specific field are listed.
A full list of the capabilities of both LLMs is covered in the section “Leapfrog Capabilities” further below.
The following list compares the 2 LLMs available in Leapfrog today:
Depending on the type of question, Leapfrog’s response to it can take different forms: text, links, SQL queries, data grids, and options to create models, scenarios, scenario items, groups, run scenarios, or geocode locations. We will look at several examples of questions that result in these different types of responses in this section. This is not an exhaustive list; the next section “Leapfrog Capabilities” will go through the types of prompt-response pairs Leapfrog is capable of today.
For our first question, we used the first Text2SQL example prompt “What are the top 3 products by demand?” by clicking on it. After submitting the prompt, we see that Leapfrog is busy formulating a response:

And Leapfrog’s response to the prompt is as follows:


The metadata included here are:
Clicking on the icon with 3 dots again will collapse the response metadata.
This first prompt asked a question about the input data contained in the Cosmic Frog model. Let us now look at a slightly different type of prompt, which asks to change model input:

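A representative query of the kind Leapfrog returns for such a prompt looks like the following; this is a sketch, with the Anura table and column names assumed rather than copied from the actual response:

    -- Increase every demand quantity by 20%
    UPDATE customerdemand
    SET quantity = quantity * 1.2;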
We are going to run the SQL query of the above response to our “Increase demand by 20%” prompt. Before doing so, let’s review a subset of 10 records of the Customer Demand input table (under the Data Module, in the Input Tables section):

Next, we will run the SQL query:

After clicking the Run SQL button at the bottom of the SQL Query section in Leapfrog’s response, it becomes greyed out so it will not accidentally be run again. Hovering over the button also shows text indicating the query was already run:

Note that closing and reopening the model or refreshing the browser will revert the Run SQL button’s state so it is clickable again.
Opening the Customer Demand table again and looking at the same 10 records, we see that the Quantity field has indeed been changed to its previous value multiplied by 1.2 (the first record’s value was 643, and 643 * 1.2 = 771.6, etc.):

Running the SQL query to increase the demand by 20% directly in the master data worked fine as we just saw. However, if we do not want to change the master data, but rather want to increase the demand quantity as part of a scenario, this is possible too:


After navigating to the Scenarios module within our Cosmic Frog model, we can see the scenario and its item have been created:

Note that if desired, the scenario and scenario item names auto-generated by Leapfrog can be changed in the Scenarios module of Cosmic Frog: just select the scenario or item and then choose “Rename” from the Scenario drop-down list at the top.
As a final example of a question & answer pair in this section, let us look at one where we use the Anura Help LLM, and Leapfrog responds with text plus context:



There is a lot of information listed here; we will explain the most commonly used information:
Prompts and their responses are organized into conversations in the Leapfrog module:

Users can organize their conversations with Leapfrog by using the options from the Conversations drop-down at the top of the Leapfrog module:

Users can rate Leapfrog responses by clicking the thumbs up (like) and thumbs down (dislike) buttons and, optionally, providing additional feedback. This feedback is used to continuously improve Leapfrog. Giving a thumbs up to indicate the response is what you expected helps reinforce correct answers from Leapfrog. When a response is not what was expected or is wrong, users can help improve Leapfrog’s underlying LLMs by giving the response a thumbs down. Thumbs-down ratings and additional feedback especially will be reviewed so Leapfrog can learn and become more useful all the time.
When a response is not as expected, as was the case in the following screenshot, the user is encouraged to click the thumbs down button:

After clicking on the Send button, the detailed feedback is automatically added to the conversation:

The next screenshot shows an example where user gave Leapfrog’s response a thumbs up as it was what user expected. This feedback can then be used by Leapfrog to reinforce correct answers. User also had the option to provide detailed feedback again, using any of the following 4 optional tags: Showcase Example, Surprising, Fun, and Repeatable Use Case. In this example, user decided not to give detailed feedback and clicked on the Close button after the detailed feedback form came up:

If you have any additional Leapfrog feedback (or questions) beyond what can be captured here, feel free to send an email to Leapfrog@Optilogic.com. You are also very welcome to ask questions, share your experiences, and provide feedback on Leapfrog in the Frogger Pond Community.
We will now go back to our first prompt “What are the top 3 products by demand” to explore some of the options users have when Data Grids are included in a Leapfrog response, which is the case when Leapfrog’s SQL Query response is a SELECT statement.



When clicking on the Download File button, a zip file with the name of the active Cosmic Frog model appended with an ID, is downloaded to the user’s Downloads folder. The zip contains:

After clicking on Save, the following message appears beneath the Data Grid in Leapfrog’s response:

Looking in the Custom Tables section (#2 in screenshot below) of the Data module (#1 in screenshot below), we indeed see this newly created table named top3products (#3 in screenshot below) with the same contents as the Data Grid of the Leapfrog response:

If we choose to save the Data Grid as a view instead of a table, it goes as follows:

We choose Save as View and give it the name of Top3Products_View. The message that comes up once the view is created reads as follows:

Going to the Analytics module in Cosmic Frog, choosing to add a new dashboard and in this new dashboard a new visualization, we can find the top3products_view in the Views section:

We will go back to the original Data Grid in Leapfrog’s response to explore a few more options user has here:


Please note:
In this section we will list out what Leapfrog is capable of and give examples of each capability. These capabilities include (the LLM each capability applies to is listed in parentheses):
Each of these capabilities will be discussed in the following sections, where a brief description of each capability is given, several example prompts illustrating the capability are listed, and a few screenshots showing the capability are included as well. Please remember that many more example prompts can be found in the Prompt Library on the Frogger Pond community.
Interrogate input and output data using natural language. Use it to check completeness of input data, and to summarize input and/or output data. Leapfrog responds with SELECT Statements and shows a Data Grid preview as we have seen above. Export the data grid or save it as a table or view for further use, which has been covered above already too.
Example prompts:
The following 3 screenshots show examples of checking input data (first screenshot), and interrogating output data (second and third screenshot):



Tell Leapfrog what you want to edit in the input data of your Cosmic Frog model, and it will respond with UPDATE, INSERT, and DELETE SQL Statements. The user can opt to run these SQL Queries to permanently make the change in the master input data. For UPDATE SQL Queries, Leapfrog’s response will also include the option to create a scenario and scenario item that will make the change, which we will focus on in the next section.
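As a sketch of these three statement types (the table and column names here are assumptions based on the Anura schema, not actual Leapfrog output):

    -- Change values in an input table
    UPDATE facilities SET status = 'Exclude' WHERE country = 'USA';

    -- Add a record to an input table
    INSERT INTO products (productname, status) VALUES ('FG_NewProduct', 'Include');

    -- Delete records from an input table
    DELETE FROM customers WHERE city = 'Chicago';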
Example prompts:
The following 3 screenshots show examples of changing values in the input data (first screenshot), adding records to an input table (second screenshot), and deleting records from an input table (third screenshot):



Make changes to input data, but through scenarios rather than by updating the master tables directly. Prompts that result in UPDATE SQL Queries will have a Scenarios part in their responses, and users can create a new scenario that makes the input data change with one click of a button.
Example prompts:
The following 3 screenshots show example prompts with responses from which scenarios can be created: create a scenario which makes a change to all records in 1 input table (first screenshot), create a scenario which makes a change to records in 1 input table that match a condition (second screenshot), and create a scenario that makes changes in 2 input tables (third screenshot):



The above screenshots show examples of Leapfrog responses that contain a Scenarios section and from which new scenarios and scenario items can be created by clicking on the Create Scenario button. In addition to the above, users can also use Leapfrog to manage scenarios by using prompts that specifically create scenarios and/or items and assigning specific scenario items to specific scenarios. These result in INSERT INTO SQL Statements which can then be implemented by using the Run SQL button. See the following 2 screenshots for examples of this, where 1) a new scenario is created and an existing scenario item is then assigned to it, and 2) a new scenario item is created which is then assigned to an already existing scenario:


Leapfrog can create new groups and add group members to new and existing groups. Just specify the group name and which members it needs to have in the prompt and Leapfrog’s response will be one or multiple INSERT INTO SQL Statements.
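A sketch of what such a statement can look like (the column names are assumptions about the Anura Groups table, which as noted below also has fields like Status and Notes):

    -- Add one member to a products group; names are illustrative
    INSERT INTO groups (groupname, grouptype, membername)
    VALUES ('FG_Products', 'Products', 'FG_Racket1');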
Example prompts:
The following 4 screenshots show example prompts of creating groups and group members: 1) creates a new products group and adds products that have names with a certain prefix (FG_) to it, 2) creates a new periods group and adds 3 specific periods to it, 3) creates a new suppliers group and adds all suppliers that are located in China to it, and 4) adds a new member to an existing facilities group, and in addition explicitly sets the Status and Notes field of this new record in the Groups table:




Leapfrog can create a new, blank model. Leapfrog’s response will ask the user to confirm they want to create the new model before creating it. If confirmed, the response will update to contain a link which takes the user to the Leapfrog module in the newly created model in a new tab of the browser.
Example prompts:
Following 2 screenshots show an example where a new model named “FrogsLeaping” is created:


You can ask Leapfrog to kick off any model runs for you. Optionally, you can specify the scenario(s) you want to run, which engine to use, and what resource size to use. For Neo (network optimization) runs, the user can additionally indicate whether the infeasibility check should be turned on. If no scenarios are specified, all scenarios present in the model will be run. If no engine is specified, the Neo engine (network optimization) will be used. If no resource size is specified, S will be used. If for Neo runs it is not specified whether the infeasibility check should be on, it will be off by default.
Leapfrog’s response will summarize the scenario(s) that are to be run, the engine that will be used, the resource size that will be used, and for Neo runs if the infeasibility check will be on or off. If user indeed wants to run the scenario(s) with these settings, they can confirm by clicking on the Run button. If so, the response will change to contain a link to the Run Manager application on Optilogic’s platform, which will be opened in a new tab of the browser when clicked. In the Run Manager, users can monitor the progress of any model runs.
The engines available in Cosmic Frog are:
The resource sizes available are as follows, from smallest to largest: Mini, 4XS, 3XS, 2XS, XS, S, M, L, XL, 2XL, 3XL, 4XL, Overkill. Guidance on choosing a resource size can be found here.
Example prompts:
The following 2 screenshots show an example prompt where the response is to run the model: only the scenario name is specified, in which case a network optimization (Neo) is run using resource size S with the infeasibility check turned off (False):


Leapfrog's response now indicates the run has been kicked off and provides a link (click on the word "here") to check the progress of the scenario run(s) in the Run Manager.
The next screenshot shows a prompt asking to specifically run Greenfield (Triad) on 2 scenarios, where the resource size to be used is specified in the prompt too:

The last screenshot in this section shows a prompt to run a specific scenario with the infeasibility check turned on:

Leapfrog can find latitude & longitude pairs for locations (customers, facilities, and suppliers) based on the location information specified in these input tables (e.g. Address, City, Region, Country). Leapfrog’s response will ask the user to confirm they want to geocode the specified table(s). If confirmed, the response will change to contain a link which opens, in a new tab of the browser, a Cosmic Frog map showing the locations that have been geocoded.
Example prompts:
Notes on using Leapfrog for geocoding locations:
In the following screenshot, user asks Leapfrog to geocode customers:

As geocoding a larger set of locations can take some time, it may look like the geocoding was not done or done incompletely if looking at the map or in the Customers / Facilities / Suppliers input tables shortly after kicking off the geocoding. A helpful tool which shows the progress of the geocoding (and other tools / utilities within Cosmic Frog) is the Model Activity list:


Leapfrog can teach users all about the Anura schema that underlies Cosmic Frog models, including:
Example prompts:
The following 4 screenshots show examples of these types of prompts & Leapfrog’s responses: 1) ask Leapfrog to teach us about a specific field on a specific table, 2) find out which table to use for a specific modelling construct, 3) understand the SCG to Cosmic Frog’s Anura mapping for a specific field on a specific table, and 4) ask about breaking changes in the latest Anura schema update:




Anura Help provides information around system integration, which includes:
Example prompts:
The following 4 screenshots show examples of these types of prompts & Leapfrog’s responses: 1) ask which tables are required to run a specific engine, 2) find out which engines use a specific table, 3) learn which table contains a certain type of outputs, and 4) ask about availability of template models for a specific purpose:




Leapfrog knows about itself, Optilogic, Cosmic Frog, the Anura database schema, LLMs, and more. Ask Leapfrog questions so it can share its knowledge with you. For most general questions both LLMs will generate the same or a very similar answer, whereas for questions about capabilities, each may only answer what is relevant to it.
Example prompts:
The following 5 screenshots show examples of these types of prompts & Leapfrog’s responses: 1) ask both LLMs about their version, 2) ask a general question about how to do something in Cosmic Frog (Text2SQL), 3) ask Anura Help for the Release Notes, and 4 & 5) ask both LLMs about what they are good at and what they are not good at:





Even though this documentation and the Leapfrog example prompts are predominantly in English, Leapfrog supports many languages, so users can ask questions in the language most natural to them. Where the Leapfrog response is in text form, it will respond in the language the question was asked in. Other response types, like a standard message with a link or the names of scenarios and scenario items, will be in English.
The following 3 screenshots show: 1) a list of languages Leapfrog supports, 2) a French prompt to increase demand by 20%, and 3) a Spanish prompt asking Leapfrog to explain the Primary Quantity UOM field on the Model Settings table:



To get the most out of Leapfrog, please take note of these tips & tricks:


After this is turned on, you can start using it by pressing the keyboard’s Windows key + H. A bar with a microphone which shows messages like “initializing”, “listening”, “thinking” will show up at the top of your active monitor:

Now you can speak into your computer’s microphone, and your spoken words will be turned into text. If you put your cursor in Leapfrog’s question / prompt area, click on the microphone in the bar at the top so your computer starts listening, and then say what you want to ask Leapfrog, it will appear in the prompt area. You can then click on the send icon to submit your prompt / question to Leapfrog.
The following screenshots show several examples of how one can build on previous prompts and responses and try to re-direct Leapfrog as described in bullets 6 and 7 of the Tips & Tricks above. In the first example user wants to delete records from an input table whereas Leapfrog’s initial response is to change the Status of these records to Exclude. The follow-up prompt clarifies that user wants to remove them. Note that it is not needed to repeat that it is about facilities based in the USA, which Leapfrog still knows from the previous prompt:

In the following example shown in the next 3 screenshots, user starts by asking Leapfrog to show the 2 DCs with the highest throughput. The SQL query response only looks at Replenishment flows, but user wants to include Customer Fulfillment flows too. Also, the SQL Query does not limit the list to the top 2 DCs. In the follow-up prompt the user clarifies this (“Like that, but…”) without needing to repeat the whole question. However, Leapfrog only picks up on the first request (adding the Customer Fulfillment flows), so in the third prompt user clarifies further (again: “Like that, but…”), and achieves what they set out to do:



In the next 2 screenshots we see an example of first asking Leapfrog to show outputs that meet certain criteria (within 3%), and then essentially wanting to ask the same question but with the criteria changed (within 6%). There is no need to repeat the first prompt, it suffices to say something like “How about with [changed criteria]?”:


When Leapfrog only does part of what a user intends to do, it can often still be achieved in multiple steps. See the following screenshots where user intended to change 2 fields on the Production Count Constraints table and initially Leapfrog only changes one. The follow-up prompt simply consists of “And [change 2]”, building on the previous prompt. In the third prompt user was more explicit in describing the 2 changes and then Leapfrog’s response is what user intended to achieve:


Here we will step through the process of building a complete Cosmic Frog demo model, creating an additional scenario, running this new scenario and the Baseline scenario, and interrogating some of the scenarios’ outputs, all by using only Leapfrog.
Please note that if you are trying the same steps using Leapfrog in your Cosmic Frog:
We will first list the prompts that were used to build, run, and analyze the model, and then review the whole process step-by-step through (lots of!) screenshots. Here is the list of prompts that were submitted to Leapfrog (all of them used the Text2SQL LLM):
And here is the step-by-step process shown through screenshots, starting with the first prompt given to Leapfrog to create a new empty Cosmic Frog model with the name “US Distribution”:

Clicking on the link in Leapfrog’s response will take user to the Leapfrog module in this newly created US Distribution model:


In the next prompt (the third one from the list), distribution center (DC) and manufacturing (MFG) locations are added to the Facilities table, and customer locations to the Customers table. Note the use of a numbered list to help Leapfrog break the response up into multiple INSERT INTO statements:

After running the SQL of that Leapfrog response, user has a look in the Facilities and Customers tables and notices that as expected all Latitude and Longitude values are blank:


Since all Facilities and Customers have blank Latitudes and Longitudes, our next (fourth) prompt is to geocode all sites:

Once the geocoding completes (which can be checked in the Model Activity list), user clicks on one of the links in the Leapfrog response. This opens the Supply Chain map of the model in a new tab in the browser, showing Facilities and Customers, which all look to be geocoded correctly:

We can also double-check this in the Customers and Facilities tables, see for example next screenshot of a subset of 5 customers which now have values in their Latitude and Longitude fields:

For a (network optimization - Neo) model to work, we will also need to add demand. As this is an example/demo model, we can use Leapfrog to generate random demand quantities for us; see this next (fifth) prompt and response:

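A query along the following lines could produce such random demand; this is a sketch with assumed Anura table and column names, not the literal response from the screenshot:

    -- One demand record per customer-product pair, with a random whole-number
    -- quantity between 10 and 1000 (PostgreSQL random() returns a value in [0, 1))
    INSERT INTO customerdemand (customername, productname, quantity)
    SELECT c.customername, p.productname, floor(random() * 991 + 10)::int
    FROM customers c CROSS JOIN products p;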
After clicking the Run SQL button, we can have a look in the Customer Demand input table, where we find the expected 200 records (50 customers which each have demand for 4 products) and eyeballing the values in the Quantity field we see the numbers are as expected between 10 and 1000:

Our sixth prompt sets the model start and end dates, so the model horizon is all of 2025:

Again, we can double-check this after running the SQL response by having a look in the Model Settings input table:

We also need Transportation Policies, the following prompt (the seventh from our list) takes care of this and creates lanes from all MFGs to all DCs and from all DCs to all customers:

We see the 6 enumerated MFG (2 locations) to DC (3 locations) lanes when opening and sorting the Transportation Policies table, plus the first few records of the 150 enumerated DC to customer lanes. No Unit Costs are set so far (blank values):

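Enumerating lanes like this is typically done with a cross join. Here is a hedged sketch; the MFG/DC name prefixes and column names are assumptions based on this example model:

-- Lanes from every MFG to every DC; an analogous statement creates the DC-to-customer lanes
INSERT INTO transportationpolicies (originname, destinationname)
SELECT m.facilityname, d.facilityname
FROM facilities m
CROSS JOIN facilities d
WHERE m.facilityname LIKE 'MFG%' AND d.facilityname LIKE 'DC%';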
Our eighth prompt sets the transportation unit costs on the transportation policies created in the previous step. All use a unit of measure of EA-MI, which means the costs entered are per unit per mile; the cost itself is 1 cent on MFG to DC lanes and 2 cents on DC to customer lanes:

Clicking the Run SQL button will run the 4 UPDATE statements, and we can see the changes in the Transportation Policies input table:

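Two of those four UPDATE statements might look like the following sketch (column names and name prefixes are assumptions; the actual statements in Leapfrog’s response may differ):

-- 1 cent per unit per mile on MFG-to-DC lanes, 2 cents on DC-to-customer lanes
UPDATE transportationpolicies SET unitcost = 0.01, unitcostuom = 'EA-MI' WHERE originname LIKE 'MFG%';
UPDATE transportationpolicies SET unitcost = 0.02, unitcostuom = 'EA-MI' WHERE originname LIKE 'DC%';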
In order to run, the model also needs Production Policies, which the next (ninth) prompt takes care of: both MFG locations can produce all 4 products:

Again, double-checking after running the SQL from the response, we see the 8 expected records in the Production Policies input table:

Our 3 DCs have an upper limit on how much throughput they can handle over the year: 50,000 for the DCs in Reno and Memphis and 100,000 for the DC in Jacksonville. Prompt number 10 sets these:

We can see these numbers appear in the Throughput Capacity field on the Facilities input table after running the SQL of Leapfrog’s response:

We want to explore what happens if the maximum throughput of the DC in Memphis is increased to 100,000; this is what the eleventh prompt asks to do:

Leapfrog’s response has both a SQL UPDATE query, which would change the throughput at DC_Memphis in the Facilities input table, and a Scenarios section. We choose to click on the Create Scenario button so a new scenario is created (Increase Memphis DC Capacity) which will contain 1 scenario item (set_dc_memphis_capacity_to_100000) that sets the throughput capacity at DC_Memphis to 100,000:

Our small demo model is now complete, and we will use Leapfrog (using our twelfth prompt) to run network optimization (using the Neo engine) on the Baseline and Increase Memphis DC Capacity scenarios:

While the scenarios are running, we think about which outputs will be interesting to review, and ask Leapfrog how one can compare customer flows between scenarios (prompt number 13):

This information can come in handy in one of the next prompts to direct Leapfrog on where to look.
Using the link from the previous Leapfrog response where we started the optimization runs for both scenarios, we open the Run Manager in a new tab of the browser. Both scenarios have completed successfully as their State is set to Done:

Looking in the Optimization Network Summary output table, we also see there are results for both scenarios:

In the next few prompts Leapfrog is used to look at outputs of the 2 scenarios that have been run. The prompt (number 14 from our list) in the next screenshot aims to get Leapfrog to show us which customers have a different source in the Increase Memphis DC Capacity scenario as compared to the Baseline scenario:

Leapfrog’s response is almost what we want it to be; however, it has duplicates in the Data Grid. Therefore, we follow our previous prompt up with the next one (number 15), where we ask to see only distinct combinations. Instead of “distinct” we could have also used the word “unique” in our prompt:

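The fix amounts to adding DISTINCT to the SELECT. A hedged sketch of the shape of such a query, where the output table and column names are illustrative rather than the exact ones Leapfrog used:

-- Distinct customer/source combinations per scenario
SELECT DISTINCT scenarioname, destinationname AS customername, originname AS sourcename
FROM optimizationflowsummary
WHERE destinationname LIKE 'CZ%';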
We see that the source for around 11-12 customers changed from the DC in Jacksonville in the Baseline to the DC in Memphis in the Increase Memphis DC Capacity scenario.
Cost comparisons between scenarios are usually interesting too, so that is what prompt number 16 asks about:

We notice that increasing the throughput capacity at DC_Memphis leads to a lower total supply chain cost by about 56.5k USD. Next, we want to see how much flow has shifted between the DCs in the Baseline scenario compared to the Increase Memphis DC Capacity scenario, which is what the last prompt (number 17) asks about:

This tells us that the throughput at DC_Reno is the same in both scenarios, but that increasing the DC_Memphis throughput capacity allows a shift of about 24k units from the DC in Jacksonville to the DC in Memphis (which was at its maximum 50k throughput in the Baseline scenario). This volume shift is what leads to the reduction in total supply chain cost.
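A throughput comparison like the one behind this last prompt can be expressed as a grouped query; here is a hedged sketch with illustrative table/column names:

-- Total outbound flow per DC per scenario
SELECT scenarioname, originname AS dcname, SUM(flowquantity) AS throughput
FROM optimizationflowsummary
WHERE originname LIKE 'DC%'
GROUP BY scenarioname, originname
ORDER BY dcname, scenarioname;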
We hope this gives you a good idea of what Leapfrog is capable of today. Stay tuned for more exciting features to be added in future releases!
Do you have any Leapfrog questions or feedback? Feel free to use the Frogger Pond Community to ask questions, share your experiences, and provide feedback. Or, shoot us an email at Leapfrog@Optilogic.com.
Happy Leapfrogging!
PERSONA: Alex is an experienced supply chain modeler who knows exactly what to analyze but often spends too much time pulling and formatting outputs. They are looking to be more efficient in summarizing results and identifying key drivers across scenarios. While confident in their domain expertise, they want help extracting insights faster without losing control or accuracy. They see AI as a time-saving partner that helps them focus on decision-making, not data wrangling.
USE CASE: After running multiple scenarios in Cosmic Frog, Alex wants to quickly understand the key differences in cost and service across designs. Instead of manually exporting data or writing SQL queries, Alex uses Leapfrog to ask natural-language questions, which saves Alex hours and lets them focus on insight generation and strategic decision-making.
Model to use: Global Supply Chain Strategy (available under Get Started Here in the Explorer).
Prompt #1

Prompt #2

Prompt #3
Prompt #4
Prompt #5

PERSONA: Chris is an experienced supply chain modeler with a well-established, repeatable workflow that pulls data from internal systems to rebuild models every quarter. He relies on consistency in the model schema to keep his automation running smoothly.
USE CASE: With an upcoming schema change in Cosmic Frog, Chris is concerned about disruptions or errors in his process and wants Leapfrog to help provide info on the changes that may require him to update his workflow.
Model to use: any.
Prompt #1
Prompt #2
Prompt #3
PERSONA: Larry Loves LLMs – he wants to use an LLM to find answers.
USE CASE: I need to understand the outputs of this model someone else built. I want to know how many products come from each supplier for each network configuration. Can Leapfrog help with that?
Yes, Leapfrog can help with that! Let's use Anura Help to better understand which tables have that data and then ask Text2SQL to pull the data.
Model to use: any, e.g. Global Supply Chain Strategy (available under Get Started Here in the Explorer).
Prompt #1
Prompt #2

To enable users to build basic Cosmic Frog for Excel Applications that interact directly with Cosmic Frog from within Excel without needing to write any code, Optilogic has developed the Cosmic Frog for Excel Application Builder (also referred to as App Builder in this documentation). In this App Builder, users can build their own workflows using common actions like creating a new model, connecting to an existing model, importing & exporting data, creating & running scenarios, and reviewing outputs. Once a workflow has been established, the App can be deployed so it can be shared with other users. These other users do not need to build the workflow of the App again; they can just use the App as is. In this documentation we will take a user through the steps of a complete workflow build, including App deployment.
You can download the Cosmic Frog for Excel – App Builder from the Resource Library. A video showing how the App Builder is used in a nutshell is included; this video is recommended viewing before reading further. After downloading the .zip file from the Resource Library and unzipping it on your local computer, you will find there are 2 folders included: 1) Cosmic_Frog_For_Excel_App_Builder, which contains the App Builder itself and this is what this documentation will focus on, and 2) Cosmic_Frog_For_Excel_Examples, which contains 3 examples of how the App Builder can be used. This documentation will not discuss these examples in detail; users are however encouraged to browse through them to get an idea of the types of workflows one can build with the App Builder.
The Cosmic_Frog_For_Excel_App_Builder folder contains 1 subfolder and 1 Macro-enabled Excel file (.xlsm):

When ready to start building your own first basic App, open the Cosmic_Frog_For_Excel_App_Builder_v1.xlsm file; the next section will describe the steps a user needs to take to start building.
When you open the Cosmic_Frog_For_Excel_App_Builder_v1.xlsm file in Excel, you will find there are 2 worksheets present in the workbook, Start and Workflow. The top of the Start worksheet looks like this:

Going to the Workflow worksheet and clicking on the Cosmic Frog tab in the ribbon, we can see the actions that are available to us to create our basic Cosmic Frog for Excel Applications:

We will now walk through building and deploying a simple App to illustrate the different Actions and their configurations. This workflow will: connect to a Greenfield model in my Optilogic account, add records to the Customers and CustomerDemand tables, create a new scenario with 2 new scenario items in it, run this new scenario, and then export the Greenfield Facility Summary output table from the Cosmic Frog model into a worksheet of the App. As a last step we will also deploy the App.
On the Workflow worksheet, we will start building the workflow by first connecting to an existing model in my Optilogic account:

The following screenshot shows the Help tab of the “Connect To Or Create Model Action”:

In the remainder of the documentation, we will not show the Help tab of each action. Users are however encouraged to use these to understand what the action does and how to configure it.
After creating an action, the details of it will be added to 2 columns in the Workflow worksheet, see screenshot below. The first action of the workflow will use columns A & B, the next action C & D, etc. When adding actions, the placement on the Workflow worksheet is automatic and the user does not need to do or change anything. Blue fields contain data that cannot be changed; white fields are user inputs when setting up the action and can be changed in the worksheet itself too.

The United States Greenfield Facility Selection model we are connecting to contains about 1.3k customer locations in the US which have demand for 3 products: Rockets, Space Suits, and Consumables. As part of this workflow, we will add 10 customers located in the state of Ontario in Canada to the Customers table and add demand for each of these customers for each product to the CustomerDemand table. The next 2 screenshots show the customer and customer demand data that will be added to this existing model.


First, we will use an Import Data action to append the new customers to the Customers table in the model we are connecting to:

Next, use the Import Data Action again to upsert the data contained in the New_CustomerDemand worksheet to the CustomerDemand table in the Cosmic Frog model, which will be added to columns E & F. After these 2 Import Data actions have been added, our workflow now looks like this:

Now that the new customers and their demand have been imported into the model, we will add several actions to create a new scenario where the new customers will be included. In this scenario, we will also remove the Max Number of New Facilities value, so the Greenfield algorithm can optimize the number of new facilities just based on the costs specified in the model. After setting up the scenario, an action will be added to run it.
Use the Create Scenario action to add a new scenario to the model:

Then, use 2 Create Item Actions to 1) include the Ontario customers and 2) remove the Max Number Of New Facilities value:


After setting up the scenario and its 2 items, the next step of the workflow will be to run it. We add a Run Scenario action to the workflow to do so:

The configuration of this action takes the following inputs:
We now have a workflow that connects to an existing US Greenfield model, adds Ontario customers and their demand to this model, then creates and runs a new scenario with 2 items in this Cosmic Frog model. After running the scenario, we want to export the Optimization Greenfield Facility Summary output table from the Cosmic Frog model and load it into a new worksheet in the App. We do so by adding an Export Data Action to the workflow:

After adding the above actions to the workflow, the Workflow worksheet now looks like the following 2 screenshots from column G onwards (columns A-F contain the first 3 actions as shown in a screenshot further above):

Columns G-H contain the details of the action that created the new ON Customers Cost Optimized scenario, and columns I-J & K-L contain the details of the actions that added the 2 scenario items to this scenario.

Columns M-N contain the details of the action that will run the scenario that was added and columns O-P those of the action that will export the selected output table (Optimization Greenfield Facility Summary) into the GF_Facility_Summary worksheet of the App.
To run the completed Workflow, all we need to do is click on the Run Workflow action and confirm we want to run it:

After kicking off the workflow, if we switch to the Start worksheet, details of the run and its progress are shown in rows 9-11:

Looking on the Optilogic Platform, we can also check the progress of the App run and the Cosmic Frog model changes:

Once the run is done, all 3 jobs will have their State changed to Done, unless an error occurred, in which case the State will say Error.
Checking the United States Greenfield Facility Selection model itself in the Cosmic Frog application on cosmicfrog.com:

Once the App is finished running, we see that a worksheet named GF_Facility_Summary was added to the App Builder:

There are several other actions that users of the App Builder can incorporate into a workflow or use to facilitate workflow building. We will cover these now. Feel free to skip ahead to the “Deploying the App” section if your workflow is complete at this stage.
Additional actions that can be incorporated into workflows are the Run Utility, Upload File, and Download File actions. The Run Utility action can be used to run a Cosmic Frog Utility (a Python script), which currently can be a Utility downloaded from the Resource Library or a Utility specifically built for the App.
There are currently 4 Utilities available in the Resource Library:

After downloading the Python file of the Utility you want to use in your workflow, you need to copy it into the working_files_do_not_change folder that is located in the same folder where you saved the App Builder. Now you can start using it as part of the Run Utility action. In the below example, we will use the Python script from the Copy Map to a Model Resource Library Utility to copy a map and all its settings from one model (“United States Greenfield Facility Selection”, the model connected to in a previous action) to another (“European Greenfield Facility Selection”):

The parameters of the Copy Dashboard to a Model Utility are the same as those of the Copy Map to a Model Utility:
The Orders to Demand and Delete SaS Scenarios utilities do not have any parameters that need to be set, so the Utility Params part of the Run Utility action can be left blank when using these utilities.
The Upload File action can be used to take a worksheet in the App Builder and upload it as a .csv file to the Optilogic platform:

Files that get uploaded to the Optilogic platform are placed in a specific working folder related to the App Builder, the name and location of which are shown in this screenshot:

The Download File action can be used to download a .txt file from the Optilogic platform and load it into a worksheet in the App:

Other actions that facilitate workflow building are the Move an Action, Delete an Action, and Run Actions actions, which will be discussed now. If the order of some actions needs to be changed, you do not need to remove and re-add them, you can use the Move an Action action to move them around:

It is also possible that an action needs to be removed from a Workflow. For this, the “Delete an Action” action can be used, rather than manually deleting it from the Workflow worksheet and trying to move other actions in its place:

Instead of running a complete workflow, it is also possible to only run a subset of the actions that are part of the workflow:

Once a workflow has been completed in the Cosmic Frog for Excel App Builder, it can be deployed so other users can run the same workflow without having to build it first. This section covers the Deployment steps.

The following message will come up after the App has been deployed:

Looking in the folder mentioned in this message, we see the following contents:


Congratulations on building & deploying your own Cosmic Frog for Excel App!
If you want to build Apps that go beyond what can be done using the App Builder, you can do so too. This may require some coding using Excel VBA, Python, and/or SQL. Detailed documentation walking through this can be found in this Getting Started with Cosmic Frog for Excel Applications article on Optilogic’s Help Center.
Hopper is the Transportation Optimization algorithm within Cosmic Frog. It designs optimal multi-stop routes to deliver/pick up a given set of shipments to/from customer locations at the lowest cost. Fleet sizing and balancing weekly demand can be achieved with Hopper too. Example business questions Hopper can answer are:
Hopper’s transportation optimization capabilities can be used in combination with network design to test out what a new network design means in terms of the last-mile delivery configuration. For example, questions that can be looked at are:
With ever increasing transportation costs, getting the last-mile delivery part of your supply chain right can make a big impact on the overall supply chain costs!
It is recommended to watch this short Getting Started with Hopper video before diving into the details of this documentation. The video gives a nice, concise overview of the basic inputs, process, and outputs of a Hopper model.
In this documentation we will first cover some general Cosmic Frog functionality that is used extensively in Hopper. Next, we go through how to build a Hopper model, discussing required and optional inputs, and explain how to run a Hopper model. Hopper outputs in tables, on maps, and in analytics are covered as well, and finally references to a few additional Hopper resources are listed. Note that the use of user-defined variables, costs and constraints for Hopper models is covered in a separate help article.
To not make this document too repetitive we will cover some general Cosmic Frog functionality here that applies to all Cosmic Frog technologies and is used extensively for Hopper too.
To only show tables and fields in them that can be used by the Hopper transportation optimization algorithm, disable all icons except the 4th (“Transportation”) in the Technologies Selector from the toolbar at the top in Cosmic Frog. This will hide any tables and fields that are not used by Hopper and therefore simplifies the user interface:

Many Hopper-related fields in the input and output tables will be discussed in this document. Keep in mind however that a lot of this information can also be found in the tooltips that are shown when you hover over the column name in a table, see the following screenshot for an example. The column name, the technology/technologies that use this field, a description of how this field is used by those algorithm(s), its default value, and whether it is part of the table’s primary key are listed in the tooltip.

There are a lot of fields with names that end in “…UOM” throughout the input tables. How they work will be explained here so that individual UOM fields across the tables do not need to be explained further in this documentation as they all work similarly. These UOM fields are unit of measure fields and often appear to the immediate right of the field that they apply to, like for example Distance Cost and Distance Cost UOM in the screenshot above. In these UOM fields you can type the Symbol of a unit of measure that is of the required Type from the ones specified in the Units Of Measure table. For example, in the screenshot above, the unit of measure Type for the Distance Cost UOM field is Distance. Looking in the Units of Measure table, we see there are multiple of these specified, like for example Mile (Symbol = MI), Yard (Symbol = YD) and Kilometer (Symbol = KM), so we can use any of these in this UOM field. If we leave a UOM field blank, then the Primary UOM for that UOM Type specified in the Model Settings table will be used. For example, for the Distance Cost UOM field in the screenshot above the tooltip says Default Value = {Primary Distance UOM}. Looking this up in the Model Settings table shows us that this is set to MI (= mile) in our current model. Let’s illustrate this with the following screenshots of 1) the tooltip for the Distance Cost UOM field (located on the Transportation Assets table), 2) units of measure of Type = Distance in the Units Of Measure table and 3) checking what the Primary Distance UOM is set to in the Model Settings table, respectively:



Note that only hours (Symbol = HR) is currently allowed as the Primary Time UOM in the Model Settings table. This means that if another Time UOM, like for example minutes (MIN) or days (DAY), is to be used, the individual UOM fields need to be used to set these. Leaving them blank would mean HR is used by default.
With few exceptions, all tables in Cosmic Frog contain both a Status field and a Notes field. These are often used extensively to add elements to a model that are not currently part of the supply chain (commonly referred to as the “Baseline”), but are to be included in scenarios in case they will definitely become part of the future supply chain or to see whether there are benefits to optionally include these going forward. In these cases, the Status in the input table is set to Exclude and the Notes field often contains a description along the lines of ‘New Market’, ‘New Product’, ‘Box truck for Scenarios 2-4’, ‘Depot for scenario 5’, ‘Include S6’, etc. When creating scenario items for setting up scenarios, the table can then be filtered for Notes = ‘New Market’ while setting Status = ‘Include’ for those filtered records. We will not call out these Status and Notes fields in each individual input table in the remainder of this document, but we definitely do encourage users to use these extensively as they make creating scenarios very easy. When exploring any Cosmic Frog models in the Resource Library, you will notice the extensive use of these fields too. The following 2 screenshots illustrate the use of the Status and Notes fields for scenario creation: 1) shows several customers on the Customers table where CZ_Secondary_1 and CZ_Secondary_2 are not currently customers that are being served but we want to explore what it takes to serve them in future. Their Status is set to Exclude and the Notes field contains ‘New Market’; 2) a scenario item called ‘Include New Market’ shows that the Status of Customers where Notes = ‘New Market’ is changed to ‘Include’.


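Conceptually, the ‘Include New Market’ scenario item is equivalent to a condition-based update like this sketch; scenario items are applied to the scenario’s copy of the data, leaving the Baseline inputs untouched:

-- Scenario item 'Include New Market' expressed as SQL (illustrative)
UPDATE customers SET status = 'Include' WHERE notes = 'New Market';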
The Status and Notes fields are also often used for the opposite where existing elements of the current supply chain are excluded in scenarios in cases where for example locations, products or assets are going to go offline in the future. To learn more about scenario creation, please see this short Scenarios Overview video, this Scenario Creation and Maps and Analytics training session video, this Creating Scenarios in Cosmic Frog help article, and this Writing Scenario Syntax help article.
A subset of Cosmic Frog’s input tables needs to be populated in order to run Transportation Optimization, whereas several other tables can be used optionally based on the type of network that is being modelled, and the questions the model needs to answer. The required tables are indicated with a green check mark in the screenshot below, whereas the optional tables have an orange circle in front of them. The Units Of Measure and Model Settings tables are general Cosmic Frog tables, not only used by Hopper and will always be populated with default settings already; these can be added to and changed as needed.

We will first discuss the tables that are required to be populated to set up a basic Hopper model and then cover what can be achieved by also using the optional tables and fields. Note that the screenshots of all input and output tables mostly contain the fields in the order they appear in the Cosmic Frog user interface; however, on occasion the order of the fields was rearranged manually. So, if you do not see a specific field in the same location as in a screenshot, then please scroll through the table to find it.
The Customers table contains what for modelling purposes are considered the customers: the locations that we need to deliver a certain amount of certain product(s) to or pick a certain amount of product(s) up from. The customers need to have their latitudes and longitudes specified so that distances and transport times of route segments can be calculated, and routes can be visualized on a map. Alternatively, users can enter location information like address, city, state, postal code, country and use Cosmic Frog’s built-in geocoding tool to populate the latitude and longitude fields. If the customer’s business hours are important to take into account in the Hopper run, its operating schedule can be specified here too, along with customer-specific variable and fixed pickup & delivery times. The following screenshot shows an example of several populated records in the Customers table:

The pickup & delivery time input fields can be seen when scrolling right in the Customers table (the accompanying UOM fields are omitted in this screenshot):

Finally, scrolling even more right, there are 3 additional Hopper-specific fields in the Customers table:

The Facilities table needs to be populated with the location(s) the transportation routes start from and end at; they are the domicile locations for vehicles (assets). The table is otherwise identical to the Customers table, where location information can again be used by the geocoding tool to populate the latitude and longitude fields if they are not yet specified. And like other tables, the Status and Notes fields are often used to set up scenarios. This screenshot shows the Facilities table populated with 2 depots, 1 current one in Atlanta, GA, and 1 new one in Jacksonville, FL:

Scrolling further right in the Facilities table shows almost all the same fields as those to the right on the Customers table: Operating Schedule, Operating Calendar, and Fixed & Unit Pickup & Delivery Times plus their UOM fields. These all work the same as those on the Customers table, please refer to the descriptions of them in the previous section.
The item(s) that are to be delivered to the customers from the facilities are entered into the Products table. It contains the Product Name, and again Status and Notes fields for ease of scenario creation. Details around the Volume and Weight of the product are entered here too, which are further explained below this screenshot of the Products table where just one product “PRODUCT” has been specified:

On the Transportation Assets table, the vehicles to be used in the Hopper baseline and any scenario runs are specified. There are a lot of fields around capacities, route and stop details, delivery & pickup times, and driver breaks that can be used on this table, but there is no requirement to use all of them. Use only those that are relevant to your network and the questions you are trying to answer with your model. We will discuss most of them through multiple screenshots. Note that the UOM fields have been omitted in these screenshots. Let’s start with this screenshot showing basic asset details like name, number of units, domicile locations, and rate information:

The following screenshot shows the fields where the operating schedule of the asset, any fixed costs, and capacity of the vehicles can be entered:

Note that if all 3 of these capacities are specified, the most restrictive one will be used. If you for example know that a certain type of vehicle always cubes out, then you could just populate the Volume Capacity and Volume Capacity UOM fields and leave the other capacity fields blank.
If you scroll further right, you will see the following fields that can be used to set limits on route distance and time when using this type of vehicle. Where applicable, you will notice their UOM fields too (omitted in the screenshot):

Limits on the number of stops per route can be set too:

A tour is defined as all the routes a specific unit of a vehicle is used on during the model horizon. Limits around routes, time, and distance for tours can be added if required:

Scrolling still further right you will see the following fields that can be used to add details around how long pickup and delivery take when using this type of vehicle. These all have their own UOM fields too (omitted in the screenshot):

The next 2 screenshots show the fields on the Transportation Assets table where rules around driver duty, shift, and break times can be entered. Note that these fields each have a UOM field that is not shown in the screenshot:


Limits around out-of-route distance can be set too, plus details regarding the weight of the asset itself and the level of CO2 emissions:


Lastly, a default cost, fixed times for admin, and an operating calendar can be specified for a vehicle in the following fields on the Transportation Assets table:

As a reference, these are the Department of Transportation driver regulations in the US and the EU. They have been somewhat simplified from these sources: US DoT Regulations and EU DoT Regulations:
Consider this route that starts from the DC, then goes to CZ1 & CZ2, and then returns to the DC:

The activities on this route can be thought of as follows, where the start of the Rest is the end of Shift 1 and Shift 2 starts at the end of the Rest:

Notes on Driver Breaks:
Except for asset fixed costs, which are set on the Transportation Assets table, and any Direct Costs which are set on the Shipments table, all costs that can be associated with a multi-stop route can be specified in the Transportation Rates table. The following screenshot shows how a transportation rate is set up with a name, a destination name and the first several cost fields. Note that UOM fields have been omitted in this screenshot, but that each cost field has its own UOM field to specify how the costs should be applied:

Scrolling further right in the Transportation Rates table we see the remaining cost fields:

Finally, a minimum charge and fuel surcharge can be specified as part of a transportation rate too:

The amount of product that needs to be delivered from which source facility/supplier to which destination customer or picked up from which customer is specified on the Shipments table. Optionally, details around pickup and delivery times, direct costs, and fixed template routes can be set on this table too. Note that the Shipments table is Transportation Asset agnostic, meaning that the Hopper transportation optimization algorithm will choose the optimal one to use from the vehicles domiciled at the source location. This first screenshot of the Shipments table shows the basic shipment details:

Here is an example of a subset of Shipments for a model that will route both pickups and deliveries:

To the right in the Shipments table we find the fields where details around shipment windows can be entered:

Still further right on the Shipments table are the fields where details around pickup and delivery times can be specified:

Finally, furthest right on the Shipments table are fields where Direct Costs, details around Template Routes and decompositions can be configured:

Note that there are multiple ways of switching between forcing Shipments and the order of stops onto a template route and letting Hopper optimize which shipments will be put on a route together and in which order. Two example approaches are:
The tables and input fields that can optionally be populated for use by Hopper will now be covered. Where applicable, it will also be mentioned how Hopper behaves when these are not populated.
In the Transit Matrix table, the transport distance and time for any source-destination-asset combination that could be considered as a segment of a route by Hopper can be specified. Note that the UOM fields in this table are omitted in following screenshot:

The transport distances for any source-destination pairs that are not specified in this table will be calculated based on the latitudes and longitudes of the source and destination and the Circuity Factor that is set in the Model Settings table. Transport times for these pairs will be calculated based on the transport distance and the vehicle’s Speed as set on the Transportation Assets table or, if Speed is not defined on the Transportation Assets table, the Average Speed in the Model Settings table.
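For illustration (with made-up numbers): if the straight-line distance computed from the latitudes and longitudes is 100 MI and the Circuity Factor is 1.2, the transport distance used is 100 × 1.2 = 120 MI; with a vehicle Speed of 60 MI/HR, the transport time becomes 120 / 60 = 2 HR.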
Costs that need to be applied on a stop basis can be specified in the Transportation Stop Rates table:

If Template Routes are specified on the Shipments table by using the Template Route Name and Template Route Stop Sequence fields, then the Template Routes table can be used to specify if and how insertions of other Shipments can be made into these template routes:

If a template route is set up by using the Template Route Name and Template Route Stop Sequence fields in the Shipments table and this route is not specified in the Template Routes table, it means that no insertions can be made into this template route.
In addition to routing shipments with a fixed amount of product to be delivered to a customer location, Hopper can also solve problems where routes throughout a week need to be designed to balance out weekly demand while achieving the lowest overall routing costs. The Load Balancing Demand and Load Balancing Schedules tables can be used to set this up. If both the Shipments table and the Load Balancing Demand/Schedules tables are populated, by default the Shipments table will be used and the Load Balancing Demand/Schedules tables will be ignored. To switch to using the Load Balancing Demand/Schedules tables (and ignoring the Shipments table), the Run Load Balancing toggle in the Hopper (Transportation Optimization) Parameters section on the Run screen needs to be switched to on (toggle to the left and grey is off; to the right and blue is on):

The weekly demand, the number of deliveries per week, and, optionally, a balancing schedule can be specified in the Load Balancing Demand table:

To balance demand over a week according to a schedule, these schedules can be specified in the Load Balancing Schedules table:


In the screenshots above, the 3 load balancing schedules that have been set up will spread the demand out as follows:
In the Relationship Constraints table, we can tell Hopper what combinations of entities are not allowed on the same route. For example, in the screenshot below we are saying that customers that make up the Primary Market cannot be served on the same route as customers from the Secondary Market:

A few examples of common Relationship Constraints are shown in the following screenshot where the Notes field explains what the constraint does:

To set the availability of customers, facilities, and assets to certain start and end times by day of the week, the Business Hours table can be used. The Schedule Name specified on this table can then be used in the Operating Schedule fields on the Customers, Facilities and Transportation Assets tables. Note that the Wednesday – Saturday Open Time and Close Time fields are omitted in the following screenshot:

To schedule closure of customers, facilities, and assets on certain days, the Business Calendars table can be used. The Calendar Name specified on this table can then be used in the Operating Calendar fields on the Customers, Facilities and Transportation Assets tables:

Groups are a general Cosmic Frog feature to make modelling quicker and easier. By grouping elements that behave the same together in a group, we can reduce the number of records we need to populate in certain tables, since we can use the Group names to populate the fields instead of setting up multiple records for each individual element which would otherwise all contain the same information. Under the hood, when a model that uses Groups is run, these Groups are enumerated into the individual members of the group. We have for example already seen that groups of Type = Customers were used in the Relationship Constraints table in the previous section to prevent customers in the Primary Market being served on the same route as customers in the Secondary Market. Looking in the Groups table we can see which customers are part (‘members’) of each of these groups:

Examples of other Hopper input tables where use of Groups can be convenient are:
Note that in addition to Groups, Named Filters can be used in these instances too. Learn more about Named Filters in this help center article.
The Step Costs table is a general table in Cosmic Frog used by multiple technologies. It is used to specify costs that change based on the throughput level. For Hopper, all cost fields on the Transportation Rates table, the Transportation Stop Rates table, and the Fixed Cost on the Transportation Assets table can be set up to use Step Costs. We will go through an example of how Step Costs are set up, associated with the correct cost field, and how to understand outputs using the following 3 screenshots of the Step Costs table, Transportation Rates table and Transportation Route Summary output table, respectively. The latter will also be discussed in more detail in the next section on Hopper outputs.

In this example, the per unit cost for units 0 through 20 is $1, $0.90 for units 21 through 40, and $0.85 for all units over 40. Had the Step Cost Behavior field been set to All Item, then the per unit cost for all items would be $1 if the throughput is between 0 and 20 units, $0.90 if the throughput is between 21 and 40 units, and $0.85 if the throughput is over 40 units.
In this screenshot of the Transportation Rates table, it is shown that the Unit Cost field is set to UnitCost_1 which is the stepped cost with 3 throughput levels that we just discussed in the screenshot above:

Lastly, this is a screenshot of the Transportation Route Summary output table where we see that the Delivered Quantity on Route 1 is 78. With the stepped cost structure as explained above for UnitCost_1, the Unit Cost in the output is calculated as follows: 20 * $1 (for units 1-20) + 20 * $0.9 (for units 21-40) + 38 * $0.85 (for units 41-78) = $20 + $18 + $32.30 = $70.30.

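For contrast, had the Step Cost Behavior of UnitCost_1 been set to All Item, the entire Delivered Quantity of 78 would be priced at the $0.85 tier (throughput over 40 units), giving 78 * $0.85 = $66.30 instead of $70.30.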
When the input tables have been populated and scenarios are created (several resources explaining how to set up and configure scenarios are listed in the “2.4 Status and Notes fields” section further above), one can start a Hopper run by clicking on the Run button at the top right in Cosmic Frog:

The Run screen will come up:

Once a Hopper run is completed, the Hopper output tables will contain the outputs of the run.
As with other Cosmic Frog algorithms, we can look at Hopper outputs in output tables, on maps and analytics dashboards. We will discuss each of these in the next 3 sections. Often scenarios will be compared to each other in the outputs to determine which changes need to be made to the last-mile delivery part of the supply chain.
In the Output Summary Tables section of the Output Tables are 8 Hopper-specific tables; they start with “Transportation…”. Plus, there is also the Hopper-specific detailed Transportation Activity Report table in the Output Report Tables section:

Switch from viewing Input Tables to Output Tables by clicking on the round grid at the top right of the tables list. The Transportation Summary table gives a high-level summary of each Hopper scenario that has been run and the next 6 Summary output tables contain the detailed outputs at the route, asset, shipment, stop, segment, and tour level. The Transportation Load Balancing Summary output table is populated when a Load Balancing scenario has been run, and summarizes outputs at the daily level. The Transportation Activity Report is especially useful to understand when Rests and Breaks are required on a route. All these output tables will be covered individually in the following sections.
The Transportation Summary table contains outputs for each scenario run that include Hopper run details, cost details, how much product was delivered and how, total distance and time, and how many routes, stops and shipments there were in total.

The Hopper run details that are listed for each scenario include:
The next 2 screenshots show the Hopper cost outputs, summarized by scenario:


Scrolling further right in the Transportation Summary table shows the details around how much product was delivered in each scenario:

For the Quantity UOM that is shown in the farthest right column in this screenshot (eaches here), the Total Delivered Quantity, Total Direct Quantity and Total Undelivered Quantity are listed in these columns. If the Total Direct Quantity is greater than 0, details around which shipments were delivered directly to the customer can be found in the Transportation Shipment Summary output table where the Shipment Status = Direct Shipping. Similarly, if the total undelivered quantity is greater than 0, then more details on which shipments were not delivered and why are detailed in the Unrouted Reason field of the Transportation Shipment Summary output table where the Shipment Status = Unrouted.
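For example, pulling up all unrouted shipments and their reasons can be done with a simple query against the model database (a hedged sketch; the physical table and column names for Transportation Shipment Summary and Shipment Status are assumptions):

SELECT * FROM transportationshipmentsummary WHERE shipmentstatus = 'Unrouted';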
The next set of output columns when scrolling further right repeat these delivered, direct and undelivered amounts by scenario, but in terms of volume and weight.
Still further to the right we find the outputs that summarize the total distance and time by scenario:


Lastly, the fields furthest right on the Transportation Summary output table contain details around the number of routes, assets and shipments, and CO2 emissions:

A few columns contained in this table are not shown in any of the above screenshots; these are:
The Transportation Route Summary table contains details for each route in each scenario that include cost, distance & time, number of stops & shipments, and the amount of product delivered on the route.

The costs that together make up the total Route Cost are listed in the next 11 fields shown in the next 2 screenshots:


The next set of output fields show the distance and time for each route:


Finally, the fields furthest right in the Transportation Route Summary table list the amount of product that was delivered on the routes, and the number of stops and delivered shipments on each route.

The Transportation Asset Summary output table contains the details of each type of asset used in each scenario. These details include costs, amount of product delivered, distance & time, and the number of delivered shipments.

The costs that together make up the Total Cost are listed in the next 12 fields:


The next set of fields in the Transportation Asset Summary summarize the distances and times by asset type for the scenario:


Furthest to the right on the Transportation Asset Summary output table we find the outputs that list the total amount of product that was delivered, the number of delivered shipments, and the total CO2 emissions:

The Transportation Shipment Summary output table lists for each included Shipment of the scenario the details of which asset type it is served by, which stop on which route it is, the amount of product delivered, the allocated cost, and its status.

The next set of fields in the Transportation Shipment Summary table list the total amount of product that was delivered to this stop.

The next screenshot of the Transportation Shipment Summary shows the outputs that detail the status of the shipment, costs, and a reason in case the shipment was unrouted.

Lastly, the outputs furthest to the right on the Transportation Shipment Summary output table list the pickup and delivery time and dates, the allocation of CO2 emissions and associated costs, and the Decomposition Name if used:

The Transportation Stop Summary output table lists for each route all the individual stops and their details around amount of product delivered, allocated cost, service time, and stop location information.
This first screenshot shows the basic details of the stops in terms of route name, stop ID, location, stop type, and how much product was delivered:

Somewhat further right on the Transportation Stop Summary table we find the outputs that detail the route cost allocation and the different types of time spent at the stop:

Lastly, farthest right on the Transportation Stop Summary table, arrival, service, and departure dates are listed, along with the stop’s latitude and longitude:

The Transportation Segment Summary output table contains distance, time, and source and destination location details for each segment (or “leg”) of each route.
The basic details of each segment are shown in the following screenshot of the Transportation Segment Summary table:

Further right on the Transportation Segment Summary output table, the time details of each segment are shown:

Next on the Transportation Segment Summary table are the latitudes and longitudes of the segment’s origin and destination locations:

And farthest right on the Transportation Segment Summary output table details around the start and end date and time of the segment are listed, plus CO2 emissions and the associated CO2 cost:

For each Tour (= asset schedule) the Transportation Tour Summary output table summarizes the costs, distances, times, and CO2 details.
The next 3 screenshots show the basic tour details and all costs associated with a tour:



The next screenshot shows the distance outputs available for each tour on the Transportation Tour Summary output table:

Scrolling further right on the Transportation Tour Summary table, the outputs available for tour times are listed:


If a load balancing scenario has been run (see the Load Balancing Demand input table further above for more details on how to run this), then the Transportation Load Balancing Summary output table will be populated too. Details on amount of product delivered, plus the number of routes, assets and delivered shipments by day of the week can be found in this output table; see the following 2 screenshots:


For each route, the Transportation Activity Report details all the activities that happen in chronological order, with details around distance and time. It also breaks down how far along the duty and drive times are at each point in the route, which is very helpful for understanding when rests and short breaks are happening.
This first screenshot of the Transportation Activity Report shows the basic details of the activities:

Next, the distance, time, and delivered amount of product are detailed on the Transportation Activity Report:

Finally, the last several fields on the Transportation Activity Report detail cost and the duty and drive times accumulated thus far:

As with other algorithms within Cosmic Frog, Maps are very helpful in visualizing baseline and scenario outputs. Here, we will do a step-by-step walkthrough of setting up a Hopper-specific Map and not cover all the ins and outs of maps. If desired, you can review these resources on Maps in general first:
We will first cover the basics of what we need to know to set up a Hopper specific map:


Click on the Map drop-down to view all options in the list:
After adding a new Map or when selecting an existing Map in the Maps list, the following view will be shown on the right-hand side of the map:

After adding a new Layer to a Map or when selecting an existing Layer in a Map, the following view will be shown on the right-hand side of the map:

By default, the Condition Builder view is shown:
There is also a Conditions text field which is not shown in the screenshot as it is covered by the Table Name drop-down. A filter (“condition”) can be typed into the Conditions text field to only show the records of the table that match the filter. For example, typing “CustomerName like ‘%Secondary%’” in the Conditions field will only show customers where the Customer Name contains the text ‘Secondary’ anywhere in the name. You can learn more about building conditions in this Writing Syntax for Conditions help article.
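A few more conditions of the same flavor, assuming field names from the tables discussed earlier in this document (adjust to the table selected for the layer):

RouteName = 'Route 1'
FacilityName like 'DC%'
Quantity >= 100 and ProductName = 'PRODUCT'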
Switching from Condition Builder to Layer Style shows the following:

Here, the following is shown / configurable:
Switching from Layer Style to Layer Labels shows the following:

Using what we have discussed above, we can create the following map quite easily and quickly (the model used here is one from the Resource Library, named Transportation Optimization):

The steps taken to create this map are:
Let’s also cover 2 maps of a model where both pickups and deliveries are being made, from “backhaul” and to “linehaul” customers, respectively. When setting the LIFO (Is Last In First Out) field on the Transportation Assets table to True, this leads to routes that contain both pickup and delivery stops, but all the pickups are made at the end (e.g. modeling backhaul):

Two example routes are being shown in the screenshot above and we can see that all deliveries are first made to the linehaul customers which have blue icons. Then, pickups are made at the backhaul customers which have orange icons. If we want to design interleaved routes where pickups and deliveries can be mixed, we need to set the LIFO field to False. The following screenshot shows 2 of these interleaved routes:

In the Analytics module of Cosmic Frog, dashboards that show graphs of scenario outputs, sliced and diced to the user’s preferences, can quickly be configured. Like Maps, this functionality is not Hopper specific and other Cosmic Frog technologies use these extensively too. We will cover setting up a Hopper specific visualization, but not all the details of configuring dashboards. Please review these resources on Analytics in Cosmic Frog first if you are not yet familiar with these:
We will do a quick step by step walk through of how to set up a visualization of comparing scenario costs by cost type in a new dashboard:

The steps to set this up are detailed here, note that the first 4 bullet points are not shown in the screenshot above:
There are several models in the Resource Library that transportation optimization users may find helpful to review. How to use resources in the Resource Library is described in the help center article “How to Use the Resource Library”.
Teams is an exciting new feature set designed to enhance collaboration within Supply Chain Design, enabling companies to foster a more connected and efficient working environment. With Teams, users can join a shared workspace where all team members have seamless access to collective models and files. This ensures that every piece of work remains synchronized, providing a single source of truth for your data. When one team member updates a file, those changes instantly reflect for all other members, eliminating inconsistencies and ensuring that everyone stays aligned.
Beyond simply improving collaboration, Teams offers a structured and flexible way to organize your projects. Instead of keeping all your files and models confined to a personal account, you can now create distinct teams tailored to different projects, departments, or business functions. This means greater clarity and easier navigation between workspaces, ensuring that the right content is always at your fingertips.
Consider the possibilities:
Teams introduces a more intuitive and structured way to collaborate, organize, and access your work—ensuring that your team members always have the latest updates and a streamlined experience. Get started today and transform the way you work together!
This documentation contains a high-level overview of the Teams feature set, details the steps to get started, gives examples of how Teams can be structured, and covers best practices. More detailed documentation for Organization Administrators and Teams Users is available in the following help center articles:
The diagram below highlights the main building blocks of the Teams feature set:

At a high-level, these are the steps to start using the Teams feature set:
Here follow 5 examples of how teams can be structured, each including an example setup and an explanation of why it works well.
Please keep the following best practices in mind to ensure optimal use of the Teams feature set:
Once you have set up your teams and added content, you are ready to start collaborating and unlocking the full potential of Teams within Optilogic!
Let us know if you need help along the way—our support team (support@optilogic.com) has your back.
Depending on the type of supply chain one is modelling in Cosmic Frog and the questions being asked of it, it may be necessary to utilize some or all of the features that enable detailed production modelling. A few business case examples that will often include some level of detailed production modelling:
In comparison, modelling a retailer who buys all its products from suppliers as finished goods does not require any production details to be added to its Cosmic Frog model. Hybrid models are also possible; think for example of a supermarket chain which manufactures its own branded products and buys other brands from its suppliers. Depending on the modelling scope, the production of the own branded products may require using some of the detailed production features.
The following diagram shows a generalized example of production related activities at a manufacturing plant, all of which can be modelled in Cosmic Frog:

In this help article we will cover the inputs & outputs of Cosmic Frog’s production modelling features, while also giving some examples of how to model certain business questions. The model in Optilogic’s Resource Library that is mainly used for the screenshots in this article is the Multi-Year Capacity Planning model. There is a 20-minute video available with this model in the Resource Library, which covers the business case that is modelled and some detail of the production setup too.
To not make this document too repetitive we will cover some general Cosmic Frog functionality here that applies to all Cosmic Frog technologies and is used extensively for production modelling in Neo too.
To only show tables and fields in them that can be used by the Neo network optimization algorithm, select Optimization in the Technologies Filter from the toolbar at the top in Cosmic Frog. This will hide any tables and fields that are not used by Neo and therefore simplifies the user interface.

Quite a few Neo-related fields in the input and output tables will be discussed in this document. Keep in mind however that a lot of this information can also be found in the tooltips that are shown when you hover over the column name in a table, see the following screenshot for an example. The column name, the technology/technologies that use this field, a description of how this field is used by those algorithm(s), its default value, and whether it is part of the table’s primary key are listed in the tooltip.

There are a lot of fields with names that end in “…UOM” throughout the input tables. How they work will be explained here so that individual UOM fields across the tables do not need to be explained further in this documentation as they all work similarly. These UOM fields are unit of measure fields and often appear to the immediate right of the field that they apply to, like for example Unit Value and Unit Value UOM in the screenshot above. In these UOM fields you can type the Symbol of a unit of measure that is of the required Type from the ones specified in the Units Of Measure input table. For example, in the screenshot above, the unit of measure Type for the Unit Value UOM field is Quantity. Looking in the Units Of Measure input table, we see there are 2 of these specified: Each and Pallet, with Symbol = EA and PLT, respectively. We can use either of these in this UOM field. If we leave a UOM field blank, then the Primary UOM for that UOM Type specified in the Model Settings input table will be used. For example, for the Unit Value UOM field in the screenshot above the tooltip says Default Value = {Primary Quantity UOM}. Looking this up in the Model Settings table shows us that this is set to EA (= each) in our current model. Let’s illustrate this with the following screenshots of 1) the tooltip for the Unit Value UOM field (located on the Products input table), 2) units of measure of Type = Quantity in the Units Of Measure input table and 3) checking what the Primary Quantity UOM is set to in the Model Settings input table, respectively:



Note that only hours (Symbol = HR) is currently allowed as the Primary Time UOM in the Model Settings table. This means that if another Time UOM, like for example minutes (MIN) or days (DAY), is to be used, the individual UOM fields need to be utilized to set these. Leaving these blank would mean HR is used by default.
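To make the fallback logic concrete, here is a minimal Python sketch of how a blank UOM field resolves to the Primary UOM from the Model Settings table, using the EA/PLT example above. The dictionary and function are hypothetical illustrations, not Cosmic Frog’s actual implementation:

```python
# Hypothetical illustration of UOM resolution -- not Cosmic Frog's actual code.
# Primary UOMs as configured in the Model Settings input table of our model:
model_settings = {"Primary Quantity UOM": "EA", "Primary Time UOM": "HR"}

def resolve_uom(field_value: str, uom_type: str) -> str:
    """A populated UOM field wins; a blank one falls back to the Primary UOM."""
    if field_value:
        return field_value
    return model_settings[f"Primary {uom_type} UOM"]

print(resolve_uom("PLT", "Quantity"))  # PLT -- explicit value on the record
print(resolve_uom("", "Quantity"))     # EA  -- falls back to Model Settings
```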
With few exceptions, all tables in Cosmic Frog contain both a Status field and a Notes field. These are often used extensively to add elements to a model that are not currently part of the supply chain (commonly referred to as the “Baseline”), but are to be included in scenarios, either because they will definitely become part of the future supply chain or to see whether there are benefits to optionally including them going forward. In these cases, the Status in the input table is set to Exclude and the Notes field often contains a description along the lines of ‘New Market’, ‘New Line 2026’, ‘Alternative Recipe Scenario 3’, ‘Faster Bottling Plant5 China’, ‘Include S6’, etc. When creating scenario items for setting up scenarios, the table can then be filtered for Notes = ‘New Market’ while setting Status = ‘Include’ for those filtered records. We will not call out these Status and Notes fields in each individual input table in the remainder of this document, but we do encourage users to use them extensively as they make creating scenarios very easy. When exploring any Cosmic Frog models in the Resource Library, you will notice the extensive use of these fields too. The following 2 screenshots illustrate the use of the Status and Notes fields for scenario creation: 1) shows several customers on the Customers table where CZ_Secondary_1 and CZ_Secondary_2 are not currently being served, but we want to explore what it takes to serve them in future; their Status is set to Exclude and the Notes field contains ‘New Market’. 2) a scenario item called ‘Include New Market’ shows that the Status of Customers where Notes = ‘New Market’ is changed to ‘Include’.


The Status and Notes fields are also often used for the opposite where existing elements of the current supply chain are excluded in scenarios in cases where for example manufacturing locations, products or lines are going to go offline in the future. To learn more about scenario creation, please see this short Scenarios Overview video, this Scenario Creation and Maps and Analytics training session video, this Creating Scenarios in Cosmic Frog help article, and this Writing Scenario Syntax help article.
The model that is mostly used for screenshots throughout this help article is, as mentioned above, the Multi-Year Capacity Planning model that can be found here in the Resource Library. This model represents a European cheese supply chain which is used to make investment decisions around the growth of a non-mature market in Eastern Europe over a 5-year modelling horizon. New candidate DCs are considered to serve the growing demand in Eastern Europe; the model optimizes which ones to open and during which of the 5 years of the modelling horizon to open them. The production setup in the model uses quite a few of the detailed modelling features, which will be discussed in detail in this document:
Note that in the screenshots of this model, the columns have sometimes been re-ordered, so you may see a different order in your Cosmic Frog UI when opening the same tables of this model.
The 2 screenshots below show the Products and Facilities input tables of this model in Cosmic Frog:

Note that the naming convention of the products lends itself to easy filtering of the table for the raw materials, bulk materials, and finished goods due to the RAW_, BULK_, and FG_ prefixes. This makes the creation of groups and setting up of scenarios quick and easy.

Note that similar to the naming convention of the products, the facilities are also named with prefixes that facilitate filtering of the facilities so groups and scenarios can quickly be created.
Here is a visual representation of the model with all facilities and customers on the map:

The specific features in Cosmic Frog that allow users to model and optimize production processes of varying levels of complexity while using the network optimization engine (Neo) include the following input tables:

We will cover all these production related input tables to some extent in this article, starting with a short description of each of the basic single-period input tables:
These 4 tables feed into each other as follows:

A couple of notes on how these tables work together:
For all products that are explicitly modelled in a Cosmic Frog model, there needs to be at least 1 policy specified on the Production Policies table or the Supplier Capabilities table, so that each product has at least 1 origin location. This applies to, for example, raw materials, intermediates, bulk materials, and finished goods. The only exception is by-products: these can have Production Policies associated with them, but do not necessarily need to (more on this when discussing Bills of Materials further below). From the 2 screenshots below of the Production Policies table, it becomes clear that, depending on the type of product and the level of detail needed for the production elements of the supply chain, production policies can be set up quite differently: some use only a few of the fields, while others use more/different fields.

A couple of notes:
Next, we will look at a few other records on the Production Policies input table:

We will take a closer look at the BOMs and Processes specified on these records when discussing the Bills of Materials and Processes tables further below.
Note that the above screenshot was just for PLT_1 and mozzarella; there are similar records in this model for the other 4 cheeses which can also be made at PLT_1, plus similar records for all 5 cheeses at PLT_2, which includes a new potential production line for future expansion too.
Other fields on the Production Policies table that are not shown in the above 2 screenshots are:
The recipes of how materials/products of different stages convert into each other are specified on the Bills of Materials (BOMs) table. Here the BOMs for the blue cheese (_BLU) products are shown:

Note that the BOMs specified above are both location and end-product agnostic. Their names suggest that they are specific to making the BULK_BLU and FG_BLU products, but only associating these BOMs with a Production Policy which has Product Name set to these products makes this connection. We can use these BOMs at any location where they apply. Filtering the Production Policies table for the BULK_BLU and FG_BLU products, we can see that 1) BOM_BULK_BLU is indeed used to make BULK_BLU and BOM_FG_BLU to make FG_BLU, and 2) the same BOMs are used at PLT_1 and PLT_2:

It is of course possible that the same product uses a different BOM at a different location. In this case, users can set up multiple BOMs for this product on the BOMs table and associate the correct one at the correct location in the Production Policies table. Choosing a naming convention for the BOM Names that includes the location name (or a code to indicate it) is recommended.
The screenshot above of the Bills of Materials table only shows records with Product Type = Component. Components are inputs into a BOM and are consumed by it when producing the end product. Besides Component, Product Type can also be set to End Product or Byproduct. We will explain these 2 product types through the examples in the following screenshot:

Notes:
On the Processes table, production processes of varying levels of complexity can be set up, from simple 1-step processes that do not use any work centers to multi-step ones that specify costs and processing rates and use different work centers for each step. The processes specified in the Multi-Year Capacity Planning model are relatively straightforward:

Let us also look at an example in a different model which contains somewhat more complex processes for a car manufacturer where the production process can roughly be divided into 3 steps:

Note that, like BOMs, Processes can in theory be both location and end-product agnostic. However:
Other fields on the Processes table that are not shown in the above 2 screenshots are:
If it is important to capture costs and/or capacities of equipment like production lines, tools, machines that are used in the production process, these can be modelled by using work centers to represent the equipment:

In the above screenshot, 2 work centers are set up at each plant: 1 existing work center and 1 new potential work center. The new work centers (PLT_1_NewLine and PLT_2_NewLine) have Work Center Status set to Closed, so they will not be considered for inclusion in the network when running the Baseline scenario. In some of the scenarios in the model, the Work Center Status of these 2 lines is changed to Consider and in these scenarios one of the new lines or both can be opened and used if it is optimal to do so. The scenario item that makes this change looks like this:

Next, we will also look at a few other fields on the Work Centers table that the Multi-Year Capacity Planning model utilizes:

In theory, it can be optimal for a model to open a considered potential work center in one period of the model (say 2024 in this model), close it again in a later period (e.g. 2025), then open it again later (e.g. 2026), etc. In this case, Fixed Startup or Fixed Closing Costs would be applied each time the work center is opened or closed, respectively. This type of behavior can be undesirable and is by default prevented by a Neo Run Parameter called “Open Close At Most Once”, as shown in this screenshot:

After clicking on the Run button, the Run screen comes up. The “Open Close At Most Once” parameter can be found in the Neo (Optimization) Parameters section. By default, it is turned on, meaning that a work center or facility is only allowed to change state once during the model’s horizon, i.e. once from closed to open if the Initial State = Potential or once from open to closed if the Initial State = Existing. There may however be situations where opening and/or closing of work centers and facilities multiple times during the model horizon is allowable. In that case, the Open Close At Most Once parameter can be turned off.
Other fields on the Work Centers table that are not shown in the above screenshots are:
Fixed Operating, Fixed Startup, and Fixed Closing Costs can be stepped costs. These can be entered into the fields on the Work Centers input table directly, or they can be specified on the Step Costs input table and then referenced in those cost fields on the Work Centers table. An example of stepped costs set up in the Step Costs input table is shown in the screenshot below, where the costs are set up to capture the weekly shift cost for 1 person (note that these stepped costs are not in the Multi-Year Capacity Planning model in the Resource Library; they are shown here as an additional example):

To set for example the Fixed Operating Cost to use this stepped cost, type “WC_Shifts” into the Fixed Operating Cost field on the Work Centers input table.
Many of the input tables in Cosmic Frog have a Multi-Time Period equivalent, which can be used in models that have more than 1 period. These tables enable users to make changes that only apply to specific periods of the model. For example, to:
The multi-time period tables are copies of their single-period equivalents, with a few columns added and removed (we will see examples of these in screenshots further below):
Notes on switching status of records through the multi-period tables and updating records partially:

Three of the 4 production specific input tables that have been discussed above have a multi-time period equivalent: Production Policies, Processes, and Work Centers. There is no equivalent for the Bills Of Materials input table, as BOMs are only used if they are associated with records in the Production Policies table. Using different BOMs during different periods can be achieved by associating those BOMs on the single-period Production Policies table, setting the Status to Include for the BOMs used in most of the periods and to Exclude for those that should only apply in certain periods/scenarios. Then add the records for which the Status needs to be switched to the Production Policies Multi-Time Period input table (we will walk through an example of this using screenshots in the next section).
The 3 production specific multi-time period input tables have all of the same fields as their single-period equivalents, with the addition of a Period Name field and an additional Status field. We will not discuss each multi-time period table and all its fields in detail here, but rather give a few examples of how each can be used.
Note that from this point onwards the Multi-Year Capacity Planning model was modified and added to for purposes of this help article, the version in the Resource Library does not contain the same data in the Multi-Time Period input tables and production specific Constraint tables that is shown in the screenshots below.
This first example on the Production Policies Multi-Time Period input table shows how the production of the cheddar finished good (FG_CHE) is prevented at plant 1 (PLT_1) in years 4 and 5 of the model:

In the following example, an alternative BOM to make feta (FG_FET) is added and set to be used at Plant 2 (PLT_2) during all periods instead of the original BOM. This is set up to be used in a scenario, so the original records need to be kept intact for the Baseline and other scenarios. To set this up, we need to update the Bills Of Materials, Production Policies, and Production Policies Multi-Time Period table, see the following screenshots and explanations:

On the Bills Of Materials input table, all we need to do is add the records for the new BOM that results in FG_FET. It has 2 records, both named ALTBOM_FG_FET, and instead of using only BULK_FET as the component, which is what the original BOM uses, it uses a mix of BULK_FET and BULK_BLU as its components.
Next, we need to associate this new BOM through the Production Policies table:

Lastly, the following 4 records need to be added to the Production Policies Multi-Time Period table. They have the same values for the key columns as the 4 records in the above screenshot of the single-period Production Policies table, which contain all the possible ways to produce FG_FET at PLT_2:


In the following example, we want to change the unit cost on 2 of the processes: at Plant 1 (PLT_1), the cost on the new potential line needs to be decreased to 0.005 for cheddar cheese (CHE) and increased to 0.015 for Swiss cheese (SWI). This can be done by using the Processes Multi-Time Period input table:

Note that there is also a Work Center Name field on the Processes Multi-Time Period table (not shown in the screenshot). As this is not a key field on the Processes input tables, it can be left blank here on the multi-time period table. The field will then not be changed, and the value from the Work Center Name field on the single-period table will be used for these 2 records.
In the following example, we want to evaluate whether upgrading the existing production lines at both plants from the 3rd year of the modelling horizon onwards, giving them a higher throughput capacity at a somewhat higher fixed operating cost, is a good alternative to opening one of the potential new lines at either plant. First, we add a new periods group to the model to set this up:

On the Groups table, we set up a new group named YEARS3-5 (Group Name) that is of Group Type = Periods and has 3 members: YEAR3, YEAR4 and YEAR5 (Member Name).

Cosmic Frog contains multiple tables through which different types of constraints can be added to network optimization (Neo) models. A constraint limits what the model can do in a certain part of the network. These limits can for example be lower or upper limits on the amount of flow between certain locations or certain echelons, the amount of inventory of a certain product or product group at a specific location or network-wide, the amount of production of a certain product or product group at a specific location or network-wide, etc. In this section, the 3 constraints tables that are production specific will be covered: Production Constraints, Production Count Constraints, and Work Center Count Constraints.
A couple of general notes on all constraints tables:
In this example, we want to add constraints to the model that limit the production of all 5 finished goods together to 90,000 units. Both plants have this same upper production limit across the finished goods, and the limit applies to each year of the modelling horizon (5 yearly periods).

Note that there are more fields on the Production Constraints input table which are not shown in the above screenshot. These are:
In this example, we want to limit the number of products that are produced at PLT_1 to a maximum of 3 (out of the 5 finished goods). This limit applies over the whole 5-year modelling period, meaning that in total PLT_1 can produce no more than 3 finished goods:

Again, note there are more fields on the Production Count Constraints input table which are not shown in the above screenshot. These are:
Next, we will show an example of how to open at least 3 work centers, but no more than 5 out of 8 candidate work centers. These limits apply to all 5 yearly periods in the model together and over all facilities present in the model.

Again, there are more fields on the Work Center Count Constraints table that are not shown in the above screenshot:
After running a network optimization using Cosmic Frog’s Neo technology, production specific outputs can be found in several of the more general output tables, like the Optimization Network Summary, and the Optimization Constraints Summary (if any constraints were applied). Outputs more focused on just production can be found in 4 production specific output tables: the Optimization Production Summary, the Optimization Bills Of Material Summary, the Optimization Process Summary, and the Optimization Work Center Summary. We will cover these tables here, starting with the Optimization Network Summary.
The following screenshot shows the production specific outputs that are contained in the Optimization Network Summary output table:

Other production related fields on this table which are not shown in the screenshot above are:
The Optimization Production Summary output table has a record with the production details for each product that was produced as part of the model run:

Other fields on this output table which are not shown in the screenshot are:
The details of how many components were used and how much by-product was produced as a result of any bills of materials that were used as part of the production process can be found on the Optimization Bills Of Material Summary output table:

Note that, aside from what can possibly be inferred from the BOM Name, the Bills Of Material Summary output table does not list what the end product of a BOM is or how much of it is produced. Those details are contained in the Optimization Production Summary output table discussed above.
Other fields on this output table which are not shown in the screenshot are:
The details of all the steps of any processes used as part of the production in the Neo network optimization run can be found in the Optimization Process Summary, see these next 2 screenshots:


Other fields on this output table which are not shown in the screenshots are:
For each Work Center that has its Status set to Include or Consider, a record for each period of the model can be found in the Optimization Work Center Summary output table. It summarizes if the Work Center was used during that period, and, if so, how much and at what cost:

The following screenshot shows a few more output fields on the Optimization Work Center Summary output table that have non-zero values in this model:

Other fields on this output table which are not shown in the screenshots are:
For all constraints in the model, the Optimization Constraint Summary can be a very handy table to check whether any constraints are close to their maximum (or minimum, etc.) value, to understand where the current and future bottlenecks are and likely will be. The screenshot below shows the outputs on this table for a production constraint that is applied at each of the 3 suppliers, where none of them can produce more than 1 million units of RAW_MILK in any 1 year. In the screenshot we specifically look at the supplier named SUP_3:

Other fields on this output table which are not shown in the screenshots are:
There are a few other output tables whose main outputs are not related to production, but which still contain several fields that result from production. These are:
In this help article we have covered how to set up alternative Work Centers at existing locations and use the Work Center Status and Initial State fields to evaluate whether including these will be optimal, and if so, from what period onwards. We have also covered how Work Center Count Constraints can be used to pick a certain number of Work Centers to be opened/used from a set of multiple candidates, either at 1 location or at multiple locations. Here we also want to mention that Facility Count Constraints can be used when making decisions at the plant level. Say that, based on market growth in a certain region, a manufacturer decides a new plant needs to be built. There are 3 candidate locations for the plant, from which the optimal one needs to be picked. This can be set up as follows in Cosmic Frog:
A couple of alternative approaches to this are:
As mentioned above in the section on the Bills Of Materials input table, it is possible to set up a model where there is demand for a product that is the by-product resulting from a BOM. This does require some additional setup, which the below walks through, while also showcasing how the model can be used to determine how much of any flexible demand for this by-product to fulfill. The screenshots show the set-up of a very simple example model built for this specific purpose.

On the Products table, besides the component (for which there also is demand in this model) that goes into any BOM, we also specify:

The demand for the 3 products is set up on the Customer Demand table, and we notice that 1) there is demand for the Component, the End Product, and the By-Product, and 2) the Demand Status for ByProduct_1 is set to Consider, which means its demand does not need to be fulfilled; it will be (partially) fulfilled if it is optimal to do so. (For Component_1 and EndProduct_1 the Demand Status field is left blank, which means the default value of Include will be used.)

EndProduct_1 is made through a BOM which consumes Component_1 and also makes ByProduct_1 as a by-product. For this we need to set up a BOM:

Next, on the Production Policies table, we see that Component_1 can be created without a BOM, and:
In reality, these 2 production policies result in the same consumption of Component_1 and the same production amounts of EndProduct_1 and ByProduct_1. Both need to be present, however, in order to also be able to have demand for ByProduct_1 in the model.
Other model elements that need to be set up are:
Three scenarios were run for this simple example model, with the only difference between them being the Unit Price for ByProduct_1: Baseline (Unit Price of ByProduct_1 = 3), PriceByproduct1 (Unit Price of ByProduct_1 = 1), and PriceByproduct2 (Unit Price of ByProduct_1 = 2). Let’s review some of the outputs to understand how this Unit Price affects the fulfillment of the flexible demand for ByProduct_1:

The high-level costs, revenues, profit and served/unserved demand outputs by scenario can be found on the Optimization Network Summary output table:

On the Optimization Production Summary output table, we see that all 3 scenarios used BYP_BOM for the production of EndProduct_1 and ByProduct_1; the solver could also have picked the other BOM (FG_BOM) and the overall results would have been the same.
As the Optimization Production Summary only shows the production of the end products, we will also have a look at the Optimization Bills Of Material Summary output table:

Lastly, we will have a look at the Optimization Inventory Summary output table:

Note that had the demand for ByProduct_1 been set to Include rather than Consider in this example model, all 3 scenarios would have produced 100 units of it to fulfill the demand, and as a result would have produced 200 units of EndProduct_1. 100 of those would have been used to fulfill the demand for EndProduct_1 and the other 100 would have stayed in inventory, as we saw in the Baseline scenario above.
Finding problems with any Cosmic Frog model’s data has just become easier with the release of the Integrity Checker. This tool scans all tables or a selected table in a model and flags any records with potential issues. Field level checks to ensure fields contain the right type of data or a valid value from a drop-down list are included, as are referential integrity checks to ensure the consistency and validity of data relationships across the model’s input tables.
In this documentation we will first cover the Integrity Checker tool’s scope, how to run it, and how to review its results. Next, we will compare the Integrity Checker to other Cosmic Frog data validation tools, and we will wrap up with several tips & tricks to help users make optimal use of the tool.
The Integrity Checker extends cell validation and data entry helper capabilities to help users identify a range of issues relating to referential integrity and data types before running a model. The following types of data and referential integrity issues are checked for when the Integrity Checker is run:

Here, we provide a high-level description for each of these 4 categories; in the appendix at the end of this help center article more details and examples for each type of check are given. From left to right:
The Integrity Checker can be accessed in two ways while in Cosmic Frog’s Data module: from the pane on the right-hand side that also contains Model Assistant and Scenario Errors or from the Grid drop-down menu. The latter is shown in the next screenshot:

*Please note that in this first version of the Integrity Checker, the Inventory Policies and Inventory Policies Multi-Time Period tables are not included in any checks the Integrity Checker performs. All other tables are.
The second way to access the Integrity Checker is, as mentioned above, from the pane on the right-hand side in Cosmic Frog:

If the Integrity Checker has been run previously on a model, opening it again will show the previous results and gives the user the option to re-run it by clicking on a “Rerun Check” button, which we will see in screenshots further below.
After starting the Integrity Checker in one of the 2 ways described above, a message indicating it is starting will appear in the Integrity Checker pane on the right-hand side:

While the Integrity Checker is running, the status of the run is continuously updated, and results are added underneath as checks on individual tables complete. Only tables which have errors in them will be listed in the results.

Once the Integrity Checker run is finished, its status changes to Completed:

Users can see the errors identified by the Integrity Checker by clicking on one of the table cards which will open the table and the Integrity Checker Errors table beneath it:

Clicking on a record in the Integrity Checker Errors table will filter the table above (here the Transportation Policies table) down to the record(s) with that error:

The user can go through each record in the Integrity Checker Errors table at the bottom and filter the table above down to the associated records to review the errors and possibly fix them. In the next screenshot, the user has moved on to the second record in the Integrity Checker Errors table:

We will look at one more error, the one that was found on the Products table:

Finally, the following screenshot shows what it looks like when the Integrity Checker is run on an individual table and no errors are found:

There are additional tools in Cosmic Frog which can help with finding problems in the model’s data and overall construction, the table below gives an overview of how these tools compare to each other to help users choose the most suitable one for their situation:
Please take note of the following so you can make optimal use of the Integrity Checker capabilities:


We saw the following diagram further above, in the Integrity Checker Scope section. Here we will expand on each of these categories and provide examples.

From left to right:
Note that the numeric and data type checks sound similar, but they are different: a value in a field can pass the data type check (e.g. a double field contains the value -2000), but not the numeric check (a latitude field can only contain values between -90 and 90, so -2000 would be invalid).
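As a concrete illustration of this difference, consider the following minimal Python sketch using the latitude example; it is illustrative only, not the Integrity Checker’s actual implementation:

```python
# Illustrative only -- not the Integrity Checker's actual implementation.

def data_type_check(value) -> bool:
    """Passes if the value can be interpreted as a double."""
    try:
        float(value)
        return True
    except (TypeError, ValueError):
        return False

def numeric_check_latitude(value) -> bool:
    """Passes only if the value is a valid latitude between -90 and 90."""
    return data_type_check(value) and -90 <= float(value) <= 90

print(data_type_check(-2000))         # True  -- a valid double
print(numeric_check_latitude(-2000))  # False -- outside the latitude range
```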
We hope you will find the Integrity Checker to be a helpful additional tool to facilitate your model building in Cosmic Frog! For any questions, please contact Optilogic support on support@optilogic.com.
In a supply chain model, sourcing policies describe how network components create and order necessary materials. In Cosmic Frog, sourcing rules & policies appear in two different table categories:


In this section, we will discuss how to use these Sourcing policy tables to incorporate real-world behavior. In the sourcing policy tables we define 4 different types of sourcing relationships:
First, we will discuss the options users have for the simulation policy logic used in these 4 tables; the last section covers the other simulation specific fields that can be found on these sourcing policies tables.
Customer fulfillment policies describe which supply chain elements fulfill customer demand. For a Throg (Simulation) run, there are 3 different policy types that we can select in the “Simulation Policy” column:
If “By Preference” is selected, we can provide a ranking describing which sites we want to serve customers for different products. We can describe our preference using the “Simulation Policy Value” column.
In the following example we are describing how to serve customer CZ_CA’s demand. For Product_1, we prefer that demand is fulfilled by DC_AZ. If that is not possible, then we prefer DC_IL to fulfill demand. We can provide rankings for each customer and product combination.
Under this policy, the model will source material from the highest ranked site that can completely fill an order. If no sites can completely fill an order, and if partial fulfillment is allowed, the model will partially fill orders from multiple sources in order of their preference.

If “Single Source” is selected, the customer must receive the given product from 1 specific source, 1 of the 3 DCs in this example.
The “Allocation” policy is similar to the “By Preference” policy, in that it sources from sites in order of a preference ranking. The “Allocation” policy, however, does not look to see whether any sites can completely fill an order before doing partial fulfillment. Instead, it will source as much as possible from source 1, followed by source 2, etc. Note that the “Allocation” and “By Preference” policies will only be distinct if partial fulfillment is allowed for the customer/product combination.

Consider the following example: customer CZ_MA can source the 3 products it places orders for from 3 DCs using the By Preference simulation policy. For each product the order of preference is set the same: DC_VA is the top choice, then DC_IL, and DC_AZ is the third (last) choice. Also note that in the Customers table, CZ_MA has been configured so that partially filling orders and line items is allowed for this customer.

The first order of the simulation is one that CZ_MA places (screenshot from the Customer Orders table): it orders 20 units of Product_1, 600 units of Product_2, and 160 units of Product_3:

The inventory at the DCs for these products at the time this order comes in is the same as the initial inventory, as this customer order is the first event of the simulation:

When the simulation policy is set to By Preference, the engine will look to fill the entire order from the highest priority source possible. The first choice is DC_VA, so we check its inventory: it has enough inventory to fill the 20 units of Product_1 (375 units in stock) and the 160 units of Product_3 (500 units in stock), but not enough to fill the 600 units of Product_2 (150 units in stock). Since the By Preference policy prefers to single source, it looks at the next priority source, DC_IL. DC_IL does have enough inventory to fulfill the whole order, as it has 750 units of Product_1, 1000 units of Product_2, and 300 units of Product_3 in stock.
Now, if we change all the By Preference simulation policies to Allocation via a scenario and run this scenario, the outcomes are different. In this case, as many units as possible are sourced from the first choice DC, DC_VA in this case. This means sourcing 20 units of Product_1, 150 units of Product_2 (all that are in stock), and 160 units of Product_3 from DC_VA. Next, we look at the second choice source, DC_IL, to see if we can fill the rest of the order that DC_VA cannot fill: the 450 units left of Product_2, which DC_IL does have enough inventory to fill. These differences in sourcing decisions between the 2 scenarios can for example be seen in the Simulation Shipment Report output table:

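To make the contrast between the two policies concrete, the following minimal Python sketch walks through the Product_2 line item of this order, assuming partial fulfillment is allowed. The functions are hypothetical illustrations, not the Throg engine’s actual code, and the DC_AZ stock figure is assumed:

```python
# Hypothetical illustration of the two sourcing policies -- not Throg itself.

def by_preference(qty, sources):
    """Prefer the highest-ranked source that can fill the ENTIRE quantity."""
    for name, available in sources:  # sources are in preference order
        if available >= qty:
            return [(name, qty)]
    # No single source can fill it: fall back to partial fills in rank order.
    return allocation(qty, sources)

def allocation(qty, sources):
    """Take as much as possible from each source, in rank order."""
    fills, remaining = [], qty
    for name, available in sources:
        take = min(available, remaining)
        if take > 0:
            fills.append((name, take))
            remaining -= take
        if remaining == 0:
            break
    return fills

# 600 units of Product_2; DC_VA (rank 1) has 150 in stock and DC_IL (rank 2)
# has 1000, per the example above. The DC_AZ figure is assumed.
stock = [("DC_VA", 150), ("DC_IL", 1000), ("DC_AZ", 800)]
print(by_preference(600, stock))  # [('DC_IL', 600)]
print(allocation(600, stock))     # [('DC_VA', 150), ('DC_IL', 450)]
```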
Replenishment policies describe how internal (i.e. non-customer) supply chain elements source material from other internal sources. For example, they might describe how a distribution center gets material from a manufacturing site. They are analogous to customer fulfillment policies, except instead of requiring a customer name, they require a facility name.

Procurement policies describe how internal (i.e. non-customer) supply chain elements source material from external suppliers. They are analogous to replenishment policies, except instead of using internal sources (e.g. manufacturing sites), they use external suppliers in the Source Name field.

Production policies allow us to describe how material is generated within our supply chain.

There are 4 simulation policies regarding production:
Besides the Simulation Policy setting on each of these Sourcing Policies tables, each table has several other fields that the Throg simulation engine uses as well, if populated. All 4 Sourcing Policies tables contain a Unit Cost and a Lot Size field, plus their UOM fields. The following screenshot shows these fields on the Replenishment Policies table:

The Customer Fulfillment Policies and Replenishment Policies tables both also have an Only Source From Surplus field which can be set to False (default behavior when not set) or True. When set to True, only sources which have available surplus inventory are considered as the source for the customer/facility – product combination. What is considered surplus inventory can be configured using the Surplus fields on the Inventory Policies input table.
Finally, the Production Policies table also has following additional fields:
Inventory policies describe how inventory is managed across facilities in our supply chain. These policies can include how and when to replenish, how stock is picked out of inventory, and many other important rules.
In general, we add inventory policies using the Inventory Policies table in Cosmic Frog.

In this documentation we will cover the types of inventory simulation policies available and also other settings contained in the Inventory Policies table.
An (R,Q) policy is a commonly used inventory management approach. Here, when inventory drops below a value of R units, the policy is to order Q units. In Cosmic Frog, when an (R,Q) policy is selected, we can define R and Q in “SimulationPolicyValue1” and “SimulationPolicyValue2”, respectively. We can define the unit of measure (e.g. pallets, volume, individual units, etc.) for both parameters in their corresponding simulation policy value UOM column.
In the following example, MFG_STL has an (R,Q) inventory policy of (100,1900) for Product_2, measured in terms of individual units (i.e. “each”).

(s,S) policies are like (R,Q) policies in that they define a reorder point and how much to reorder. In an (s,S) policy, when inventory is below s units, the policy is to “order up to” S units. In other words, if x is the current inventory level and x < s, the policy is to order (S-x) units of inventory.
In the example below, DC_VA has an (s,S) inventory policy of (150,750) for Product_1. If inventory dips below 150, the policy is to order so that inventory would replenish to 750 units.

(s,S) policies may also be referred to as (Min,Max) policies; both policy names are accepted in the Anura schema and both behave as described above.
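The reorder logic of these two policy types can be summarized in a short sketch. The following Python snippet, using the (100,1900) and (150,750) values from the examples above, is a simplified illustration of a single inventory check, not the Throg engine itself:

```python
# Simplified illustration of one inventory check -- not the Throg engine.

def rq_order(inventory: float, R: float = 100, Q: float = 1900) -> float:
    """(R,Q): when inventory drops below R, order a fixed quantity Q."""
    return Q if inventory < R else 0

def ss_order(inventory: float, s: float = 150, S: float = 750) -> float:
    """(s,S): when inventory drops below s, order up to the level S."""
    return S - inventory if inventory < s else 0

print(rq_order(90))   # 1900 -- a fixed lot, regardless of how far below R
print(ss_order(120))  # 630  -- the order size depends on current inventory
```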
A (T,S) inventory policy is like an (s,S) inventory policy in that whenever inventory is replenished, it is replenished up to level S. Under an (s,S) inventory policy, we check the inventory level in each period when making reorder decisions. In contrast, under a (T,S) inventory policy, the current inventory level is only checked every T periods. During one of these checks, if the inventory level is below S, then inventory is replenished up to level S.
In the example below, DC_VA manages Product_1 using a (T,S) inventory policy. The DC checks the inventory level every 5 days. If inventory is below 750 units during any of these checks, inventory is replenished up to 750 units.

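The following minimal Python sketch adds the review period T to the “order up to” logic, using the 5-day/750-unit example above; again, this is a simplified illustration rather than actual engine code:

```python
# Simplified illustration of (T,S) periodic review -- not actual engine code.

def ts_order(day: int, inventory: float, T: int = 5, S: float = 750) -> float:
    """Check inventory only every T days; if it is below S, order up to S."""
    if day % T != 0:
        return 0  # not a review day: no order, regardless of inventory level
    return max(0.0, S - inventory)

print(ts_order(day=3, inventory=100))  # 0   -- between reviews
print(ts_order(day=5, inventory=100))  # 650 -- review day, top up to S=750
```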
As the name suggests, a Do Nothing inventory policy does not trigger any replenishment orders. This policy can for example be used for products that are being phased out, or at manufacturing locations where production occurs based on a schedule.
In the example below, MFG_STL uses the Do Nothing inventory policy for the 3 products it manufactures.

On the Inventory Policies table, other fields available to the user to model inventory include those to set the initial inventory, how often inventory is reviewed, and the inventory carrying cost percentage:

When Only Source From Surplus is set to True on a customer fulfillment or a replenishment policy, the Surplus fields on the Inventory Policies table can be used to specify what is considered surplus inventory for a facility – product combination:

Note that if all inventory needs to be pushed out of a location, Push replenishment policies need to be set up for that location (where the location is the Source), and Surplus Level needs to be set to 0.
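A minimal sketch of this surplus logic, assuming surplus is the inventory on hand above the configured Surplus Level (the values here are hypothetical):

```python
# Hypothetical illustration of surplus-only sourcing -- values are made up.
surplus_level = {("DC_VA", "Product_1"): 200}  # from the Inventory Policies table
on_hand       = {("DC_VA", "Product_1"): 375}  # current inventory position

def surplus_available(site: str, product: str) -> float:
    """Only inventory above the surplus level may be shipped out."""
    key = (site, product)
    return max(0, on_hand.get(key, 0) - surplus_level.get(key, 0))

# With Only Source From Surplus = True, DC_VA can supply at most 175 units:
print(surplus_available("DC_VA", "Product_1"))  # 175
```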
Inventory Policy Value fields can also be expressed in terms of the number of days of supply to enable the modelling of inventory where the levels go up or down when (forecasted) demand goes up or down. Please see the help center article “Inventory – Days of Supply (Simulation)” to learn more about how this can be set up and the underlying calculations.
Transportation policies describe how material flows throughout a supply chain. In Cosmic Frog, we can define our transportation policies using the Transportation Policies (required) and Transportation Modes (optional) tables. The Transportation Policies table will be covered in this documentation. In general, we can have a unique transportation policy for each combination of origin, destination, product, and transport mode.

Typically in simulation models, transportation policies are defined over the group of all products (which can be done by leaving Product Name blank as is done in the screenshot above), unless some products need to be prevented from being combined into shipments together on the same mode. If Transportation Policies list products explicitly, these products will not be combined in shipments.
Here, we will first cover the available transportation policies; other transportation characteristics that can be specified in the Transportation Policies table will be discussed in the sections after.
Currently supported transportation simulation policies are:
Selecting “On Volume”, “On Weight”, or “On Quantity” as the simulation policy means that the volume, weight, or quantity of the shipment, respectively, will determine which transportation mode is selected. In this case, the “Simulation Policy Value” defines the lowest volume (or weight/quantity) that will go by that mode. We can use multiple lines to define multiple breakpoints for this policy.

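The following minimal Python sketch shows how such breakpoints could be evaluated for an On Volume policy. The rows and modes are hypothetical, and this is an illustration of the breakpoint idea rather than the engine’s actual selection code:

```python
# Hypothetical breakpoint rows: (lowest volume for this mode, mode name).
rows = [(0, "Parcel"), (200, "LTL"), (1000, "Truckload")]

def pick_mode(volume, breakpoints):
    """Pick the mode whose breakpoint is the highest one not above the volume."""
    eligible = [(mn, mode) for mn, mode in breakpoints if volume >= mn]
    return max(eligible)[1] if eligible else None

print(pick_mode(150, rows))   # Parcel
print(pick_mode(650, rows))   # LTL
print(pick_mode(5000, rows))  # Truckload
```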
Please note that:
If “By Preference” is selected, we can provide a ranking describing which transportation mode we want to use for different origin-destination-product combinations. We can describe our preference using the “Simulation Policy Value” column.

This screenshot shows that all MFG to DC transportation lanes only have 1 Mode, Container, and that the Simulation Policy is set to By Preference for all of them. If there are multiple Modes available, the By Preference policy will select them, pending availability, in the order of preference specified by the Simulation Policy Value field, the lowest value being the most preferred mode. For example, if 2 modes were available and the policy set to By Preference, where 1 mode has a simulation policy value of 1 and the other of 2, the mode with simulation policy value = 1 will be used if available; if it is not available, the mode with simulation policy value = 2 will be used.
In the following example, the “Container” mode is preferred over the “Truck” mode for the MFG_CA to DC_IL route. Note that since the “Product Name” column is left blank, this policy applies to all products using this route.

Selecting “By Due Date” is like “By Preference” in that different modes can be ranked via the “Simulation Policy Value”. However, selecting “By Due Date” adds the additional component of demand timing into its selection. This policy selects the highest preference option that can meet the due date of the shipment. The following screenshot shows that the By Due Date simulation policy is used on certain DC to CZ lanes where 2 Modes are used, Truck and Parcel:

Costs associated with transportation can be entered in the Transportation Policies table Fixed Cost and Unit Cost fields. Additionally, the distance and time travelled using a certain Mode can be specified too:

Maximum flow on Lanes (origin-destination-product combinations) and/or Modes (origin-destination-product-mode combinations) can also be specified in the Transportation Policies table:

The Lane Capacity field and its UOM field specify the maximum flow on the lane, while the Lane Capacity Period and its UOM field indicate over what period of time this capacity applies. In this example, the MFG_CA to DC_AZ lane (first record) has a maximum capacity of 30 shipments every 13 weeks. Once 30 shipments have been shipped on this lane in a 13-week period, the lane cannot be used anymore during those 13 weeks; it is available for shipping again from the first day of the next 13-week period. What happens when a lane’s capacity is reached depends on the simulation logic that has been set up. It can for example lead to the simulation making different sourcing decisions: if By Preference sourcing is used and the lane capacity on the lane from the preferred source to the destination has been reached for the period, this source is no longer considered available and the next preferred source will be checked for availability, etc.
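A small sketch of this renewing capacity window, using the 30-shipments-per-13-weeks example above (0-based week numbers; a hypothetical helper, not engine code):

```python
# Hypothetical helper for a per-period lane capacity -- not engine code.

def lane_open(week, shipments_by_week, capacity=30, period_weeks=13):
    """The lane is usable while the current 13-week window is under capacity."""
    window = week // period_weeks
    used = sum(n for w, n in shipments_by_week.items()
               if w // period_weeks == window)
    return used < capacity

history = {w: 3 for w in range(10)}  # 30 shipments during weeks 0-9
print(lane_open(11, history))        # False -- window 0 (weeks 0-12) is full
print(lane_open(13, history))        # True  -- a new 13-week window starts
```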
Analogous to the 4 fields for setting the Lane Capacity shown and discussed above, there are also 4 fields in the Transportation Policies table to set the Lane Mode Capacity, where the capacity applies specifically to a mode rather than the whole lane in case multiple Modes exist on the lane: Lane Mode Capacity and its UOM field, and Lane Mode Capacity Period and its UOM field.
There are a few other fields on the Transportation Policies table that the Throg simulation engine will take into account if populated:
In a supply chain model, sourcing policies describe how network components create and order necessary materials. In Cosmic Frog, sourcing policies and rules appear in two different table categories:


In this section, we describe how to use the model elements tables to define sourcing rules for customers and facilities. Specifically, we can decide if each element is single sourced, allows backorders, and/or allows partial fulfillment.
Single source policies can be defined at either the order level or the line-item level. Setting “Single Source Orders” to “True” for a location means that for each order placed by that location, every item in that order must come from a single source. Setting this value to “False” does not prohibit single sourcing; it just removes the requirement.

Setting “Single Source Line Items” to “True” only requires that each individual line item comes from a single source. In other words, even if this is “True”, an individual order can have multiple sources, as long as each line item is single sourced.
If “Single Source Orders” is set to “True” and “Single Source Line Items” is set to “False”, the “Single Source Orders” value takes precedence.
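One way to think about how the two flags combine is that order-level single sourcing implies line-item single sourcing, since every item then comes from one source anyway. A minimal, purely illustrative sketch of this interpretation:

```python
# Purely illustrative -- one interpretation of how the two flags combine.

def effective_rules(single_source_orders: bool, single_source_line_items: bool):
    # "Single Source Orders" takes precedence: if the whole order must come
    # from one source, every line item is trivially single sourced as well.
    return {
        "order_single_sourced": single_source_orders,
        "line_item_single_sourced": single_source_orders or single_source_line_items,
    }

print(effective_rules(True, False))
# {'order_single_sourced': True, 'line_item_single_sourced': True}
print(effective_rules(False, True))
# {'order_single_sourced': False, 'line_item_single_sourced': True}
```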
In case an order cannot be fulfilled by the due date (as set on the Customer Orders table in the case of customers), it is possible to allow backorders, where the order will still be filled but late, by setting the “Allow Backorders” value to “True”. A time limit can be put on this by using the “Backorder Time Limit” field and its UOM field, set to 7 days in the below screenshot. This means that orders are allowed to be backordered, but if after 7 days an order still is not filled, it is cancelled. Leaving Backorder Time Limit blank means there is no time limit, and the order can be filled late indefinitely.

We can also decide to allow partial fulfillment of orders or individual line items. If “Allow Partial Fill Orders” is set to “False”, orders need to be filled in full. If set to “True”, then filling only part of an order on time (by the due date) is allowed. What happens with the unfulfilled part of the order depends on whether backorders are allowed. If so (“Allow Backorders” = “True”), then the remaining quantity of a partially filled order can be satisfied in the future with additional shipments. If a time limit on backorders is set and is reached on a partially filled order, the remaining quantity will be cancelled. “Partial Fill Orders” and “Partial Fill Line Items” behave similarly to the single sourcing policies, where it is for example possible to allow partially filling orders, but not partially filling line items. If “Partial Fill Orders” is set to “True”, then “Partial Fill Line Items” will also be forced to “True”.

The Transportation Modes table is an optional input table that is often used when running a simulation. Mode attributes like fill levels and capacities are specified in this table to control the size of shipments, which will be explained first in this documentation. The rules of precedence when using multiple fill level / capacity fields, and when using On Volume / Weight / Quantity transportation simulation policies, will also be covered.

The same capacity and fill level fields as for Volume are also available in this table for Quantity and Weight (not shown in the screenshot above).
When utilizing more than 1 of the Fill Level fields, the one that is reached first is applied. For example, if a shipment’s weight has reached the weight fill level, but its volume has not yet reached the volume fill level, the shipment is allowed to be dispatched.
Similarly, if more than 1 Capacity field has been populated, the one that is reached first is applied. For example, if a shipment’s volume has reached the volume capacity but not yet the weight capacity, it cannot be filled up further and will be dispatched.
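These two precedence rules boil down to “any threshold reached” checks, as in this minimal sketch (the attribute names and numbers are hypothetical):

```python
# Hypothetical illustration of fill level and capacity precedence.

def can_dispatch(shipment, fill_levels):
    """A shipment may dispatch once ANY populated fill level is reached."""
    return any(shipment[dim] >= level for dim, level in fill_levels.items())

def must_stop_filling(shipment, capacities):
    """A shipment can take no more load once ANY capacity is reached."""
    return any(shipment[dim] >= cap for dim, cap in capacities.items())

shipment = {"weight": 20000, "volume": 1500}
# Weight fill level reached, volume fill level not: dispatch is allowed.
print(can_dispatch(shipment, {"weight": 20000, "volume": 2500}))       # True
# Volume capacity reached, weight capacity not: no further filling.
print(must_stop_filling(shipment, {"weight": 24000, "volume": 1500}))  # True
```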
As mentioned above, when transportation simulation policies of On Quantity / Weight / Volume are used, the fill levels and capacities of these Modes are specified in the simulation policy value field on the Transportation Policies table. If the Transportation Modes table is also used to set a fill level and/or capacity for these modes, users need to take note of the effects this may have:
Simulations are generally driven by demand specified as customer orders. These orders can be entered in the Customer Orders and/or the Customer Order Profiles input tables. The Customer Orders table typically contains historical transactional demand records to simulate a historical baseline. The Customer Order Profiles table, on the other hand, contains descriptions of customer order behaviors from which the simulation engine (Throg) generates orders that follow these profiles.
In this documentation we cover both of these input tables: the Customer Orders table and the Customer Order Profiles table.
To achieve the level of granularity needed and the time-based events to mimic reality as closely as possible, every customer order to be simulated is explicitly defined in the Customer Orders table; this includes line items, order and due dates, and order quantities:

Users can utilize the following additional fields available on the Customer Orders table if required. The single sourcing, allow partial fill, and allow backorder settings behave the same as those that can be set on the Customers table (see this help article), except that here they apply to individual orders/line items rather than to all orders at the customer over the whole simulation horizon. Note that if these are set here on the Customer Orders table, these values take precedence over any values set for the particular customer in the Customers table:
Rather than specifying individual orders and line items, the Customer Order Profiles table generates these individual orders from profiles, which can for example disaggregate monthly demand forecasts into assumed or inferred order patterns, using variability to randomize characteristics like quantities and time between orders.


Note that by using start and end dates for profiles, users can control the portion of the simulation horizon in which a profile is used. This enables users to for example capture seasonal demand behaviors by defining a profile for Customer A/Product X in winter, and another profile for the same customer-product combination in summer.
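Conceptually, the profile-to-orders generation can be pictured as repeatedly drawing an order quantity and a gap until the next order, as in the following simplified Python sketch. The distributions and parameters here are hypothetical and do not correspond to the actual fields or statistical options on the Customer Order Profiles table:

```python
import random

# Simplified, hypothetical sketch of generating orders from a profile --
# not the Throg engine's actual order generation logic.
def generate_orders(start_day, end_day, mean_qty, qty_sd, mean_gap_days):
    random.seed(42)  # fixed seed so the illustration is reproducible
    day, orders = start_day, []
    while day <= end_day:
        qty = max(1, round(random.gauss(mean_qty, qty_sd)))  # randomized quantity
        orders.append({"day": day, "quantity": qty})
        # randomized time between orders
        day += max(1, round(random.expovariate(1 / mean_gap_days)))
    return orders

for order in generate_orders(0, 30, mean_qty=100, qty_sd=15, mean_gap_days=7):
    print(order)
```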
Two scenarios were run: 1 named “CZ_CO P4 profile a”, where customer order profile a to generate orders at CZ_CO for Product_4 is included, and 1 named “CZ_CO P4 profile b”, where customer order profile b to generate orders at CZ_CO for Product_4 is included. These are the profiles shown in the 2 screenshots above. In the Simulation Order Report output table, one can see the individual orders generated by these profiles during the simulation runs of these 2 scenarios:

When running models in Cosmic Frog, users can choose the size of the resource the model’s scenario(s) will be run on in terms of available memory (RAM in Gb) and number of CPU cores. Depending on the complexity of the model and the number of elements, policies, and constraints in the model, the model will need a certain amount of memory to run to completion successfully. Bigger, more complex models typically need to be run on a resource that has more memory (RAM) available than smaller, less complex models. The bigger the resource, the higher the billing factor, which leads to using more of the customer’s available cloud compute hours (the total amount of cloud compute time available to the customer is part of the customer’s Master License Agreement with Optilogic). Ideally, users choose a resource size that is just big enough to run their scenario(s) without the resource running out of memory, while minimizing the amount of cloud compute time used. This document guides users in choosing an initial resource size and periodically re-evaluating it to ensure optimal usage of the customer’s available cloud compute time.
Once a model has been built and the user is ready to run 1 or multiple scenarios, they can click on the green Run button at the right top in Cosmic Frog which opens the Run Settings screen. The Run Settings screen is documented in the Running Models & Scenarios in Cosmic Frog Help Center article. On the right-hand side of the Run Settings screen, user can select the Resource Size that will be used for the scenario(s) that are being kicked off to run:


In this section, we will guide users on choosing an initial resource size for the different engines in Cosmic Frog, based on some model properties. Before diving in, please keep the following in mind:
Quite a few model factors influence how much memory a scenario needs to solve in a Neo run. These include the number of model elements, policies, periods, and constraints. The type(s) of constraints used may play a role too. The main factors, in order of impact on memory usage, are:
These numbers are those after expansion of any grouped records and application of scenario items, if any.
The number of lanes can depend on the Lane Creation Rule setting in the Neo (Optimization) Parameters:

Note that for lane creation, expansion of grouped records and application of scenario item(s) need to be taken into account too to get to the number of lanes considered in the scenario run.
Users can use the following list to choose an initial resource size for Neo runs. First, calculate the number of demand records multiplied by the number of lanes in your model (after expansion of grouped records and application of scenario items). Next, find the range in the list and use the associated recommended initial resource size:
# demand records * # lanes: Recommended Initial Resource Size
A good indicator to base the initial resource size selection on for Throg and Dendro runs is the order of magnitude of the total number of policies in the model. To estimate the total number of policies in the model, add up the number of policies contained in all policies tables. There are 5 policies tables in the Sourcing category (Customer Fulfillment Policies, Replenishment Policies, Production Policies, Procurement Policies, and Return Policies), 4 in the Inventory category (Inventory Policies, Warehousing Policies, Order Fulfillment Policies, and Inventory Policies Advanced), and the Transportation Policies table in the Transportation category. The policy counts of each table should be those after expansion of any grouped records and application of scenario items, if any. The list below shows the minimum recommended initial resource size, based on the total number of policies in the model, for solving models with the Throg or Dendro engine.
Number of Policies: Minimum Resource
For Hopper runs, memory is the most important factor in choosing the right resource, and the main driver of memory requirements is the number of origin-destination (OD) pairs in the model. OD pairs are determined primarily by all possible facility-to-customer, facility-to-facility, and customer-to-customer lane combinations.
Most Hopper models have many more customers than facilities, so we can often use the number of customers in a model as a guide for resource size. The list below shows the minimum recommended initial resource size to solve models using Hopper.
Customers: Minimum Resource Size
Most Triad models should solve very quickly, typically under 10 minutes. Still, choosing the right resource size will ensure your Triad model solves successfully, without paying for unneeded compute resources.
As with Hopper, memory is the most important factor in resource selection. In Triad, the main driver of memory requirements is the number of customers, with a smaller secondary effect from the number of greenfield facilities.
The list below shows the minimum recommended initial resource size to solve models using Triad where the number of facilities is assumed to be between 1 and 10:
Customers: Minimum Resource Size
Please note:
After running a scenario with the initially selected resource size, users can evaluate whether it is the best resource size to use or whether a smaller or larger one is more appropriate. The Run Manager application on Optilogic’s platform can be used to assess resource size:


Using this knowledge that the RAM required at peak usage is just over 1Gb, we can conclude that going down to resource size 3XS, which has 2Gb of RAM available, should still work OK for this scenario. The expectation is that going further down to 4XS, which has 1Gb of RAM available, will not work, as the scenario will likely run out of memory. We can test this with 2 additional runs. These are the Job Usage Metrics after running with resource size 3XS:

As expected, the scenario runs fine, and the memory usage is now at about 54% (of 2Gb) at peak usage.
Trying with resource size 4XS results in an error:

Note that when a scenario runs out of memory like this one, there are no results for it in the output tables in Cosmic Frog if it is the first time the scenario is run. If the scenario has been run successfully before, then the previous results will still be in the output tables. To verify that a scenario has run successfully within Cosmic Frog, user can check the timestamp of the outputs in the Optimization Network Summary (Neo), Transportation Summary (Hopper), or Optimization Greenfield Output Summary (Triad) output tables, or review the number of error jobs versus done jobs at the top of Cosmic Frog (see next screenshot). If either of these two indicates that the scenario may not have run, then double-check in the Run Manager and review the logs there to find the cause.

In the status bar at the top of Cosmic Frog, user can see that there were 2 error jobs and 13 done jobs within the last 24 hours.
In conclusion, for this scenario we started with a 2XS resource size. Using the Run Manager, we reviewed the percentage of memory used at peak usage in the Job Usage Metrics and concluded that a smaller 3XS resource size with 2 GB of RAM should still work fine for this scenario, but an even smaller 4XS resource size with 1 GB of RAM would be too small. Test runs using the 3XS and 4XS resource sizes confirmed this.
Transportation lanes are a necessary part of any supply chain. These lanes represent how product flows throughout our supply chain. In network optimization, transportation lanes are often referred to as arcs or edges.
In general, lanes in our supply chain are generated from the transportation policies and sourcing policies provided in our data tables.

Transportation policies are stored in the TransportationPolicies table. Sourcing policies are stored in the following tables:
From the data in these tables, the software automatically generates the lanes (i.e. arcs or edges) in our network before sending it to the optimization solver. We can control how these lanes are generated as a parameter of our Neo model.
Neo models can follow 4 different lane creation policies:

If the “Transportation Policy Lanes Only” rule is selected, Cosmic Frog will only generate transportation lanes based on data in the TransportationPolicies table. If a lane between two sites is not explicitly defined here, product will not be able to directly flow between those sites. Note that any additional information specified in a Sourcing Policy table (unit cost, policy rule etc.) will still be respected for the lane so long as it exists in the Transportation Policies table.

If the “Sourcing Policy Lanes Only” rule is selected, Cosmic Frog will only generate transportation lanes based on data in the Sourcing tables. Even if an origin-destination path is defined in the TransportationPolicies table, product will not be able to flow via this lane unless there is a specific sourcing policy defining how the destination site gets product from the origin site. Note that any additional information specified in a Transportation Policies table (cost, policy rule, multiple modes etc.) will still be respected for the lane so long as it exists in a Sourcing Policy table.

If the “Intersection” rule is selected, Cosmic Frog will only generate transportation lanes if they are defined in both the transportation policy table and one of the sourcing policy tables.
For users converting models from Supply Chain Guru©, the default SCG© lane creation rule is “Intersection”.

If the “Union” rule is selected, Cosmic Frog will generate transportation lanes if they are defined in either the transportation policy table or one of the sourcing policy tables.

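Conceptually, the 4 rules amount to set operations on the origin-destination pairs defined in the two groups of tables, as this small Python sketch with hypothetical lanes illustrates:
# Hypothetical origin-destination pairs defined in each group of tables
transportation_policy_lanes = {("DC_1", "CUST_A"), ("DC_1", "CUST_B")}
sourcing_policy_lanes = {("DC_1", "CUST_A"), ("DC_2", "CUST_A")}

lanes_transport_only = transportation_policy_lanes                         # Transportation Policy Lanes Only
lanes_sourcing_only = sourcing_policy_lanes                                # Sourcing Policy Lanes Only
lanes_intersection = transportation_policy_lanes & sourcing_policy_lanes   # Intersection: {("DC_1", "CUST_A")}
lanes_union = transportation_policy_lanes | sourcing_policy_lanes          # Union: all 3 lanes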
Here we will cover the options a Cosmic Frog user has for modeling transportation costs when using the Neo Optimization engine. The different fields that can be populated and how the calculations under the hood work will be explained in detail.
There are many ways in which transportation can be costed in real life supply chains. The Transportation Policies table contains 4 cost fields to help users model costs as close as possible to reality. These fields are: Unit Cost, Fixed Cost, Duty Rate and Inventory Carrying Cost Percentage. Not all these costs need to be used: the one(s) that are applicable should be populated and the others can be left blank. The way some of these costs work depends on additional information specified in other fields, which will be explained as well.
Note that in the screenshots throughout this documentation some fields in the Cosmic Frog tables have been moved so they could be shown together in a screenshot. You may need to scroll right to see the same fields in your Cosmic Frog model tables and they may be in a different order.
We will first discuss the input fields with the calculations and some examples; at the end of the document an overview is given of how the cost inputs translate to outputs in the optimization output tables.
This field is used for transportation costs that increase when the amount of product being transported increases and/or the transportation distance or time increases. Since there are quite a few different measures by which costs can depend on the amount of product transported (e.g. $2 per each, or $0.01 per each per mile, or $10 per mile for a whole shipment of 1,000 units, etc.), there is a Unit Cost UOM field that specifies how the cost specified in the Unit Cost field should be applied. In a couple of cases, the Average Shipment Size and Average Shipment Size UOM fields must be specified too, as we need to know the total number of shipments for the total Unit Cost calculation. The following table provides an overview of the Unit Cost UOM options and explains how the total Unit Costs are calculated for each UOM:

With the settings as in the screenshot above, total Unit Costs will be calculated as follows for beds, pillows, and alarm clocks going from DC_Reno to CUST_Phoenix:
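To make the UOM mechanics concrete, here is a small Python sketch with hypothetical quantities and distances (not the model's actual figures) for the three example UOM patterns mentioned earlier:
qty = 5000                 # units flowing on the lane
distance = 300             # lane distance in miles
avg_shipment_size = 1000   # units per shipment (needed for per-shipment UOMs)

cost_per_each = 2.00 * qty                                 # $2 per EA -> $10,000
cost_per_each_per_mile = 0.01 * qty * distance             # $0.01 per EA per MI -> $15,000
shipments = qty / avg_shipment_size                        # 5 shipments
cost_per_mile_per_shipment = 10.00 * distance * shipments  # $10 per MI per shipment -> $15,000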
The Unit Cost field can contain a single numeric value (as in the examples above), a step cost specified in the Step Costs table, a rate specified in the Transportation Rates table, or a custom cost function.
If stepped costs are used as the Unit Cost for Transportation Policies that use Groups in the Product Name field, then the Product Name Group Behavior field determines how these stepped costs are applied:
See following screenshots for an example of using stepped costs in the Unit Cost field and the difference in cost calculations for when Product Name Group Behavior is set to Enumerate vs Aggregate:

On the Step Costs table (screenshot above), the stepped costs we will be using in the Unit Cost field on the Transportation policies table are specified. All records with the same Step Cost Name (TransportUnitCost_2 here) make up 1 set of stepped costs. The Step Cost Behavior is set to Incremental here, meaning that discounted costs apply from the specified throughput level only, not to all items once we go over a certain throughput. So, in this example, the per unit cost for units 0 through 10,000 is $1.75, $1.68 for units 10,001 through 25,000, $1.57 for units 25,001 through 50,000, and $1.40 for all units over 50,000.
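A minimal Python sketch of this Incremental step cost logic, using the TransportUnitCost_2 tiers described above:
def incremental_step_cost(qty, tiers):
    # tiers: list of (upper_bound, unit_cost); None = no upper bound on the last tier
    total, prev_bound = 0.0, 0
    for upper, unit_cost in tiers:
        units_in_tier = (qty if upper is None else min(qty, upper)) - prev_bound
        if units_in_tier <= 0:
            break
        total += units_in_tier * unit_cost
        prev_bound = upper
    return total

tiers = [(10000, 1.75), (25000, 1.68), (50000, 1.57), (None, 1.40)]
print(incremental_step_cost(30000, tiers))  # 10,000*1.75 + 15,000*1.68 + 5,000*1.57 = 50,550.0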
The configuration in the Transportation Policies table looks as follows:

The following screenshot shows the outputs on the Optimization Flow Summary table of 2 scenarios that were run with these stepped costs: 1 scenario used the Enumerate option for the Product Name Group Behavior and the other used the Aggregate option. The cost calculations are explained below the screenshot.

The Fixed Cost field can be used to apply a fixed cost to each shipment for the specified origin-destination-product-mode combination. An average shipment size needs to be specified to be able to calculate the number of shipments from the amount of product that is being transported. When calculating the number of shipments, the result can contain fractions of shipments, e.g. 2.8 or 5.2. If desirable, these can be rounded up to the next integer (e.g. 3 and 6 respectively) by setting the Fixed Cost Rule field to Treat As Full. Note however that using this setting can increase model runtimes, and using the default Prorate setting is recommended in most cases.
In summary, the Fixed Cost field works together with the Fixed Cost Rule, Average Shipment Size, and Average Shipment Size UOM fields. The following table shows how the calculations work:
The Fixed Cost field can contain a single numeric value, or a step cost specified in the Step Costs table.
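A minimal Python sketch of the fixed cost calculation with hypothetical numbers, illustrating Prorate vs Treat As Full:
import math

def fixed_transport_cost(flow_qty, avg_shipment_size, fixed_cost_per_shipment, rule="Prorate"):
    shipments = flow_qty / avg_shipment_size
    if rule == "Treat As Full":
        shipments = math.ceil(shipments)  # round partial shipments up to the next integer
    return shipments * fixed_cost_per_shipment

print(fixed_transport_cost(2800, 1000, 100))                   # Prorate: 2.8 shipments -> $280
print(fixed_transport_cost(2800, 1000, 100, "Treat As Full"))  # 3 shipments -> $300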
The following example shows how Fixed Costs are calculated on the DC_Scranton – CUST_Augusta lane and illustrates the difference between setting the Fixed Cost Rule to Prorate vs Treat As Full:

This setup in the Transportation Policies table means that the cost for 1 shipment with on average 1,000 units on it is $100. 2 scenarios were run with this cost setup: 1 where the Fixed Cost Rule was set to Prorate and 1 where it was set to Treat As Full. The following screenshot shows the outputs of these 2 scenarios:

For Fixed Costs on Transportation Policies that use Groups in the Product Name field, the Product Name Group Behavior field determines how these fixed costs are applied:
See following screenshots for an example of using Fixed Costs where the Fixed Cost Rule is set to Treat As Full and the difference in cost calculations for when Product Name Group Behavior is set to Enumerate vs Aggregate:

The transportation policy from DC_Birmingham to CUST_Baton Rouge uses the AllProducts group as the ProductName. This Group contains all 3 products being modelled: beds, pillows, and alarm clocks. The costs on this policy are a fixed cost of $100 per shipment, where an average shipment contains 1,000 units. The Fixed Cost Rule is set to Treat As Full meaning that the number of shipments will be rounded up to the next integer. Depending on the Product Name Group Behavior field this is done for the flow of each product individually (when set to Enumerate) or done for the flow of all 3 products together (when set to Aggregate):

When products are imported or exported from/to different countries, there may be cases where duties need to be paid. Cosmic Frog enables you to capture these costs by using the Duty Rate field on the Transportation Policies table. In this field you can specify the percentage of the Product Value (as specified on the Products table) that will be incurred as duty. If this percentage is for example 9%, you need to enter a value of 9 into the Duty Rate field. The calculation of total duties on a lane is as follows: Flow Quantity * Product Value * Duty Rate.
The following screenshots show the Product Value of beds, pillows and alarm clocks in the Products table, the Duty Rate set to 10% on the DC_Birmingham to CUST_Nashville lane in the Transportation Policies table, and the resulting Duty Costs in the Optimization Flow Summary table, respectively.



Alarm clocks have a Product Value of $30. With a Duty Rate of 10% and 24,049 units moving from DC_Birmingham to CUST_Nashville, the resulting Duty Cost = 24,049 * $30 * 0.1 = $72,147.
If in transit inventory holding costs need to be calculated, the Inventory Carrying Cost Percentage field on the Transportation Policies table can be used. The value entered here will be used as the percentage of product value (specified on the Products table) at which the in transit holding costs are incurred. If the Inventory Carrying Cost Percentage is 13%, then enter a value of 13 into this field. This percentage is interpreted as an annual percentage, so the in transit holding cost is prorated based on transit time. The calculation of the in transit holding costs becomes: Flow Quantity * Product Value * Inventory Carrying Cost Percentage * Transit Time (in days) / 365.
Note that there is also an Inventory Carrying Cost Percentage field in the Model Settings table. If this is set to a value greater than 0 and there is no value specified in the Transportation Policies table, the value from the Model Settings table is automatically used for inventory carrying cost calculations, including in transit holding costs. If there are values specified in both tables, the one(s) in the Transportation Policies table take precedence for In Transit Holding Cost calculations.
The following screenshots show the Inventory Carrying Cost Percentage set to 20% on the DC_Birmingham to CUST_Nashville lane in the Transportation Policies table, and the resulting In Transit Holding Costs in the Optimization Flow Summary table, respectively. The Product Values are as shown in the screenshot of the Products table in the previous section on Duty Rates.


For Pillows, the Product Value set on the Products table is $100. When 120,245 units are moved from DC_Birmingham to CUST_Nashville, which takes 3.8909 hours (214 MI / 55 MPH), the In Transit Holding Costs are calculated as follows: 120,245 (units) * $100 (product value) * 0.2 (carrying cost percentage) * (3.8909 HR (transport time) / 24 (HRs in a day)) / 365 (days in a year) = $1,068.18.
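Both formulas can be verified with a few lines of Python, using the numbers from the two worked examples above:
flow_qty, product_value = 24049, 30
duty_cost = flow_qty * product_value * 0.10          # Duty Rate of 10% -> $72,147.0

flow_qty, product_value = 120245, 100
transit_days = 3.8909 / 24                           # 214 MI / 55 MPH, converted to days
holding_cost = flow_qty * product_value * 0.20 * transit_days / 365   # -> ~$1,068.18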
The following table gives an overview of how the inputs into the 4 cost fields on the Transportation Policies table translate to outputs in multiple optimization output tables. The table contains the field names in the output tables and shows from which input field they result.
Note that the 4 different types of transportation costs are also included in the Landed Cost (Optimization Demand Summary table) and Parent Node Cost (Optimization Cost To Serve Parent Information Report table) calculations.
The SQL Editor helps users write, edit, and execute SQL (Structured Query Language) queries within Optilogic’s platform. It provides direct access to database objects such as tables and views stored within the platform. In this documentation, the Anura Supply Chain Model Database (Cosmic Frog’s database) will be used as the database example.
Anura model exploration and editing are enabled through the three windows of the SQL Editor:

The Anura database is stored in PostgreSQL and exclusively supports PostgreSQL query statements to ensure optimized performance. Visit https://www.postgresql.org/ for more detailed information.
To enable the SQL editor, select a table or view from a database. Once selected, the SQL Editor will prepopulate a Select query, and the Metadata Explorer displays the table schema to enable initial data exploration.

The Database Browser offers several tools to explore your databases and display key information.

The Query Editor enables users to create and execute custom SQL queries and view the results. Reserved words are highlighted in blue to assist in SQL editing. This window is not enabled until a model table or view has been selected from the database browser; once selected, the user is able to customize this query to run in the context of the selected database.

The Metadata Explorer provides a set of tools to efficiently create and store SQL queries.

SQL is a powerful language that allows you to manipulate and transform tabular data. The query basics overview will help guide you through creating basic SQL queries.

Example 1: Filter Criteria - Customers with status set to Include that have a latitude
SELECT A.CustomerName, A.Status, A.Region
FROM customers A
Where A.Latitude IS NOT NULL and A.Status = 'Include'
Example 2: Summarizing Records - Regions with 2 or more geocoded customers
SELECT A.Region, A.Status, Count(*) AS Cnt
FROM customers A
Where A.Latitude IS NOT NULL
Group By A.Region, A.Status
Having Count(*) > 1
Order by Cnt Desc
Often, your model analysis will require you to use data stored in more than one table. To include multiple tables in a single SQL query, you will have to use table joins to list the tables and their relationships.
If you are unsure if all joined values are present in both tables, leverage a Left or Right join to ensure you don’t unintentionally exclude records.

Example 1: Inner Join - Join Customer Demand and Customers to add Region to Demand
SELECT A.CustomerName, A.ProductName, B.Region, A.Quantity
FROM customerdemand A INNER JOIN Customers B
on A.CustomerName = B.CustomerName
Example 2: Left Join - Find Customer Demand records missing Customer record
SELECT A.CustomerName, A.ProductName, B.Region, A.Quantity
FROM customerdemand A Left JOIN Customers B
on A.CustomerName = B.CustomerName
Where B.CustomerName is Null
Example 3: Inner Join & Aggregation – Summarize Demand by Region
SELECT B.Region, A.ProductName, SUM(Cast (A.Quantity as Int)) Quantity
FROM customerdemand A INNER JOIN Customers B
on A.CustomerName = B.CustomerName
Group By B.Region, A.ProductName
When data is separated into two or more tables due to categorical differences in the data, a join won't work because the tables share a common structure rather than a relationship. A UNION allows you to merge the results of two separate table queries into a single unified output. Ensure each query has the same number of columns in the same order.
Example 1: UNION – Create a unified view of all customers and facilities that are geocoded
SELECT A.CustomerName as SiteName, A.City, A.Region, A.Country, A.Latitude, A.Longitude, 'Cust' as Type
FROM customers A
UNION
SELECT B.FacilityName as SiteName, B.City, B.Region, B.Country, B.Latitude, B.Longitude, 'Facility' as Type
FROM Facilities B
As queries grow in complexity, it is often easiest to reset the table references by creating a sub-query. A sub-query allows you to create a new virtual table and reference this abbreviated name and structure as you build out a query in phases.
Example 1: Subquery +UNION – Create a unified view of all customers and facilities that are geocoded
SELECT C.SiteName, C.city, C.Region, C.Country, C.Latitude, C.Longitude, C.Type
FROM (
SELECT A.CustomerName as SiteName, A.City, A.Region, A.Country, A.Latitude, A.Longitude, 'Cust' as Type
FROM customers A
UNION
SELECT B.FacilityName as SiteName, B.City, B.Region, B.Country, B.Latitude, B.Longitude, 'Facility' as Type
FROM Facilities B
) C
WHERE C.Latitude IS NOT NULL
As data tables grow, it is often more efficient to use a table filter to find missing values than a left join and null filter criteria.
Example 1: Table Search Filter – CustomerDemand without a Customer match
SELECT * FROM customerdemand A
WHERE NOT EXISTS (SELECT B.CustomerName FROM Customers B WHERE A.CustomerName = B.CustomerName)
The Analytics module in Cosmic Frog allows you to display data from tables and views. Custom queries can be stored as views, enabling the Analytics module to reference this virtual table to display results. Creating a view follows a very similar query construct to a sub-query, but rather than layering in a select statement, you add CREATE VIEW viewname as (query).
Once created, a view can be selected within the Analytics module of Cosmic Frog.

Example 1: Create View – Creating an all-site view
CREATE VIEW V_All_Sites as
(
SELECT C.SiteName, C.city, C.Region, C.Country, C.Latitude, C.Longitude, C.Type
FROM (
SELECT A.CustomerName as SiteName, A.City, A.Region, A.Country, A.Latitude, A.Longitude, 'Cst' as Type
FROM customers A
UNION
SELECT B.FacilityName as SiteName, B.City, B.Region, B.Country, B.Latitude, B.Longitude, 'Fac' as Type
FROM Facilities B
) C
)
Example 2: Delete View – Delete V_all_sites view
Drop VIEW v_all_sites
SQL queries can also modify the contents and structure of data tables. This is a powerful capability, and the results, if improperly applied, cannot be undone.
Table updates & modifications can be completed within Cosmic Frog, with the added benefit of the context of allowed column values. This can also be done within the SQL editor by executing UPDATE and ALTER TABLE SQL statements.
Example 1: Modifying Tables – Adding Additional Notes Columns
ALTER TABLE Customers
ADD Notes_1 character varying (250)
Example 2: Modifying Values – Updating Notes Columns
UPDATE Customers
SET Notes_1 = CONCAT(Country , '-' , Region)
Example 3: Modifying Tables – Delete New Notes Columns
ALTER TABLE Customers
DROP COLUMN Notes_1
Example 4: Copying Tables – Copy Customers Table
SELECT *
INTO Customers_1
FROM Customers
Example 5: Deleting Tables – Delete Customers Table
DROP TABLE Customers_1
Visit https://www.postgresqltutorial.com/ for more information on PostgreSQL query syntax.
A confirmation email is sent following account creation; however, this email could potentially be blocked due to an organization's IT policies. If you are not receiving your confirmation email, please make sure that www.optilogic.com is whitelisted, as well as the following email address: support=www.optilogic.com@mail.www.optilogic.com.
If possible, please request that a wildcard whitelist be established for all URLs that end in *.optilogic.app.
After confirming that these have been whitelisted, try to send another confirmation email. If the problem persists, please send a note to support@optilogic.com.
One of Cosmic Frog's great competitive features is the ability to quickly run many sensitivity analysis scenarios in parallel on Optilogic's cloud-based platform. This built-in Sensitivity at Scale (S@S) functionality lets a user run sensitivity on demand quantity and transportation costs with 1 click of a button, on any scenario using any of Cosmic Frog's engines. In this documentation, we will walk through how to kick off a S@S run, where to track the status of the scenarios, and show some example outputs of S@S scenarios once they have completed running.
Kicking off a S@S analysis is simply done by clicking on the green S@S button on the right-hand side in the toolbar at the top of Cosmic Frog:

After clicking on the S@S button, the Run Sensitivity at Scale screen comes up:

Please note that the parameters that are configured on the Run Settings screen (which comes up when clicking on the Run button at the right top of Cosmic Frog) are used for the Sensitivity at Scale scenario runs.
The scenarios are then created in the model, and we can review their setup by switching to the Scenarios module within Cosmic Frog:

As an example of the sensitivity scenario items that are being created and assigned to the sensitivity scenarios as part of the S@S process, let us have a look at one of these newly created scenario items:

Once the sensitivity scenarios have been created, they are kicked off to all be run simultaneously. Users can have a look in the Run Manager application on the Optilogic platform to track their progress:

Once a S@S scenario finishes, its outputs are available for review in Cosmic Frog. As with other models and scenarios, users can review outputs through output tables, maps, and graphs/charts/dashboards in the Analytics module. Here we will just show the Optimization Network Summary output table and a cost comparison chart as example outputs. Depending on the model and technology run, users may want to look at different outputs to best understand them.

To understand how the costs are divided over the different cost types and how they compare by scenario, we can look at following Supply Chain Cost Detail graph in the Analytics module:

Optimization (Neo) will read from all 5 of the input tables in the Sourcing section of Cosmic Frog.
We are able to use these tables to define the sourcing logic that describes costs and where a product can be introduced into the network through production at a Facility (Production Policies) or by way of a Supplier (Supplier Capabilities). We can also define additional rules around how a product must be sourced using the Max Sourcing Range and Optimization Policy fields in the Customer Fulfillment, Replenishment, and Procurement Policies tables.
The Max Sourcing Range field can be used to specify the maximum flow distance allowed for a listed location / product combination. If flow distances are not specified in the Distance field of the Transportation Policies table, a straight-line distance will be calculated based on the Origin / Destination geocoordinates. This will take into account the Circuity Factor specified in the Model Settings as a multiplication factor to estimate real road distances. Any transportation distances that exceed the Max Sourcing Range will result in the arcs being dropped from consideration.
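As a minimal Python sketch of this distance check (the coordinates, Circuity Factor, and Max Sourcing Range values are hypothetical):
import math

def straight_line_miles(lat1, lon1, lat2, lon2):
    # Great-circle (haversine) distance in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    a = (math.sin(math.radians(lat2 - lat1) / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(math.radians(lon2 - lon1) / 2) ** 2)
    return 2 * 3959 * math.asin(math.sqrt(a))

circuity_factor = 1.2      # hypothetical Circuity Factor from the Model Settings
max_sourcing_range = 500   # hypothetical Max Sourcing Range in miles
estimated_road_distance = straight_line_miles(33.52, -86.80, 30.45, -91.15) * circuity_factor
lane_is_dropped = estimated_road_distance > max_sourcing_range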
There are 4 allowable entries for Optimization Policy. For any given Destination / Product combination, only a single Optimization Policy entry is supported, meaning you cannot have one source listed with a policy of Single Source and another as By Ratio (Auto Scale).
This is the default entry that will be used if nothing is specified. To Optimize places no additional logic onto the sourcing requirement and will use the least cost option available.
For the listed destination / product combination, only one of the possible sources can be selected.
This option allows for sources to be split by the defined ratios that are entered into the Optimization Policy Value field. All of the entries into this Policy Value field will be automatically scaled, and the flow ratios will be followed for all inbound flow to the listed destination / product combination.
For example, suppose there are 3 potential sources for a single Customer location, and a flow split of 50-30-20 from DC_1, DC_2, and DC_3 respectively is to be enforced. This can be entered as Policy Values of 50, 30, and 20:

The same sourcing logic could be achieved by entering values of 5, 3, 2 or even 15, 9, 6. All values will be automatically scaled for each valid source that has been defined for a destination / product combination.
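The auto-scaling simply normalizes the Policy Values across all valid sources, which is why entries of 50/30/20, 5/3/2, and 15/9/6 are equivalent; in Python terms:
policy_values = {"DC_1": 50, "DC_2": 30, "DC_3": 20}   # 5/3/2 or 15/9/6 give the same result
total = sum(policy_values.values())
flow_ratios = {source: value / total for source, value in policy_values.items()}
print(flow_ratios)   # {'DC_1': 0.5, 'DC_2': 0.3, 'DC_3': 0.2}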
Similar to the Auto Scale option, By Ratio (No Scale) allows for sources to be split by the defined ratios entered into the Optimization Policy Value field. However, no scaling will be performed and the Optimization Policy Value fields will be treated as absolute sourcing percentages where an entry of 50 means that exactly 50% of the inbound flow will come from the listed source.
For example, there are 3 possible sources for a single Customer location and we want to enforce that DC_1 will account for exactly 50% of the flow while the remainder can come from any valid location. We can specify that DC_1 will have a Policy Value of 50 while leaving our other options open for the model to optimize.

If Policy Values add up to less than 100 for a listed destination / product combination, another sourcing option must be available to fulfill the remaining percentage.
If Policy Values add up to more than 100 for a listed destination / product combination, the percentages will be scaled to 100 and used as the only possible sources.
You can create a free account on the Optilogic platform, which includes Cosmic Frog, in just a few clicks. This document shows you two ways in which you can do this. Use the first option if you have Single Sign On (SSO) enabled for your Google (Gmail) or Microsoft account and you want to use this to log into the Optilogic platform.
This video posted on the Optilogic Training website also covers account creation and then goes into how to navigate Cosmic Frog, a good starting point for new users.
To create your free account, go to signup.optilogic.app. This will automatically re-direct you to a Cosmic Frog Log In page:

First, we will walk through the steps of continuing with Microsoft where the user has Single Sign On enabled for their Microsoft account and has clicked on “Continue with Microsoft”. In the next section we will similarly go through the steps for using SSO with a Google account.
After the user has clicked on “Continue with Microsoft”, the following page will be brought up. Click on Accept to continue if the information displayed is correct.

You will see the following message about linking your Microsoft account to your Optilogic account:

Go into your email and find the email with subject “Link Microsoft”, and click on the Link Account button at the bottom of this email:

Should you not have received this email, you can click on “Resend Email”. If you did receive it and you have clicked on the Link Account button, you will be immediately logged into www.optilogic.com and will see the Home screen within the platform, which will look similar to the below screenshot:

From now on, you have 2 options when logging into the Optilogic platform via cosmicfrog.com (see the first screenshot in this documentation): you can log in by clicking on the “Continue with Microsoft” option which will immediately log you in or you can type your credentials into the username / email and password fields to manually log in.
After the user has clicked on “Continue with Google”, the following page will be brought up. If you have multiple Google email addresses, click on the one you want to use for logging into the Optilogic platform. If the email you want to use is not listed, you can click on “Use another account” and then enter the email address.

If the email you choose to use is not signed in on the device you are on currently, you will be asked for your password next. Please provide it and continue. If it is the first time you are using the email address to log into the Optilogic platform, you will be asked to verify it in the next step:

The default verification method associated with the Google account will be suggested, which in the example screenshot above is to send the verification code to a phone number. If other ways to verify the Google account have been set up, you can click on “More ways to verify” to change the verification method. If you are happy with the suggested method, click on Send. Once you have hit Send, the following form will come up:

You can again switch to another verification method in this screen by clicking on “More ways to verify”, or, if you have received the verification code, you can just enter it into the “Enter the code” field and click on Next. This will log you into the Optilogic platform and you will now see the Home screen within the platform, which will look similar to the last screenshot in the previous section (“Steps for the “Continue with Microsoft” Option”).
From now on, you have 2 options when logging into the Optilogic platform via cosmicfrog.com (see the first screenshot in this documentation): you can log in by clicking on the “Continue with Google” option which will immediately log you in after you have selected the Google email address to use, or you can type your credentials into the username / email and password fields to manually log in.
To create your free account, go to www.optilogic.com and click on the yellow “Create a Free Account” button.

The following form will be brought up, please fill out your First Name, Last Name, Email Address, and Phone Number. Then click on Next Step.

Your entered information will be shown back to you, and you can just click on Next Step again. Next, a form where you can set your Username and Password will come up. Click on Next Step again once this form is filled out.

In the final step you will be asked to fill out your Company Name, Role, Industry, and Company Size. Click on Submit after you have filled out these details.

A submission confirmation will pop up with instructions to verify your email address. Once you have verified your email address you can immediately start using your free account!
Cosmic Frog for Excel Applications provide alternative interfaces for specific use cases as companion applications to the full Cosmic Frog supply chain design product. For example, they can be used to access a subset of the Cosmic Frog functionality in a simplified manner, or to provide specific users who are not experienced in working with Cosmic Frog models access to a subset of inputs and/or outputs of a full-blown Cosmic Frog model that are relevant to their position.
Several example use cases are:
It is recommended to review the Cosmic Frog for Excel App Builder before diving into this documentation, as basic applications can quickly and easily be built with it rather than having to edit/write code, which is what will be explained in this help article. The Cosmic Frog for Excel App Builder can be found in the Resource Library and is also explained in the “Getting Started with the Cosmic Frog for Excel App Builder” help article.
Here we will discuss how one can set up and use a Cosmic Frog for Excel Application, which will include steps that use VBA (Visual Basic for Applications) in Excel and scripting using the programming language Python. This may sound daunting at first if you have little or no experience using these. However, by following along with this resource and the ones referenced in this document, most users will be able to set up their own App in about a day or 2 by copy-pasting from these resources and updating the parts that are specific to their use case. Generative AI engines like ChatGPT and Perplexity can be very helpful as well to get a start on VBA and Python code. Cosmic Frog functionality will not be explained much in this documentation; the assumption is that users are familiar with the basics of building, running, and analyzing outputs of Cosmic Frog models.
In this documentation we are mainly following along with the Greenfield App that is part of the Resource Library resource “Building a Cosmic Frog for Excel Application”. Once we have gone through this Greenfield app in detail, we will discuss how other common functionality that the Greenfield App does not use can be added to your own Apps.
There are several Cosmic Frog for Excel Applications that have been developed by Optilogic available in the Resource Library. Links to these and a short description of each of them can be found in the penultimate section “Apps Available in the Resource Library” of this documentation.
Throughout the documentation links to other resources are included; in the last section “List of All Resources” a complete list of all resources mentioned is provided.
The following screenshot shows at a high-level what happens when a typical Cosmic Frog for Excel App is used. The left side represents what happens in Excel, and on the right side what happens on the Optilogic platform.

A typical Cosmic Frog for Excel Application will contain at least several worksheets that each serve a specific purpose. As mentioned before, we are using the MicroAPP_Greenfield_v3.xlsm App from the Building a Cosmic Frog for Excel Application resource as an example. The screenshots in this section are of this .xlsm file. Depending on the purpose of the App, users will name and organize worksheets differently, and add/remove worksheets as needed too:




To set up and configure Cosmic Frog for Excel Applications, we mostly use .xlsm Excel files, which are macro-enabled Excel workbooks. When opening an .xlsm file that for example has been shared with you by someone else or has been downloaded from the Optilogic Resource Library (Help Article on How To Use the Resource Library), you may find that you see either a message about a Protected View where editing needs to be enabled or a Security Warning that Macros have been disabled. Please see the Troubleshooting section towards the end of this documentation on how to resolve these warnings.
To set up Macros using Visual Basic for Applications (VBA), go to the Developer tab of the Excel ribbon:

If the Developer option is not available in the ribbon, then go to File > Options > Customize Ribbon, select Developer from the list on the left and click on the Add >> button, then click on OK. Should you not see Options when clicking on File, then click on “More…” instead, which will then show you Options too.
Now that you are set up to start building Macros using VBA: go to the Developer tab, enable Design Mode and add controls to your sheets by clicking on Insert, and selecting any controls to insert from the drop-down menu. For example, add a button and assign a Macro to it by right clicking on the button and selecting Assign Macro from the right-click menu:



To learn more about Visual Basic for Applications, see this Microsoft help article Getting started with VBA in Office, it also has an entire section on VBA in Excel.
It is possible to add custom modules to VBA in which Sub procedures (“Subs”) and functions to perform specific tasks have been pre-defined; these can then be called in the rest of the VBA code of the workbook into which the module has been imported. Optilogic has created such a module, called Optilogic.bas. This module provides 8 standard functions for integration with the Optilogic platform.
You can download Optilogic.bas from the Building a Cosmic Frog for Excel Application resource in the Resource Library:

You can then import it into the workbook you want to use it in:

Right click on Modules in the VBA Project of the workbook you are working in and then select Import File…. Browse to where you have saved Optilogic.bas and select it. Once done, it will appear in the Modules section, and you can double click on it to open it up:


These Optilogic specific Sub procedures and the standard VBA for Excel functionality enable users to create the Macros they require for their Cosmic Frog for Excel Applications.
App Keys are used to authenticate the user from the Excel App on the Optilogic platform. To get an App Key that you can enter into your Excel Apps, see this Help Center Article on Generating App and API Keys. During the first run of an App, the App Key will be copied from the cell it is entered into to an app.key file in the same folder as the Excel .xlsm file, and it will be removed from the worksheet. This is done by using the Manage_App_Key Sub procedure described in the “Optilogic.bas VBA Module” section above. User can then keep running the App without having to enter the App Key again unless the workbook or app.key file is moved elsewhere.
It is important to emphasize that App Keys should not be saved into Excel Apps as they can easily be accidentally shared when the Excel App itself is shared. Individual users need to authenticate with their own App Key.
When sharing an App with someone else, one easy way to do so is to share all contents of the folder where the Excel App is saved (optionally, zipped up). However, one needs to make sure to remove the app.key file from this folder before doing so.
A Python Job file in the context of Cosmic Frog for Excel Applications is the file that contains the instructions (in Python script format) for the operations of the App that take place on the Optilogic Platform.
Notes on Job files:
For Cosmic Frog for Excel Apps, a .job file is typically created and saved in the same folder as the Macro-enabled Excel workbook. As part of the Run Macro in that Excel workbook, the .job file will be uploaded to the Optilogic platform too (together with any input & settings data). Once uploaded, the Python code in the .job file will be executed, which may do things like loading the data from any uploaded CSV files into a Cosmic Frog model, run that Cosmic Frog model (a Greenfield run in our example), and retrieve certain outputs of interest from the Cosmic Frog model once the run is done.
For a Python job that uses functionality from the cosmicfrog library to run, a requirements.txt file that just contains the text “cosmicfrog” (without the quotes) needs to be placed in the same folder as the .job file. Therefore, this file is typically created by the Excel Macro and uploaded together with any exported data & settings worksheets, the app.key file, and the .job file itself so they all land in the same working folder on the Optilogic platform. Note that the Optilogic platform will soon be updated so that using a requirements.txt file will not be needed anymore and the cosmicfrog library will be available by default.
Like with VBA, users and creators of Cosmic Frog for Excel Apps do not need to be experts in Python code, and will mostly be able to do the things they want by copy-pasting from existing Apps and updating only the parts that are different for their App. In the greenfield.job section further below we will go through the code of the Python job for the Greenfield App in more detail, which can be a starting point for users to start making changes for their own Apps. Next, we will provide some more details and references to quickly equip you with some basic knowledge, including what you can do with the cosmicfrog Python library.
There are a lot of helpful resources and communities online where users can learn everything there is to know about using & writing Python code. A great place to start is on the Python for Beginners page on python.org. This page also mentions how more experienced coders can get started with Python.
Working locally on Python scripts/jobs has the advantage that you can make use of code completion features, which help with things like auto-completion, showing what arguments functions need, and catching incorrect syntax/names. An example setup to achieve this is one where Python, Visual Studio Code, and an IntelliSense extension package for Python for Visual Studio Code are installed locally:
Once you are set up locally and are starting to work with Python files in Visual Studio Code, you will need to install the pandas and cosmicfrog libraries to have access to their functionality. You do this by typing the following in a terminal in Visual Studio Code:
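pip install pandas cosmicfrog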
More experienced users may start using additional Python libraries in their scripts and will need to similarly install them when working locally to have access to their functionality.
If you want to access items on the Optilogic platform (like Cosmic Frog models) while working locally, you will likely need to whitelist your IP address on the platform, so the connections are not blocked by a firewall. You can do this yourself on the Optilogic platform:

A great resource on how to write Python scripts for Cosmic Frog models is this “Scripting with Cosmic Frog” video. In this video, the cosmicfrog Python library, which adds specific functionality to the existing Python features to work with Cosmic Frog models, is covered in some detail already. The next set of screenshots will show an example using a Python script named testing123.py on our local set-up. The first screenshot shows a list of functions available from the cosmicfrog Python library:

When you continue typing after you have typed “model.”, the code completion feature will auto-generate a list of functions you may be getting at; in the next screenshot these are ones that start with or contain a “g”, as only a “g” has been typed so far. This list will auto-update the more you type. You can select from the list with your cursor or the arrow up/down keys and hit the Tab key to auto-complete:

When you have completed typing the function name and next type a parenthesis ‘(‘ to start entering arguments, a pop-up will come up which contains information about the function and its arguments:

As you type the arguments for the function, the argument that you are on and the expected format (e.g. bool for a Boolean, str for string, etc.) will be in blue font and a description of this specific argument appears above the function description (e.g. above box 1 in the above screenshot). In the screenshot above we are on the first argument input_only which requires a Boolean as input and will be set to False by default if the argument is not specified. In the screenshot below we are on the fourth argument (original_names) which is now in blue font; its default is also False, and the argument description above the function description has changed now to reflect the fourth argument:

The next screenshot shows 2 examples of using the get_tablelist function of the FrogModel module:

As mentioned above, you can also use Atlas on the Optilogic platform to create and run Python scripts. One drawback here is that it currently does not have code completion features like IntelliSense in Visual Studio Code.
The following simple test.py Python script on Atlas will print the first Hopper output table name and its column names:


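The script in the screenshot is the authoritative version; a rough reconstruction could look as follows, where the model name, the “hopper” table name prefix, and the read_table function returning a pandas DataFrame are all assumptions for illustration:
from cosmicfrog import FrogModel

model = FrogModel("My Hopper Model")   # hypothetical model name
tables = model.get_tablelist()         # get_tablelist is part of the cosmicfrog library
hopper_outputs = [t for t in tables if t.startswith("hopper")]   # assumed naming convention
first_table = hopper_outputs[0]
print(first_table)
print(model.read_table(first_table).columns.tolist())   # read_table returning a DataFrame is assumed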
After running the Greenfield App, we can see the following files together in the same folder on our local machine:

On the Optilogic platform, a working folder is created by the Run Greenfield Macro. This folder is called “z Working Folder for Excel Greenfield App”. After running the Greenfield App, we can see following files in here:

Parts of the Excel Macro and Python .job file will differ from App to App based on the App's purpose, but a lot of the content will be the same or similar. In this section we will step through the Macro that is behind the Run Greenfield button in the Cosmic Frog for Excel Greenfield App that is included in the “Building a Cosmic Frog for Excel Application” resource. We will explain what is happening at a high level each step of the way and mention whether each part is likely to be different and in need of editing for other Apps or would typically stay the same across most Apps. After stepping through this Excel Macro in this section, we will do the same for the Greenfield.job file in the next section.
The next screenshot shows the first part of the VBA code of the Run Greenfield Macro:

Note that throughout the Macro you will see text in green font. These are comments to describe what the code is doing and are not code that is executed when running the Macro. You can add comments by simply starting the line with a single quote and then typing your comment. Comments can be very helpful for less experienced users to understand what the VBA code is doing.
Next, the file path to the workbook is retrieved:

This piece of code uses the Get_Workbook_File_Path function of the Optilogic.bas VBA module to get the file path of the current workbook. This function first tries to get the path without user input. If it finds that the path looks like the Excel workbook is stored online in for example a Cloud folder, it will use user input in cell B3 on the Admin worksheet to get the file path instead. Note that specifying the file path is not necessary if the App runs fine without it, which means it could get the path without the user input. Only if user gets the message “Local file path to this Excel workbook is invalid. It is possible the Excel workbook is in a cloud drive, or you have provided an invalid local path. Please review setup step 4 on Admin sheet.”, the local file path should be entered into cell B3 on the Admin worksheet.
This code can be left as is for other Apps if there is an Admin worksheet (the variable pathsheetName indicated with 1 in screenshot above) where in cell B3 the file path (the variable pathCell indicated with 2 in screenshot above) can be specified. Of course, the worksheet name and cell can be updated if these are located elsewhere in the App. The message the user gets in this case (set as pathusrMsg indicated with 3 in the screenshot above) may need to be edited accordingly too.
The following code takes care of the App Key management:

The Manage_App_Key function from the Optilogic.bas VBA module is used here to retrieve the App Key from cell B2 on the Admin worksheet and put it into a file named app.key which is saved in the same location as the workbook when the App is run for the first time. The key is then removed from cell B2 and replaced with the text “app key has been saved; you can keep running the App”. As long as the app.key file and the workbook are kept together in the same location, the App will keep working.
Like the previous code on getting the local file path of the workbook, this code can be left as is for other Apps. Only if the location of where the App Key needs to be entered before the first run is different from cell B2 on the worksheet named Admin, the keysheetName and keyCell variables (indicated with 1 and 2 in the screenshot above) need to be updated accordingly.
This App has a greenfield.job file associated with it that contains the Python script which will be run on the Optilogic platform when the App is run. The next piece of code checks that this greenfield.job file is saved in the same location as the Excel App, and it also sets the name of the folder to be created on the Optilogic platform where files will get uploaded to:

This code can be left as is for other Cosmic Frog for Excel Apps, except following will likely need updating:
The Greenfield settings are set in the next step. The ones the user can set on the Settings worksheet are taken from there and others are set to a default value:

Next, the Greenfield Settings and the other input data are written into .csv files:


The firstSpaceIndex variable is set to the location of the first space in the resource size string.
Looking in the Greenfield App on the Customers worksheet we see that this means that the Customer Name (column A), Latitude (column B), Longitude (column C), and Quantity (column D) columns will be exported. The Customers.csv file will contain the column names on the first row, plus 96 rows with data as the last populated row is row 97. Here follows a screenshot showing the Customers worksheet in the Excel App (rows 6-93 hidden) and the first 11 lines in the Customers.csv file that was exported while running the Greenfield App:

Other Cosmic Frog for Excel Applications will often contain data to be exported and uploaded to the Optilogic platform to refresh model data; the Export_CSV_File function can be used in the same way to export similar and other tabular data.
As mentioned in the “Python Job File and requirements.txt” section earlier, a requirements.txt file placed in the same folder as the .job file that contains the Python script is needed so the Python script can run using functionality from the cosmicfrog Python library. The next code snippet checks if this file already exists in the same location as the Excel App, and if not creates it there, plus writes the text cosmicfrog into it.

This code can be used as is by other Excel Apps.
The next step is to upload all the files needed to the Optilogic platform:

Besides updating the local/platform file names and paths as appropriate, the Upload_File_To_Optilogic Sub procedure will be used by most if not all Excel Apps: even if the App is only looking at outputs from model runs and not modifying any input data or settings, the function is still required to upload the .job, app.key, and requirements.txt files.
The next bit of code uses 2 more of the Optilogic.bas VBA module functions to run and monitor the Python job on the Optilogic platform:

This piece of code can stay as is for most Apps, just make sure to update the following if needed:
The last piece of code before some error handling downloads the results (2 .csv files) from the Optilogic platform using the Download_File_From_Optilogic function from the Optilogic.bas VBA module:

This piece of code can be used as is with the appropriate updates for worksheet names, cell references, file names, path names, and text of status updates and user messages. Depending on the number of files to be downloaded, the part of the code setting the names of the output files and doing the actual download (bullet 2 above) can be copy-pasted and updated as needed.
The last piece of VBA code of the Macro shown in the screenshot below has some error handling. Specifically, when the Macro tries to retrieve the local path of the Macro-enabled .xlsm workbook and it finds it looks like it is online, an error will pop up and the user will be requested to put the file path name in cell B3 on the Admin worksheet. If the Macro hits any other errors, a message saying “An unexpected error occurred: <error number> <error description>” will pop up. This piece of code can be left as is for other Cosmic Frog for Excel Applications.

We have used version 3 of the Greenfield App which is part of the Building a Cosmic Frog for Excel Application resource in the above. There is also a stand-alone newer version (v6) of the Cosmic Frog for Excel – Greenfield application available in the Resource Library. In addition to all of the above, this App also:
This functionality is likely helpful for a lot of other Cosmic Frog for Excel Apps and will be discussed in section “Additional Common App Functionality” further below. We especially recommend using the functionality to prevent Excel from locking up in all your Apps.
Now we will go through the greenfield.job file that contains the Python script to be run on the Optilogic platform in detail.

This first piece of code takes care of importing several Python libraries and modules (optilogic, pandas, time; lines 1, 2, and 5). There is another library, cosmicfrog, that is imported through the requirements.txt file that has been discussed before in the section titled “Python Job File and requirements.txt”. Modules from these libraries are imported here as well (FrogModel from cosmicfrog on line 3 and pioneer.API from optilogic on line 4). Now the functionality of these libraries and their modules can be used throughout the code of the script that follows. The optilogic and cosmicfrog libraries are developed by Optilogic and contain specific functionality to work with Cosmic Frog models (e.g. the functions discussed in the section titled “Working with Python Locally” above) and the Optilogic platform.
For reference:
This first piece of code can be left as is in the script files (.job files locally, .py files on the Optilogic platform) for most Cosmic Frog for Excel Applications. More advanced users may import different libraries and modules to use functionality beyond what the standard Python functionality plus the optilogic, cosmicfrog, pandas, and time libraries & modules together offer.
Next, a check_job_status function is defined that will keep checking a job until it is completed. This will be used when running a job to know if the job is done and ready to move onto the next step, which will often be downloading the results of the run. This piece of code can be kept as is for other Cosmic Frog for Excel Applications.

The following screenshot shows the next snippet of code, which defines a function called wait_for_jobs_to_complete. It uses check_job_status to periodically check if the job is done and, once it is, moves on to the next piece of code. Again, this can be kept as is for other Apps.

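As the actual implementation is shown in the screenshots, the following is a minimal sketch of what such a polling pair can look like in Python; the status lookup is left as a generic callable rather than a specific optilogic API method:
import time

def check_job_status(get_status, job_key, poll_seconds=10):
    # get_status is a callable wrapping the optilogic status API call
    # (the concrete call is shown in the screenshot and not reproduced here)
    while True:
        status = get_status(job_key)
        if status in ("done", "error", "cancelled"):   # assumed terminal states
            return status
        time.sleep(poll_seconds)

def wait_for_jobs_to_complete(get_status, job_keys):
    # Block until every submitted job has reached a terminal state
    return [check_job_status(get_status, key) for key in job_keys]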
Now it is time to create and/or connect to the Cosmic Frog model we want to use in our App:

Note that like the VBA code in the Excel Macro, we can add comments describing what the code is doing to our Python script too. In Python, comments need to start with the number (hash) sign #, and the font of comments automatically becomes green in the editor that is being used here (Visual Studio Code using the default Dark Modern color theme).
After clearing the tables, we will now populate them with the data from the Excel workbook. First, the uploaded Customers.csv file that contains the columns Customer Name, Latitude, Longitude, and Quantity is used to update both the Customers and the CustomerDemand tables:

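The code in the screenshot above is the authoritative version; the following is a rough sketch of the pattern, where the exact column names and the write_table call are assumptions based on the description (model is the FrogModel connection created earlier):
import pandas as pd

customers_df = pd.read_csv("Customers.csv")

# Customers table: name and coordinates
customers_table = customers_df.rename(columns={
    "Customer Name": "customername",
    "Latitude": "latitude",
    "Longitude": "longitude",
})[["customername", "latitude", "longitude"]]
model.write_table("customers", customers_table)

# CustomerDemand table: name and quantity
demand_table = customers_df.rename(columns={
    "Customer Name": "customername",
    "Quantity": "quantity",
})[["customername", "quantity"]]
model.write_table("customerdemand", demand_table)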
How much of the above code you can use as is very much depends on the App you are building, but the concepts of reading csv files, renaming and dropping columns as needed, and writing tables into the Cosmic Frog model will be frequently used. The following piece of code also writes the Facilities and Suppliers data into the Cosmic Frog tables. Again, the concepts used here will be useful for other Apps too; the code may just not be exactly the same depending on the App and the tables that are being written to:

Next up, the Settings.csv file is used to populate the Greenfield Settings table in Cosmic Frog and to set 2 variables for resource size and scenario name:

Now that the Greenfield App Cosmic Frog model is populated with all the data needed, it is time to kick off the model and run a Greenfield analysis:

Besides updating any tags as desired (bullet 2b above), this code can be kept exactly as is for other Excel Apps.
Lastly, once the model is done running, the results are retrieved from the model and written into .csv files, which will then be downloaded by the Excel Macro:

When the greenfield_job.py file starts running on the Optilogic platform, we can monitor and see the progress of the job in the Run Manager App:

The Greenfield App (version 3) that is part of the Building a Cosmic Frog for Excel Application resource covers a lot of common features users will want to use in their own Apps. In this section we will discuss some additional functionality users may also wish to add to their own Apps. This includes:
A newer version of the Greenfield App (version 6) can be found here in the Resource Library. This App has all the functionality version 3 has, plus: 1) it has an updated look with some worksheets renamed and some items moved around, 2) has the option to cancel a Run after it has been kicked off and has not completed yet, 3) it prevents locking up of Excel while the App is running, 4) reads a few CSV output files back into worksheets in the same workbook, and 5) uses a Python library called folium to create Maps that a user can open from the Excel workbook, which will then open the map in the user’s default browser. Please download this newer Greenfield App if you want to follow along with the screenshots in this section. First, we will cover how a user can prevent locking of Excel during a run and how to add a cancel button which can stop a run that has not yet completed.
The screenshots call out what is different as compared to version 3 of the App discussed above. VBA code that is the same is not covered here. The first screenshot is of the beginning of the RunGreenfield_Click Macro that runs when the user hits the Run Greenfield button in the App:

The next screenshot shows the addition of code to enable the Cancel button once the Job has been uploaded to the Optilogic platform:

If everything completes successfully, a user message pops up, and the same 3 lines of code are added here too to enable the Run Greenfield buttons, disable the Cancel button, and keep other applications accessible:

Finally, a new Sub procedure CancelRun is added that is assigned to the Cancel button and will be executed when the Cancel button is clicked on:

This code gets the Job Key (unique identifier of the Job) from cell C9 on the Start worksheet and then uses a new function added to the Optilogic.bas VBA module that is named Cancel_Job_On_Optilogic. This function takes 2 arguments: the Job Key to identify the run that needs to be cancelled and the App Key to authenticate the user on the Optilogic platform.
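For reference, here is the equivalent call sketched in Python rather than VBA, consistent with the earlier sketches; the endpoint and header name are assumptions for illustration, not the documented Optilogic API:

```python
import requests

API_BASE = "https://api.optilogic.app"  # hypothetical base URL, as before


def cancel_job_on_optilogic(job_key: str, app_key: str) -> bool:
    # Mirror of the VBA Cancel_Job_On_Optilogic function: the Job Key
    # identifies the run to cancel, the App Key authenticates the user.
    response = requests.delete(
        f"{API_BASE}/jobs/{job_key}",
        headers={"X-App-Key": app_key},
        timeout=30,
    )
    return response.ok
```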
Version 6 of the Greenfield App reads results from the Facility Summary, Customer Summary, and Flow Summary back into 3 worksheets in the workbook. A new Sub procedure named ImportCSVDataToExistingSheet (which can be found at the bottom of the RunGreenfield Macro code) is used to do this:

The procedure is called 3 times, each time importing 1 CSV file into 1 worksheet. It takes 3 arguments:
Next, we will discuss a few options for visualizing your supply chain and model outputs on maps when using or building Cosmic Frog for Excel Applications.
This table summarizes 3 of the mapping options: their pros, cons, and example use cases:
There is standard functionality in Excel to create 3D Maps. You can find this on the Insert tab, in the Tours group (next to Charts):

Documentation on how to get started with 3D Maps in Excel can be found here. Should your 3D Maps icon be greyed out in your Excel workbook, then this thread on the Microsoft Community forum may help troubleshoot this.
How to create an Excel 3D Map in a nutshell:
With Excel 3D Maps you can visualize locations on the map and for example base their size on characteristics like demand quantity. You can also create heat maps and show how location data changes over time. Flow maps that show lines between source and destination locations cannot be created with Excel 3D Maps. Refer to the Microsoft documentation to get a deeper understanding of what is possible with Excel 3D Maps.
The Cosmic Frog for Excel – Geocoding App in the Resource Library uses Excel 3D Maps to visualize customer locations that the App has geocoded on a map:

Here, the geocoded customers are shown as purple circles which are sized based on their total demand.
A good option to visualize, for example, Hopper (= transportation optimization) routes on a map is the ArcGIS for Excel Add-in. If you do not have the add-in, you can get it from within Excel as follows:

You may be asked to log into your Microsoft account when adding the Add-in and/or when starting to use it. Should you experience any issues while trying to add the Add-in to Excel, we recommend closing all Office applications and then opening only the one Excel workbook through which you add the Add-in.
To start using the add-in and create ArcGIS maps in Excel:

Excel will automatically select all data in the worksheet that you are on. You can ensure the mapping of the data is correct or otherwise edit it:

After adding a layer, you can further configure it through the other icons at the top of the Layers window:

The other configuration options for the Map are found on the left-hand side of the Map configuration pane:

As an example, consider the following map showing the stops on routes created by the Hopper engine (Cosmic Frog’s transportation optimization technology). The data in this worksheet is from the Transportation Stop Summary Hopper output table:

As a next step we can add another layer to the map based on the Transportation Segment Summary Hopper output table to connect the source-destination pairs with each other using flow lines. For this we need to use the Esri JSON Geometry Location types mentioned earlier. An example Excel file containing the format needed for drawing polylines can be found in the last answer of this thread on the Esri community website: https://community.esri.com/t5/arcgis-for-office-questions/json-formatting-in-arcgis-for-excel/td-p/1130208, on the PolylinesExample1 worksheet. From this Excel file we can see the format needed to draw a line connecting 2 locations:
{"paths": [[[<point1_longitude>,<point1_latitude>],[<point2_longitude>,<point2_latitude>]]],"spatialReference": {"wkid": 4326}}
Where wkid indicates the well-known ID of the spatial reference to be used on the map (see above for a brief explanation and a link to a more elaborate explanation of spatial references). Here it is set to 4326, which is WGS 1984.
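If you are generating this geometry column programmatically, for example while preparing the Segments worksheet, a minimal Python sketch of building the string could look as follows; the function name and example coordinates are illustrative:

```python
import json


def segment_to_esri_polyline(lon1, lat1, lon2, lat2, wkid=4326):
    # Build the Esri JSON geometry for a single line between two points,
    # in the format shown above ("paths" is an array of paths, each path
    # an array of [longitude, latitude] points).
    geometry = {
        "paths": [[[lon1, lat1], [lon2, lat2]]],
        "spatialReference": {"wkid": wkid},
    }
    return json.dumps(geometry)


# Example: one segment from a source to a destination location.
print(segment_to_esri_polyline(-87.63, 41.88, -95.37, 29.76))
```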
The next 2 screenshots show the data from a Segment Summary worksheet and a layer added to the map that draws the lines between the stops on the route:


Note that for Hopper outputs with multiple routes, we now need to filter both the worksheet with the Stops information and the worksheet with the Segments information for the same route(s) to keep them synchronized. A better solution is to bring the Stop ID and Delivered Quantity information from the Stops output into the Segments output, so there is only 1 worksheet with all the information needed and both layers are generated from the same data. Filtering this one set of data then updates both layers simultaneously, as sketched below.
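A rough pandas sketch of that merge; the file names, join keys, and column names are assumptions and will depend on your Hopper output columns:

```python
import pandas as pd

# Illustrative file names for the two Hopper outputs exported to CSV.
stops = pd.read_csv("TransportationStopSummary.csv")
segments = pd.read_csv("TransportationSegmentSummary.csv")

# Bring the Stop ID and Delivered Quantity from the Stops output onto each
# segment's destination stop, so one worksheet can drive both map layers.
combined = segments.merge(
    stops[["Route Name", "Stop Name", "Stop ID", "Delivered Quantity"]],
    left_on=["Route Name", "Destination Name"],
    right_on=["Route Name", "Stop Name"],
    how="left",
)
combined.to_csv("SegmentsWithStops.csv", index=False)
```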
Here, we will discuss a Python library called folium, which lets users create maps that show flows and tooltips, with options to customize and auto-size location shapes and flow lines. We will again use the example of the Cosmic Frog for Excel – Greenfield App (version 6), where maps are created as .html files as part of the greenfield_job.py Python script that runs on the Optilogic platform. They are then downloaded as part of the results, and from within Excel, users can click buttons to show flows or customers, which opens the .html files in the user’s default browser. We will focus on the map- and folium-related differences with version 3 of the Greenfield App, discussing the changes and additions to both the VBA code in the Excel Run Greenfield Macro and to greenfield_job.py. First up in the VBA code: we need to add folium to the requirements.txt file so that the Python script can make use of the library once it is uploaded to the Optilogic platform:

To do so, a line is added to the VBA code that writes “folium” into requirements.txt.
As part of downloading all the results from the Optilogic platform after the Greenfield run has completed, we need to add downloading the .html map files that were created:

In this version of the Greenfield app, there is a new Results Summary worksheet that has 3 buttons at the top:

Each of these buttons has a Sub procedure assigned to it; let’s look at the one for the “Show Flows with Customers” button:

The map that is opened will look something like this, where a tooltip comes up when hovering over a flow line. (How to create and configure the map using folium will be discussed next.)

The additions to the greenfield_job.py file that make use of folium and create the 3 maps will now be covered:

First, at the beginning of the script, we need to add “import folium” (line 6), so that the library’s functionality can be used throughout the script. Next, the 3 Greenfield output tables that are used to create the 3 maps are read in, and a few data type changes are made to get the data ready for mapping:

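As a rough sketch of this read-and-prepare pattern for the first of the 3 tables; the table and column names, and the read_table helper, are assumptions based on the cosmicfrog library:

```python
import pandas as pd
import folium  # added at the top of greenfield_job.py (line 6)

# Read the Greenfield facility results and make sure the coordinate columns
# are numeric so folium can place markers on the map.
df_Res_Facilities = model.read_table("optimizationgreenfieldfacilitysummary")
for col in ("latitude", "longitude"):
    df_Res_Facilities[col] = pd.to_numeric(df_Res_Facilities[col])
```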
This is repeated twice, once for the Optimization Greenfield Customer Summary output table and once for the Optimization Greenfield Flow Summary output table.
The next screenshot shows the code where the map showing Facilities is created and their Markers are configured based on whether each facility is an Existing Facility or a Greenfield Facility:

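A minimal folium sketch of this step; folium.Map, folium.Marker, and folium.Icon are actual folium APIs, while the dataframe and column names (including the facility type check) are illustrative assumptions:

```python
# Center the map on the average facility location, then add one marker per
# facility, colored by facility type.
flow_map = folium.Map(
    location=[df_Res_Facilities["latitude"].mean(),
              df_Res_Facilities["longitude"].mean()],
    zoom_start=5,
)
for _, row in df_Res_Facilities.iterrows():
    is_greenfield = row["facilitytype"] == "Greenfield Facility"  # column assumed
    folium.Marker(
        location=[row["latitude"], row["longitude"]],
        tooltip=row["facilityname"],
        icon=folium.Icon(color="green" if is_greenfield else "blue"),
    ).add_to(flow_map)
```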
In the next bit of code, the df_Res_Flows dataframe is used to draw lines on the map between origin and destination locations:

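A sketch of the flow lines using folium’s PolyLine; the df_Res_Flows column names are assumptions:

```python
# Draw one line per origin-destination flow, with a hover tooltip showing
# the flow details.
for _, row in df_Res_Flows.iterrows():
    folium.PolyLine(
        locations=[[row["sourcelatitude"], row["sourcelongitude"]],
                   [row["destinationlatitude"], row["destinationlongitude"]]],
        weight=2,
        tooltip=f"{row['sourcename']} to {row['destinationname']}: {row['flowquantity']}",
    ).add_to(flow_map)
```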
Lastly, the customers from the Optimization Greenfield Customer Summary output table are added to the map that already contains the facilities and flow lines, and the map is saved as greenfield_flows_customers_map.html:

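And a sketch of this final step, assuming a df_Res_Customers dataframe was read from the Customer Summary table in the same way as the other two tables (column names illustrative):

```python
# Add the customers on top of the facilities and flow lines, then save the
# finished map to the .html file that the Excel App downloads and opens.
for _, row in df_Res_Customers.iterrows():
    folium.CircleMarker(
        location=[row["latitude"], row["longitude"]],
        radius=3,
        tooltip=row["customername"],
    ).add_to(flow_map)

flow_map.save("greenfield_flows_customers_map.html")
```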
Here are some additional pointers that may be useful when building your own Cosmic Frog for Excel applications:


You may run into issues where Macros or scripts are not running as expected. Here we cover some common problems you may come across and their solutions.
When opening an Excel .xlsm file you may see the following message about the view being protected; you can click on Enable Editing if you trust the source:

Enabling editing is not necessarily sufficient to also be able to run any Macros contained in the .xlsm file, and you may see the following message after clicking on the Enable Editing button:

Closing this message box and then trying to run a Macro will result in the following message:

To resolve this, it is not always sufficient to just close and reopen the workbook and enable macros as the message suggests. Instead, go to the folder where the .xlsm file is saved in File Explorer, right-click on it, and select Properties:

At the bottom in the General tab, check the Unblock checkbox and then click on OK.

Now, when you open the .xlsm file again, you will have the option to Enable Macros; click the button to do so. From now on, you will not need to repeat any of these steps when closing and reopening the .xlsm file, and Macros will work fine.

It is also possible that instead of the Enable Editing warning and warnings around Macros not running discussed above, you will see a message that Macros have been disabled, as in the following screenshot. In this case, please click on the Enable Content button:

Depending on your anti-virus software and its settings, it is possible that the Macros in your Cosmic Frog for Excel Apps will not run as they are blocked by the anti-virus software. If you get “An unexpected error occurred: 13 Type mismatch”, this may be indicative of the anti-virus software blocking the Macro. Work with your IT department to allow the running of Macros.
If you are running Python scripts locally (say from Visual Studio Code) that are connecting to Cosmic Frog models and/or uploading files to the Optilogic platform, you may be unsuccessful and get warnings with the text “WARNING – create_engine_with_retry: Database not ready, retrying”. In this case, the likely cause is that your IP address needs to be added to the list of firewall exceptions within the Optilogic platform, see the instructions on how to do this in the “Working with Python Locally” section further above.
You will find that if you export cells that contain formulas from Excel to CSV, these are exported as 0’s and not as the calculated values. Possible solutions are 1) to export to a format other than CSV, such as .xlsx, or 2) to create an extra column in your data into which the results of the formula cells are copy-pasted as values, and to export this column instead of the one with the formulas (this way the formulas stay intact for a next run of the App). You can use the Record Macro option to get a start on the VBA code for copy-pasting values from one column into another, so that you do not have to do this manually each time; instead it becomes part of the Macro that runs when the App runs. An example of VBA code that copy-pastes values can be seen in this screenshot:

When an App has been run previously, there are likely output files in the folder where the App is located, for example CSV files that are opened by the user to view the results or that are read back into a worksheet in the App. When running the App again, it is important that these output files are not open, otherwise an error will be thrown when the App gets to the stage of downloading the output files, since open files cannot be overwritten.
There are currently several Cosmic Frog for Excel Applications available in the Resource Library, with more being added over time. Check back frequently and search for “Cosmic Frog for Excel” in the search bar to find all available Apps. A short description for each App that is available follows here:
As this documentation contains many links to references and resources, we will list them all here in one place:
Greenfield analysis (GF) is a method for determining the optimal location of facilities in a supply chain network. The Greenfield engine in Cosmic Frog is called Triad and this name comes from the oldest known species of frogs – Triadobatrachus. You can think of it as the starting point for the evolution of all frogs, and it serves as a great starting point for modeling projects too! We can use Triad to identify 3 key parameters:
GF is a great starting point for network design—it solves quickly and can reduce the number of candidate site locations in complicated design problems. However, a standard GF requires some assumptions to solve (e.g. single time period, single product). As a result, the output of a Triad model is best suited as initial information for building a more robust Cosmic Frog optimization (Neo) or simulation (Throg) model.
You can run GF in any Cosmic Frog model. Running a GF model only requires two input tables to be populated:

A third important table for running GF is the Greenfield Settings table in the Functional Tables section of the input tables. We call our GF approach “Intelligent Greenfield” because of the different parameters available by configuring this settings table. The Greenfield Settings table is always populated with defaults and users can change these as needed. See the Greenfield Settings Explained help article for an explanation of the fields in this table.
A greenfield analysis starts by clicking the “Run” button at the top right of the Cosmic Frog application, just like a Neo or Throg model.

After clicking on the Run button, the Run screen comes up:

Besides making changes to values in the Customers and/or Customer Demand tables, GF scenarios often make changes to 1 or multiple settings on the Greenfield Settings table. The next screenshot shows an example of this:

To improve the solve speed of a Triad model, we can use customer clustering. Customer clustering reduces the size of the supply chain by grouping customers within a given radius into a single customer. We can set the clustering radius (in miles) in the Customer Cluster Radius column of the Greenfield Settings table.

Clustering is optional, and leaving this column blank is the same as turning off clustering.
While grouping customers can significantly improve the run time of the model, clustering may result in a loss of optimality. However, Greenfield is typically used as a starting point for a future Neo optimization model, so small losses in optimality at this phase are typically manageable.