We love for our users to connect, keep up to date, and learn from and share with other Cosmic Frog users & experts through the Frogger Pond Community! If you have an Optilogic account (see this page on how to create your free account if you do not have one yet), you can use that same account to log into the Frogger Pond Community.
Here, we will describe what the Frogger Pond Community consists of, how to interact with, search, sort, and contribute to Topics, and how to manage your account. Recommended reads for new users are included in the last section too.
When you login to the Frogger Pond Community, the homepage you see will look similar to the screenshot below:

Once you have clicked on a topic that you want to read and possibly interact with, you will see something similar to the following screenshot:

After you click on the “+ New Topic” button, the following window will pop-up at the bottom of your browser:

The third icon at the top right of the homepage opens a Menu that you can use to quickly navigate to different parts of the Frogger Pond Community.

When you click on your profile picture at the top right of the homepage, a small window that gives you quick access to (from left to right) Notifications (bell icon), Bookmarks (bookmark icon), Messages (envelope icon), and Preferences (person icon) opens up:

If you click on the upside-down caret at the bottom of notifications, bookmarks, or messages or click on any item in the preferences list, you will be taken to that area of your account. In the following screenshot, we have gone to the Messages section of the user account:

Using a platform you may not be familiar with can be overwhelming. To help new users get started, we recommend reading the following items. These are also mentioned in the “Welcome to Optilogic!” message all new users of the Frogger Pond Community receive.
We look forward to your questions & contributions over at the Frogger Pond!
Running a model is simple and easy. Just click the run button and watch your Python model execute. Watch the video to learn more.
To leverage the power of hyperscaling, click the “Run as Job” button.
Watch the video to learn how to import, export, geocode, and work with data within Cosmic Frog:
If you want to follow along, please download the data set available here:
Before you get started in Cosmic Frog watch this short video to learn how to navigate the software.
There can be many different causes for an ODBC connection error; this article contains a couple of specific error messages along with resolution steps.
Resolution: Please make sure that the updated PSQL driver has been installed. This can be checked through your ODBC Data Sources Administrator.

The latest versions of the drivers are located here: https://www.postgresql.org/ftp/odbc/releases/. From there, click on the latest parent folder (REL-16_00_0005 as of June 20, 2024). Select and download the psqlodbc_x64.msi file. When installing, use the default settings from the installation wizard.
Resolution: Please confirm the PSQL drivers are updated as shown in the previous resolution. If this error is being thrown while running an Alteryx workflow specifically, please disable the AMP Engine for the Alteryx workflow.

When running Cosmic Frog models and other jobs on the Optilogic platform, cloud resources are used. Usage of these resources is billed based on the billing factor of the resource used for the job. Each Optilogic customer has an amount of cloud compute hours included in their Master License Agreement (MLA). Users may want to check how many of these hours have been used; this documentation covers 2 ways to do so. In the last section we will touch on how to best track hours at the team/company level.
The first option for hours tracking that will be covered is through the Usage tab in the user’s Account:

If a user is asked by their manager to report the hours they have used on the Optilogic platform, they can go here and use the Custom Time Window Preset option to align the start and end date of the reporting period with the dates of the MLA. They can then report back the number shown as the Total Billed Compute Time (box 4 in the above screenshot).
Through the Run Manager application on the Optilogic platform, users can also analyze the jobs they have run, including retrieving the Total Billed Compute Time:

After clicking on the View Charts icon, a screen similar to the following screenshot will be shown:


If a user needs to report their hours used on the Optilogic platform, they can download this jobs.csv file and:
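For example, they can total the billed compute time across all jobs in the file. Here is a minimal sketch, assuming a hypothetical column header (verify it against the actual header in your downloaded jobs.csv):

```python
import pandas as pd

jobs = pd.read_csv("jobs.csv")
# "Billed Compute Time" is a hypothetical column name -- check the actual
# header in your jobs.csv download and adjust accordingly.
total_billed = jobs["Billed Compute Time"].sum()
print(f"Total Billed Compute Time: {total_billed}")
```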
Currently, only tracking of usage hours at the individual user level is available as described above. To get total team or company usage, a manager can ask their users to use 1 of the above 2 methods to report their Total Billed Compute Time and the manager can then add these up to get the total used hours so far. Tracking at the team/company level is planned to be made available on the Optilogic platform later in 2024.
A confirmation email is sent following account creation; however, this email could be blocked by an organization’s IT policies. If you do not receive your confirmation email, please make sure that www.optilogic.com is whitelisted, as well as the following email address: support=www.optilogic.com@mail.www.optilogic.com.
If possible, please request that a wildcard whitelist be established for all URLs that end in *.optilogic.app.
After confirming that these have been whitelisted, try sending another confirmation email. If the problem persists, please send a note to support@optilogic.com.
You can clear output data from all model output tables in one quick action. Navigate to the Scenarios tab and from the scenario drop-down menu select the Delete Scenario Results option. This can also be accessed by right-clicking on any scenario name. Next, from the window on the right-hand side of the screen you can select the scenario(s) that you want to delete output data for. Once selected, click the Delete button and all output data will be cleared for the selected scenarios.

Cosmic Frog’s new breakpoints feature enables users to create maps that relay even more supply chain data at a glance. Lines and points can now be styled based on field values from the underlying input or output table the lines/points are drawn from.
In this Help Center article, we will cover where to find the breakpoints feature for both point and line layers and how to configure them. A basic knowledge of how to configure maps and their layers in Cosmic Frog is assumed; users unfamiliar with maps in Cosmic Frog are encouraged to first read the “Getting Started with Maps” Help Center article.
First, we will walk through how to apply breakpoints to map layers of type = line, which are often used to show flows between locations. With breakpoints we can style the lines between origins and destinations based on, for example, how much is flowing in terms of quantity, volume, or weight. It is also possible to style the lines on other numeric fields, like costs, distances, or time.
Consider the following map showing flows (dark green lines) to customers (light green circles):

Next, we will go to the Layer Style pane on which breakpoints can be turned on and configured:

Once the Breakpoints toggle has been turned on (slide right, the color turns blue), the breakpoint configuration options become visible:

One additional note: you can use the Tab key to navigate through the cells in the Breakpoints table.
The next screenshot shows breakpoints based on the Flow Quantity field (in the Optimization Flow Summary) for which the Max Values have been auto generated:


Users can customize the style of each individual breakpoint:

Please note:
Configuring and applying breakpoints on point layers is very similar to doing so on line layers. We will walk through the steps in the next 4 screenshots in slightly less detail. In this example we will base the size of the customer locations on the map on the total demand served to them:

Next, we again look at the Layer Style pane of the layer:


Lastly, say the user would like the color of the customer circles to go gradually from light to dark green, and their size from small to large, based on the breakpoint each customer belongs to:

As always, please feel free to reach out to Optilogic support at support@optilogic.com should you have any questions.
There are four main methods for adding data to Atlas:
1. Drag/Drop

2. Upload tool

3. OneDrive

4. Leverage API code
‘Connection Info’ is required when connecting 3rd party tools such as Alteryx, Tableau, PowerBI, etc. to Cosmic Frog.
Anura version 2.7 makes use of Next-Gen database infrastructure. As a result, connection strings must be updated to maintain connectivity.
Severed connections will produce an error when connecting your 3rd party tool to Cosmic Frog.
Steps to restore connection:
The Cosmic Frog ‘HOST’ value can be copied from the Cloud Storage browser.
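As an illustrative sketch of what a connection from Python could look like (all values below are placeholders; copy the real ones from the Connection Info dialog, and note that your tool of choice may use a different driver or syntax):

```python
import psycopg2  # PostgreSQL driver; Cosmic Frog models live in PostgreSQL

# All connection values are placeholders -- copy the real HOST, database name,
# user, and password from Cosmic Frog's Connection Info.
conn = psycopg2.connect(
    host="<HOST value from the Cloud Storage browser>",
    dbname="<model database name>",
    user="<user>",
    password="<password>",
    port=5432,
)
with conn.cursor() as cur:
    cur.execute("SELECT 1")  # simple connectivity check
    print(cur.fetchone())
conn.close()
```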

Sometimes, the cause of infeasibility is not immediately clear. Especially as we build more and more complex models with many constraints, infeasibility can come from several different sources.
One great tool is the Infeasibility Diagnostic Engine. In the “Run” menu, you can select this tool under the Neo engine dropdown.

Running the Infeasibility Diagnostic Engine can allow the model to optimize even if the current constraints cause infeasibility. Generally, this engine adds slack variables to each of the constraints in our model, allowing it to solve.
Adding slack variables means that the infeasibility diagnostic is solving an optimization problem focused on feasibility, not cost. The goal of this augmented model is to minimize the slack variables. By minimizing the slack variables needed to solve the model, this diagnostic tool can find:
If constraint relaxation is enough to allow the model to solve, the model will show as “Done” in the Run Manager.

The “slack” results will show in the OptimizationConstraintSummary table.


In this example there are no transportation lanes to CUST_Phoenix, so we cannot fulfill that demand constraint. The model is telling us that a way to make this feasible is to change the demand constraint to be 0.
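As a simplified sketch (not the engine's exact formulation), relaxing the CUST_Phoenix demand constraint with a slack variable $s$ looks like:

$$\min_{x,\,s}\; s \quad \text{subject to} \quad \sum_{j} x_{j,\,\text{Phoenix}} + s \;\ge\; d_{\text{Phoenix}}, \qquad x \ge 0,\; s \ge 0$$

Since there are no lanes into CUST_Phoenix, every flow term $x_{j,\,\text{Phoenix}}$ is forced to 0, so the smallest feasible slack is $s = d_{\text{Phoenix}}$, the entire demand, which is why the diagnostic suggests relaxing the demand constraint to 0.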
In general, infeasibility is a very complex topic, so the infeasibility diagnostic tool cannot catch or relax all potential infeasibility causes, but it is a very useful place to start.
We take data protection seriously. Below is an overview of how backups work within our platform, including what’s included, how often backups occur, and how long they’re kept.
Every backup—whether created automatically or manually—contains a complete snapshot of your database at the time of the backup. This includes everything needed to fully restore your data.
We support two types of backups at the database level:
Often called “snapshots,” “checkpoints,” or “versions” by users:
We use a rolling retention policy that balances data protection with storage efficiency. Here’s how it works:
Retention Tier - Time Period - What’s Retained
Short-Term - Days 1–4 - Always keep the 4 most recent backups
Weekly - Days 5–7 - Keep 1 additional backup
Bi-Weekly - Days 8–14 - Keep the newest and oldest backups
Monthly - Days 15–30 - Keep the newest and oldest backups
Long-Term - Day 31+ - Keep the newest and oldest backups
This approach ensures both recent and historical backups are available, while preventing excessive storage use.
In addition to per-database backups, we also perform server-level backups:
These backups are designed for full-server recovery in extreme scenarios, while database-level backups offer more precise restore options.
To help you get the most from your backup options, we recommend the following:
If you have additional questions about backups or retention policies, please contact our support team at support@optilogic.com.
When running geocoding through the default Mapbox provider, all of the available location data from the Customers, Facilities, and Suppliers tables will be used to try to determine the latitude and longitude coordinates. Mapbox uses all of these components to perform the best mapping possible and returns latitude/longitude coordinates along with a confidence score. By default, Cosmic Frog only accepts results with a confidence score of 100. You can optionally turn this setting off, and the result with the top confidence score will then be returned by Mapbox.

More information on how Mapbox calculates latitude and longitude coordinates can be found here: Mapbox Geocoding Documentation.
If you’d like to use an alternate provider instead of Mapbox, setup instructions can be found here: Using Alternate Geocoding Providers.
To create a scenario you need to do three things:
From the Scenario tab in Cosmic Frog select the blue button called “Scenario” and click on “New Scenario.”

Type the name of the scenario you would like to create in the panel window.

From the same drop-down as “New Scenario” select “New Item” to create a scenario item. Enter the name of your scenario item in the window. After you press Enter, the Scenario Item window will be active, where you will select the following:

After you have created and saved the Scenario Item, you need to assign that item to a scenario. On the right-hand side of your screen there is a table called “Assign Scenarios.” From here you can check/uncheck the Scenarios where you wish to use the new Scenario Item.

Utilities enable powerful modelling capabilities for use cases like integration with other services or data sources, repeatable data transformations, or anything that can be supported by Python! System Utilities are available as a core capability in Cosmic Frog for use cases like LTL rate lookups, TransitMatrix time & distance generation, and copying items like Maps and Dashboards from one model to another. More useful System Utilities will become available in Cosmic Frog over time. Some of these System Utilities are also available in the Resource Library where they can be downloaded from, and then customized and made available to modelers for specific projects or models. In this Help Article we will cover both how to use System Utilities as well as how to customize and deploy Custom Utilities.
The “Using and Customizing Utilities” resource in the Resource Library includes a helpful 15-minute video on Cosmic Frog Model Utilities and users are encouraged to watch this.
In this Help Article, System Utilities will be covered first, before discussing the specifics of creating one’s own Utilities. Finally, how to use and share Custom Utilities will be explained as well.
Users can access utilities within Cosmic Frog by going to the Utilities section via the Module Menu drop-down:

Once in the Utilities section, users will see the list of available utilities:

The appendix of this Help Article contains a table of all System Utilities and their descriptions.
Utilities vary in complexity by how many input parameters a user can configure, ranging from those where no parameters need to be set at all to those where many can be set. The following screenshot shows the Orders to Demand utility, which does not require any input parameters to be set by the user:

The Copy map to a model utility shown in the next screenshot does require several parameters to be set by the user:

When the Run Utility button has been clicked, a message appears beneath it briefly:

Clicking on this message will open the Model Activity pane to the right of the tab(s) with open utilities:


Users will not only see activities related to running utilities in the Model Activity list. Other actions that are executed within Cosmic Frog will be listed here too, such as when a user has geocoded locations using the Geocode tool on the Customers / Facilities / Suppliers tables, or when a user makes a change in a master table and chooses to cascade these changes to other tables.
Please note that the following System Utilities have separate Help Articles where they are explained in more detail:
The utilities that are available in the Resource Library can be downloaded by users and then customized to fit the user’s specific needs. Examples are to change the logic of a data transformation, apply similar logic but to a different table, etc. Or users may even build their own utilities entirely. If a user updates a utility or creates a new one, they can share these back with other users so they can benefit from them as well.
Utilities are Python scripts that follow a specific structure, which will be explained in this section. They can be edited directly in the Atlas application on the Optilogic platform, or users can download the Python file that is being used as a starting point and edit it using an IDE (Integrated Development Environment) installed on their computer. A text editor geared towards coding, such as Visual Studio Code, will work fine too for most. An advantage of working locally is that users can take advantage of code completion features (auto-completion while typing, showing what arguments functions need, catching incorrect syntax/names, etc.) by installing an extension package such as IntelliSense (for Visual Studio Code). The screenshots of the Python files underlying the utilities that follow in this documentation were taken while working with them in Visual Studio Code locally, on a machine that has the IntelliSense extension package installed.
A great resource on how to write Python scripts for Cosmic Frog models is this “Scripting with Cosmic Frog” video. In this video, the cosmicfrog Python library, which adds specific functionality to the existing Python features to work with Cosmic Frog models, is covered in some detail.
We will start by looking at the Python file of the very simple Hello World utility. In this first screenshot, the parts that can stay the same for all utilities are outlined in green:

Next, onto the parts of the utility’s Python script that users will want to update when customizing / creating their own scripts:

Now, we will discuss how input parameters, which users can then set in Cosmic Frog, can be added to the details function. After that we will cover different actions that can be added to the run function.
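Before diving into those details, here is a minimal outline of the overall structure. This is a sketch only, assuming the two-function layout described above; the exact signatures and return values expected by the cosmicfrog library may differ, so use the actual Hello World file as your template:

```python
# Minimal utility outline (illustrative; the dictionary shape returned by
# details() is hypothetical -- copy the real structure from the Hello World
# utility's Python file).

def details():
    # Describes the utility: its name, a description, and any input parameters.
    return {
        "name": "Hello World",
        "description": "Prints a greeting; takes no input parameters.",
        "params": [],
    }

def run(params):
    # The actions the utility performs when Run Utility is clicked; params is
    # the 0-indexed list of user-supplied parameter values (empty here).
    print("Hello World!")
```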
If a utility needs to be able to take any inputs from a user before running it, these are created by adding parameters in the details function of the utility’s Python script:

We will take a closer look at a utility that uses parameters and map the arguments of the parameters back to what the user sees when the utility is open in Cosmic Frog; see the next 2 screenshots. The numbers in the script screenshot are matched to those in the Cosmic Frog screenshot to indicate what code leads to what part of the utility when looking at it in Cosmic Frog. These screenshots use the Copy dashboard to a model utility, of which the Python script (Copy dashboard to a model.py) was downloaded from the Resource Library.

Note that Python lists are 0-indexed, meaning that the first parameter (Destination Model in this example) is referenced by typing params[0], the second parameter (Replace or Append dashboards) by typing params[1], etc. We will see this in the code when adding actions to the run function below too.
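As a tiny illustration (the parameter values shown are hypothetical):

```python
# User-entered parameter values arrive as a 0-indexed list.
params = ["My Destination Model", "Replace"]  # hypothetical values
destination_model = params[0]   # first parameter: Destination Model
replace_or_append = params[1]   # second parameter: Replace or Append dashboards
```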
Now let’s have a look at how the above code translates to what a user sees in the Cosmic Frog user interface for the Copy dashboard to a model System Utility (note that the numbers in this screenshot match with those in the above screenshot):

The actions a utility needs to perform are added to the run function of the Python script. These will be different for different types of utilities. We will cover the actions the Copy dashboard to a model utility uses at a high level and refer to Python documentation for users interested in understanding all the details. There are a lot of helpful resources and communities online where users can learn everything there is to know about using & writing Python code. A great place to start is the Python for Beginners page on python.org. This page also mentions how more experienced coders can get started with Python. Also note that text in green font following a hash sign is a comment that adds context to the code.



For a custom utility to show in the My Utilities category of the utilities list in Cosmic Frog, it needs to be saved under My Files > My Utilities in the user’s Optilogic account:

Note that if a Python utility file is already in a user’s Optilogic account, but in a different folder, the user can click on it and drag it to the My Utilities folder.
For utilities to work, a requirements.txt file that contains only the text cosmicfrog needs to be placed in the same My Files > My Utilities folder (if not there already):

A customized version of the Copy dashboard to a model utility was uploaded here, and a requirements.txt file is present in the same folder too.
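As stated above, the entire contents of that requirements.txt file is this single line:

```
cosmicfrog
```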
Once a Python utility file is uploaded to My Files > My Utilities, it can be accessed from within Cosmic Frog:

If users want to share a custom utility with other users, they can do so by right-clicking on the utility file and choosing the “Send Copy of File” option:

The following form then opens:

When a custom utility has been shared with you by another user, it will be saved under the Sent To Me folder in your Optilogic account:

Should you have created a custom utility that you feel a lot of other users can benefit from and you are allowed to share outside of your organization, then we encourage you to submit it into Optilogic’s Resource Library. Click on the Contribute button at the left top of the Resource Library and then follow the steps as outlined in the “How can I add Python Modules to the Resource Library?” section towards the end of the “How to use the Resource Library” help article.
Utility names and descriptions by category:
Running a model from Atlas via the SDK requires the following inputs:
Model data can be stored in two common locations:
1. Cosmic Frog model (denoted by a .frog suffix) stored within a PostgreSQL database on the platform
2. .CSV files stored within Atlas
For #1, simply enter the database name within single quotes. For example, to run a model called Satellite_Sleep, enter the model name in the run_model.py file:
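A minimal sketch of that entry (the exact variable name in run_model.py may differ; open the file in Atlas to see the actual placeholder to fill in):

```python
# Name of the Cosmic Frog model stored in the platform's PostgreSQL database;
# the variable name shown here is illustrative.
model_name = 'Satellite_Sleep'
```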
For #2 you will need to create a folder within Atlas to store your .CSV modeling files:


Each Atlas account is loaded with a run_model.py file located within the SDK folder in your Model Library.

Double click the file to open it within Atlas and enter the following data:


Please find a list of model errors that might appear in the Job Error Log and their associated resolutions.
Resolution – Please check to see if a scenario item was built where an action does not reference a column name. Scenario Actions should be of the form ColumnName = …..
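For example, a valid scenario action assigns a value to a named column (syntax consistent with the scenario items shown elsewhere in this documentation):

```
Status = 'Exclude'
```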


Resolution – Please check whether a string in the scenario condition is missing its opening or closing apostrophe.
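For instance, the first condition below is missing its closing apostrophe and will throw this error, while the second is correct:

```
Notes = 'New Market
Notes = 'New Market'
```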


Resolution – This indicates an intermittent drop in connection when reading / writing to the database. Re-running the scenario should resolve this error. If the error persists, please reach out to support@optilogic.com.
The validation error report table will be populated for all model solves and contains a lot of helpful information for both reviewing and troubleshooting models. A set of results will be saved for each scenario, and every record will report the table and column where the issue has been identified, a description of the error, and the value that was identified as the problem, along with the action taken by the solver. When multiple records are generated for the same type of error, you will see an example record populated as the identified value along with a count of the errors – detailed records can be printed using debug mode. Each record will also be rated on the severity of its impact on a scale from 0–3, with 3 being the highest.
The validation error report can serve as a great tool to help troubleshoot an infeasible model. The best approach for utilizing this table is to first sort on the Severity column so that the highest-severity issues are displayed first. If severity 3 issues are present, they must be addressed. If no severity 3 issues exist, the next step is to review any severity 2 issues to consider policy impact. It is also helpful to search for any instances where rows are being dropped, as these dropped rows will likely influence other errors. To do this, filter the Action field for ‘Row Dropped’.
While the report can be helpful in trying to proactively diagnose infeasible models, it won’t always have all of the answers. To learn more about the dedicated engine for infeasibility diagnosis please check out this article: Troubleshooting With The Infeasibility Diagnostic Engine
Severity 3 records capture instances of syntax issues that are preventing the model from being built properly. This can be presented as 2 types:
Severity 3 records can also be instances where model infeasibility can be detected before the model solve begins, in the instance where a severity 3 error is detected the model solve will be cancelled immediately. There are 3 common instances of these Severity 3 errors:
Severity 2 records capture sources of potential infeasibility and while not always a definitive reason for a problem being infeasible, they will highlight potential issues with the policy structure of a model. There are 2 common instances of Severity 2 errors:
These severity 2 errors don’t necessarily indicate a problem; they can often be caused by group-based policies and members overlapping from one group to another. It is still a good idea to review these gaps in policies and make sure that all of the lanes which should be considered in your solve are in fact being read into the solver.
Severity 1 records will capture issues in model data that can cause rows from input tables to be dropped when writing the solver files for a model. These can be issues with allowed numerical ranges, such as a negative quantity for customer demand. Detailed policy linkages that do not align can also cause errors, for example a process which uses a specific workcenter that doesn’t exist at the assigned facility. There are other causes of severity 1 errors, and it is important to review these errors for any changes that are made to your input data. Be sure to check the Action column to see if a default value was applied and note the new value that was used, or if a row was dropped.
Severity 1 records will also note other issues that have been identified in the model input data. These can be unused model elements such as a customer which is included but has no demand, or transportation modes that exist in the model but aren’t assigned to any lanes. There can also be records generated for policies that have no cost charged against them, to let you know that a cost is missing. Duplicate data records will also be flagged as severity 1 errors – the last row read in by the solver will be persisted. If you have duplicated data with different detailed information, these duplicates should be addressed in your input tables so that you can make sure the correct information is read in by the solver.
A point to look out for is any severity 1 issue that is related to an invalid UOM entry – these will cause records to be dropped, and while the dropped records don’t always directly cause model infeasibilities themselves, they can often be at the root of other issues identified as higher severity or through infeasibility diagnosis runs.
Severity 0 records are for informational purposes only and are generated when records have been excluded upstream, either in the input tables directly or by other corrective actions that the solver has taken. These records are letting you know that the solver read in the input records but they did not correspond to anything that was in the current solve. An example of this would be if you only included a subset of your facilities for a scenario, but still had included records for an out-of-scope MFG location. If the MFG is excluded in the facilities table but its production policies are still included, a record will be written to the table to let you know that these production policy rows were skipped.
For various reasons, many supply chains need to deal with returns. This can for example be due to packaging materials coming back to be reused at plants or DCs, retail customers returning finished goods that they are not happy with, defective products, etc. Previously, these returns could mostly be modelled within Cosmic Frog NEO (Network Optimization) models by using some tricks and workarounds. But with the latest Cosmic Frog release, returns are now supported natively, so the reuse, repurposing, or recycling of these returned products can easily be taken into account, helping companies reduce costs, minimize waste, and improve overall supply chain efficiency.
This documentation will first provide an overview of how returns work in a Cosmic Frog model and then walk through an example model of a retailer which includes modelling the returns of finished goods. The appendix details all the new returns-related fields in several new tables and some of the existing tables.
When modelling returns in Cosmic Frog:
Users need to use 2 new input tables to set up returns:

The Return Ratios table contains the information on how much return-product is returned for a certain amount of product delivered to a certain destination:

The Return Policies table is used to indicate where returned products need to go to and the rules around multiple possible destinations. Optionally, costs can be associated with the returns here and a maximum distance allowed for returns can be entered on this table too.

Note that both these tables have Status and Notes fields (not shown in the screenshots), like most Cosmic Frog input tables have. These are often used for scenario creation where the Status is set to Exclude in the table itself and changed to Include in select scenarios based on text in the Notes field.
All columns on these 2 returns-related input tables are explained in more detail in the appendix.
In addition to populating the Return Policies and Return Ratios tables, users need to be aware that additional model structure needed for the returned products may need to be put in place:
The Optimization Return Summary output table is a new output table that will be generated for Neo runs if returns are included in the modelling:

This table and all its fields are explained in detail in the appendix.
The Optimization Flow Summary output table will contain additional records for models that include returns; they can be identified by filtering the Flow Type field for “Return”:

These 2 records show the return flows and associated transportation costs for the Bag_1 and Bag_2 products from CZ_001, going to DC_Cincinnati, that we saw in the Optimization Return Summary table screenshot above.
In addition to the new Optimization Return Summary output table, and new records of Flow Type = Return in the Optimization Flow Summary output table, following existing output tables now contain additional fields related to returns:
The example Returns model can be copied from the Resource Library to a user’s Optilogic account (see this help center article on how to use the Resource Library). It models the US supply chain of a fashion bag retailer. The model’s locations and flows both to customers and between DCs are shown in this screenshot (returns are not yet included here):

Historically, the retailer had 1 main DC in Cincinnati, Ohio, where all products were received and all 869 customers were fulfilled from. Over time, 4 secondary DCs were added based on Greenfield analysis: 2 bigger ones in Clovis, California, and Jersey City, New Jersey, and 2 smaller ones in West Palm Beach, Florida, and Las Lomas, Texas. These secondary DCs receive product from the Cincinnati DC and serve their own set of customers. The main DC in Cincinnati and the 2 bigger secondary DCs (Clovis, CA, and Jersey City, NJ) can currently handle returns: returns are received there and re-used to fulfill demand. However, until now, these returns had not been taken into account in the modelling. In this model we will explore the following scenarios:
Other model features:
Please note that in this model the order of columns in the tables has sometimes been changed to put those containing data together on the left-hand side of the table. All columns are still present in the table but may be in a different position than you are used to. Columns can be reset to their default position by choosing “Reset Columns” from the menu that comes up when clicking on the icon with 3 vertical dots to the right of a column name.
After running the baseline scenario (which does not include returns), we take a look at the Financials: Scenario Cost Comparison chart in the Optimization Scenario Comparison dashboard (in Cosmic Frog’s Analytics module):

We see that the biggest cost currently is the production cost at 394.7M (= procurement of all product into Cincinnati), followed by transportation costs at 125.9M. The total supply chain cost of this scenario is 625.3M.
In this scenario we want to include how returns currently work: Cincinnati, Clovis, and Jersey City customers return their products to their local DCs whereas West Palm Beach and Las Lomas customers return their products to the main DC in Cincinnati. To set this up, we need to add records to the Return Policies, Return Ratios, and Transportation Policies input tables. To not change the Baseline scenario, all new records will be added with Status = Exclude, and the Notes field populated so it can be used to filter on in scenario items that change the Status to Include for subsets of records. Starting with the Return Policies table:

Next, we need to add records to the Transportation Policies table so that there is at least 1 lane available for each site-product-destination combination set up in the Return Policies table. For this example, we add records to the Transportation Policies table that match the ones added to the Return Policies table exactly, while additionally setting Mode Name = Returns, Unit Cost = 0.04, and Unit Cost UOM = EA-MI (the latter is not shown in the screenshot below), which means the transportation cost on return lanes is 0.04 per unit per mile.
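As a quick sanity check of this cost structure, using hypothetical numbers: a return shipment of 100 units travelling 250 miles would incur 100 × 250 × 0.04 = 1,000 in transportation cost. The records added to the Transportation Policies table are shown in the following screenshot: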

Finally, we also need to indicate how much product is returned in the Return Ratios table. Since we want to model different ratios by individual customer and individual product, this table does not use any groups. Groups can however be used in this table too for the Site Name, Product Name, Period Name, and Return Product Name fields.

Please note that adding records to these 3 tables and including them in the scenarios is sufficient to capture returns in this example model. For other models it is possible that additional tables may need to be used, see the Other Input Tables section above.
Now that we have populated the input tables to capture returns, we can set up scenario S1 which will change the Status of the appropriate records in these tables from Exclude to Include:

After running this scenario S1, we first have a look at the map, where we show the DCs, Customers, and the Return Flows for scenario S1. This has been set up in the map named Supply Chain (S1) in the model from the Resource Library. To set this map up, we first copied the Supply Chain (Baseline) map and renamed it to Supply Chain (S1). We then clicked on the map’s name (Supply Chain (S1)) to open it and, in the Map Filters form showing on the right-hand side of the screen, changed the scenario to “S1 Include Returns” in the Scenario drop-down. To configure the Return Flows, we added a new Map Layer and configured its Condition Builder form as follows (learn more about Maps and how to configure them in this Help Center article):

The resulting map is shown in this next screenshot:

We see that, as expected, the bulk of the returns are going back to the main DC in Cincinnati: from its local customers, but also from the customers served by the 2 smaller DCs in Las Lomas and West Palm Beach. The customers served by the Clovis and Jersey City DCs return their products to their local DCs.
To assess the financial impact of including returns in the model, we again look at the Financials: Scenario Cost Comparison chart in the Optimization Scenario Comparison dashboard, comparing the S1 scenario to the Baseline scenario:

We see that including returns in S1 leads to:
The main driver for the higher overall supply chain costs when including returns is the high transportation cost of returning products, especially those travelling long distances from the Las Lomas and West Palm Beach customers to the Cincinnati DC. This sparks the idea to explore whether it would be more beneficial for the Las Lomas and/or West Palm Beach customers to return their products to their local DC, rather than the Cincinnati DC. This will be modelled in the next three scenarios.
Building upon scenario S1, we will run 2 scenarios (S2 and S3) that examine whether it is beneficial cost-wise for West Palm Beach customers to return their products to their local West Palm Beach DC (S2), and for Las Lomas customers to return their products to their local Las Lomas DC (S3), rather than to the Cincinnati DC. In order to be able to handle returns, the fixed operating costs at these DCs are increased by 0.5M to 3.5M:

Scenarios S2 and S3 are run, and first we look at the map to check the return flows for the West Palm Beach and Las Lomas customers, respectively (we copied the map for S1, renamed it, and then changed the scenario by clicking on the map’s name and selecting the S2/S3 scenario from the Scenario drop-down in the Map Filters pane on the right-hand side):


As expected, due to how we set up these scenarios, now all returns from these customers go to their local DC, rather than to DC-Cincinnati which was the case in scenario S1.
Let us next look at the overall costs for these 2 scenarios and compare them back to the S1 and Baseline scenarios:

Besides some smaller reductions in the inbound and outbound costs in S2 and S3 as compared to S1, the transportation costs are reduced by sizeable amounts: 6.9M (S2 compared to S1) and 9.4M (S3 compared to S1), while the production (= procurement) costs are the same across these 3 scenarios. The reduction in transportation costs outweighs the 0.5M increase in fixed operating costs to be able to handle returns at the West Palm Beach and Las Lomas DCs. Also note that both scenario S2 and S3 have a lower total cost than the Baseline scenario.
Since it is beneficial to have the West Palm Beach and Las Lomas DCs handle returns, scenario S4 where this capability is included for both DCs is set up and run:

The S4 scenario increases the fixed operating costs at both these DCs from 3M to 3.5M (scenario items “Incr Operating Cost S2” and “Incr Operating Cost S3”), sets the Status of all records on the Return Ratios table to Include (the Include Return Ratios scenario item), and sets the Status to Include for records on the Return Policies and Transportation Policies tables where the Notes field contains the text “S4” (the “Include Return Policies S4” and “Include Return TPs S4” items), which are records where customers all ship their returns back to their local DC. We first check on the map if this is working as expected after running the S4 scenario:

We notice that indeed there are no more returns going back to the Cincinnati DC from Las Lomas or West Palm Beach customers.
Finally, we expect the costs of this scenario to be the lowest overall since we should see the combined reductions of scenarios S2 and S3:

Between S1 and S4:
In addition to looking at maps or graphs, users can also use the output tables to understand the overall costs and flows, including those of the returns included in the network.
Often, users will start by looking at the overall cost picture using the Optimization Network Summary output table, which summarizes total costs and quantities at the scenario level:

For each scenario, we are showing the Total Supply Chain Cost and Total Return Quantity fields here. As mentioned, the Baseline did not include any returns, whereas scenarios S1-4 did, which is reflected in the Total Return Quantity values. There are many more fields available on this output table, but in the next screenshot we are just showing the individual cost buckets that are used in this model (all other cost fields are 0):

How these costs increase/decrease between scenarios has been discussed above when looking at the “Financials: Scenario Cost Comparison” chart in the “Optimization Scenario Comparison” dashboard. In summary:
Please note that on this table, there is also a Total Return Cost field. It is 0 in this example model. It would be > 0 if the Unit Cost field on the Return Policies table had been populated, which is a field where any specific cost related to the return can be captured. In our example Returns model, the return costs are entirely captured by the transportation costs and fixed operating costs specified.
The Optimization Return Summary output table is a new output table that has been added to summarize returns at the scenario-returning site-product-return product-period level:

Looking at the first record here, we understand that in the S1 scenario, CZ_001 was served 8,850 units of Bag_1, while 876.15 units of Bag_1 were returned (a return ratio of 876.15 / 8,850 ≈ 0.099, i.e. about 9.9%).
Lastly, we can also see individual return flows in the Optimization Flow Summary table by filtering the Flow Type field for “Return”:

Note that the product name for these flows is of the product that is being returned.
The example Returns model described above assumes that 100% of the returned Bag_1 and Bag_2 products can be reused. Here we will discuss through screenshots how the model can be adjusted to take into account that only 70% of Bag_1 returns and 50% of Bag_2 returns can be reused. To achieve this, we will need to add an additional “return” product for each finished good, set up bills of materials, and add records to the policies tables for the required additional model structure.
The tables that will be updated and for which we will see a screenshot each below are: Products, Groups, Return Policies, Return Ratios, Transportation Policies, Warehousing Policies, Bills of Materials, and Production Policies.
Two products are added here, 1 for each finished good: Bag_1_Return and Bag_2_Return. This way we can distinguish the return product from the sellable finished goods, apply different policies/costs to them, and convert a percentage back into the sellable items. The naming convention of adding “_Return” to the finished good name makes for easy filtering and provides clarity around what the product’s role is in the model. Of course, users can use different naming conventions.
The same unit value as for the finished goods is used for the return products, so that inventory carrying cost calculations are consistent. A unit price (again, same as the finished goods) has been entered too, but this will not actually be used by the model as these “_Return” products are not used to serve customer demand.

To facilitate setting up policies where the return products behave the same (e.g. same lanes, same costs, etc.), we add an “All_Return_Products” group to the Groups table, which consists of the 2 return products:

In the Return Policies table, the Return Product Name column needs to be updated to reflect that the products that are being returned are the “_Return” products. Previously, the Return Product Name was set to the All_Products group for each record, and it is now updated to the All_Return_Products group. Updating a field in all records or a subset of filtered records to the same value can be done using the Bulk Update Column functionality, which can be accessed by clicking on the icon with 3 vertical dots to the right of the column name and then choosing “Bulk Update this Column” in the list of options that comes up.

We keep the ratios of how much product comes back for each unit of Bag_1 / Bag_2 sold the same, however we need to update the Return Product Name field on all records to reflect that it is the “_Return” product that comes back. Since this table does not use groups because the return ratios are different for different customer-finished good combinations, the best way to update this table is to also use the bulk update column functionality:
Note that only 4 of the 1,738 records in this table are shown in the screenshot below.

Here, the records representing the lanes back from the customers to the DCs they send returns to need to be updated so that the products going back are the “_Return” ones. Since the transportation costs of the return products are the same, we can keep using the grouped policies and just bulk update the Product Name column of the records where Mode Name equals Returns: change the values from the All_Products group to the All_Return_Products group.

We want to apply the same inbound and outbound handling costs for the return products as we do for the finished goods, so a record is added for the “All_Return_Products” group at All_DCs in the Warehousing Policies table:

We can use the Bills of Materials (BOM) table to convert the “_Return” products back into the finished goods, applying the desired percentage that is suitable for reuse. For Bag_1, we want to set up that 70% of the returns can be reused as finished goods; this is done by setting up a BOM where 1 unit of Bag_1_Return results in 0.7 units of Bag_1 (the first 2 records in the screenshot below). Similarly, we set up the BOM “Reuse_Bag_2” where 1 unit of Bag_2_Return results in 0.5 units of Bag_2 (the 3rd and 4th record in the screenshot):
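As a rough sketch of those 4 records (the name Reuse_Bag_1 for the Bag_1 BOM and the exact column layout are assumptions; check the screenshot below for the precise fields):

BOM Name - Product Name - Product Type - Quantity
Reuse_Bag_1 - Bag_1_Return - Component - 1
Reuse_Bag_1 - Bag_1 - End Product - 0.7
Reuse_Bag_2 - Bag_2_Return - Component - 1
Reuse_Bag_2 - Bag_2 - End Product - 0.5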

For the BOMs to be used, they need to be associated with the appropriate location-product combinations through production policies. So, we add 2 records to the Production Policies table, which set that at All_DCs the finished goods can be produced using the 2 BOMs. The Unit Cost set on this table represents the cost of inspecting each returned bag and deciding whether it can be reused.

With all the changes made on the input side, we can run the S1 Include Returns scenario (which was copied and renamed to “S1 Include Returns w BOM”). We will briefly look at how these changes affect the outputs.
In the Optimization Return Summary output table, users will notice that the Product Name is still either Bag_1 or Bag_2, but that the Return Product Name is either Bag_1_Return (for Bag_1) or Bag_2_Return (for Bag_2). The quantities are the same as before, since the return ratios are unchanged.

When looking at records of Flow Type = Return, we now see that the Product Name on these flows is that of the “_Return” products.

In this output table, we see that Bag_1 and Bag_2 are no longer only originating from the main DC in Cincinnati, but also at the 2 bigger local DCs that accept returns (Clovis, CA, and Jersey City, NJ) where a percentage of the returns is converted back into sellable finished goods through the BOMs.

In this appendix we will cover all fields on the 2 new input tables and the 1 new output table.
Depending on the type of supply chain one is modelling in Cosmic Frog and the questions being asked of it, it may be necessary to utilize some or all of the features that enable detailed production modelling. A few business case examples that will often include some level of detailed production modelling:
In comparison, modelling a retailer who buys all its products from suppliers as finished goods does not require any production details to be added to its Cosmic Frog model. Hybrid models are also possible; think for example of a supermarket chain which manufactures its own branded products and buys other brands from its suppliers. Depending on the modelling scope, the production of the own branded products may require using some of the detailed production features.
The following diagram shows a generalized example of production related activities at a manufacturing plant, all of which can be modelled in Cosmic Frog:

In this help article we will cover the inputs & outputs of Cosmic Frog’s production modelling features, while also giving some examples of how to model certain business questions. The model in Optilogic’s Resource Library that is used mainly for the screenshots in this article is the Multi-Year Capacity Planning. There is a 20-minute video available with this model in the Resource Library, which covers the business case that is modelled and some detail of the production setup too.
To not make this document too repetitive we will cover some general Cosmic Frog functionality here that applies to all Cosmic Frog technologies and is used extensively for production modelling in Neo too.
To only show tables and fields in them that can be used by the Neo network optimization algorithm, select Optimization in the Technologies Filter from the toolbar at the top in Cosmic Frog. This will hide any tables and fields that are not used by Neo and therefore simplifies the user interface.

Quite a few Neo related fields in the input and output tables will be discussed in this document. Keep in mind however that a lot of this information can also be found in the tooltips that are shown when you hover over the column name in a table, see following screenshot for an example. The column name, technology/technologies that use this field, a description of how this field is used by those algorithm(s), its default value, and whether it is part of the table’s primary key are listed in the tooltip.

There are a lot of fields with names that end in “…UOM” throughout the input tables. How they work is explained here so that individual UOM fields across the tables do not need to be explained further in this documentation, as they all work similarly. These UOM fields are unit of measure fields and often appear to the immediate right of the field that they apply to, like for example Unit Value and Unit Value UOM in the screenshot above.

In these UOM fields you can type the Symbol of a unit of measure that is of the required Type from the ones specified in the Units Of Measure input table. For example, in the screenshot above, the unit of measure Type for the Unit Value UOM field is Quantity. Looking in the Units Of Measure input table, we see there are 2 of these specified: Each and Pallet, with Symbol = EA and PLT, respectively. We can use either of these in this UOM field.

If we leave a UOM field blank, then the Primary UOM for that UOM Type specified in the Model Settings input table will be used. For example, for the Unit Value UOM field in the screenshot above the tooltip says Default Value = {Primary Quantity UOM}. Looking this up in the Model Settings table shows us that this is set to EA (= each) in our current model. Let’s illustrate this with the following screenshots of 1) the tooltip for the Unit Value UOM field (located on the Products input table), 2) units of measure of Type = Quantity in the Units Of Measure input table, and 3) checking what the Primary Quantity UOM is set to in the Model Settings input table, respectively:



Note that only hours (Symbol = HR) is currently allowed as the Primary Time UOM in the Model Settings table. This means that if another Time UOM, like for example minutes (MIN) or days (DAY), is to be used, the individual UOM fields need to be utilized to set these. Leaving these blank would mean HR is used by default.
With few exceptions, all tables in Cosmic Frog contain both a Status field and a Notes field. These are often used extensively to add elements to a model that are not currently part of the supply chain (commonly referred to as the “Baseline”), but are to be included in scenarios, either because they will definitely become part of the future supply chain or to see whether there are benefits to optionally including them going forward. In these cases, the Status in the input table is set to Exclude and the Notes field often contains a description along the lines of ‘New Market’, ‘New Line 2026’, ‘Alternative Recipe Scenario 3’, ‘Faster Bottling Plant5 China’, ‘Include S6’, etc. When creating scenario items for setting up scenarios, the table can then be filtered for Notes = ‘New Market’ while setting Status = ‘Include’ for those filtered records.

We will not call out these Status and Notes fields in each individual input table in the remainder of this document, but we do encourage users to use them extensively as they make creating scenarios very easy. When exploring any Cosmic Frog models in the Resource Library, you will notice the extensive use of these fields too.

The following 2 screenshots illustrate the use of the Status and Notes fields for scenario creation: 1) shows several customers on the Customers table where CZ_Secondary_1 and CZ_Secondary_2 are not currently customers that are being served, but we want to explore what it takes to serve them in future. Their Status is set to Exclude and the Notes field contains ‘New Market’; 2) a scenario item called ‘Include New Market’ shows that the Status of Customers where Notes = ‘New Market’ is changed to ‘Include’.


The Status and Notes fields are also often used for the opposite where existing elements of the current supply chain are excluded in scenarios in cases where for example manufacturing locations, products or lines are going to go offline in the future. To learn more about scenario creation, please see this short Scenarios Overview video, this Scenario Creation and Maps and Analytics training session video, this Creating Scenarios in Cosmic Frog help article, and this Writing Scenario Syntax help article.
The model that is mostly used for screenshots throughout this help article is, as mentioned above, the Multi-Year Capacity Planning model that can be found here in the Resource Library. This model represents a European cheese supply chain which is used to make investment decisions around the growth of a non-mature market in Eastern Europe over a 5-year modelling horizon. New candidate DCs are considered to serve the growing demand in Eastern Europe, and the model determines which ones are optimal to open and during which of the 5 years of the modelling horizon. The production setup in the model uses quite a few of the detailed modelling features which will be discussed in detail in this document:
Note that in the screenshots of this model, the columns have been re-ordered sometimes, so you may see a different order in your Cosmic Frog UI when opening the same tables of this model.
The 2 screenshots below show the Products and Facilities input tables of this model in Cosmic Frog:

Note that the naming convention of the products lends itself to easy filtering of the table for the raw materials, bulk materials, and finished goods due to the RAW_, BULK_, and FG_ prefixes. This makes the creation of groups and setting up of scenarios quick and easy.

Note that similar to the naming convention of the products, the facilities are also named with prefixes that facilitate filtering of the facilities so groups and scenarios can quickly be created.
Here is a visual representation of the model with all facilities and customers on the map:

The specific features in Cosmic Frog that allow users to model and optimize production processes of varying levels of complexity while using the network optimization engine (Neo) include the following input tables:

We will cover all these production related input tables to some extent in this article, starting with a short description of each of the basic single-period input tables:
These 4 tables feed into each other as follows:

A couple of notes on how these tables work together:
For all products that are explicitly modelled in a Cosmic Frog model, there needs to be at least 1 policy specified on the Production Policies table or the Supplier Capabilities table so there is at least 1 origin location for each. This applies to, for example, raw materials, intermediates, bulk materials, and finished goods. The only exception is if by-products are being modelled; these can have Production Policies associated with them, but do not necessarily need to (more on this when discussing Bills of Materials further below). From the 2 screenshots below of the Production Policies table, it becomes clear that depending on the type of product and the level of detail that is needed for the production elements of the supply chain, production policies can be set up quite differently: some use only a few of the fields, while others use more/different fields.

A couple of notes as follows:
Next, we will look at a few other records on the Production Policies input table:

We will take a closer look at the BOMs and Processes specified on these records when discussing the Bills of Materials and Processes tables further below.
Note that the above screenshot was just for PLT_1 and mozzarella; there are similar records in this model for the other 4 cheeses, which can also be made at PLT_1, plus similar records for all 5 cheeses at PLT_2, which includes a potential new production line for future expansion too.
Other fields on the Production Policies table that are not shown in the above 2 screenshots are:
The recipes that define how materials/products at different stages convert into each other are specified on the Bills of Materials (BOMs) table. Here, the BOMs for the blue cheese (_BLU) products are shown:

Note that the above BOMs are both location and end-product agnostic. Their names suggest that they are specific to making the BULK_BLU and FG_BLU products, but it is only the association of these BOMs with Production Policies that have Product Name set to these products that makes this connection. These BOMs can be used at any location where they apply. Filtering the Production Policies table for the BULK_BLU and FG_BLU products, we can see that 1) BOM_BULK_BLU is indeed used to make BULK_BLU and BOM_FG_BLU to make FG_BLU, and 2) the same BOMs are used at PLT_1 and PLT_2:

It is of course possible that the same product uses a different BOM at a different location. In this case, users can set up multiple BOMs for this product on the BOMs table and associate the correct one at the correct location in the Production Policies table. Choosing a naming convention for the BOM Names that includes the location name (or a code to indicate it) is recommended.
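As a sketch of what such a location-specific setup could look like in the underlying data (the table and column names here mirror the UI labels and are assumptions, as are the location-coded BOM names):

```sql
-- Illustrative only: the same product (FG_BLU) uses a different,
-- location-specific BOM at each plant; the BOM names encode the location
INSERT INTO productionpolicies (facilityname, productname, bomname)
VALUES
  ('PLT_1', 'FG_BLU', 'BOM_FG_BLU_PLT1'),
  ('PLT_2', 'FG_BLU', 'BOM_FG_BLU_PLT2');
```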
The screenshot above of the Bills of Materials table only shows records with Product Type = Component. Components are inputs into a BOM and are consumed by it when producing the end-product. Besides Component, Product Type can also be set to End Product or Byproduct. We will explain these 2 product types through the examples in the following screenshot:

Notes:
On the Processes table, production processes of varying levels of complexity can be set up, from simple 1-step processes that do not use any work centers, to multi-step ones that specify costs and processing rates and use a different work center for each step. The processes specified in the Multi-Year Capacity Planning model are relatively straightforward:

Let us also look at an example from a different model, which contains somewhat more complex processes for a car manufacturer whose production process can roughly be divided into 3 steps:

Note that, like BOMs, Processes can in theory be both location and end-product agnostic. However:
Other fields on the Processes table that are not shown in the above 2 screenshots are:
If it is important to capture the costs and/or capacities of equipment used in the production process, such as production lines, tools, and machines, this equipment can be modelled by using work centers to represent it:

In the above screenshot, 2 work centers are set up at each plant: 1 existing work center and 1 new potential work center. The new work centers (PLT_1_NewLine and PLT_2_NewLine) have Work Center Status set to Closed, so they will not be considered for inclusion in the network when running the Baseline scenario. In some of the scenarios in the model, the Work Center Status of these 2 lines is changed to Consider; in those scenarios, one or both of the new lines can be opened and used if it is optimal to do so. The scenario item that makes this change looks like this:

Next, we will also look at a few other fields on the Work Centers table that the Multi-Year Capacity Planning model utilizes:

In theory, it can be optimal for a model to open a considered potential work center in one period of the model (say 2024 in this model), close it again in a later period (e.g. 2025), then open it again later (e.g. 2026), etc. In this case, Fixed Startup or Fixed Closing Costs would be applied each time the work center was opened or closed, respectively. This type of behavior can be undesirable and is by default prevented by a Neo Run Parameter called “Open Close At Most Once”, as shown in this screenshot:

After clicking on the Run button, the Run screen comes up. The “Open Close At Most Once” parameter can be found in the Neo (Optimization) Parameters section. By default, it is turned on, meaning that a work center or facility is only allowed to change state once during the model’s horizon, i.e. once from closed to open if the Initial State = Potential or once from open to closed if the Initial State = Existing. There may however be situations where opening and/or closing of work centers and facilities multiple times during the model horizon is allowable. In that case, the Open Close At Most Once parameter can be turned off.
Other fields on the Work Centers table that are not shown in the above screenshots are:
Fixed Operating, Fixed Startup, and Fixed Closing Costs can be stepped costs. These can be entered directly into the fields on the Work Centers input table, or they can be specified on the Step Costs input table and then referenced in those cost fields on the Work Centers table. An example of stepped costs set up in the Step Costs input table is shown in the screenshot below, where the costs are set up to capture the weekly shift cost for 1 person (note that these stepped costs are not in the Multi-Year Capacity Planning model in the Resource Library; they are shown here as an additional example):

To have, for example, the Fixed Operating Cost use this stepped cost, type “WC_Shifts” into the Fixed Operating Cost field on the Work Centers input table.
Many of the input tables in Cosmic Frog have a Multi-Time Period equivalent, which can be used in models that have more than 1 period. These tables enable users to make changes that only apply to specific periods of the model. For example, to:
The multi-time period tables are copies of their single-period equivalents, with a few columns added and removed (we will see examples of these in screenshots further below):
Notes on switching status of records through the multi-period tables and updating records partially:

Three of the 4 production-specific input tables that have been discussed above have a multi-time period equivalent: Production Policies, Processes, and Work Centers. There is no equivalent for the Bills Of Materials input table, as BOMs are only used if they are associated with records in the Production Policies table. Using different BOMs during different periods can be achieved by associating those BOMs with records on the single-period Production Policies table, setting the Status of those records to Include if they are to be used in most periods and to Exclude if they are only to be used in certain periods/scenarios. Then add the records whose Status needs to be switched to the Production Policies Multi-Time Period input table (we will walk through an example of this using screenshots in the next section).
The 3 production-specific multi-time period input tables do have all the same fields as their single-period equivalents, with the addition of a Period Name field and an additional Status field. We will not discuss each multi-time period table and all its fields in detail here, but rather give a few examples of how each can be used.
Note that from this point onwards, the Multi-Year Capacity Planning model was modified and added to for the purposes of this help article; the version in the Resource Library does not contain the same data in the Multi-Time Period input tables and production-specific Constraint tables that is shown in the screenshots below.
This first example on the Production Policies Multi-Time Period input table shows how the production of the cheddar finished good (FG_CHE) is prevented at plant 1 (PLT_1) in years 4 and 5 of the model:

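Expressed as data, this override boils down to 2 records on the multi-time period table. A rough sketch, where the table and column names are assumptions based on the UI labels:

```sql
-- Illustrative multi-time period overrides: exclude production of
-- FG_CHE at PLT_1 during years 4 and 5 of the modelling horizon
INSERT INTO productionpoliciesmultitimeperiod
  (facilityname, productname, periodname, status)
VALUES
  ('PLT_1', 'FG_CHE', 'YEAR4', 'Exclude'),
  ('PLT_1', 'FG_CHE', 'YEAR5', 'Exclude');
```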
In the following example, an alternative BOM to make feta (FG_FET) is added and set to be used at Plant 2 (PLT_2) during all periods instead of the original BOM. This is set up to be used in a scenario, so the original records need to be kept intact for the Baseline and other scenarios. To set this up, we need to update the Bills Of Materials, Production Policies, and Production Policies Multi-Time Period tables; see the following screenshots and explanations:

On the Bills Of Materials input table, all we need to do is add the records for the new BOM that results in FG_FET. It has 2 records, both named ALTBOM_FG_FET, and, instead of using only BULK_FET as the component (which is what the original BOM uses), it uses a mix of BULK_FET and BULK_BLU as its components.
Next, we need to associate this new BOM through the Production Policies table:

Lastly, the records that need to be added to the Production Policies Multi-Time Period table are the following 4. These have the same values for the key columns as the 4 records in the above screenshot of the single-period Production Policies input table, which together contain all the possible ways to produce FG_FET at PLT_2:


In the following example, we want to change the unit cost on 2 of the processes: at Plant 1 (PLT_1), the cost on the new potential line needs to be decreased to 0.005 for cheddar cheese (CHE) and increased to 0.015 for Swiss cheese (SWI). This can be done by using the Processes Multi-Time Period input table:

Note that there is also a Work Center Name field on the Processes Multi-Time Period table (not shown in the screenshot). As this is not a key field on the Processes input tables, it can be left blank here on the multi-time period table. This field will not be changed, and the value from the Work Center Name field on the single-period table will be used for these 2 records.
In the following example, we want to evaluate whether upgrading the existing production lines at both plants from the 3rd year of the modelling horizon onwards (so they have a higher throughput capacity at a somewhat higher fixed operating cost) is a good alternative to opening one of the potential new lines at either plant. First, we add a new periods group to the model to set this up:

On the Groups table, we set up a new group named YEARS3-5 (Group Name) that is of Group Type = Periods and has 3 members: YEAR3, YEAR4 and YEAR5 (Member Name).
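Since the Groups table stores one row per member, this new group amounts to 3 records. As a rough sketch (the SQL table and column names are assumptions based on the UI labels):

```sql
-- Illustrative representation of the YEARS3-5 periods group
INSERT INTO groups (groupname, grouptype, membername)
VALUES
  ('YEARS3-5', 'Periods', 'YEAR3'),
  ('YEARS3-5', 'Periods', 'YEAR4'),
  ('YEARS3-5', 'Periods', 'YEAR5');
```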

Cosmic Frog contains multiple tables through which different types of constraints can be added to network optimization (Neo) models. A constraint limits the model in a certain part of the network. These limits can, for example, be lower or upper limits on the amount of flow between certain locations or echelons, on the amount of inventory of a certain product or product group at a specific location or network-wide, or on the amount of production of a certain product or product group at a specific location or network-wide. In this section, the 3 constraints tables that are production-specific will be covered: Production Constraints, Production Count Constraints, and Work Center Count Constraints.
A couple of general notes on all constraints tables:
In this example, we want to add constraints to the model that limit the production of all 5 finished goods together to 90,000 units. Both plants have this same upper production limit across the finished goods, and the limit applies to each year of the modelling horizon (5 yearly periods).

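In data form, this comes down to 1 record per plant, applied to a group spanning the finished goods. A sketch only: the table and column names follow the UI labels, and FG_ALL is a hypothetical product group containing the 5 finished goods:

```sql
-- Illustrative production constraints: cap combined finished goods
-- production at 90,000 units per plant ('FG_ALL' is a hypothetical group)
INSERT INTO productionconstraints
  (facilityname, productname, constrainttype, constraintvalue)
VALUES
  ('PLT_1', 'FG_ALL', 'Max', 90000),
  ('PLT_2', 'FG_ALL', 'Max', 90000);
```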
Note that there are more fields on the Production Constraints input table which are not shown in the above screenshot. These are:
In this example, we want to limit the number of products that are produced at PLT_1 to a maximum of 3 (out of the 5 finished goods). This limit applies over the whole 5-year modelling period, meaning that in total PLT_1 can produce no more than 3 finished goods:

Again, note there are more fields on the Production Count Constraints input table which are not shown in the above screenshot. These are:
Next, we will show an example of how to open at least 3, but no more than 5, out of 8 candidate work centers. These limits apply across all 5 yearly periods in the model together and over all facilities present in the model.

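Such a pair of limits can be thought of as 2 records, one for the minimum and one for the maximum. Again a sketch only: the table and column names follow the UI labels, and CANDIDATE_WCS is a hypothetical group containing the 8 candidate work centers:

```sql
-- Illustrative work center count constraints: open at least 3 and at
-- most 5 of the candidates ('CANDIDATE_WCS' is a hypothetical group)
INSERT INTO workcentercountconstraints
  (workcentername, constrainttype, constraintvalue)
VALUES
  ('CANDIDATE_WCS', 'Min', 3),
  ('CANDIDATE_WCS', 'Max', 5);
```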
Again, there are more fields on the Work Center Count Constraints table that are not shown in the above screenshot:
After running a network optimization using Cosmic Frog’s Neo technology, production specific outputs can be found in several of the more general output tables, like the Optimization Network Summary, and the Optimization Constraints Summary (if any constraints were applied). Outputs more focused on just production can be found in 4 production specific output tables: the Optimization Production Summary, the Optimization Bills Of Material Summary, the Optimization Process Summary, and the Optimization Work Center Summary. We will cover these tables here, starting with the Optimization Network Summary.
The following screenshot shows the production specific outputs that are contained in the Optimization Network Summary output table:

Other production related fields on this table which are not shown in the screenshot above are:
The Optimization Production Summary output table has a record with the production details for each product that was produced as part of the model run:

Other fields on this output table which are not shown in the screenshot are:
The details of how many components were used and how much by-product was produced as a result of any bills of materials used as part of the production process can be found on the Optimization Bills Of Material Summary output table:

Note that, aside from what may possibly be inferred from the BOM Name, the Bills Of Material Summary output table does not list what the end product of a BOM is, nor how much of it is produced. Those details are contained in the Optimization Production Summary output table discussed above.
Other fields on this output table which are not shown in the screenshot are:
The details of all the steps of any processes used as part of production in the Neo network optimization run can be found in the Optimization Process Summary; see the next 2 screenshots:


Other fields on this output table which are not shown in the screenshots are:
For each Work Center that has its Status set to Include or Consider, a record for each period of the model can be found in the Optimization Work Center Summary output table. It summarizes whether the Work Center was used during that period, and, if so, how much and at what cost:

The following screenshot shows a few more output fields on the Optimization Work Center Summary output table that have non-zero values in this model:

Other fields on this output table which are not shown in the screenshots are:
For all constraints in the model, the Optimization Constraint Summary can be a very handy table to check whether any constraints are close to their maximum (or minimum, etc.) value, to understand where the current bottlenecks are and where future ones likely will be. The screenshot below shows the outputs on this table for a production constraint that is applied at each of the 3 suppliers, where none of them can produce more than 1 million units of RAW_MILK in any 1 year. In the screenshot we specifically look at the Supplier named SUP_3:

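Since the model database can also be queried directly (see the local connection instructions further below), a query along the following lines can surface constraints that are close to binding. This is a sketch under stated assumptions: the output table and column names here are illustrative only, so verify the actual field names in the Optimization Constraint Summary table of your model:

```sql
-- Illustrative: list constraints that are at 95% or more of their limit
-- (table/column names are assumptions; verify against your model)
SELECT constraintname, periodname, constraintvalue, actualvalue
FROM optimizationconstraintsummary
WHERE actualvalue >= 0.95 * constraintvalue;
```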
Other fields on this output table which are not shown in the screenshots are:
There are a few other output tables whose main outputs are not related to production, but which still contain several fields that result from production. These are:
In this help article, we have covered how to set up alternative Work Centers at existing locations and use the Work Center Status and Initial State fields to evaluate whether including these will be optimal and, if so, from what period onwards. We have also covered how Work Center Count Constraints can be used to pick a certain number of Work Centers to be opened/used from a set of multiple candidates, either at 1 location or multiple. Here we also want to mention that Facility Count Constraints can be used when making decisions at the plant level. Say that, based on market growth in a certain region, a manufacturer decides a new plant needs to be built. There are 3 candidate locations for the plant, from which the optimal one needs to be picked. This can be set up as follows in Cosmic Frog:
A couple of alternative approaches to this are:
As mentioned above in the section on the Bills Of Materials input table, it is possible to set up a model where there is demand for a product that is the by-product resulting from a BOM. This does require some additional setup; the following walks through this, while also showcasing how the model can be used to determine how much of any flexible demand for this by-product to fulfill. The screenshots show the setup of a very simple example model built for this specific purpose.

On the Products table, besides the component that goes into the BOM (for which there is also demand in this model), we also specify:

The demand for the 3 products is set up on the Customer Demand table and we notice that 1) there is demand for the Component, the End Product, and the By-Product, and 2) the Demand Status for ByProduct_1 is set to Consider, which means it does not need to be fulfilled; it will be (partially) fulfilled if it is optimal to do so. (For Component_1 and EndProduct_1 the Demand Status field is left blank, which means the default value of Include will be used.)

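The demand records for this example can be sketched as follows. The table and column names mirror the UI labels and are assumptions, as are the customer name and the quantities:

```sql
-- Illustrative demand records; a blank (NULL) status defaults to 'Include'
INSERT INTO customerdemand (customername, productname, quantity, demandstatus)
VALUES
  ('CUST_1', 'Component_1',  100, NULL),
  ('CUST_1', 'EndProduct_1', 100, NULL),
  ('CUST_1', 'ByProduct_1',  100, 'Consider');
```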
EndProduct_1 is made through a BOM which consumes Component_1 and also makes ByProduct_1 as a Byproduct. For this, we need to set up a BOM:

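The BOM itself can be sketched as 3 records, one per product role. The table and column names mirror the UI labels and are assumptions; the 0.5 by-product quantity is illustrative, but consistent with the 2:1 ratio of EndProduct_1 to ByProduct_1 seen in the outputs discussed below:

```sql
-- Illustrative BYP_BOM records: consumes the component, yields the end
-- product, and yields the by-product at half the end product's rate
INSERT INTO billsofmaterials (bomname, productname, producttype, quantity)
VALUES
  ('BYP_BOM', 'Component_1',  'Component',   1),
  ('BYP_BOM', 'EndProduct_1', 'End Product', 1),
  ('BYP_BOM', 'ByProduct_1',  'Byproduct',   0.5);
```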
Next, on the Production Policies table, we see that Component_1 can be created without a BOM, and:
In effect, these 2 production policies result in the same consumption of Component_1 and the same production amounts of EndProduct_1 and ByProduct_1. Both need to be present, however, in order to also be able to have demand for ByProduct_1 in the model.
Other model elements that need to be set up are:
Three scenarios were run for this simple example model, with the only difference between them being the Unit Price for ByProduct_1: Baseline (Unit Price of ByProduct_1 = 3), PriceByproduct1 (Unit Price of ByProduct_1 = 1), and PriceByproduct2 (Unit Price of ByProduct_1 = 2). Let’s review some of the outputs to understand how this Unit Price affects the fulfillment of the flexible demand for ByProduct_1:

The high-level costs, revenues, profit and served/unserved demand outputs by scenario can be found on the Optimization Network Summary output table:

On the Optimization Production Summary output table, we see that all 3 scenarios used BYP_BOM for the production of EndProduct_1 and ByProduct_1; the model could also have picked the other BOM (FG_BOM), and the overall results would have been the same.
As the Optimization Production Summary only shows the production of the end products, we will also have a look at the Optimization Bills Of Material Summary output table:

Lastly, we will have a look at the Optimization Inventory Summary output table:

Note that, had the demand for ByProduct_1 been set to Include rather than Consider in this example model, all 3 scenarios would have produced 100 units of it to fulfill the demand, and as a result would have produced 200 units of EndProduct_1. 100 of those would have been used to fulfill the demand for EndProduct_1 and the other 100 would have stayed in inventory, as we saw in the Baseline scenario above.
One of Cosmic Frog’s great competitive features is the ability to quickly run many sensitivity analysis scenarios in parallel on Optilogic’s cloud-based platform. This built-in Sensitivity at Scale (S@S) functionality lets a user run sensitivities on demand quantity and transportation costs with 1 click of a button, on any scenario, using any of Cosmic Frog’s engines. In this documentation, we will walk through how to kick off a S@S run, where to track the status of the scenarios, and show some example outputs of S@S scenarios once they have completed running.
Kicking off a S@S analysis is done simply by clicking on the green S@S button on the right-hand side of the toolbar at the top of Cosmic Frog:

After clicking on the S@S button, the Run Sensitivity at Scale screen comes up:

Please note that the parameters configured on the Run Settings screen (which comes up when clicking on the Run button at the top right of Cosmic Frog) are used for the Sensitivity at Scale scenario runs.
The scenarios are then created in the model, and we can review their setup by switching to the Scenarios module within Cosmic Frog:

As an example of the sensitivity scenario items that are being created and assigned to the sensitivity scenarios as part of the S@S process, let us have a look at one of these newly created scenario items:

Once the sensitivity scenarios have been created, they are all kicked off to run simultaneously. Users can have a look in the Run Manager application on the Optilogic platform to track their progress:

Once a S@S scenario finishes, its outputs are available for review in Cosmic Frog. As with other models and scenarios, users can review outputs through output tables, maps, and graphs/charts/dashboards in the Analytics module. Here we will just show the Optimization Network Summary output table and a cost comparison chart as example outputs. Depending on the model and technology run, users may want to look at different outputs to best understand them.

To understand how the costs are divided over the different cost types and how they compare by scenario, we can look at the following Supply Chain Cost Detail graph in the Analytics module:

The following instructions show how to establish a local connection, using Power BI, to an Optilogic model that resides within our platform. These instructions will show you how to:
To make a local connection, you must first open a firewall connection between your current IP address and the Optilogic platform. Navigate to the Cloud Storage app (note that the app selection found on the left-hand side of the screen might need to be expanded). Check to see if your current IP address is authorized and, if not, add a rule to authorize it. You can optionally set an expiration date for this authorization.

If you are working from a new IP address, a banner notification should be displayed to let you know that the new IP address will need to be authorized.
From the Databases section of the Cloud Storage page, click on the database that you want to connect to. Then, click on the Connection Strings button to display all of the required connection information.

We have connection information for the following formats:
To select the format of your connection information, use the drop-down menu labeled Select Connection String:

For this example, we will copy and paste the strings for the ‘PSQL’ connection. The screen should look something like the following:

You can click on any of the parameters to copy them to your clipboard, and then paste them into the relevant field when establishing the PSQL ODBC connection.
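Once you have these parameters, any SQL client that can use them (psql, an ODBC-based tool, or Power BI’s native query option) can verify the connection by querying a model table. A minimal sketch, assuming the model contains the standard Customers input table (the lowercase table name is an assumption; the tables listed for your model are authoritative):

```sql
-- Quick connectivity check: pull a few rows from a model input table
SELECT *
FROM customers
LIMIT 5;
```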
Many tools, including Alteryx, use Open Database Connectivity (ODBC) to connect to the Cosmic Frog model database. To access the Cosmic Frog model, you will need to download and install the relevant ODBC drivers. The latest versions of the drivers are located here: https://www.postgresql.org/ftp/odbc/releases/
From here, click on the latest parent folder, which as of June 20, 2024 is REL-16_00_0005. Select and download the psqlodbc_x64.msi file.
When installing, use the default settings from the installation wizard.
Within Windows, open the ODBC Data Sources app (hint: search for “ODBC” in the Windows search bar).
Click “Add” to create a new connection, select “PostgreSQL ANSI(x64)”, and then click “Finish.”

Enter the details from your Cloud Storage connection (hint: click each parameter to copy it, then paste it into the relevant field).
You may click “Test” to confirm the connection works before clicking “Save.”

Open Power BI and select “Get data from another source”.

Enter “ODBC” in the Get Data window and select Connect.

Select your database connection from the dropdown and click OK.

Enter your username and password (from the Cloud Storage page) one last time.

Select the tables you wish to see and use within Power BI.

Create dashboards of data from Cosmic Frog tables!
