These instructions assume you have already made a connection to your database as described in Connecting to Optilogic with Alteryx.
Once connected, you will see the schemas and tables in the Cosmic Frog database. In this case, we will select some columns from the Customers table. Drag and drop the required tables into the left-hand “Main” pane, click the required columns, and click OK.

At this point you can run your workflow and it will be populated with the data from the connected database.

You can customize existing dashboards to fit your needs.


Inside of a dashboard, add a new visualization:

or edit an existing visualization:

The most common elements of visualizations are values and labels.
Values represent the data you want to be presented in the visualization. Typically, values are aggregated representations of your data (e.g. sum, average, etc.).
Labels refer to the labels on the visualization axes and consequently the groups by which you want to aggregate your values.
To build a visualization, we drag fields (i.e. database columns) into these elements.

Other elements include categories which allow for additional grouping and filters which allow users to adjust inclusion and exclusion criteria while viewing the dashboard.
You can use the Analytics dropdown button to create a new dashboard.

You can use the “+” button to add your first visualization to the dashboard.

Being able to assess the risk associated with your supply chain has become increasingly important in a quickly changing world with high levels of volatility. Not only does Cosmic Frog calculate an overall supply chain risk score for each scenario that is run, but it also gives you details about the risk at the location and flow level, so you can easily identify the highest and lowest risk components of your supply chain and use that knowledge to quickly set up new scenarios to reduce the risk in your network.
By default, any Neo optimization, Triad greenfield, or Throg simulation model run will have the default risk settings, called OptiRisk, applied using the DART risk engine. See also the Getting Started with the Optilogic Risk Engine documentation. Here we will cover how a Cosmic Frog user can set up their own risk profile(s) to rate the risk of the locations and flows in the network and that of the overall network. Inputs and outputs are covered, and in the last section notes, tips, and additional resources are listed.
The following diagram shows the Cosmic Frog risk categories, their components, and subcomponents.

A description of these risk components and subcomponents follows here:
Custom risk profiles are set up and configured using the following 9 tables in the Risk Inputs section of Cosmic Frog’s input tables. These 9 input tables can be divided into 5 categories:

Following is a summary of these table categories; more details on individual tables will be discussed below:
We will cover some of the individual Risk Input tables in more detail now, starting with the Risk Rating Configurations table:

In the Risk Summary Configurations table, we can set the weights for the 4 different risk components that will be used to calculate the overall Risk Score of the supply chain. The 4 components are: customers, facilities, suppliers, and network. In the screenshot below, customer and supplier risk are contributing 20% each to the overall risk score while facility and network risk are contributing 30% each to the overall risk score.

These 4 weights should add up to 1 (=100%). If they do not add up to 1, Cosmic Frog will still run and automatically scale the Risk Score up or down as needed. For example, if the weights add up to 0.9, the final Risk Score that is calculated based on these 4 risk categories and their weights will be divided by 0.9 to scale it up to 100%. In other words, the weight of each risk category is multiplied by 1/0.9 = 1.11 so that the weights then add up to 100% instead of 90%. If you do not want to use a certain risk category in the Risk Score calculation, you can set its weight to 0. Note that you cannot leave a weight field blank. These rules around automatically scaling weights up or down to add up to 1 and setting a weight to 0 if you do not want to use that specific risk component or subcomponent also apply to the other “… Risk Configurations” tables.
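As an illustration of this automatic rescaling, here is a minimal Python sketch; the function and the example weights are purely illustrative and not part of Cosmic Frog:

```python
# Illustrative sketch of how weights that do not sum to 1 are rescaled.
def normalize_weights(weights):
    """Scale a dict of risk weights so that they sum to 1.0."""
    total = sum(weights.values())
    if total == 0:
        raise ValueError("At least one weight must be greater than 0.")
    return {name: weight / total for name, weight in weights.items()}

# Example: weights that only add up to 0.9 are each scaled by 1/0.9 = 1.11.
weights = {"customers": 0.2, "facilities": 0.3, "suppliers": 0.1, "network": 0.3}
print(normalize_weights(weights))
# customers ~0.22, facilities ~0.33, suppliers ~0.11, network ~0.33
```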
The following are 2 screenshots of the Facility Risk Configurations table, on which the weights and risk bands used to calculate the Risk Score for individual facility locations are specified. A subset of these same risk components is also used for customers (Customer Risk Configurations table) and suppliers (Supplier Risk Configurations table). We will not discuss those 2 tables in detail in this documentation since they work in the same way as described here for facilities.



The first band is from 0.0 (first Band Lower Value) to 0.2 (the next Band Lower Value), meaning between 0% and 20% of total network throughput at an individual facility. The risk score for 0% of total network throughput is 1.0 and goes up to 2.0 when total network throughput at the facility goes up to 20%. For facilities with a concentration (= % of total network throughput) between 0 and 20%, the Risk Score will be linearly interpolated from the lower risk score of 1.0 to the higher risk score of 2.0. For example, a facility that has 5% of total network throughput will have a concentration risk score of 1.25. The next band is for 20%-30% of total network throughput, with an associated risk between 2.0 and 3.5, etc. Finally, if all network throughput is at only 1 location (Band Lower Value = 1.0), the risk score for that facility is 10.0. The risk scores for any band run from 1, lowest risk, to 10, highest risk.
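The linear interpolation within a band can be sketched in Python as follows. The band values mirror the example above (the bands between 30% and 100% are abbreviated here), and the helper itself is only an illustration of the math, not Cosmic Frog’s internal implementation:

```python
def band_risk_score(value, bands):
    """Linearly interpolate a risk score from (band_lower_value, risk_score) pairs.

    `bands` must be sorted by band lower value; values at or above the last
    band lower value get that band's risk score.
    """
    for (lo, lo_score), (hi, hi_score) in zip(bands, bands[1:]):
        if lo <= value < hi:
            return lo_score + (value - lo) / (hi - lo) * (hi_score - lo_score)
    return bands[-1][1]

# Concentration bands from the example: 0-20% maps to 1.0-2.0, 20-30% to 2.0-3.5,
# and 100% of network throughput maps to 10.0 (intermediate bands abbreviated).
bands = [(0.0, 1.0), (0.2, 2.0), (0.3, 3.5), (1.0, 10.0)]
print(band_risk_score(0.05, bands))  # 1.25 -> a facility with 5% of network throughput
```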
The following screenshot shows the Geographic Risk Configurations table with 2 of its risk subcomponents, biolab distance and economic:

As an example (outlined in red), the Biolab Distance Risk is specified by setting its weight to 0.05 (5%) and specifying which band definition from the Risk Band Definitions table should be used, in this case “BioLab and Nuclear Distance Band Template”. Looked up in the Risk Band Definitions table, the definition of this band template is as follows:

This band definition says that if a location is within 0-10 miles of a Biolab of Safety Level 4, the Risk Score is 10.0. A distance of 10-20 miles has an associated Risk Score between 10 and 7.75, etc. If a location is 130 miles or farther from a Biolab of Safety Level 4, the Risk Score is 1.0.
The other 5 subcomponents of Geographic Risk are defined in a similar manner on this table: with a Risk Weight field and a Risk Band field that specifies which Band Definition on the Risk Band Definitions table is to be used for that risk subcomponent. The following table summarizes the names of the Band Definitions used for these geographic risk subcomponents and what the unit of measure is for the Band Values with an example:
Similar to the Geographic Risk component, the Utilization Risk component also has its own table, Utilization Risk Configurations, where its 3 risk subcomponents are configured. Again, each of the subcomponents has a Risk Weight field and a Risk Band field associated with it. The following table summarizes the names of the Band Definitions used for these utilization risk subcomponents and what the unit of measure is for the Band Values with an example:
Lastly on the Risk Inputs side, the Network Risk Configurations table specifies the components of Network Risk in a similar manner: with a Risk Weight and a Risk Band field for each risk component. The following table summarizes the names of the Band Definitions used for these network risk subcomponents and what the unit of measure is for the Band Values with an example:
Risk outputs can be found in some of the standard Output Summary Tables and in the risk specific Output Risk Tables:

The following screenshot shows the Optimization Risk Metrics Summary output table for a scenario called “Include Opt Risk Profile”. It shows both the OptiRisk and Risk Rating Template Optimization risk score outputs:

On the Optimization Customer Risk Metrics, Optimization Facility Risk Metrics, and Optimization Supplier Risk Metrics tables, the overall risk score for each customer, facility, and supplier can be found, respectively. They also show the risk scores of each risk component; for customers these components are Concentration Risk, Source Count Risk, and Geographic Risk. The subcomponents of Geographic Risk are further detailed in the Optimization Geographic Risk Metrics output table, where for each customer, facility, and supplier the overall geographic risk score and the risk scores of each of the geographic risk subcomponents are listed. Similarly, on the Optimization Facility Risk Metrics and Optimization Supplier Risk Metrics tables, the Utilization Risk score is listed for each location, whilst the Optimization Utilization Risk Metrics table details the risk scores of the subcomponents of this risk (throughput utilization, storage utilization, and work center utilization).
Let’s walk through an example of how the risk score for the facility Plant_France_Paris_9904000 was calculated using a few screenshots of input and output tables. This first screenshot shows the Optimization Geographic Risk Metrics output table for this facility:

The geographic risk score of this facility is calculated as 4.0 and the values for all the geographic risk subcomponents are listed here too, for example 8.2 for biolab distance risk and 3.6 for political risk. The overall geographic risk of 4.0 was calculated using the risk score of each geographic risk subcomponent and the weights that are set on the Geographic Risk Configurations input table:

Geographic Risk Score = (biolab distance risk * biolab distance risk weight + economic risk * economic risk weight + natural disaster risk * natural disaster risk weight + nuclear distance risk * nuclear distance risk weight + political risk * political risk weight + epidemic risk * epidemic risk weight) / 0.8 = (8.2 * 0.05 + 5.3 * 0.2 + 2.8 * 0.3 + 2.3 * 0.05 + 3.6 * 0.1 + 4.5 * 0.1) / 0.8 = 4.0. (We need to divide by 0.8 since the weights do not add up to 1, but only to 0.8).
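The same calculation can be written as a small Python helper. The component names are shortened here, and the helper is only an illustration of the weighted-average math above:

```python
def weighted_risk_score(scores, weights):
    """Weighted average of component risk scores, rescaling if weights do not sum to 1."""
    total_weight = sum(weights[name] for name in scores)
    return sum(scores[name] * weights[name] for name in scores) / total_weight

# Geographic risk subcomponent scores and weights from the example above.
geo_scores = {"biolab": 8.2, "economic": 5.3, "natural_disaster": 2.8,
              "nuclear": 2.3, "political": 3.6, "epidemic": 4.5}
geo_weights = {"biolab": 0.05, "economic": 0.2, "natural_disaster": 0.3,
               "nuclear": 0.05, "political": 0.1, "epidemic": 0.1}  # sums to 0.8
print(round(weighted_risk_score(geo_scores, geo_weights), 1))  # 4.0
```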
Next, in the Optimization Facility Risk Metrics output table we can see that the overall facility risk score is calculated as 6.8, which is the result of combining the concentration risk score, source count risk score, and geographic risk score using the weights set on the Facility Risk Configurations input table.

This screenshot shows the Facility Risk Configurations input table and the weights for the different risk components:

Facility Risk Score = (geographic risk * geographic risk weight + concentration risk * concentration risk weight + source count risk * source count risk weight) / 0.6 = (4.0 * 0.3 + 9.3 * 0.2 + 10.0 * 0.1) / 0.6 = 6.8. (We need to divide by 0.6 since the weights do not add up to 1, but only to 0.6).
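Reusing the weighted_risk_score helper from the sketch above, the facility-level result follows the same pattern:

```python
# Component scores and weights from the Facility Risk Configurations example.
facility_scores = {"geographic": 4.0, "concentration": 9.3, "source_count": 10.0}
facility_weights = {"geographic": 0.3, "concentration": 0.2, "source_count": 0.1}  # sums to 0.6
print(round(weighted_risk_score(facility_scores, facility_weights), 1))  # 6.8
```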
A few things about Risk in Cosmic Frog that are good to keep in mind:
For a detailed walk-through of all map features, please see the "Getting Started with Maps" Help Center article.
You can edit and filter the base data shown on a map using the Map Filter menu. This menu opens when the map name is highlighted or selected. From here you are able to select the following items to be shown in the map:
Note: leaving the product filter blank will include all products in the model


The following instructions show how to establish a local connection, using Power BI, to an Optilogic model that resides within our platform. These instructions will show you how to:
To make a local connection you must first open a Firewall connection between your current IP address and the Optilogic platform. Navigate to the Cloud Storage app – note that the app selection found on the left-hand side of the screen might need to be expanded. Check to see if your current IP address is authorized and if not, add a rule to authorize this IP address. You can optionally set an expiration date for this authorization.

If you are working from a new IP Address, a banner notification should be displayed to let you know that the new IP Address will need to be authorized.
From the Databases section of the Cloud Storage page, click on the database that you want to connect to. Then, click on the Connection Strings button to display all of the required connection information.

We have connection information for the following formats:
To select the format of your connection information, use the drop-down menu labeled Select Connection String:

For this example, we will copy and paste the strings for the ‘PSQL’ connection. The screen should look something like the following:

You can click on any of the parameters to copy them to your clipboard, and then paste them into the relevant field when establishing the PSQL ODBC connection.
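If you would like to sanity-check these parameters outside of Power BI, a minimal Python sketch using the psycopg2 library is shown below. The host, database, user, and password values are placeholders for the values copied from the Connection Strings page, and the table name in the query is just an example:

```python
import psycopg2  # pip install psycopg2-binary

# Placeholder values: replace each with the parameter copied from
# Cloud Storage > Connection Strings for your model database.
conn = psycopg2.connect(
    host="<your-host>.postgres.database.azure.com",
    port=6432,
    dbname="<your-model-database>",
    user="<your-username>",
    password="<your-password>",
    sslmode="require",  # SSL is required for connections to the platform
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT COUNT(*) FROM customers;")  # example query; any model table works
    print(cur.fetchone())
conn.close()
```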
Many tools, including Alteryx, use Open Database Connectivity (ODBC) to enable a connection to the Cosmic Frog model database. To access the Cosmic Frog model, you will need to download and install the relevant ODBC drivers. Latest versions of the drivers are located here: https://www.postgresql.org/ftp/odbc/releases/
From here, click on the latest parent folder, which as of June 20, 2024 is REL-16_00_0005. Select and download the psqlodbc_x64.msi file.
When installing, use the default settings from the installation wizard.
Within Windows, open the ODBC Data Sources app (hint: search for “ODBC” in the Windows search bar).
Click “Add” to create a new connection and then select “PostgreSQL ANSI(x64)” then click “Finish.”

Enter the details from your Cloud Storage connection (hint: click each parameter to copy it, then paste it into the corresponding field).
You may click “Test” to confirm the connection works, then click “Save.”

Open Power BI and select “Get data from another source”

Enter “ODBC” in the Get Data window and select connect

Select your Database connection from the dropdown and click OK

Enter your username and password one last time from the Cloud Storage page

Select the tables you wish to see and use within PowerBI

Create Dashboards of Data from Cosmic Frog tables!

To create a scenario you need to do three things:
From the Scenario tab in Cosmic Frog select the blue button called “Scenario” and click on “New Scenario.”

Type the name of the scenario you would like to create in the panel window.

From the same drop-down as “New Scenario,” select “New Item” to create a scenario item. Enter the name of your scenario item in the window. After you press Enter, the Scenario Item window will be active, where you will select the following:

After you have created and saved the Scenario Item, you need to assign that item to a scenario. On the right-hand side of your screen there is a table called “Assign Scenarios.” From here you can check/uncheck the scenarios where you wish to use the new Scenario Item.

You can clear output data from all model output tables in one quick action. Navigate to the Scenarios tab and from the scenario drop-down menu select the Delete Scenario Results option. This can also be accessed by right-clicking on any scenario name. Next, from the window on the right-hand side of the screen you can select the scenario(s) that you want to delete output data for. Once selected, click the Delete button and all output data will be cleared for the selected scenarios.

Watch the video to learn how to import, export, geocode, and work with data within Cosmic Frog:
If you want to follow along, please download the data set available here:
Anura contains 100+ (and growing) input tables to organize modeling data.

There are six minimum required tables for every model:
This includes three tables to lay out the structure of the supply chain, one table to identify the demand that must be met, and two tables to link them all together with policies.
Most supply chain design models use at least one table each of the first five model categories (Modeling Elements, Sourcing, Inventory, Transportation, Demand). A Neo model converted from Supply Chain Guru© will generally contain the following tables:
By entering information in all of these tables you will have successfully added all demand, created all model elements, created sourcing and transportation policies for all arcs, and added an inventory policy to ensure inventory can be stored throughout the supply chain.
While not required, many Neo models will also contain data in the following tables:
The File Explorer is a fully functioning view into your workspace file system. To open the File Explorer, look for the following icon on the left-hand side of the Atlas environment:

To expand a folder to view its contents simply click anywhere on the folder, its label, or the expand icon (the small triangle next to the folder itself).

To collapse the folder you can either click on the same area again, or you can collapse all folders by clicking the button outlined below.

To open a file, navigate to it in the file explorer and double-click on it. Once done, the file will load into the editor. Please note that some files will not show properly in the editor, such as files that contain binary content or encrypted content.
There are two ways you can create a new file, either through the file menu:

or via the file explorer context menu.

Keep in mind that the location you have selected in the file explorer is where the new file or folder will go, so if you want it underneath a specific folder, make sure to select that folder before you begin. That being said, if you forget you can still move it afterward.
To duplicate a file or folder, select the Duplicate command from the right-click context menu. This will create a copy of the file or folder in the same location with the suffix _copy.

To rename a file or folder, select the Rename command from the right-click context menu or use the hotkey F2 while the file or folder is selected. This will present a dialog where you can type in the new name that you want to use. Once specified, hit Enter or click the OK button to apply the new name.
To upload a file or files, make sure that you have a folder selected. From the right-click context menu select the Upload Files… command or select the same command from the file menu. Select the file or files from the file selection dialog and hit Enter or click the Open button. This will upload the selected files to the folder that you have selected.
To download a file or folder, select the Download command from the right-click context menu or the same command from the file menu. This will download the selected file or folder to your local machine. If you have selected a folder, the contents will be compressed into a .tar archive file. You will need to use a file archiving tool to extract the contents of the archive.
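For example, Python’s built-in tarfile module can extract the archive; the file and folder names below are placeholders for whatever the download is called on your machine:

```python
import tarfile

# Placeholder name: point this at the .tar file downloaded from the File Explorer.
with tarfile.open("downloaded_folder.tar") as archive:
    archive.extractall("extracted_folder")  # writes the folder contents into ./extracted_folder
```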
To save a file, select the Save option from the file menu or use the hotkey CTRL+S. This will save the changes in the file that currently has focus in the editor. Note that the tab of the focused file has the same background color as the editor itself. You can tell whether a file has unsaved changes by the presence of a white dot in the file’s tab in the editor.
You can delete a file or folder by selecting the Delete command from the right-click context menu or hitting the Delete key on your keyboard.
By default, the auto-save option is turned on. This means that your files will automatically save as you are editing them. You can turn this on or off by selecting the Auto Save option in the file menu.

Atlas includes a file comparison tool. This will show you line-by-line differences between two files.

To use it you can either select the Compare with Each Other command from the right-click context menu while two files are selected, or you can select the Select for Compare command on the first file, then select the Compare with Selected command on the other file you would like to compare.

When there are changes to your workspace file system but they haven’t shown up visually, you can refresh the explorer to show you the latest state on disk. This can happen sometimes if the models you are building are writing out files. To refresh the explorer, click the refresh button in the upper right.

We have prepared step-by-step instructions to build your own version of the Global Supply Chain Strategy model from scratch.
Please download the “Build your first Frog model.zip” file, save to your local machine, and follow the overview and instructions laid out in the following videos.
Python models can be run on your “local” IDE instance, or you can leverage the power of hyper-scaling by running the model as a Job. When running as a job you have access to a number of different machine configurations as follows:
For reference, a typical laptop will be equivalent to either a XS or S machine configuration.
Complex optimization models may require more CPU cores to solve quickly, and large-scale simulations may require more RAM due to the increase in data required for model fidelity.
When modeling supply chains, stakeholders are often interested in understanding the cost to serve specific customers and/or products for segmentation and potential repositioning purposes. In order to calculate the cost to serve, variable costs incurred all along the path through the supply chain need to be aggregated, while fixed costs that are incurred need to be apportioned to the correct customer/product. This is not always straightforward and easy to do; think, for example, of multi-layer BOMs.
When running a network optimization (using the Neo engine) in Cosmic Frog, these cost to serve calculations are done automatically, and the outputs are written into three output tables. In this help center article, we will cover these output tables and how the underlying calculations that populate them work.
First, we will briefly cover the example model used for screenshots for most of this help center article, then we will cover the 3 cost to serve output tables in some detail, and finally we will discuss a more complex example that uses detailed production elements too.
There are 3 network optimization output tables which contain cost to serve results; they can be found in the Output Summary Tables section of the output tables list:

We will cover the contents and calculations used for these tables by using a relatively simple US Distribution model, which does use quite a few different cost types. This model consists of:
The following screenshot shows the locations and flows of one of the scenarios on a map:

One additional important input to mention is the Cost To Serve Unit Basis field on the Model Settings input table:

Users can select Quantity, Volume, or Weight. This basis is used when costs need to be allocated based on amounts of product (e.g. amount produced or moved). For example, we need to allocate $100,000 fixed operating cost at DC_Raleigh to 3 outbound flows. The result is different when we use a different cost to serve unit basis:
Note that in this documentation the Quantity cost to serve unit basis is used everywhere, which is the default.
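To make the allocation concrete, the sketch below apportions the $100,000 fixed operating cost at DC_Raleigh over 3 outbound flows. The flow names and their quantities, volumes, and weights are made up for illustration only:

```python
def allocate_fixed_cost(fixed_cost, flows, basis):
    """Apportion a fixed cost over flows in proportion to the chosen basis."""
    total = sum(flow[basis] for flow in flows)
    return {flow["name"]: fixed_cost * flow[basis] / total for flow in flows}

# Hypothetical outbound flows from DC_Raleigh; all amounts are made up.
flows = [
    {"name": "CZ_A", "quantity": 500, "volume": 20, "weight": 1000},
    {"name": "CZ_B", "quantity": 300, "volume": 50, "weight": 600},
    {"name": "CZ_C", "quantity": 200, "volume": 30, "weight": 400},
]
print(allocate_fixed_cost(100_000, flows, "quantity"))  # CZ_A: 50000, CZ_B: 30000, CZ_C: 20000
print(allocate_fixed_cost(100_000, flows, "volume"))    # CZ_A: 20000, CZ_B: 50000, CZ_C: 30000
```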
We will cover the 3 output tables related to Cost To Serve using the above-described model as an example in the screenshots. Further below in the document, we will also show an example of a model that uses some detailed production inputs and its cost to serve outputs. Let us start by examining the most detailed cost to serve output table, the Cost To Serve Path Segment Details table. This output table contains each segment of each path, from the most upstream product source location to customer fulfillment, which is the most downstream location. Each path segment represents an activity along the supply chain: this can be production when product is made or supplied, inventory that is held, a flow where product is moved from one location to another, or use of a bill of materials when raw materials are used to create intermediates or finished goods.
Note that these output tables are very wide due to the number of columns they contain. In the screenshots, columns are often re-ordered, and some may be hidden if they do not contain any values or are not the topic of discussion, so they may not look exactly the same as what you see in your Cosmic Frog. Also, grids are often sorted to show records in increasing order of path ID and/or segment sequence.

Please note that several columns are not shown in the above screenshot, these include:
Removing some additional columns and scrolling right, we see a few more columns for these same paths:

We will stay with the example of the path from MFG_Detroit to DC_Jacksonville to CZ_Hartford for 707 units of P1_Bullfrog to explain the costs (Path ID = 1 in the above). Here is a visual of this path, shown on the map:

The following 4 screenshots show the 3 segments of this path in the Optimization Cost To Serve Path Segment Details output table, and the different costs that are applied to each segment. In this first screenshot, the left most column is the Segment Sequence, and a couple of fields that were not present in the screenshots above are now shown:




Let us put these on a map again, together with the calculations:

There are 2 types of costs associated with the production of 707 units of P1_Bullfrog at MFG_Detroit:
For the flow of 707 units from MFG_Detroit to DC_Jacksonville, there are 4 costs that apply:
There are 7 different costs that apply to the flow of the 707 units of P1_Bullfrog from DC_Jacksonville to CZ_Hartford where it fulfills the customer’s demand of 707 units:
There are cost fields in the Optimization Cost To Serve Path Segment Details output table that are not shown in the above screenshots as these are all blank in the example model used. These fields, and how they are calculated should there be inputs for them, are as follows:
In the above examples we have seen Segment Types of production and flow. When inventory is modelled, we will also start seeing segments with Segment Type = inventories, as shown in this next screenshot (Cosmic Frog outputs of the Optimization Cost To Serve Path Segment Details table were copied into Excel from which the screenshot was then taken):

Here, 75.41 units of product P4_Polliwog were produced at MFG_Detroit in period 2025 (segment 1), then moved to DC_Jacksonville in the same period (segment 2), where they have been put into inventory (segment 3). These units are then used to fulfill customer demand at CZ_Columbia in the next period, 2026 (segment 4).
The Optimization Cost To Serve Path Segment Details table can also contain records which have Segment Type = no_activity. On these records there is no Segment Origin – Segment Destination pair, just one location which is not being used during the period (0 throughput). This record is then used to allocate fixed operating, startup, and/or closing costs to that location.
There are still a few fields in the Optimization Cost To Serve Path Segment Details table that have not been covered so far, these are:
The Optimization Cost To Serve Path Summary table is an output table which is an aggregation of the Optimization Cost To Serve Path Segment Details table, aggregated by Path ID. In other words, the table contains 1 record for each path where all costs of the individual segments of the path are rolled up into 1 number. Therefore, this table contains many of the same fields as the Path Segment Details table, just not the segment specific ones. The next screenshot shows the record for Path ID = 1 of the Optimization Cost To Serve Path Summary table:

This output table summarizes the cost to serve at the customer-product-period level and this table can be used to create reports at the customer or product level by aggregating further.

Note that all 3 of these output tables can also be used in the Analytics module of Cosmic Frog to create cost to serve dashboards. For example, the Optimization Cost To Serve Summary output table is used in 2 standard analytics dashboards that are part of any new Cosmic Frog model and will be automatically populated when this table is populated after a network optimization (Neo) run: the “CTS Unprofitable Demand” and “CTS Customer Profitability” dashboards. Here is a screenshot of the charts included in this second dashboard:

To learn more about the Analytics module in Cosmic Frog, please see the help center articles on this page.
In the above discussion of cost to serve outputs, the example model used did not contain any detailed production inputs, such as Bills Of Materials (BOMs), Work Centers, or Processes. We will now look at an example where these are used in the model and specifically focus on how to interpret the Optimization Cost To Serve Path Segment Details output table when bills of materials are included.
Consider the following records in the Bills of Materials input table for making the finished good FG_BLU (blue cheese):

In the model, these BOMs are associated through Production Policies for the BULK_BLU and FG_BLU products.
Next, we will have a look at the Optimization Cost To Serve Path Segment Details output table to understand how costs are allocated to raw material vs finished good segments:

As we have seen before, there are many more cost columns in this table, so for each segment these can be reviewed by scrolling right. Finally, let’s look at these 3 paths in the Optimization Cost To Serve Path Summary output table:

Note that the Path Product Name is FG_BLU for these 3 paths and there are no details to indicate which raw materials each of the paths pertains to. If required, this can be looked up using the Path IDs from this table in the Optimization Cost To Serve Path Segment Details output table.
If you find that the standard constraints or costs in a model don’t quite capture your specific needs, you can create and define your own variables to use with costs and constraints.
To help in framing this discussion, let’s start with a simple example that fits into the standard input tables.
Objectives:
We wouldn’t need to do anything special in this instance, just create policies as normal and attach a Unit Cost of 5 to the MFG > DC transportation policy. To apply the constraint, we would create a Flow Constraint that sets a Max flow of 1000 units. While the input requirements are straightforward in this instance, let’s define both objectives in terms of variables as the solver would see them.
Flow Variable: MFG_CZ_Product_1_Flow
This example is simple, but it is important to think about costs and constraints in terms of the variables that they are applied over. This becomes even more important when we want to craft our own variables.
Let’s modify the constraint in the example above to now restrict the flow of Product_1 between MFG and DC to be no more than the flow of Product_2. Again, we will represent this in terms of variables as the solver will see them.
Flow Variables: MFG_CZ_Product_1_Flow, MFG_CZ_Product_2_Flow
We no longer have a constant on the right-hand side of our constraint – this is an issue as we have no way to input this type of a constraint requirement into the Flow Constraints table. Whenever we find ourselves expressing constraints or costs in terms of other variables that will be determined based on the model solve, we will need to make use of User Defined Variables.
Continuing with the constraint above, let’s modify the inequality statement so that we do in fact have a constant on the right-hand side. We can do this by subtracting one of the variables from both sides of the statement – this will then leave the right-hand side as 0.
We now have a constraint that can be modelled, but we need to be able to define the left-hand side through the User Defined Variables table. User Defined Variables are defined as a series of Terms which are all linked to the same Variable Name. Each Term can be thought of as a solver variable as we have defined them in the examples above. For each Term, we will also need to enter a Coefficient, the Type of behavior we want to be capturing, and all of the needed information in the columns that follow, depending on the Type that was just selected. All of these columns are based on the individual constraint tables, so it is helpful to think about the data as if you were entering a row in the specific constraint table.
Here is how the inputs for our example would look set up as a User Defined Variable:

We can see that by using the coefficients of 1 and -1, we have now accurately built the left-hand side of our inequality statement. All that’s left is to link this to a User Defined Constraint.
User Defined Constraints can be used to add restrictions to the values captured by the User Defined Variables. All that is needed is to enter the corresponding Variable Name and then select the appropriate constraint type and value.
Revisiting our inequality statement once more, we can see how the User Defined Constraint should be built:
MFG_CZ_Product_1_Flow – MFG_CZ_Product_2_Flow <= 0

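To make the solver’s view of this concrete, here is a small sketch using the open-source PuLP library. This is not how Cosmic Frog formulates the problem internally; it simply shows the two flow variables, the two Terms with Coefficients 1 and -1, and the resulting <= 0 constraint:

```python
from pulp import LpProblem, LpVariable, LpMinimize

model = LpProblem("user_defined_variable_example", LpMinimize)

# The two solver variables from the example.
flow_p1 = LpVariable("MFG_CZ_Product_1_Flow", lowBound=0)
flow_p2 = LpVariable("MFG_CZ_Product_2_Flow", lowBound=0)

# User Defined Variable: two Terms with Coefficients 1 and -1.
udv = 1 * flow_p1 + (-1) * flow_p2

# User Defined Constraint: the variable must be <= 0, i.e.
# Product_1 flow cannot exceed Product_2 flow on this lane.
model += udv <= 0, "P1_flow_no_more_than_P2_flow"
```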
Optilogic provides a uniform, easy to use way to connect to your models and embed optimization and simulation in any application that can make an API call.
With a free account you have access to our API capabilities to build and deploy custom solutions.
There are a number of calls that you will need to make to fully automate the use of optimization and simulation in your models. Below you will find more information on how to go about this task.
For full API documentation see our Optilogic API Documentation. From this page you will be able to view detailed API documentation and live test code within your account.
First, to use any API call you must be authenticated. Once authenticated, you will be provided with an API key that remains active for an hour. If your API key expires, you will be required to re-authenticate to acquire a new key.
The Account API section of calls allows you to look up your account information such as username, email, how many concurrent solves you have access to, and the number of workspaces in your account.
The Workspace API section of calls allows you to look up information about a specific workspace. You can look up the workspace by name, obtaining a list of files in the workspace as well as a list of jobs associated with the models of that workspace.
The Job API section of calls allows you to view information relating to jobs in the system. Each time you execute a model solve, the Optilogic back end solver system, Andromeda, will spawn a new job to handle the request. The API call to start a job will return a key with which you can look up information about that job, even after it has completed. With these API calls you can get any job’s status, start a new job, or delete a particular job.
The Files API section of calls allows you to interact with the files of a given workspace. Each model is made up of a collection of files (mainly code and data files). With these calls you can copy, delete, upload, or download any file in a specified workspace.
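The general pattern of authenticating and then calling an endpoint with the returned key looks roughly like the following Python sketch. The base URL, routes, payload fields, and header name are all placeholders and assumptions; use the actual values from the Optilogic API Documentation:

```python
import requests

BASE_URL = "https://api.optilogic.app"  # placeholder; use the base URL from the API documentation

# 1. Authenticate to obtain an API key (valid for one hour).
auth_response = requests.post(
    f"{BASE_URL}/<auth-endpoint>",                      # placeholder route
    json={"username": "<user>", "password": "<password>"},  # field names are assumptions
)
api_key = auth_response.json()["apiKey"]                # response field name is an assumption

# 2. Use the key on subsequent calls, e.g. to check a job's status.
headers = {"X-API-KEY": api_key}                        # header name is an assumption
job_status = requests.get(f"{BASE_URL}/<job-status-endpoint>", headers=headers)
print(job_status.json())
```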
An example of how to connect a Cosmic Frog model to a Snowflake database, along with a video walkthrough, can be found in the Resource Library. To get a copy of this demo into your own Optilogic account simply navigate to the Resource Library and copy the Snowflake template into your workspace.

If this isn’t your first time using a supply chain design software, then take heart: the transition to Cosmic Frog is a smooth one. There are a few key differences worth noting below:

Most of these changes will be self-explanatory: if you used to write Customer Sourcing Policies, you will now put similar data in a table called Customer Fulfillment Policies. Others may become easier to see over time; instead of many tables to create process logic, you can enter everything you need in one table.

Have you ever wanted to put a site in your model and distinguish whether you owned the site or not? Have you ever wanted to make clear what you own and what you outsource? If so, the suppliers tables are for you:

Do you really need a corresponding transportation policy for every sourcing policy and vice versa? Did you know that by doing so you are actually making the model take longer to build and solve? Have you ever built a model that was infeasible because you forgot to add a policy?
We put the power in your hands. Simply change the lane creation rule and you can ensure that your model builds and solves the way you want.


We split inventory policies into three sections because we believe there is a lot to think through when deciding how to model inventory, especially when you model inventory in simulation. Other than that, we cleaned up the table structure: why enter data in multiple tables if you don’t need to? Where possible we streamlined the table structure to make it easier to enter your data.
The following instructions show how to establish a local connection, using Alteryx, to an Optilogic model that resides within our platform. These instructions will show you how to:
Watch the video for an overview of the connection process:
A step-by-step set of instructions can also be downloaded in the slide deck here: CosmicFrog-Alteryx-Connection-Instructions
To make a local connection you must first open a Firewall connection between your current IP address and the Optilogic platform. Navigate to the Cloud Storage app – note that the app selection found on the left-hand side of the screen might need to be expanded. Check to see if your current IP address is authorized and if not, add a rule to authorize this IP address. You can optionally set an expiration date for this authorization.

If you are working from a new IP Address, a banner notification should be displayed to let you know that the new IP Address will need to be authorized.
From the Databases section of the Cloud Storage page, click on the database that you want to connect to. Then, click on the Connection Strings button to display all of the required connection information.

We have connection information for the following formats:
To select the format of your connection information, use the drop-down menu labeled Select Connection String:

For this example, we will copy and paste the strings for the ‘PSQL’ connection. The screen should look something like the following:

You can click on any of the parameters to copy them to your clipboard, and then paste them into the relevant field when establishing the PSQL ODBC connection.
Many tools, including Alteryx, use Open Database Connectivity (ODBC) to enable a connection to the Cosmic Frog model database. To access the Cosmic Frog model, you will need to download and install the relevant ODBC drivers. Latest versions of the drivers are located here: https://www.postgresql.org/ftp/odbc/releases/
From here, click on the latest parent folder, which as of June 20, 2024 is REL-16_00_0005. Select and download the psqlodbc_x64.msi file.
When installing, use the default settings from the installation wizard.
At this point we have the pieces to make a connection in Alteryx. Open Alteryx and start a new Workflow. Drag the Input Data action into the Workflow and click to “Connect a File or Database.”

Select “Data sources” and scroll down to select “PostgreSQL ODBC”


On the next screen click “ODBC Admin” to set up the connection.

Click “Add” to create a new connection and then select “PostgreSQL ANSI(x64)” then click “Finish.”

Now we need to configure the connection with the information we gathered from the connection strings.

“Data Source” and “Description” allow you to name the connection; these can be named whatever you wish.
Copy the values for “Server”, “Database”, “User Name”, “Password” and “Port” from the connection string information copied from Optilogic Cloud Storage (see above).
DON’T FORGET to select “require” in “SSL Mode”
You may click “Test” to confirm the connection works, then click “Save.”
Now select the new connection, in this example “Alteryx Demo Model” and click “OK”

Now we need to select the same Data Source that we just built in ODBC within Alteryx. We need to enter the username and password for the connection for Alteryx authentication. These are the same credentials used to set up the ODBC connection. Remember to use your specific model’s credentials from the Connection String in the Optilogic platform Cloud Storage page.

Depending on your organization’s security protocols, one additional step might need to be taken to whitelist Optilogic’s Postgres SQL Server. This can be done by whitelisting the host URL (*.postgres.database.azure.com) and the port (6432). If you are unsure how to whitelist the server or do not have the necessary permissions, please contact your IT department or network administrator for assistance.
11/16/2023 – There is an issue connecting through ODBC with the latest version of Alteryx. While we await a fix in an updated version of Alteryx, you can still connect with an older version of Alteryx (2021.4.2.47884)
05/01/2024 – Alteryx has resolved the ODBC connection issue with their latest major version release of 2024.1. If your currently installed Alteryx version is not working as intended, please upgrade to latest.
An alternative workaround is to disable the AMP engine on your Alteryx workflow. For any workflows that use the ODBC connection to a database hosted on the platform, you can uncheck the option in the Workflow Configuration for ‘Use AMP Engine’. The Workflow Configuration window will display on the left side of your screen if you click on the whitespace anywhere in your workflow.

Cosmic Frog users can now perform additional quick analyses on their supply chain models’ input and output data through Cosmic Frog’s new grid features. This functionality enables users to easily apply different types of grouping and aggregation to their data, while also allowing users to view their data in a pivoted format. Think for example of the following use cases:
In this documentation we will cover how grids can be configured to use these new features, show several additional examples, and conclude with a few pointers for effective use of these features.
These new grid features can be accessed from the “Columns” section on the side bar on the right-hand side of input and output tables while in the Data module of Cosmic Frog:

Alternatively, users can also start grouping, and subsequently aggregating, by right-clicking on the column names in the table grid:

We will first cover Row Grouping, then Aggregated Table Mode, and finally Pivot Mode.
Using the row grouping functionality allows users to select 1 column in an input or output table by which all the records in the table will be grouped. These groups of records can be collapsed and expanded as desired to review the data. In the following screenshot the row grouping feature is used to compare the sources of a certain finished good in a particular period for 1 scenario:

Clicking on Columns on the right-hand side of the table opens the row grouping / aggregated table / pivot grid configuration pane, which shows the configuration for this row grouping:

Once a table is grouped by a field, a next step can be to aggregate one or multiple columns by this grouped field. When this is done, we call this aggregated table mode. Different types of aggregation are available to the user, which will be discussed in this section.
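Conceptually, an aggregated table behaves much like a pandas groupby/agg over the same data. The sketch below uses a made-up stand-in for a few Optimization Flow Summary rows, and the column names are illustrative rather than exact:

```python
import pandas as pd

# Made-up stand-in for a few Optimization Flow Summary rows.
flow_summary = pd.DataFrame({
    "ScenarioName": ["Baseline", "Baseline", "Scenario2", "Scenario2"],
    "ProductName":  ["P1_Bullfrog", "P4_Polliwog", "P1_Bullfrog", "P4_Polliwog"],
    "FlowQuantity": [707, 75, 650, 120],
})

# Row group = ScenarioName, aggregation = sum of FlowQuantity.
print(flow_summary.groupby("ScenarioName")["FlowQuantity"].sum())
```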
When configuring the grid through the configuration panel that comes up when clicking on Columns on the right-hand side of input & output tables, several options are available to help users find field names quickly and turn multiple fields on/off simultaneously:

To configure the grid, fields can be dragged and dropped:

Alternatively, instead of dragging and dropping, users can also right-click on the field(s) of interest to add them to the configuration areas. This can be done both in the list of column names at the top of the configuration window, as shown in the following screenshot, and on the column names in the grid itself (which we saw an example of in the “How to Access the New Grid Features” section above):

In the screenshot above (taken with Pivot Mode on, which is why the Column Labels area is also visible), the user right-clicked on the Flow Volume field and can now choose to add it to the Row Groups area (“Group by FlowVolume”), to the ∑ Values area (“Add FlowVolume to values”), or to the Column Labels area (“Add FlowVolume to labels”).
The next screenshot shows the result of a configured aggregated table grid:

When adding numeric fields to the ∑ Values area, the following aggregation options are available to the user:

For non-numeric fields, only the last 3 options are available as aggregations:

When adding an aggregation field by right-clicking on a field name in the grid, it looks as follows; here, the user right-clicked on a numerical field, Transportation Cost:

When filters are applied to the table, these are still applied when the table is being grouped by rows, aggregated, or pivoted:

It was mentioned above that the number in parentheses after the scenario name represents the number of rows that the aggregation was applied to. We can expand this by clicking on the greater than (>) icon to view the individual rows that make up the aggregation:

When users turn on pivot mode, an extra configuration area named Column Labels becomes available in addition to the Row Groups and ∑ Values areas:
Another example to show the total volumes of different flow types, filtered for finished goods, by scenario is shown in the next screenshot:

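In pandas terms, a pivot grid like the two above roughly corresponds to a pivot_table: the row groups become the index, the column labels become the columns, and the ∑ Values fields become the aggregated values. A made-up sketch with illustrative column names:

```python
import pandas as pd

flows = pd.DataFrame({
    "ScenarioName": ["Baseline", "Baseline", "Scenario2", "Scenario2"],
    "FlowType":     ["InterFacility", "Customer", "InterFacility", "Customer"],
    "FlowQuantity": [500, 707, 450, 650],
})

# Row groups = ScenarioName, column labels = FlowType, values = sum of FlowQuantity.
print(flows.pivot_table(index="ScenarioName", columns="FlowType",
                        values="FlowQuantity", aggfunc="sum"))
```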
So far, we have only looked at using the new grid features on the Optimization Flow Summary output table. Here, we will show some additional examples on different input and output tables.
In this first additional example, a pivot grid is configured to show the total production quantity for each facility by scenario:

In the next example, we will show how to configure a pivot grid to do a quick check on the shipment quantities: how much the backhaul vs linehaul quantity is and how much of each is set to Include vs Exclude:

In the following 2 examples, we are doing some quick analysis on the Simulation Inventory On Hand Report, a simulation (Throg) output table containing granular details on the inventory levels by location and product over time. In the first of these 2 examples, we want to see the average inventory by location and product for a specific scenario:

In the next example, we want to see how often products stock out at the different facilities in the Baseline scenario:

The last 2 examples in this section show 2 different views of Greenfield (Triad) outputs. The first example shows the average transport distance and time by scenario:

Lastly, we want to look at how much of the quantity delivered to customers falls in the 300-mile, 500-mile, and 750-mile service bands, by scenario:

Please take note of the following to make working with Row Grouping, Aggregated Table Grids, and Pivot Grids as effective as possible:


Watch the video to learn how to create and manage models in your workspace:
The Anura data model contains 11 categories of tables. The most basic of models can be built by populating only six tables (Customers, Facilities, Products, Customer Demand, Production Policies, and Transportation Policies); however, most models require a few tables from each modeling category to build out a realistic interpretation of the supply chain.
Tables are grouped into the following sections:

The following instructions show how to establish a local connection, using Azure Data Studio, to an Optilogic model that resides in the platform. These instructions will show you how to:
Watch the video for an overview of the connection process:
To make a local connection you must first open a Firewall connection between your current IP address and the Optilogic platform. Navigate to the Cloud Storage app – note that the app selection found on the left-hand side of the screen might need to be expanded. Check to see if your current IP address is authorized and if not, add a rule to authorize this IP address. You can optionally set an expiration date for this authorization.

If you are working from a new IP Address, a banner notification should be displayed to let you know that the new IP Address will need to be authorized.
From the Databases section of the Cloud Storage page, click on the database that you want to connect to. Then, click on the Connection Strings button to display all of the required connection information.

We have connection information for the following formats:
To select the format of your connection information, use the drop-down menu labeled Select Connection String:

For this example, we will copy and paste the strings for the ‘PSQL’ connection. The screen should look something like the following:

You can click on any of the parameters to copy them to your clipboard, and then paste them into the relevant field in Azure Data Studio when establishing the PSQL connection.
Within Azure Data Studio click the “Extensions” button and type in “postgres” in the search box to find and install the PostgreSQL extension.

Add a new connection in Azure Data Studio, change the connection type to “PostgreSQL,” and enter the arguments for “PSQL” from the Cloud Storage page. NOTE: you will need to click “Advanced” to type in the Port and to change the SSL mode to “require.”



Depending on your organization’s security protocols, one additional step might need to be taken to whitelist Optilogic’s Postgres SQL Server. This can be done by whitelisting the host URL (*.database.optilogic.app) and the port (6432). If you are unsure how to whitelist the server or do not have the necessary permissions, please contact your IT department or network administrator for assistance.
With the 2.8 version of Anura there are quite a few new tables and columns being added, along with a small number of existing columns that have been renamed. These updates enable new features including, but not limited to, the following:
We will cover the 2 instances of breaking changes below, followed by a more detailed review of schema adjustments specific to each solver. If you have any questions regarding these updates, please do not hesitate to reach out to support@optilogic.com for further information.
2.7_TransportationRates
The Transportation Rates table previously used by NEO has been renamed to Transportation Band Costing in 2.8. This is done to allow for a Hopper-specific table to be built out and take the name of Transportation Rates. All data upgrades will be processed automatically, but any ETL workflows that targeted the Transportation Rates table in 2.7 will need to be updated.
2.7_OptimizationCostToServeParentInformationReport
The Optimization Cost To Serve Parent Information Report has had the CurrentNodeName and ParentNodeName columns renamed to CurrentSiteName and ParentSiteName. All upgrades will be handled automatically, but any dashboards or external data visualizations that target these columns will need to be updated.
Watch the video to learn how to connect your data tool of choice directly to your Optilogic model:
If you are running into issues loading Atlas or Cosmic Frog where the loading spinner is stuck on the screen, you can attempt to perform a hard refresh of the browser. A stuck spinner can be caused by the website loading a stale token; refreshing the page without loading from cache should generate a fresh token.

To perform a hard refresh of the browser hit CTRL + F5. Alternatively, you can hit the refresh button in the browser while holding down CTRL.
If the issue persists, please reach out to support@optilogic.com.