Live Monitoring¶
Once model training is complete, live monitoring deploys the model to accept real-time data as input and generate model output for that data.
Make a Model Live¶
To make a model live for an assessment go to the Models view and follow the steps below:
1. Pick the desired model from the list and click the Make Live action.
2. Select the Generate Episodes option and click the Make Live button to start the STARTLIVEMODEL activity.
   - Track the activity within the Activity view and wait for it to complete.
Make an Entity Live¶
After a model is live, start live monitoring for the entities that are part of the datastream. Follow the steps below:
1. Go to the Entities view within the datastream, select the entities to monitor, and click the Start Monitoring action.
2. Select a live model and click the Start button.
Note
This starts live monitoring for the selected entities, which then accept real-time data and produce model output.
Start live monitoring from a point in time¶
Warning
Use this option only if the live model has stopped producing model output because of data or system disruptions.
To produce model output for a past time range when starting live monitoring, set the Retroactively include data from option while starting entity monitoring. Set this option to the point in time from which live monitoring should begin processing data. It is recommended not to set this option more than 30 days in the past.
Alternatively, you can use the REST API to start an EVAL activity that generates model output for the missed time range. The activity status can be tracked in the Activity view on the Falkonry UI.
Note
This is the recommended option if you do not want to disrupt or restart live monitoring for an entity in order to set the retroactive time option and generate model output for a historical time range.
The API requires the live model ID, the common model ID (i.e. M[0]), the entity names, and the time range for which model output needs to be generated. Use the [Live Context API](/apis/fetch_output) to get the common model ID, and use the Falkonry UI to get the live model ID of the relevant assessment. Then use the Flows API shown below to start generating model output for the historical time range.
Request Payload¶
{ "flowType": "EVAL", "datastream": "zzzzzzzzzzzzzzz", "assessment": "yyyyyyyyyyyyyyy", "model": "mmmmmmmmmmmmmm", "name": "Model output run-1", "spec": { "commonModelId": "ccccccccccccccc", "segments": [ { "entities": ["<entity_name_1>", "<entity_name_2>"], "timeRange": { "startTime": "0000000000000000", // in nanoseconds "endTime": "0000000000000000" // in nanoseconds } } ], "forLive": true, "withEpisodes": true } }
Example request
::::: tabs

::: code-tab bash
$ curl --location --request POST 'https://app3.falkonry.ai/api/1.2/accounts/xxxxxxxxxxxxxxx/flows' \
    --header 'Authorization: Bearer <token>' \
    --header 'Content-Type: application/json' \
    --data-raw '{
      "flowType": "EVAL",
      "datastream": "zzzzzzzzzzzzzzz",
      "assessment": "yyyyyyyyyyyyyyy",
      "model": "mmmmmmmmmmmmmm",
      "name": "Model output run-1",
      "spec": {
        "commonModelId": "ccccccccccccccc",
        "segments": [
          {
            "entities": ["<entity_name>"],
            "timeRange": {
              "startTime": "0000000000000000",
              "endTime": "0000000000000000"
            }
          }
        ],
        "forLive": true,
        "withEpisodes": true
      }
    }'
:::
::: code-tab python
import requests

URL = 'https://app3.falkonry.ai/api/1.2/accounts/xxxxxxxxxxxxxxx/flows'
TOKEN = '<token>'
HEADERS = {'Authorization': f'Bearer {TOKEN}'}
PAYLOAD = {
    "flowType": "EVAL",
    "datastream": "zzzzzzzzzzzzzzz",
    "assessment": "yyyyyyyyyyyyyyy",
    "model": "mmmmmmmmmmmmmm",
    "name": "Model output run-1",
    "spec": {
        "commonModelId": "ccccccccccccccc",
        "segments": [
            {
                "entities": ["<entity_name>"],
                "timeRange": {
                    "startTime": "0000000000000000",  # in nanoseconds
                    "endTime": "0000000000000000"     # in nanoseconds
                }
            }
        ],
        "forLive": True,
        "withEpisodes": True
    }
}

# Send the payload as JSON; requests sets the Content-Type header automatically.
response = requests.post(URL, headers=HEADERS, json=PAYLOAD)
print(response.json())
:::
:::::
Example response
{ "id": "1124191510837645312", "type": "entities.flow", "flowType": "EVAL", "status": "CREATED", "name": "Model output run-1", "datastream": "zzzzzzzzzzzzzzz", "assessment": "yyyyyyyyyyyyyyy", "model": "mmmmmmmmmmmmmm", "spec": { "isBatch": false, "datastreamStats": {}, "model": "mmmmmmmmmmmmmm", "commonModelId": "ccccccccccccccc", "segments": [ { "entities": [ "machine-1" ], "timeRange": { "startTime": "1684843200000000000", "endTime": "1687521600000000000" } } ], "assessmentRate": 2000000000, "config": {}, "inputList": [], "withEpisodes": true, "forEpisode": false, "tolerateNoData": false, "forLive": true }, "createTime": 1688098542651, "updateTime": 1688098542651, "createdBy": "1013056801100267520", "updatedBy": "1013056801100267520", "archived": false, "links": [] }