Kusto Query Language
The Kusto Query Language (KQL) is a plain-text, read-only language used to query large datasets. Some products that use KQL are Azure Data Explorer, Microsoft Defender ATP, Log Analytics, and Azure Sentinel.
Create List
let set1 = toscalar (AzureNetworkAnalytics_CL | take 1 | project FlowCount_d); print set1
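The example above stores a single value; to build an actual list, make_list can aggregate a column into a dynamic array. A minimal sketch using the same table and column:
let counts = toscalar(AzureNetworkAnalytics_CL
| take 10
| summarize make_list(FlowCount_d));    // counts becomes a dynamic array of up to 10 FlowCount_d values
print counts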
Compare two lists
let history = dynamic (['20.150.9.36','20.50.65.82']); let current = dynamic (['20.150.9.36','20.50.65.82', '10.0.0.10']); print set_difference(current, history)
let startdate = ago(1h);
let current = toscalar(AzureNetworkAnalytics_CL
| where FlowStartTime_t > startdate
| where SubType_s == "FlowLog" and FlowDirection_s == "O"
| extend DestinationIP = extract("(([0-9]{1,3})\\.([0-9]{1,3})\\.([0-9]{1,3})\\.(([0-9]{1,3})))", 1, DestPublicIPs_s)
| distinct DestinationIP
| summarize make_list(DestinationIP));
let history = toscalar (AzureNetworkAnalytics_CL
| where FlowStartTime_t <= startdate and FlowStartTime_t > ago(2d)
| where SubType_s == "FlowLog" and FlowDirection_s == "O"
| extend DestinationIP = extract("(([0-9]{1,3})\\.([0-9]{1,3})\\.([0-9]{1,3})\\.(([0-9]{1,3})))", 1, DestPublicIPs_s)
| distinct DestinationIP
| summarize make_list(DestinationIP));
print set_difference(current, history)
Common Operators
- Count will return the number of rows in the query.
- Where allows filtering the rows that are returned based on a condition.
- Take will return a specified number of rows, but with no guarantee which rows are returned.
- Sort will allow sorting the output into an order.
- Top will return the first N records sorted by the specified column.
- The Extend command allows creating new columns from existing columns or other data such as hard-coded values.
- Summarize will return the total values for a specific grouping of rows.
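As an illustration, here is a sketch that combines several of these operators on the Application Insights requests table (duration and name are standard columns there; the 1000 ms threshold is arbitrary):
requests
| where duration > 1000                                      // keep only slow requests
| summarize SlowRequests = count(), AvgDurationMs = avg(duration) by name
| top 5 by SlowRequests                                      // the five operations with the most slow requests
| extend AvgDurationSec = round(AvgDurationMs / 1000.0, 2)   // derived column from an existing one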
Selecting a custom dimension means extracting a value from the dynamic customDimensions field and showing it as a normal column.
Basic Example
If your logs contain a custom dimension called UserId, use this query:
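A minimal sketch (the traces table is assumed here; any table that has a customDimensions column works the same way):
traces
| extend UserId = tostring(customDimensions["UserId"])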
What this does:
- Reads the value using square brackets.
- Converts it to a string.
- Creates a new column named UserId.
You can select multiple custom dimensions in the same query:
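A sketch along the same lines, reusing the UserId and Region keys that appear elsewhere in this article:
traces
| extend UserId = tostring(customDimensions["UserId"]), Region = tostring(customDimensions["Region"])
| project timestamp, UserId, Region    // project keeps only the columns you name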
Tips
- Always use tostring() unless you know the value is numeric or boolean.
- Rename the extracted value to keep your results readable.
- Use project to control exactly what columns appear in the output.
This pattern is ideal for building reports or exporting data because it turns hidden metadata into visible columns that anyone can understand.
Once a custom dimension is extracted, you can filter and analyze it like any normal column.
Filter by Text Value
requests
| where tostring(customDimensions["Region"]) == "EU"
This keeps only rows where the Region custom dimension matches the value.
Filter by Numeric Value
If the value represents a number, convert it first:
requests
| extend DurationMs = todouble(customDimensions["DurationMs"])
| where DurationMs > 1000
Reuse Extracted Values
Using extend lets you reuse the value multiple times:
traces
| extend UserId = tostring(customDimensions["UserId"])
| where UserId != ""
| summarize Count = count() by UserId
Tips
- Use extend when the value appears more than once in your query.
- Always convert to the correct type before filtering.
- Avoid comparing raw dynamic values directly.
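Putting the last two tips together, a minimal sketch that converts both values before comparing them (assuming the Region and DurationMs dimensions from the examples above):
requests
| extend Region = tostring(customDimensions["Region"]), DurationMs = todouble(customDimensions["DurationMs"])
| where Region == "EU" and DurationMs > 1000    // compare typed values, not raw dynamics
| summarize SlowEuRequests = count() by name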
These patterns help you build fast, readable queries that work reliably across dashboards and alerts.
Before using a custom dimension, it helps to know what keys actually exist in your data.
View Raw Custom Dimensions
Start by inspecting a few records:
traces
| take 5
| project customDimensions
This shows the full dynamic object so you can see available keys and example values.
List All Keys Found in the Data
traces
| mv-expand key = bag_keys(customDimensions)
| summarize by Key = tostring(key)
This returns the distinct keys found across your dataset.
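To also see how often each key appears, which is useful when records carry different keys, a sketch along the same lines:
traces
| mv-expand key = bag_keys(customDimensions)    // one row per key per record
| summarize RecordCount = count() by Key = tostring(key)
| sort by RecordCount desc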
Correct Way to Access a Key
tostring(customDimensions["UserId"])
Avoid relying on the dot-notation pattern below: it fails for key names that contain spaces or special characters, and it still returns a dynamic value rather than a string.
customDimensions.UserId
Tips
- Key names are case-sensitive.
- Some records may not contain the same keys.
- Always test with a small sample before building complex queries.
These discovery steps prevent mistakes and make your queries more reliable from the start.