Tutorial Part 8: Activating the Blueprint
Use the completed blueprint as an actionable source of truth to drive automation.
Goal: In this final hands-on section, we will demonstrate how the rescile blueprint is more than just a model; it’s an actionable source of truth that can drive real-world automation and validation for our hybrid environment.
Step 28: Generating Actionable Configuration
One of the most powerful use cases for a rescile blueprint is “Blueprint-Driven Operations,” where the graph is queried to dynamically generate configuration files for other tools.
Use Case: Let’s generate a Prometheus scrape configuration to monitor all the microservices that make up our order-frontend application.
First, we write a GraphQL query to get the necessary information. We need the FQDNs of the microservices, which we dynamically generated in Part 4.
query GetMicroserviceEndpoints {
  application(filter: {name: "order-frontend"}) {
    composed_of {
      node {
        name
        fqdn
      }
    }
  }
}
Running this query in GraphiQL gives us a result like this:
{
  "data": {
    "application": [
      {
        "composed_of": [
          { "node": { "name": "auth-service", "fqdn": "auth-service.svc.example.com" } },
          { "node": { "name": "payment-service", "fqdn": "payment-service.svc.example.com" } },
          { "node": { "name": "order-api", "fqdn": "order-api.svc.example.com" } }
        ]
      }
    ]
  }
}
Now, imagine a simple script (e.g., Python, Bash with jq) that runs this query against the rescile-ce GraphQL API and formats the result into a prometheus.yml file.
Conceptual Script (Python):
import requests
import yaml

# Query the rescile GraphQL API
query = """
query GetMicroserviceEndpoints {
  application(filter: {name: "order-frontend"}) {
    composed_of { node { name fqdn } }
  }
}
"""

response = requests.post("http://localhost:7600/graphql", json={'query': query}, timeout=10)
response.raise_for_status()
data = response.json()

# Format the data for Prometheus (port 8080 assumed for all services)
targets = [
    f"{service['node']['fqdn']}:8080"
    for service in data['data']['application'][0]['composed_of']
]

prometheus_config = {
    'scrape_configs': [{
        'job_name': 'order-frontend-microservices',
        'static_configs': [{
            'targets': targets
        }]
    }]
}

# Write the config file
with open('prometheus.yml', 'w') as f:
    yaml.dump(prometheus_config, f)

print("Generated prometheus.yml")
Result: The script produces a valid prometheus.yml file, generated directly from our blueprint.
prometheus.yml
scrape_configs:
- job_name: order-frontend-microservices
  static_configs:
  - targets:
    - auth-service.svc.example.com:8080
    - payment-service.svc.example.com:8080
    - order-api.svc.example.com:8080
This demonstrates how the blueprint becomes the single source of truth. Instead of manually updating monitoring configurations, you update the blueprint (e.g., by adding a new microservice to application.csv), and the operational tooling configuration is generated automatically.
Step 29: Desired State Validation
The blueprint defines the desired state of your architecture, including compliance rules. This can be used to continuously validate your real-world environment and detect “drift.”
Use Case: In Part 5, Step 17, our dora.toml compliance rule mandated min_tls_version = "1.3" for the connection to our aurora-main managed database. We can write a script that checks if reality matches this desired state.
First, a query to get the desired state from the blueprint:
query GetDesiredTlsVersion {
  application(filter: {name: "order-frontend"}) {
    connects_to(filter: {node: {name: "aurora-main"}}) {
      properties {
        controls
      }
    }
  }
}
This would return the min_tls_version: "1.3" property we set on the edge.
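Because the controls come back deeply nested, a small helper is handy for digging out the desired value. The response dict below is a hypothetical sample shaped like the query above (it mirrors the jq path used in the Bash script later in this step):

```python
# Hypothetical GraphQL response, shaped like the GetDesiredTlsVersion query.
response = {
    "data": {
        "application": [{
            "connects_to": [{
                "properties": {"controls": [{"min_tls_version": "1.3"}]}
            }]
        }]
    }
}

def desired_tls(resp):
    """Walk the nested response down to the min_tls_version control."""
    edge = resp["data"]["application"][0]["connects_to"][0]
    return edge["properties"]["controls"][0]["min_tls_version"]

print(desired_tls(response))  # → 1.3
```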
Next, a conceptual validation script would:
- Run the GraphQL query above to get the desired state (1.3).
- Use the AWS SDK/CLI to connect to the AWS API and query the actual configuration of the aurora-main RDS instance.
- Compare the desired state from the blueprint with the actual state from the cloud provider.
- If they don't match (e.g., the actual minimum TLS version is configured to 1.2), the script reports a drift and can trigger an alert or an automated remediation action.
Conceptual Script:
# 1. Get desired state from rescile blueprint
DESIRED_TLS=$(curl -s -X POST -H "Content-Type: application/json" \
  --data '{"query": "{ application(filter: {name: \"order-frontend\"}) { connects_to(filter: {node: {name: \"aurora-main\"}}) { properties { controls } } } }"}' \
  http://localhost:7600/graphql | jq -r '.data.application[0].connects_to[0].properties.controls[0].min_tls_version')

echo "Desired min_tls_version from blueprint: $DESIRED_TLS"

# 2. (Conceptual) Get actual state from cloud provider API
# In a real script, you would use the AWS CLI or SDK here.
ACTUAL_TLS="1.2"
echo "Actual min_tls_version from AWS API: $ACTUAL_TLS"

# 3. Compare and report drift
if [ "$DESIRED_TLS" != "$ACTUAL_TLS" ]; then
  echo "DRIFT DETECTED! Blueprint requires TLS $DESIRED_TLS, but actual configuration is TLS $ACTUAL_TLS."
  # exit 1
else
  echo "Configuration matches the blueprint. No drift detected."
fi
Result: This workflow closes the loop between policy, architecture, and reality. The rescile blueprint acts as the source of truth for automated audits and continuous compliance, ensuring that your deployed infrastructure adheres to the rules you’ve defined as code.
Step 29a: Generating Structured Reports
While scripts can generate custom configuration, rescile has a built-in capability to generate structured data reports directly from the graph using Report definitions. This is ideal for creating artifacts like compliance summaries, service catalogs, or input for other systems.
Use Case: Let’s create a JSON report summarizing the order-frontend application, including its owner, TCO, and microservices.
First, create a new reports directory and a new report definition TOML file.
data/reports/application_summary.toml
# This report will process 'application' resources.
origin_resource = "application"
[[output]]
# Create a new resource of type 'application_summary_report'.
resource_type = "application_summary_report"
# Dynamically name the report resource.
name = "summary-for-{{ origin_resource.name }}"
# This rule only applies to the 'order-frontend' application.
match_on = [ { property = "name", value = "order-frontend" } ]
# The 'template' block defines the structure of the generated resource's properties.
# It uses Tera templating to pull data from the graph.
template = """
{
  "applicationName": "{{ origin_resource.name }}",
  "businessOwner": "{{ origin_resource.provider[0].name }}",
  "totalCostOfOwnership": {{ origin_resource.tco | default(value=0) }},
  "hosting": {
    "platform": "{{ origin_resource.managed_k8s_cluster[0].name }}",
    "region": "{{ origin_resource.managed_k8s_cluster[0].cloud_region[0].name }}"
  },
  "components": [
    {% for service in origin_resource.microservice -%}
    { "name": "{{ service.name }}", "fqdn": "{{ service.fqdn }}" }
    {%- if not loop.last %},{% endif %}
    {%- endfor %}
  ]
}
"""
Run rescile-ce serve.
Result: The importer processes this new report definition and creates a new application_summary_report node in the graph. The properties of this node are a structured JSON object generated from the template.
You can query this report directly:
query GetApplicationSummary {
  application_summary_report {
    name
    # The generated properties are available as fields.
    applicationName
    businessOwner
    totalCostOfOwnership
  }
}
This demonstrates how rescile’s reporting engine can transform your graph into customized, structured data artifacts, perfect for system integrations, generating compliance evidence, or populating service catalogs.
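To illustrate the service-catalog use case, a consumer of the report could flatten its properties into a single catalog row. The payload below is a hypothetical sample shaped like the Tera template in application_summary.toml; the owner, platform, and region values are invented for the sketch.

```python
import json

# Hypothetical report payload, shaped like the template in application_summary.toml.
report = json.loads("""
{
  "applicationName": "order-frontend",
  "businessOwner": "retail-bu",
  "totalCostOfOwnership": 12500,
  "hosting": {"platform": "eks-prod", "region": "eu-central-1"},
  "components": [
    {"name": "auth-service", "fqdn": "auth-service.svc.example.com"},
    {"name": "order-api", "fqdn": "order-api.svc.example.com"}
  ]
}
""")

def catalog_row(r):
    """Flatten the nested report into one service-catalog entry."""
    return {
        "app": r["applicationName"],
        "owner": r["businessOwner"],
        "tco": r["totalCostOfOwnership"],
        "location": f'{r["hosting"]["platform"]}/{r["hosting"]["region"]}',
        "component_count": len(r["components"]),
    }

print(catalog_row(report))
```

In practice the payload would come from the GetApplicationSummary query rather than a literal string, but the flattening logic stays the same.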