Putting Your Graph to Work

Leveraging Your Digital Twin for Automation, Auditing, and Insight


5. Putting Your Graph to Work: Practical Use Cases

Building the graph is the first step. The true power of rescile is unlocked when you treat this graph as a dynamic, queryable “digital twin” of your entire hybrid estate. It becomes the single source of truth that drives automation, provides deep architectural insights, and enables continuous compliance. This section explores several powerful ways to use the data you’ve just modeled.

Automation and Infrastructure as Code (IaC)

Your infrastructure graph can directly feed your automation toolchains, ensuring that your declared architecture is what gets deployed.

  • Generating Terraform Variables: Instead of manually maintaining .tfvars files, you can generate them dynamically. A script can query the GraphQL API for all resources of a certain type, environment, or owning team, and format the output as a terraform.tfvars.json file.

    Example GraphQL Query for Terraform:

    query GetProdServersForTerraform {
      server(filter: { managed_by: "team-alpha" }) {
        name
        os
        # Imagine these properties were added via your models
        instance_type
        memory_gb
      }
    }
    

    This query fetches all servers managed by team-alpha. The output can be directly transformed into a list of server configurations for a Terraform module to provision, ensuring perfect alignment between your model and reality.
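
    A minimal script for this flow might look like the following sketch. It assumes the graph is exposed as a GraphQL endpoint over HTTP; the URL is a placeholder you would adapt to your own deployment.

    Example Script to Generate terraform.tfvars.json (sketch):

    import json

    import requests

    RESCILE_GRAPHQL_URL = "http://localhost:8080/graphql"  # placeholder endpoint

    QUERY = """
    query GetProdServersForTerraform {
      server(filter: { managed_by: "team-alpha" }) {
        name
        os
        instance_type
        memory_gb
      }
    }
    """

    response = requests.post(RESCILE_GRAPHQL_URL, json={"query": QUERY}, timeout=30)
    response.raise_for_status()
    servers = response.json()["data"]["server"]

    # Terraform picks up terraform.tfvars.json automatically when present in the working directory.
    with open("terraform.tfvars.json", "w") as fh:
        json.dump({"servers": servers}, fh, indent=2)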

  • Generating Complex tfvars with Report Templates: For more complex scenarios that require data from multiple related nodes, you can use Report Templates to traverse the graph and generate structured configuration files like terraform.tfvars.json. This approach keeps your automation logic declarative and version-controlled alongside your infrastructure models.

    Let’s generate a complete configuration for the asseteditor application, pulling data from the application node and its related image, network, and platform nodes.

    Example Report Template (data/reports/terraform_asseteditor.toml):

    origin_resource = "application"
    
    [[output]]
    resource_type = "terraform_variables"
    name = "tfvars-for-{{ origin_resource.name }}"
    match_on = [
      { property = "name", value = "asseteditor" }
    ]
    template = """
    {
      "application_name": "{{ origin_resource.name }}",
      "network": "{{ origin_resource.network[0].name }}",
      "ports": {{ origin_resource.port | json_encode() }},
      "image_details": {
        "name": "{{ origin_resource.image[0].name }}",
        "platform": "{{ origin_resource.image[0].platform[0].name }}"
      },
      "compute_resources": {
        "cores": {{ origin_resource.core }},
        "memory_gb": {{ origin_resource.memory }}
      }
    }
    """
    

    After the importer runs, rescile creates a new terraform_variables resource in the graph. You can query its properties, which correspond to the top-level keys of the JSON object generated by the template.

    GraphQL Query to Retrieve the Generated tfvars:

    query GetAssetEditorTfVars {
      terraform_variables(filter: {name: "tfvars-for-asseteditor"}) {
        application_name
        network
        ports
        image_details
        compute_resources
      }
    }
    

    The data.terraform_variables[0] object from the GraphQL response is the desired configuration, ready to be saved as terraform.tfvars.json. This ensures your Terraform deployments are always in sync with your architectural digital twin.

    Resulting Configuration Object:

    {
      "application_name": "asseteditor",
      "network": "edge",
      "ports": [
        "80",
        "443"
      ],
      "image_details": {
        "name": "frontend",
        "platform": "kubernetes"
      },
      "compute_resources": {
        "cores": 1,
        "memory_gb": 32
      }
    }
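
    As a final step, here is a short sketch of saving that response as terraform.tfvars.json. The endpoint URL is a placeholder, and the response shape mirrors the query above, so verify it against your own API.

    Example Script to Save the Generated tfvars (sketch):

    import json

    import requests

    QUERY = """
    query GetAssetEditorTfVars {
      terraform_variables(filter: {name: "tfvars-for-asseteditor"}) {
        application_name
        network
        ports
        image_details
        compute_resources
      }
    }
    """

    resp = requests.post("http://localhost:8080/graphql", json={"query": QUERY}, timeout=30)  # placeholder endpoint
    tfvars = resp.json()["data"]["terraform_variables"][0]

    with open("terraform.tfvars.json", "w") as fh:
        json.dump(tfvars, fh, indent=2)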
    
  • Dynamic Ansible Inventories: Create an Ansible dynamic inventory script that queries the rescile API. This allows you to target hosts based on any attribute in the graph, such as the application they run, the business owner, or their compliance status.
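
    A minimal dynamic inventory sketch is shown below; the endpoint URL and the server/application field names are assumptions to adapt to your own models. Ansible invokes the script with --list and consumes the JSON it prints.

    Example Dynamic Inventory Script (sketch):

    #!/usr/bin/env python3
    import json
    import sys

    import requests

    QUERY = """
    query InventoryHosts {
      server {
        name
        managed_by
        application { name }
      }
    }
    """

    def main():
        servers = requests.post("http://localhost:8080/graphql",  # placeholder endpoint
                                json={"query": QUERY}, timeout=30).json()["data"]["server"]
        inventory = {"_meta": {"hostvars": {}}}
        for srv in servers:
            # Group each host by its owning team and by the applications it runs.
            groups = [srv.get("managed_by") or "ungrouped"]
            groups += [app["name"] for app in srv.get("application") or []]
            for group in groups:
                inventory.setdefault(group, {"hosts": []})["hosts"].append(srv["name"])
            inventory["_meta"]["hostvars"][srv["name"]] = {"managed_by": srv.get("managed_by")}
        print(json.dumps(inventory))

    if __name__ == "__main__" and "--list" in sys.argv:
        main()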

  • Driving Provider CLIs: Use the graph to generate shell scripts or commands for provider-specific command-line interfaces (e.g., aws, az, gcloud). You can query for all resources with a specific tag or belonging to a certain application and pipe the results into a loop to perform bulk operations, such as security audits or configuration updates.
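
    For example, the following sketch emits aws CLI tagging commands for every server owned by a team; the cloud_id property is an assumed enrichment holding the provider resource ID.

    Example Script to Emit aws CLI Commands (sketch):

    import requests

    QUERY = """
    query TeamAlphaServers {
      server(filter: { managed_by: "team-alpha" }) {
        name
        cloud_id   # assumed property holding the provider resource ID, e.g. an EC2 instance ID
      }
    }
    """

    servers = requests.post("http://localhost:8080/graphql",  # placeholder endpoint
                            json={"query": QUERY}, timeout=30).json()["data"]["server"]

    # Print one command per server; review the output, then pipe it into a shell for the bulk update.
    for srv in servers:
        print(f"aws ec2 create-tags --resources {srv['cloud_id']} "
              f"--tags Key=owner,Value=team-alpha Key=modeled_as,Value={srv['name']}")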

  • Intelligent Kubernetes Deployments: In a large cluster, tracking which Deployments use which ConfigMaps across many Helm charts is a common challenge. Updating a shared ConfigMap often leads to risky, cluster-wide rollouts. By ingesting Kubernetes manifests, rescile can build a live dependency graph. An automation script can then query this graph to identify exactly which Deployments mount a specific ConfigMap and dynamically generate a targeted Kustomize patch or Helm values to safely update only the affected workloads.
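
    A sketch of that automation is shown below. It assumes the graph links configmap nodes to the deployment nodes that mount them; the relationship and annotation names are illustrative.

    Example Script for Targeted ConfigMap Rollouts (sketch):

    from datetime import datetime, timezone

    import requests
    import yaml  # PyYAML

    QUERY = """
    query ConfigMapConsumers {
      configmap(filter: { name: "shared-config" }) {
        deployment {      # assumed relationship derived from the ingested manifests
          name
          namespace
        }
      }
    }
    """

    configmaps = requests.post("http://localhost:8080/graphql",  # placeholder endpoint
                               json={"query": QUERY}, timeout=30).json()["data"]["configmap"]

    revision = datetime.now(timezone.utc).isoformat()
    for cm in configmaps:
        for dep in cm.get("deployment") or []:
            # Bumping a pod-template annotation restarts only this Deployment,
            # instead of a cluster-wide rollout of every chart that shares the ConfigMap.
            patch = {
                "apiVersion": "apps/v1",
                "kind": "Deployment",
                "metadata": {"name": dep["name"], "namespace": dep["namespace"]},
                "spec": {"template": {"metadata": {"annotations": {
                    "rescile.example/config-revision": revision,  # illustrative annotation key
                }}}},
            }
            with open(f"patch-{dep['namespace']}-{dep['name']}.yaml", "w") as fh:
                yaml.safe_dump(patch, fh)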

Continuous Auditing and Governance

The graph transforms auditing from a periodic, manual task into a continuous, automated process. Compliance rules are no longer just documents; they are queryable facts within your infrastructure’s digital twin.

  • Automated Compliance Checks: Schedule scripts to run GraphQL queries that check for policy violations. The compliance enrichment you performed in Step 2 is now your audit trail.

    Example GraphQL Query for Auditing:

    query FindUnencryptedConnections {
      # Find all application-to-database connections...
      application {
        database {
          properties {
            # ...and return their security controls.
            # The query only fetches the 'controls' array; the calling script then
            # flags any connection where the array is absent or does not contain
            # the mandatory "SEC-DB-01" control ID.
            controls
          }
          node { name }
          sourceNode: parent { name } # 'parent' gets the source application
        }
      }
    }
    

    Running this query daily immediately identifies any database connections that have not been properly enriched by your security.toml compliance file, giving you a real-time view of your security posture. Furthermore, the compliance-as-code model lets you define your entire security posture in code, for example as an OSCAL System Security Plan (SSP), and ensure it is distributed to all responsible parties for implementation.
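
    A sketch of the scheduled check is shown below. It performs the client-side control lookup described in the query comments; the endpoint URL is a placeholder and the response shape mirrors the query above, so verify it against your own API.

    Example Audit Script (sketch):

    import sys

    import requests

    QUERY = """
    query FindUnencryptedConnections {
      application {
        database {
          properties { controls }
          node { name }
          sourceNode: parent { name }
        }
      }
    }
    """

    data = requests.post("http://localhost:8080/graphql",  # placeholder endpoint
                         json={"query": QUERY}, timeout=30).json()["data"]

    violations = []
    for app in data.get("application") or []:
        for conn in app.get("database") or []:
            controls = (conn.get("properties") or {}).get("controls") or []
            if "SEC-DB-01" not in controls:
                violations.append(f"{conn['sourceNode']['name']} -> {conn['node']['name']}")

    if violations:
        print("Connections missing SEC-DB-01:", *violations, sep="\n  ")
        sys.exit(1)  # a non-zero exit makes the scheduled job visibly fail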

Architectural Insight and Impact Analysis

The graph provides a holistic view of your systems and their interdependencies, enabling powerful analysis that is impossible with siloed tools.

  • Generating Diagrams and Reports: Use rescile-ce convert -o graph.graphml to export the graph for visualization in tools like yEd. Additionally, you can define Report Templates in TOML to generate structured data artifacts (like JSON or YAML) directly from the graph. These reports are created as new resources in the graph and can be queried to produce service catalogs, compliance evidence, or configuration for other systems.

  • Blast Radius Analysis: Before performing maintenance or in the event of an outage, you can instantly determine the potential impact.

    Example GraphQL Query for Impact Analysis:

    query BillingDatabaseImpact {
      # If this database goes down...
      database(filter: {name: "billing-db-prod"}) {
        name
        # ...which applications are affected?
        application {
          name
          owner
        }
      }
    }
    
    graph TD
      subgraph "Blast Radius for billing-db-prod"
        DB["database<br/>billing-db-prod"]
        App1["application<br/>billing-api<br/>{owner: team-alpha}"]
        App2["application<br/>reporting-service<br/>{owner: team-gamma}"]
        App1 -- "depends on" --> DB
        App2 -- "depends on" --> DB
      end

    This simple traversal query immediately tells you which applications depend on a critical component, allowing you to notify the correct teams.
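
    The same query can drive notifications. The sketch below groups the affected applications by owner; the endpoint URL is a placeholder.

    Example Impact Notification Script (sketch):

    from collections import defaultdict

    import requests

    QUERY = """
    query BillingDatabaseImpact {
      database(filter: {name: "billing-db-prod"}) {
        name
        application { name owner }
      }
    }
    """

    databases = requests.post("http://localhost:8080/graphql",  # placeholder endpoint
                              json={"query": QUERY}, timeout=30).json()["data"]["database"]

    by_owner = defaultdict(list)
    for db in databases:
        for app in db.get("application") or []:
            by_owner[app["owner"]].append(app["name"])

    for owner, apps in by_owner.items():
        print(f"Notify {owner}: affected applications {', '.join(apps)}")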

Financial Management and Cost Simulation (FinOps)

By enriching your graph with cost data (e.g., from cloud bills or software licenses), you can turn your architectural model into a powerful FinOps tool.

  • Cost Allocation and Showback: Since the graph links technical resources to business owners and teams (like team-alpha), you can accurately allocate costs. A query can sum the costs of all resources owned_by a specific team, providing clear showback or chargeback data.

  • Cost Impact Simulation: Model the financial impact of architectural changes before you make them. If you enrich server resources with a monthly_cost property, you can run a query like: “What is the total monthly cost of all java applications running on-prem?” and compare this to the projected cost of running them in the cloud to build a data-driven business case for migration.
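
    A sketch of that simulation query is shown below; the language, location, and monthly_cost properties are assumed enrichments, and the endpoint URL is a placeholder.

    Example Cost Simulation Script (sketch):

    import requests

    QUERY = """
    query JavaOnPremCost {
      application(filter: { language: "java" }) {   # 'language' is an assumed property
        name
        server {
          name
          location       # assumed enrichment, e.g. "on-prem" or "cloud"
          monthly_cost   # assumed enrichment from billing or license data
        }
      }
    }
    """

    apps = requests.post("http://localhost:8080/graphql",  # placeholder endpoint
                         json={"query": QUERY}, timeout=30).json()["data"]["application"]

    total = sum(
        float(srv.get("monthly_cost") or 0)
        for app in apps
        for srv in app.get("server") or []
        if srv.get("location") == "on-prem"
    )
    print(f"Current on-prem spend for java applications: ${total:,.2f}/month")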

SLA and Reliability Calculation

Model the reliability of your services by adding Service Level Agreement (SLA) or Service Level Objective (SLO) data as properties on your resources.

  • Composite SLA Calculation: For a service composed of multiple components in a dependency chain (e.g., frontend-app -> billing-api -> billing-db-prod), the end-to-end availability is the product of each component’s availability. A script can traverse the dependency graph for a given service, retrieve the sla property from each resource in the path, and calculate the composite SLA for the entire service. This allows you to identify weak links in your architecture and predict the reliability of user-facing features.

    graph LR
      A["frontend-app<br/>SLA: 99.95%"] --> B["billing-api<br/>SLA: 99.9%"] --> C["billing-db-prod<br/>SLA: 99.99%"]
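
    A sketch of the composite calculation for the chain above; the end-to-end figure is simply the product of the individual availabilities.

    Example Composite SLA Calculation (sketch):

    from functools import reduce

    def composite_sla(slas_percent):
        """Multiply per-component availabilities (given in percent) into one end-to-end figure."""
        return reduce(lambda acc, sla: acc * (sla / 100.0), slas_percent, 1.0) * 100.0

    # The chain shown above: frontend-app -> billing-api -> billing-db-prod
    chain = [99.95, 99.9, 99.99]
    print(f"Composite SLA: {composite_sla(chain):.2f}%")  # ~99.84%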

Interacting with Your Infrastructure via LLMs (GenAI)

Your rescile graph is a powerful, structured knowledge base: a digital twin of your enterprise. By connecting it to a Large Language Model (LLM) through the Model Context Protocol (MCP), you can answer complex operational, contractual, and architectural questions in natural language. Instead of relying on the LLM’s general knowledge, you provide it with real-time, accurate context from your graph, leading to precise and trustworthy answers.

The workflow involves an agent translating a user’s question into GraphQL queries, fetching the results, and injecting them as context into a prompt for the LLM.

  • Scenario: A product manager asks, “Give me a summary of the ‘billing-api’ service, who is the operational owner, and what legal party is responsible for its on-premise hosting contract?”

  • Step 1: Context Retrieval via GraphQL: An agent translates the question into a query to traverse the application’s dependencies.

  • Step 2: Constructing the LLM Prompt: The structured JSON output from the query is fed into a prompt that instructs the LLM how to answer, preventing hallucination.

  • Step 3: Synthesized LLM Response: The LLM processes the context and generates a concise, human-readable answer grounded in the facts from your graph.

    sequenceDiagram
      actor User
      participant Agent
      participant rescile API as rescile Graph API
      participant LLM
      User->>Agent: Asks natural language question
      Agent->>rescile API: Translates question to GraphQL query
      rescile API-->>Agent: Returns structured data (JSON)
      Agent->>LLM: Injects data as context into prompt
      LLM-->>Agent: Synthesizes human-readable answer
      Agent-->>User: Displays final answer
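
A minimal sketch of that retrieve-then-prompt loop is shown below. The GraphQL query, the contract-related field names, and the call_llm stub are illustrative; plug in whichever chat-completion client (or MCP-connected tool) your organization uses.

    import json

    import requests

    QUERY = """
    query BillingApiSummary {
      application(filter: { name: "billing-api" }) {
        name
        owner
        server {
          name
          hosting_contract {   # assumed relationship to a modeled contract node
            legal_party
          }
        }
      }
    }
    """

    def call_llm(prompt: str) -> str:
        # Stub: swap in your chat-completion client of choice, directly or via an MCP-connected tool.
        return "(LLM response would appear here)"

    context = requests.post("http://localhost:8080/graphql",  # placeholder endpoint
                            json={"query": QUERY}, timeout=30).json()["data"]

    prompt = (
        "Answer the user's question using ONLY the JSON context below. "
        "If the context does not contain the answer, say so.\n\n"
        f"Context:\n{json.dumps(context, indent=2)}\n\n"
        "Question: Summarize the 'billing-api' service, its operational owner, "
        "and the legal party responsible for its on-premise hosting contract."
    )
    print(call_llm(prompt))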

Ready for Enterprise Scale?

The rescile-ce command-line tool gives you the full power of our graph engine. To bring this capability to your entire organization, the commercial rescile platform provides an extensive web UI, a secure and scalable graph server, and seamless integrations with your existing enterprise systems.

Contact us to schedule a demo and see how our enterprise platform can transform your hybrid cloud operations.