Documentation Index

Fetch the complete documentation index at: https://docs.vantage.sh/llms.txt

Use this file to discover all available pages before exploring further.

Snowflake Data Sharing gives your Snowflake account read-only SQL access to a daily_costs table containing the same processed cost data you see in Vantage Cost Reports, delivered via Secure Data Sharing.

This feature is built for data engineering teams and FinOps practitioners who already operate in warehouse-native workflows. Once enabled, you can query Vantage cost data alongside your own datasets, power your BI tools without building ingestion pipelines, and join cost data with revenue, usage, or organizational metadata directly in SQL.

daily_costs contains rows from every provider you have connected to Vantage—AWS, Azure, GCP, Snowflake, AI providers, and so on—distinguished by the provider column. The schema is normalized so that the same column set works across all providers. Where the meaning of a column varies by provider, Discovering Your Data below shows you how to introspect what’s actually in your table.

How It Works

Vantage stores your processed cost data as an Apache Iceberg table in Vantage-owned AWS infrastructure. A managed Snowflake catalog integration in the Vantage Snowflake account makes that table queryable, and a Secure Data Share exposes it to your Snowflake account. Key properties of this architecture:
  • No data is copied into your Snowflake account. The Secure Data Share gives your account queryable access to data that lives in the Vantage-managed Iceberg table.
  • Data refreshes whenever the Vantage ETL completes for a billing period. This typically tracks the same cadence you see in the Vantage console for that provider. See the provider data refresh documentation for details.
  • The share lives in the same cloud region as your Snowflake account, so your queries do not incur cross-region or internet egress on Vantage’s side.

Cost Considerations

You pay Snowflake for the compute used to run your queries (warehouse credits). Vantage does not charge per query. Because the Secure Data Share lives in the same cloud region as your Snowflake account, you do not incur cross-region or internet egress on Vantage’s side when you query the data. For guidance on sizing the warehouse you point at the share, see Snowflake’s warehouse considerations.

Get Access

Snowflake Data Sharing is available to Vantage Enterprise customers.
To enable Snowflake Data Sharing, first contact your Vantage Customer Success representative or email support@vantage.sh. Vantage will enable the connection screen for your account. Once enabled, complete the setup in the Vantage console:
1. Open the Snowflake Export integration

From the top navigation, click Settings. On the left navigation, select Integrations, then click Snowflake Export.
2. Provide your Snowflake details

Enter your Data Sharing Account Identifier (in the form ORGNAME.ACCOUNTNAME) and select the Region where your Snowflake account is hosted from the dropdown.
3. Connect

Click Connect Snowflake. Vantage provisions a Secure Data Share scoped to your account. When the share is ready, it will appear in your Snowflake account. Follow the steps below to connect it.
The share is one-way and read-only. Vantage cannot read or write into your Snowflake account, and your account cannot modify data through the share.

Connect the Share in Snowflake

After Vantage confirms your share is ready, run the following in Snowflake to mount the data. Replace the placeholders with the values Vantage provided.
1. Verify the inbound share

You can confirm the share is available in the Snowflake UI by navigating to Data Sharing > Shared with you in the left sidebar. The Vantage share will appear under Direct shares. Alternatively, run the following in the SQL console:
SHOW SHARES LIKE 'VANTAGE_SHARE_%';
2. Create a database from the share

Choose any local database name. The examples on this page use VANTAGE_DB. See Snowflake’s CREATE DATABASE … FROM SHARE reference for full syntax.
CREATE DATABASE VANTAGE_DB
  FROM SHARE <vantage_account_locator>.<vantage_share_name>;
Vantage will provide the full share identifier (e.g., ACME123.VANTAGE_AWS_US_EAST_1.VANTAGE_SHARE_ACME).
If you already have a VANTAGE database from the Snowflake integration, use a different name to avoid a collision.
3. Grant access to a role

Grant the imported privileges to whichever role will run queries.
GRANT IMPORTED PRIVILEGES ON DATABASE VANTAGE_DB TO ROLE <your_role>;
4. Confirm the schema and table

Snowflake folds unquoted identifiers to uppercase, so the namespace and table appear as <YOUR_NAMESPACE>.DAILY_COSTS. Queries can reference either case.
SHOW SCHEMAS IN DATABASE VANTAGE_DB;
SHOW TABLES IN SCHEMA VANTAGE_DB.<your_namespace>;
5. Run a test query

Pull the last seven days of spend by service to confirm the share is working.
SELECT service_name, SUM(amortized_cost) AS total
FROM VANTAGE_DB.<your_namespace>.daily_costs
WHERE charge_period_start >= DATEADD(day, -7, CURRENT_DATE())
  AND charge_category IN ('Usage', 'Discounted')
  AND provider = 'aws'
GROUP BY service_name
ORDER BY total DESC
LIMIT 25;

The daily_costs Table

daily_costs is the table exposed through Snowflake Data Sharing. It contains one row per resource per day, after Vantage’s ETL has processed it and merged any Virtual Tags you’ve configured. The numbers match what you see in Cost Reports for the same filters.

Schema

  • provider (VARCHAR): Identifies which connected provider the row came from, lowercase (e.g., aws, azure, gcp, anthropic, datadog). Always populated.
  • service_name (VARCHAR): The service the row belongs to (e.g., AmazonEC2, claude-sonnet-4, Datadog Infrastructure). Values are scoped per provider.
  • region_id (VARCHAR): Provider region identifier where the resource ran. May be empty for SaaS providers without a region concept.
  • resource_id (VARCHAR): Provider-native resource identifier (e.g., AWS ARN, Azure resource ID, GCP resource path). This is the raw identifier, not a friendly display name. May be empty for providers that do not expose a per-resource identity.
  • service_category (VARCHAR): Vantage’s normalized service category (e.g., Compute, Storage, AI and Machine Learning). Values are scoped per provider. See Discovering Your Data to see what’s in your table.
  • service_subcategory (VARCHAR): Vantage’s normalized service subcategory. Values are scoped per provider.
  • consumed_unit (VARCHAR): Unit of measurement for consumed_quantity. Varies by provider (e.g., Hrs, GB-Mo, Tokens, Requests).
  • charge_period_start (DATE): Day the charge applies to.
  • charge_period_end (DATE): Exclusive upper bound; always charge_period_start + 1 day.
  • billed_cost (NUMBER(26,10)): Cost without amortizing upfront commitments. Closest to AWS “Unblended”.
  • amortized_cost (NUMBER(26,10)): Cost with upfront commitments (RIs, Savings Plans) spread across their term.
  • consumed_quantity (NUMBER(26,10)): Amount of consumed_unit consumed.
  • billing_account_id (VARCHAR): Top-level billing account (e.g., AWS payer/management account, GCP billing account, Azure billing account). May be empty for providers without a multi-account hierarchy.
  • sub_account_id (VARCHAR): Sub-account, project, or workspace within billing_account_id (e.g., AWS linked account, GCP project, Azure subscription).
  • charge_category (VARCHAR): Classifies the row as Usage, Discounted, Undiscounted, Tax, Refund, or Credit. See charge_category values below.
  • tags (VARCHAR): JSON-encoded resource tags plus any Vantage Virtual Tags. Default is '{}'. May be empty for providers that do not support tagging.
billed_cost and amortized_cost can each be NULL on rows where that view doesn’t apply. Always use SUM() (which ignores NULL) and pick one column or the other. Never add them together, because some rows populate both with the same value and would be double-counted.
The table also contains an import_token column. This is an internal Vantage identifier used for data loading and is not meaningful for querying. You can safely ignore it.

Date Semantics

Rows are daily. charge_period_start is the day the charge applies to, and charge_period_end is always exactly one day later (exclusive). To select a date range, filter on charge_period_start only:
WHERE charge_period_start >= DATE '2026-04-01'
  AND charge_period_start <  DATE '2026-05-01'
If you filter on charge_period_end instead, keep in mind that it’s already a day ahead, which makes it easy to accidentally drop the first or last day of your range. Using charge_period_start on both sides avoids this issue. Every example on this page follows this logic.

charge_category Values

charge_category is a normalized category that does not depend on the upstream provider’s vocabulary. Rows fall into one of:
  • Usage: Standard usage charges. The default for rows that do not match any specific bucket.
  • Discounted: Discounted usage rows (e.g., RI- or Savings-Plan-covered usage).
  • Undiscounted: Usage at on-demand rates with no commitment applied.
  • Tax: Tax line items.
  • Refund: Refund line items.
  • Credit: Credit line items.

Tags

The tags column stores resource tags as a JSON-encoded string with a default of '{}'. Use Snowflake’s PARSE_JSON and the : accessor to read tag values. Rule-based Virtual Tags are merged into this column.
Virtual Tags that use cost allocation (Business Metrics-Based, Cost-Based, or Percent-Based) are not currently available in daily_costs. Those allocations are computed separately inside Vantage and do not yet flow into the export. Standard Virtual Tags that map tag values directly (e.g., renaming or consolidating tag keys) are included.
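As a quick illustration of the accessor syntax, spend can be grouped directly by a tag value. The tag key environment below is a hypothetical example; substitute a key your resources actually carry.

```sql
-- Group spend by a tag value; 'environment' is a hypothetical tag key
SELECT PARSE_JSON(tags):environment::string AS environment,
       SUM(amortized_cost) AS total
FROM VANTAGE_DB.<your_namespace>.daily_costs
WHERE charge_category IN ('Usage', 'Discounted')
GROUP BY environment
ORDER BY total DESC;
```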

Query Cookbook

The examples below all target VANTAGE_DB.<your_namespace>.daily_costs. Replace the database name and namespace with your own values. Most examples use SUM(amortized_cost) with charge_category IN ('Usage', 'Discounted') to match the default view in Cost Reports. To match a different toggle combination (for example, on-demand rates without amortization), see Matching Cost Report Toggles below.

Discovering Your Data

Because the schema is normalized but the values are scoped per provider, the fastest way to understand what’s in your table is to introspect it. The five queries below take you from “what providers do I have?” to “what categories and units does each provider report?” Run them in order the first time you connect.
1. Which providers are in the table?

SELECT DISTINCT provider
FROM VANTAGE_DB.<your_namespace>.daily_costs
ORDER BY provider;
The list reflects every provider you have connected to your Vantage account. If a provider is missing, it has not yet been ingested or is not yet enabled for export.
2. What services does a specific provider expose?

SELECT DISTINCT service_name
FROM VANTAGE_DB.<your_namespace>.daily_costs
WHERE provider = 'anthropic'
ORDER BY service_name;
Substitute any provider value from the previous query. The set of service_name values is the same as what you’d see in Vantage’s filter dropdowns for that provider.
3. How does Vantage categorize a provider's services?

SELECT DISTINCT service_category, service_subcategory
FROM VANTAGE_DB.<your_namespace>.daily_costs
WHERE provider = 'aws'
ORDER BY service_category, service_subcategory;
service_category and service_subcategory are Vantage’s normalized rollups (e.g., Compute, Storage, AI and Machine Learning). The exact values depend on the provider. Running this for each of your providers tells you which buckets you can group by.
4. What units of consumption does a provider report?

SELECT DISTINCT consumed_unit
FROM VANTAGE_DB.<your_namespace>.daily_costs
WHERE provider = 'gcp'
ORDER BY consumed_unit;
Cloud providers tend to report units like Hrs, GB-Mo, and Requests. AI providers often report Tokens. Pair consumed_unit and consumed_quantity if you want to compute usage-based metrics.
5. What date range and row counts do you have?

SELECT provider,
       MIN(charge_period_start) AS first_day,
       MAX(charge_period_start) AS last_day,
       COUNT(*)                 AS row_count
FROM VANTAGE_DB.<your_namespace>.daily_costs
GROUP BY provider
ORDER BY provider;
Use this to check data freshness and backfill. If a provider’s last_day lags the others, its data may not have finished loading yet. See Provider Data Refresh for the expected cadence.

Total Cost for a Date Range

The simplest possible query: total amortized cost over a fixed window, matching the default Cost Report view.
SELECT SUM(amortized_cost) AS total
FROM VANTAGE_DB.<your_namespace>.daily_costs
WHERE charge_period_start >= DATE '2026-04-01'
  AND charge_period_start <  DATE '2026-05-01'
  AND charge_category IN ('Usage', 'Discounted');
When you bound a date range, put both bounds on charge_period_start (>= start AND < end). Avoid filtering on charge_period_end: it is exclusive (charge_period_start + 1 day), so using it makes off-by-one errors easy.

Billed vs. Amortized: Which Column to Use

billed_cost is what you were billed for in the period. It closely corresponds to AWS “Unblended” cost. amortized_cost spreads upfront commitments (Reserved Instances, Savings Plans) across their term. It closely corresponds to AWS “Amortized” cost. Pick one consistently for any given report. Either column is NULL for rows where the alternative view does not apply, so always use SUM().
SELECT service_name, SUM(billed_cost) AS billed
FROM VANTAGE_DB.<your_namespace>.daily_costs
WHERE charge_period_start >= DATE '2026-04-01'
  AND charge_period_start <  DATE '2026-05-01'
GROUP BY service_name
ORDER BY billed DESC;
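To see how far the two views diverge per service, both columns can also be selected side by side. Each is summed separately; they are never added together.

```sql
SELECT service_name,
       SUM(billed_cost)    AS billed,
       SUM(amortized_cost) AS amortized
FROM VANTAGE_DB.<your_namespace>.daily_costs
WHERE charge_period_start >= DATE '2026-04-01'
  AND charge_period_start <  DATE '2026-05-01'
GROUP BY service_name
ORDER BY amortized DESC;
```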

Matching Cost Report Toggles

In Cost Reports, you control what’s included via toggles. In SQL, you reproduce the same view by choosing which charge_category values to include and whether to use billed_cost or amortized_cost. Discounted and Undiscounted are mutually exclusive. You can use one or the other, but never both. Discounted includes commitment-covered usage (the default in Cost Reports). Undiscounted shows on-demand rates instead.
Never include both Discounted and Undiscounted in the same query. A row is one or the other depending on whether a commitment covers it.
The four base combinations that match the Cost Report discount and amortization toggles:
  • Discounts on, amortization on (the default): charge_category IN ('Usage', 'Discounted') + SUM(amortized_cost)
  • Discounts on, amortization off: charge_category IN ('Usage', 'Discounted') + SUM(billed_cost)
  • Discounts off, amortization on: charge_category IN ('Usage', 'Undiscounted') + SUM(amortized_cost)
  • Discounts off, amortization off: charge_category IN ('Usage', 'Undiscounted') + SUM(billed_cost)
To include additional line items, add them to the IN (...) list:
  • Include Tax: add 'Tax'
  • Include Refunds: add 'Refund'
  • Include Credits: add 'Credit'
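For example, the default discounted, amortized view with taxes, refunds, and credits also included:

```sql
SELECT SUM(amortized_cost) AS total
FROM VANTAGE_DB.<your_namespace>.daily_costs
WHERE charge_period_start >= DATE '2026-04-01'
  AND charge_period_start <  DATE '2026-05-01'
  AND charge_category IN ('Usage', 'Discounted', 'Tax', 'Refund', 'Credit');
```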
The default Cost Report view (discounts on, amortization on):
SELECT SUM(amortized_cost) AS total
FROM VANTAGE_DB.<your_namespace>.daily_costs
WHERE charge_period_start >= DATE '2026-04-01'
  AND charge_period_start <  DATE '2026-05-01'
  AND charge_category IN ('Usage', 'Discounted');
The same period with discounts off:
SELECT SUM(amortized_cost) AS total
FROM VANTAGE_DB.<your_namespace>.daily_costs
WHERE charge_period_start >= DATE '2026-04-01'
  AND charge_period_start <  DATE '2026-05-01'
  AND charge_category IN ('Usage', 'Undiscounted');
To see exactly what falls in each bucket for a given period, break the totals out by charge_category:
SELECT charge_category, SUM(amortized_cost) AS total
FROM VANTAGE_DB.<your_namespace>.daily_costs
WHERE charge_period_start >= DATE '2026-04-01'
  AND charge_period_start <  DATE '2026-05-01'
GROUP BY charge_category
ORDER BY total DESC;

Cost by Service

SELECT provider, service_name, SUM(amortized_cost) AS total
FROM VANTAGE_DB.<your_namespace>.daily_costs
WHERE charge_period_start >= DATE '2026-04-01'
  AND charge_period_start <  DATE '2026-05-01'
  AND charge_category IN ('Usage', 'Discounted')
GROUP BY provider, service_name
ORDER BY total DESC;

Cost by Account, Sub-Account, or Region

billing_account_id is the top-level billing account (AWS payer/management account, GCP billing account, Azure billing account). sub_account_id is the sub-account within it (AWS linked account, GCP project, Azure subscription).
SELECT provider, billing_account_id, SUM(amortized_cost) AS total
FROM VANTAGE_DB.<your_namespace>.daily_costs
WHERE charge_period_start >= DATE '2026-04-01'
  AND charge_period_start <  DATE '2026-05-01'
  AND charge_category IN ('Usage', 'Discounted')
GROUP BY provider, billing_account_id
ORDER BY total DESC;
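To go one level deeper, swap in sub_account_id or region_id. For example:

```sql
SELECT provider, sub_account_id, region_id, SUM(amortized_cost) AS total
FROM VANTAGE_DB.<your_namespace>.daily_costs
WHERE charge_period_start >= DATE '2026-04-01'
  AND charge_period_start <  DATE '2026-05-01'
  AND charge_category IN ('Usage', 'Discounted')
GROUP BY provider, sub_account_id, region_id
ORDER BY total DESC;
```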

Top Resources by Spend

SELECT provider, resource_id, service_name, SUM(amortized_cost) AS total
FROM VANTAGE_DB.<your_namespace>.daily_costs
WHERE charge_period_start >= DATE '2026-04-01'
  AND charge_period_start <  DATE '2026-05-01'
  AND charge_category IN ('Usage', 'Discounted')
GROUP BY provider, resource_id, service_name
ORDER BY total DESC
LIMIT 50;

Multi-Cloud Filtering and Grouping

SELECT provider, SUM(amortized_cost) AS total
FROM VANTAGE_DB.<your_namespace>.daily_costs
WHERE provider IN ('aws', 'azure', 'gcp')
  AND charge_period_start >= DATE '2026-04-01'
  AND charge_period_start <  DATE '2026-05-01'
  AND charge_category IN ('Usage', 'Discounted')
GROUP BY provider
ORDER BY total DESC;

Filtering and Grouping by Tag

The tags column is a JSON-encoded string. Use PARSE_JSON and the : accessor to read individual values. See Snowflake’s Querying semi-structured data for the full syntax, including how to read keys with special characters via PARSE_JSON(tags)['my-key'].
SELECT SUM(amortized_cost) AS total
FROM VANTAGE_DB.<your_namespace>.daily_costs
WHERE PARSE_JSON(tags):environment::string = 'production'
  AND charge_period_start >= DATE '2026-04-01'
  AND charge_period_start <  DATE '2026-05-01'
  AND charge_category IN ('Usage', 'Discounted');
Use TRY_PARSE_JSON instead of PARSE_JSON if you want to safely handle rows where tags might contain unexpected values. It returns NULL instead of raising an error. Wrap with COALESCE and NULLIF to replace missing or empty tag values with a fallback label like 'Not tagged'.
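Putting those pieces together, here is a sketch of grouping by a tag with a fallback label. The tag key team is a hypothetical example; substitute one of your own.

```sql
-- 'team' is a hypothetical tag key; untagged rows fall back to 'Not tagged'
SELECT COALESCE(NULLIF(TRY_PARSE_JSON(tags):team::string, ''), 'Not tagged') AS team,
       SUM(amortized_cost) AS total
FROM VANTAGE_DB.<your_namespace>.daily_costs
WHERE charge_period_start >= DATEADD(day, -30, CURRENT_DATE())
  AND charge_category IN ('Usage', 'Discounted')
GROUP BY team
ORDER BY total DESC;
```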
Daily Spend Over Time

Use DATE_TRUNC and DATEADD to bucket time series.
SELECT charge_period_start AS day, SUM(amortized_cost) AS total
FROM VANTAGE_DB.<your_namespace>.daily_costs
WHERE charge_period_start >= DATEADD(day, -30, CURRENT_DATE())
  AND charge_category IN ('Usage', 'Discounted')
GROUP BY day
ORDER BY day;
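To bucket by week or month instead of day, wrap charge_period_start in DATE_TRUNC:

```sql
SELECT DATE_TRUNC('week', charge_period_start) AS week,
       SUM(amortized_cost) AS total
FROM VANTAGE_DB.<your_namespace>.daily_costs
WHERE charge_period_start >= DATEADD(week, -12, CURRENT_DATE())
  AND charge_category IN ('Usage', 'Discounted')
GROUP BY week
ORDER BY week;
```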

Month-over-Month Change

Use Snowflake’s window functions (in particular LAG) to compute month-over-month deltas. NULLIF(..., 0) avoids division by zero.
WITH monthly AS (
  SELECT DATE_TRUNC('month', charge_period_start) AS month,
         SUM(amortized_cost) AS total
  FROM VANTAGE_DB.<your_namespace>.daily_costs
  WHERE charge_category IN ('Usage', 'Discounted')
  GROUP BY month
)
SELECT month,
       total,
       total - LAG(total) OVER (ORDER BY month) AS mom_delta,
       (total - LAG(total) OVER (ORDER BY month))
         / NULLIF(LAG(total) OVER (ORDER BY month), 0) * 100 AS mom_pct
FROM monthly
ORDER BY month;

Service Category Rollups

Group by service_category and service_subcategory to see the Compute, Storage, Networking, etc. mix.
SELECT service_category, service_subcategory, SUM(amortized_cost) AS total
FROM VANTAGE_DB.<your_namespace>.daily_costs
WHERE charge_period_start >= DATE '2026-04-01'
  AND charge_period_start <  DATE '2026-05-01'
  AND charge_category IN ('Usage', 'Discounted')
GROUP BY service_category, service_subcategory
ORDER BY total DESC;

Joining with Your Own Data

A common pattern is to compute a unit cost by dividing Vantage cost data by a customer-side metric (revenue, number of customers, requests, etc.). The example below assumes a customer-side table MY_DB.PUBLIC.monthly_revenue(month DATE, revenue NUMBER).
WITH monthly_costs AS (
  SELECT DATE_TRUNC('month', charge_period_start) AS month,
         SUM(amortized_cost) AS cost
  FROM VANTAGE_DB.<your_namespace>.daily_costs
  WHERE charge_category IN ('Usage', 'Discounted')
  GROUP BY month
)
SELECT c.month,
       c.cost,
       r.revenue,
       c.cost / NULLIF(r.revenue, 0) AS cost_per_revenue
FROM monthly_costs c
LEFT JOIN MY_DB.PUBLIC.monthly_revenue r USING (month)
ORDER BY c.month;
For a managed equivalent inside Vantage, see the Per Unit Costs documentation.

Materializing a View for BI Tools

Wrap frequently used aggregates in a Snowflake view (or a dynamic table) so that BI dashboards can hit a stable interface without re-aggregating on every load. The example below exposes both billed_cost and amortized_cost so the BI tool can choose between them, and includes all charge_category values so users can filter further on top. Adjust the columns and WHERE clause to match how your team uses the data.
CREATE OR REPLACE VIEW MY_DB.PUBLIC.monthly_costs_by_service AS
SELECT DATE_TRUNC('month', charge_period_start) AS month,
       service_name,
       SUM(billed_cost)    AS billed,
       SUM(amortized_cost) AS amortized
FROM VANTAGE_DB.<your_namespace>.daily_costs
GROUP BY month, service_name;
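If you prefer a precomputed result over a view, a dynamic table is one option. The sketch below assumes a warehouse named MY_WH; whether a dynamic table can refresh over shared Iceberg data depends on your Snowflake edition and change-tracking support, so verify against Snowflake's dynamic table limitations before relying on it.

```sql
-- Sketch only: MY_WH is a placeholder warehouse name
CREATE OR REPLACE DYNAMIC TABLE MY_DB.PUBLIC.monthly_costs_by_service_dt
  TARGET_LAG = '1 day'
  WAREHOUSE = MY_WH
AS
SELECT DATE_TRUNC('month', charge_period_start) AS month,
       service_name,
       SUM(billed_cost)    AS billed,
       SUM(amortized_cost) AS amortized
FROM VANTAGE_DB.<your_namespace>.daily_costs
GROUP BY month, service_name;
```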

Query Efficiency and Table Metadata

daily_costs is an Iceberg table partitioned by charge_period_start. Filtering on charge_period_start in your WHERE clause allows Snowflake to skip partitions that don’t match, which significantly reduces the amount of data scanned and the warehouse credits consumed.
Always include a charge_period_start filter in your queries, even if you’re querying other dimensions. Without it, Snowflake scans the entire table.
To see how the table is partitioned, run:
SHOW ICEBERG TABLES LIKE 'DAILY_COSTS';

WITH iceberg_table AS (
  SELECT
    "current_partition_spec_id" AS current_partition_spec_id,
    PARSE_JSON("partition_specs") AS partition_specs
  FROM TABLE(RESULT_SCAN(LAST_QUERY_ID()))
),
current_spec AS (
  SELECT
    spec.value:"spec-id"::number AS spec_id,
    spec.value:fields AS fields
  FROM iceberg_table,
    LATERAL FLATTEN(input => partition_specs) spec
  WHERE spec.value:"spec-id"::number = current_partition_spec_id
)
SELECT
  spec_id,
  f.index + 1 AS partition_position,
  f.value:name::string AS partition_name,
  f.value:transform::string AS transform
FROM current_spec,
  LATERAL FLATTEN(input => fields) f
ORDER BY partition_position;
To check when the table was last refreshed and see how data is flowing in, query the snapshot refresh history:
SELECT
  refreshed_on,
  snapshot_id,
  is_current_snapshot,
  snapshot_summary:"added-records"::number AS added_records,
  snapshot_summary:"deleted-records"::number AS deleted_records,
  snapshot_summary:"total-records"::number AS total_records
FROM TABLE(INFORMATION_SCHEMA.ICEBERG_TABLE_SNAPSHOT_REFRESH_HISTORY(
  TABLE_NAME => 'DAILY_COSTS'
))
ORDER BY refreshed_on DESC;
This tells you when Vantage last pushed data, how many records were added or replaced, and the current total row count.

Troubleshooting

The share doesn’t appear in Snowflake

Confirm with your Vantage Customer Success representative that provisioning has completed. Once it has, the share appears in Data Sharing > Shared with you in the Snowflake UI, or in the SQL console via:
SHOW SHARES LIKE 'VANTAGE_SHARE_%';
If the share still doesn’t appear, double-check that the Data Sharing Account Identifier you provided in the Vantage console matches the account you’re checking. The identifier is case-sensitive and uses the form ORGNAME.ACCOUNTNAME.

Queries return no rows

Three things to check:
  • The role you’re using has been granted access: GRANT IMPORTED PRIVILEGES ON DATABASE VANTAGE_DB TO ROLE <your_role>;
  • You’re using the right schema name. Vantage assigns the schema name when provisioning. Run SHOW SCHEMAS IN DATABASE VANTAGE_DB; to confirm.
  • Your date filter falls inside the loaded range. Run the date range and row counts introspection query to see what’s actually there.

Numbers don’t match Cost Reports

The most common reasons are:
  • You’re missing the default toggle filters. Cost Reports defaults to charge_category IN ('Usage', 'Discounted') and SUM(amortized_cost). See Matching Cost Report Toggles.
  • You opted into the 2-day cost delay in Vantage. That delay is not applied to the share. Add WHERE charge_period_start < DATEADD(day, -2, CURRENT_DATE()) to match.
  • You’re using allocation-based Virtual Tags. Cost allocation tags (Business Metrics-Based, Cost-Based, Percent-Based) are not currently included in daily_costs. Standard Virtual Tags are.
  • You added billed_cost and amortized_cost together. They overlap on most rows. Pick one.

Data for one provider looks stale

Run the date range and row counts introspection query to see each provider’s last_day. If one provider lags the others, its ETL has not yet caught up. Check Provider Data Refresh for the expected cadence per provider.

Queries are slow or expensive

daily_costs is partitioned by charge_period_start. Always include a charge_period_start filter in your WHERE clause to allow Snowflake to skip partitions that don’t match. Without it, every query scans the entire table. See Query Efficiency and Table Metadata for more.

Frequently Asked Questions

How often does the data refresh?

Data refreshes whenever Vantage’s ETL completes for a billing period. The cadence matches what you see in the Vantage console for that provider. See Provider Data Refresh for details.

Does the share copy data into my Snowflake account?

No. The Secure Data Share gives your account queryable access in place. The underlying data lives in Vantage-managed storage, and queries against the share consume your Snowflake compute but do not store the data on your side.

Is historical data included?

Yes. The full retention range available in your Vantage account is backfilled into the share when Snowflake Data Sharing is enabled.

How do I disconnect the share?

You can drop the database created from the share on your side with DROP DATABASE VANTAGE_DB;. To have Vantage stop publishing into the share, contact support@vantage.sh.

When are Virtual Tags applied?

Virtual Tags are applied during the Vantage ETL before data lands in daily_costs, so the values you see through the export already reflect them.

How does this relate to the cost data export API?

The Generate cost data export API is still available for one-off pulls. Snowflake Data Sharing is the warehouse-native option for continuous query access without managing an ingestion pipeline.

Can I limit which providers are included in the share?

Not at this time. Vantage makes all integrated providers available in the data share. Inside Snowflake, you can create your own views, masking policies, and role-based access on top of the shared database to expose only certain providers, accounts, columns, or tag values to specific users in your organization.
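As a sketch of that pattern, here is a view scoped to a single provider and granted to one role. All names below are placeholders.

```sql
-- Placeholder names: MY_DB.PUBLIC.aws_costs and REPORTING_ROLE
CREATE OR REPLACE SECURE VIEW MY_DB.PUBLIC.aws_costs AS
SELECT *
FROM VANTAGE_DB.<your_namespace>.daily_costs
WHERE provider = 'aws';

GRANT SELECT ON VIEW MY_DB.PUBLIC.aws_costs TO ROLE REPORTING_ROLE;
```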

Can I provision separate shares for separate Vantage accounts?

The share is scoped to the data available in the specific Vantage account it is provisioned for. If you want separate shares for separate managed customers, the feature can be provisioned on each underlying Vantage account.

How are updates and restatements handled?

As Vantage processes new data or reprocesses existing data (for example, when tagging changes or a cloud provider updates historical costs), those updates are reflected automatically in the share. Queries always return the latest available data, in sync with Vantage. There is nothing to truncate and reload on your side.

How is the share secured?

The share is one-way and read-only. Vantage cannot read or write into your Snowflake account, and your account cannot modify data through the share. The share is provisioned specifically for the Snowflake account identifier you supply, so you can only access your own data. Inside Snowflake, your account admin controls which roles can query the shared database using standard Snowflake RBAC (GRANT IMPORTED PRIVILEGES ON DATABASE … TO ROLE …).

Does the share apply the 2-day cost delay?

No. Vantage sends all data to Snowflake to provide the most up-to-date view possible. If you want to exclude the most recent days to match the delayed view in the Vantage console, add a date filter:
SELECT SUM(amortized_cost) AS total
FROM VANTAGE_DB.<your_namespace>.daily_costs
WHERE charge_period_start < DATEADD(day, -2, CURRENT_DATE());