Inefficient copy commands

The Inefficient copy commands optimization highlights COPY command queries that loaded very large files. Snowflake suggests a maximum file size of 256MB when executing a COPY command, and recommends breaking larger files into smaller ones and copying them in parallel, which improves cost and performance by increasing concurrency.
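As a sketch of the splitting step: GNU `split` with `-C` breaks a line-oriented file into chunks that each stay under a byte limit without cutting a row in half, so every part remains a valid file for COPY. The file names, stage (`@my_stage`), and table (`my_table`) below are hypothetical, and a tiny demo file and limit are used so the example is self-contained; in practice the limit would sit near the 256MB recommendation (e.g. `-C 250m`).

```shell
# Stand-in for a large CSV export (hypothetical file name).
printf 'id,val\n1,a\n2,b\n3,c\n4,d\n' > demo.csv

# Split into parts of at most 16 bytes of whole lines each
# (real workloads would use something like -C 250m).
# Requires GNU split for -d and --additional-suffix.
split -C 16 -d --additional-suffix=.csv demo.csv demo_part_

# Each part ends on a row boundary, so every chunk is loadable on its own.
ls demo_part_*.csv

# The parts can then be staged and loaded in parallel, e.g. via SnowSQL
# (stage and table names here are placeholders):
#   PUT file://demo_part_*.csv @my_stage AUTO_COMPRESS=TRUE;
#   COPY INTO my_table FROM @my_stage PATTERN='.*demo_part_.*';
```

Because each chunk is an independent file, Snowflake can assign the chunks to separate load threads rather than streaming one large file through a single job.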

ℹ️
Note: This optimization follows Slingshot’s role-based access patterns and only shows inefficient queries to users who are part of the same business org as the user who executed the query. If you have not assigned users to business orgs, non-admins will not be able to access data for this optimization. For more information on assigning objects to business orgs, see the Org Management documentation.

Frequency of update

Daily @ 07:00

Fields in result

  • Account Locator
  • Region
  • Query ID
  • Frequency
  • Schema
  • Database
  • Avg MB Written
  • Avg Execution Time (s)
  • Monthly Cost
  • Target Execution Time (s)
  • Potential Monthly Savings

Why is this helpful?

Splitting large files so that each stays within Snowflake’s recommended 256MB upper limit improves performance and cost by increasing concurrency. Snowflake can also apply surcharges when large files are moved through Snowpipe.

FAQ

  • How does breaking up large files save money?
    • Breaking up a large file into multiple small chunks allows those chunks to be processed in parallel rather than as a single large job, which can reduce the total compute time needed to process the COPY command. In addition, COPY commands can run on serverless compute resources, and Snowflake charges extra when large files are moved in this manner.
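To make the cost reasoning concrete, here is a minimal Python sketch of how the result fields above could be combined into a savings estimate. The formula is an assumption for illustration, not Slingshot’s documented calculation: it simply assumes cost scales with execution time, so bringing the average COPY down to the target time saves the corresponding fraction of the query’s monthly cost.

```python
def estimate_monthly_savings(monthly_cost: float,
                             avg_execution_time_s: float,
                             target_execution_time_s: float) -> float:
    """Illustrative savings estimate (assumed formula, not Slingshot's).

    Assumes cost is proportional to execution time: if splitting files
    brings the average COPY from avg_execution_time_s down to
    target_execution_time_s, the saved fraction is 1 - target/avg.
    """
    if avg_execution_time_s <= 0:
        return 0.0
    saved_fraction = max(0.0, 1 - target_execution_time_s / avg_execution_time_s)
    return monthly_cost * saved_fraction

# Hypothetical example: a $120/month COPY workload averaging 300s,
# with a 75s target after splitting the files.
print(estimate_monthly_savings(120.0, 300.0, 75.0))  # 90.0
```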