Testing Smart Data Capture with AI Sandbox

The AI Sandbox is a powerful tool within SpotDraft that allows you to test and improve the accuracy of your Smart Data Capture (SDC) configurations without affecting your live contract data. Use the AI Sandbox to experiment with different prompts and fine-tune your SDC settings, ensuring reliable and accurate data extraction from your contracts.

With the AI Sandbox, you can:

  • Test Smart Data Capture without affecting live data.
  • Test AI features to improve accuracy and build confidence.

Navigating to the AI Sandbox

Follow these steps to access the AI Sandbox:

  1. Click on the Manage section in the left-hand navigation menu.
  2. Select Contract Metadata or Metadata Manager.
  3. Choose the specific contract type you want to test (skip this step if you are in the new global metadata system).
  4. Find the metadata field you'd like to evaluate. Click the three dots icon next to it.
  5. Select Test Smart Data Capture.

Adding Contracts to the Test

The "Add Contracts From Repository" screen allows you to select contracts for testing Smart Data Capture.

To add contracts:

  1. Search: Use the search bar to find contracts by Title or Counterparty.
  2. Filter: Refine your search using the filter options (Approval, Contract Metadata, Contract Type, Contract Users, Contract Workflow, etc.).

Note: For higher accuracy, it's recommended to add at least 5 contracts for testing. You can select a maximum of 10 contracts at a time.

Running the Smart Data Capture Test

Once you've selected your contracts, click the Add & Run Test button. The AI Sandbox will then extract data from the selected contracts, based on the prompt associated with your chosen metadata field.

The results will be displayed in a grid format, showing the extracted values for each contract.

Each row represents a metadata field, and each column represents a contract you're testing. An "Extraction Started" loader indicates that Smart Data Capture is still working.

Verifying and Correcting Extracted Data

Review the extracted data for accuracy.

  1. View Extracted Data: Hover over an Unverified cell to see the extracted value and the "Reason for extraction" the AI used to arrive at that value.
  2. View in Contract: Click the View in Contract link to open the contract and verify the extracted data in context. This helps you double-check whether the AI is pulling the correct information.
  3. Mark as Correct/Incorrect:
    • If the extracted data is accurate, click the checkmark icon to mark it as "Correct."
    • If the extracted data is inaccurate, click the X icon to mark it as "Incorrect." A pop-up will then appear; enter the correct result and click Save & Next to save the expected value. The AI uses this correction to suggest better prompts and to validate subsequent extractions automatically.

Using AI Suggestions to Improve Prompts

After reviewing and correcting the extracted data, the AI Sandbox may offer suggestions for improving your metadata prompt. This is indicated by an "AI Suggestion available" label on the prompt cell in the first column.

  1. View AI Suggestion: Click on the metadata prompt to view the revised prompt suggestion from AI.
  2. Choose your prompt:
    • Click Use This to use the suggested prompt.
    • Click Suggest New to generate another AI suggestion.
    • Click outside the pop-up to dismiss the suggestion (if the current accuracy is sufficient).

Saving Updated Prompts

After choosing the best prompt, the save button becomes enabled.

  1. Click Save Edited Prompts; a Review & Save Changes dialog will appear showing both the old and new values.
  2. Click the Save Changes button to save the prompt changes.

Interpreting Accuracy Scores

Interpreting accuracy scores helps you reduce errors and understand how predictably the system performs, and where it falls short.

Here's how the accuracy score is calculated:

Accuracy = (Number of correct results / Total number of cells) × 100

There are several important accuracy metrics displayed for you to interpret:

  • Overall Accuracy: The overall correctness of metadata extractions across the entire dataset you chose.
  • Per-Metadata Accuracy: The individual accuracy of each metadata field, noted alongside the field name (first column). This lets you see whether SDC works well for some metadata fields and poorly for others.
  • Per-Contract Accuracy: The accuracy of all metadata fields on a specific contract, noted alongside the contract name in the column headers. This lets you see whether SDC struggles with specific contracts due to the complexity or quality of the document.
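The three accuracy figures above all come from the same formula applied to different slices of the result grid. The sketch below illustrates the math with a hypothetical grid (field and contract names are made up for the example); it is not SpotDraft code, just the calculation:

```python
# Hypothetical result grid: rows are metadata fields, columns are
# contracts, and each cell records whether the extracted value was
# marked "Correct" (True) or "Incorrect" (False).
results = {
    "Effective Date": {"MSA - Acme": True, "NDA - Globex": True},
    "Governing Law":  {"MSA - Acme": True, "NDA - Globex": False},
}

def pct(correct, total):
    """Accuracy = number of correct results / total cells * 100."""
    return round(correct / total * 100, 1)

# Flatten the grid into (field, contract, is_correct) cells.
cells = [(f, c, ok) for f, row in results.items() for c, ok in row.items()]

# Overall accuracy: all cells in the grid.
overall = pct(sum(ok for _, _, ok in cells), len(cells))

# Per-metadata accuracy: one row of the grid per field.
per_field = {f: pct(sum(ok for ff, _, ok in cells if ff == f),
                    sum(1 for ff, _, _ in cells if ff == f))
             for f in results}

# Per-contract accuracy: one column of the grid per contract.
contracts = {c for _, c, _ in cells}
per_contract = {c: pct(sum(ok for _, cc, ok in cells if cc == c),
                       sum(1 for _, cc, _ in cells if cc == c))
                for c in contracts}

print(overall)                       # 3 of 4 cells correct -> 75.0
print(per_field["Governing Law"])    # 1 of 2 -> 50.0
print(per_contract["NDA - Globex"])  # 1 of 2 -> 50.0
```

Marking a cell Correct or Incorrect in the grid is what feeds these counts, which is why verifying every cell gives the most meaningful scores.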

As a good standard to follow, you can save the prompt once accuracy is greater than 75%, which matches a human-level benchmark.

Known Issues

Please be aware of the following known issues when using the AI Sandbox:

  • Single SDC Run Limitation: Only one Smart Data Capture (SDC) test run can occur at a time per workspace. If an SDC test is already running, other users in the workspace will be unable to initiate a new test.
  • Unsaved Prompt Edits: Edited prompts are not automatically saved. If you close or refresh the page before explicitly saving your changes using the "Save Edited Prompts" button, your edits will be lost.
  • Unable to Stop SDC Run: Once an SDC test run is initiated, it cannot be stopped, even if you close the AI Sandbox. You must wait for the current test run to complete before starting a new one.
  • Fields Unavailable: Creator and counterparty fields can’t be tested using the AI Sandbox.

FAQs

Q: Are there any restrictions on the status of contracts that can be included in an AI Sandbox test?
A: No, there is no status limitation.

Q: How is the accuracy score calculated?
A: Accuracy is simply the number of correct results divided by the total number of cells, expressed as a percentage.

Q: I'm getting an error "Your previous Smart Data Capture test is still running". What should I do?
A: Only one Smart Data Capture test can run at a time per workspace. This error appears when:
        - You have a previous test still processing
        - Another user in your workspace is currently running a test
Wait for the current test to complete, then try again. If the error persists beyond 5 minutes, contact support.

Q: Can anyone use the AI Sandbox?
A: Only users with the “Extract Contract Metadata” and “Manage Contract Metadata Fields” permissions (that is, access to create and edit metadata fields) can use the AI Sandbox to test Smart Data Capture.


Conclusion

Leverage the AI Sandbox to quickly test and understand the strengths and shortcomings of AI features, so you can address those shortcomings and achieve higher accuracy and predictability.
