Ometa Framework Release v6.3.0

Version 6.3.0 of the Ometa Framework has been released with the following new features and changes.

Before Upgrade

Migration of CaseProperties

In this version, we changed the CaseProperties table to significantly improve performance when the table holds millions of records:

  • The data type of the Value column has been changed from NVARCHAR(MAX) to NVARCHAR(500) so the column can be indexed
  • Two indexes have been introduced to speed up searches on case property values

This migration is applied automatically during the upgrade if the CaseProperties table contains no more than 5 million records and no property value exceeds 500 characters. The migration checks both requirements and fails if either one is not met.

If one or more property values exceed 500 characters, you must shorten those values before upgrading. Run the script below against the Ometa Framework database to find any values exceeding this threshold.

-- NVARCHAR stores 2 bytes per character, so 500 characters = 1,000 bytes.
SELECT *
FROM dbo.CaseProperties
WHERE Value IS NOT NULL AND DATALENGTH(Value) > 1000
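If the query returns rows, those values must be shortened before the upgrade. Where simply cutting off the trailing characters is acceptable for your data, an update along the lines of the sketch below would do it. Review the affected rows first; truncation discards data.

```sql
-- Sketch: truncate oversized values to 500 characters.
-- Only run this if losing the trailing characters is acceptable.
UPDATE dbo.CaseProperties
SET Value = LEFT(Value, 500)
WHERE Value IS NOT NULL AND DATALENGTH(Value) > 1000;
```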

If your CaseProperties table exceeds 5 million records, you must execute the migration manually, because the automatic migration would take too long.
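To see which path applies, count the rows first. COUNT_BIG avoids the INT overflow that plain COUNT hits above roughly 2.1 billion rows; on very large tables, the metadata-based count returns instantly.

```sql
-- Exact count (can be slow on very large tables):
SELECT COUNT_BIG(*) AS RecordCount
FROM dbo.CaseProperties;

-- Approximate count from metadata (returns instantly):
SELECT SUM(p.row_count) AS ApproxRecordCount
FROM sys.dm_db_partition_stats AS p
WHERE p.object_id = OBJECT_ID('dbo.CaseProperties')
  AND p.index_id IN (0, 1);  -- heap or clustered index only
```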

Execute the migration script below in that case. Keep in mind that this can take up to 30 minutes on a CaseProperties table with 100 million records, depending on the hardware and recovery model. You also need 5 to 10 GB of free space in your transaction log.
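To check that enough transaction log space is available before starting, you can query the log usage of the Ometa Framework database (sys.dm_db_log_space_usage is available from SQL Server 2012 onward):

```sql
-- Run in the context of the Ometa Framework database.
SELECT total_log_size_in_bytes / 1024 / 1024 AS TotalLogMB,
       used_log_space_in_bytes / 1024 / 1024 AS UsedLogMB,
       used_log_space_in_percent              AS UsedLogPercent
FROM sys.dm_db_log_space_usage;
```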

Note

Running this script manually is only required if you have more than 5 million records in the CaseProperties table.

If you don't, the migration will be automatically executed during the upgrade of the Ometa Framework.

Important

Make sure the database is not queried by users / software during the migration.

Stop all Ometa Framework services and IIS before running it.

-- Guard against an already-open transaction, which would wrap the whole
-- migration in one giant transaction and flood the transaction log.
IF @@TRANCOUNT > 0
BEGIN
  THROW 51001, 'This script must be run without an active transaction. Close the current transaction and retry.', 1;
END;

SET IMPLICIT_TRANSACTIONS OFF;
DECLARE @MigrationNeeded bit = 1;

-- Column already migrated?
IF EXISTS (
    SELECT 1
    FROM sys.columns
    WHERE object_id = OBJECT_ID('dbo.CaseProperties')
      AND name = 'Value'
      AND max_length = 1000  -- nvarchar(500) = 1000 bytes
)
BEGIN
    PRINT 'The Value column is already NVARCHAR(500). Skipping data migration.';
    SET @MigrationNeeded = 0;
END;

-- Safety check: values longer than 500
IF @MigrationNeeded = 1 AND EXISTS (
  SELECT 1
  FROM dbo.CaseProperties
  WHERE Value IS NOT NULL AND DATALENGTH(Value) > 1000
)
BEGIN
    THROW 51003, 'The Value column contains data longer than 500 NVARCHAR characters. Manually update those values to have a maximum length of 500 characters and retry.', 1;
END;

/**
STEP 1: Add the new temporary column
This command is a metadata-only change. Because the new column allows NULL, SQL Server doesn't need to write anything to disk, so it completes almost instantly.
**/
IF @MigrationNeeded = 1 AND COL_LENGTH('dbo.CaseProperties', 'Value_New') IS NULL
BEGIN
  ALTER TABLE [dbo].[CaseProperties] ADD Value_New NVARCHAR(500) NULL;
END;

/**
STEP 2: The data migration from the old value (NVARCHAR(MAX)) to the new temporary column (NVARCHAR(500))
This script copies the data in batches of 10,000,000 rows, which prevents flooding the transaction log.
**/
IF @MigrationNeeded = 1
BEGIN
  SET NOCOUNT ON;

  DECLARE @RowsAffected INT = 1,
          @BatchSize    INT = 10000000,
          @BatchCount   INT = 0;

  WHILE (@RowsAffected > 0)
  BEGIN
    BEGIN TRY
      BEGIN TRAN;

      -- TOP accepts a variable, so dynamic SQL isn't needed here.
      UPDATE TOP (@BatchSize) CP
      SET Value_New = CP.Value
      FROM dbo.CaseProperties CP
      WHERE CP.Value_New IS NULL
        AND CP.Value IS NOT NULL;

      -- Capture @@ROWCOUNT immediately after the UPDATE,
      -- before any other statement resets it.
      SET @RowsAffected = @@ROWCOUNT;
      SET @BatchCount += 1;

      COMMIT;
    END TRY
    BEGIN CATCH
      IF @@TRANCOUNT > 0
        ROLLBACK;
      THROW;
    END CATCH

    -- Every 50 batches: small pause to give other processes some room.
    IF @BatchCount % 50 = 0
      WAITFOR DELAY '00:00:01';
  END;
END;

/**
STEP 3: Drop existing indexes
**/
IF @MigrationNeeded = 1 AND EXISTS (
    SELECT 1
    FROM sys.indexes
    WHERE object_id = OBJECT_ID('dbo.CaseProperties')
      AND name = 'IX_CaseProperty_LifecyclePropertyID'
)
BEGIN
  DROP INDEX [IX_CaseProperty_LifecyclePropertyID] ON [dbo].[CaseProperties];
END;

IF @MigrationNeeded = 1 AND EXISTS (
    SELECT 1
    FROM sys.indexes
    WHERE object_id = OBJECT_ID('dbo.CaseProperties')
      AND name = 'IX_CaseProperties_LifecyclePropertyID_Value'
)
BEGIN
  DROP INDEX [IX_CaseProperties_LifecyclePropertyID_Value] ON [dbo].[CaseProperties];
END;

/**
STEP 4: Swap columns (fast but blocking for milliseconds)
**/
IF @MigrationNeeded = 1
BEGIN
  BEGIN TRAN;

  ALTER TABLE dbo.CaseProperties DROP COLUMN Value;
  EXEC sp_rename 'dbo.CaseProperties.Value_New', 'Value', 'COLUMN';

  COMMIT;
END;

/**
STEP 5: (Re)create indexes
**/
IF NOT EXISTS (
    SELECT 1
    FROM sys.indexes
    WHERE object_id = OBJECT_ID('dbo.CaseProperties')
      AND name = 'IX_CaseProperty_LifecyclePropertyID'
)
BEGIN
  CREATE NONCLUSTERED INDEX [IX_CaseProperty_LifecyclePropertyID] ON [dbo].[CaseProperties]
  (
    [LifecyclePropertyID] ASC
  )
  INCLUDE([CaseID],[Value])
  WITH
  (
    SORT_IN_TEMPDB = ON,
    MAXDOP = 4,
    FILLFACTOR = 80
  );
END;

IF NOT EXISTS (
    SELECT 1
    FROM sys.indexes
    WHERE object_id = OBJECT_ID('dbo.CaseProperties')
      AND name = 'IX_CaseProperties_LifecyclePropertyID_Value'
)
BEGIN
  CREATE NONCLUSTERED INDEX [IX_CaseProperties_LifecyclePropertyID_Value] ON [dbo].[CaseProperties]
  (
    [LifecyclePropertyID] ASC,
    [Value] ASC
  )
  INCLUDE ([CaseID])
  WITH
  (
    SORT_IN_TEMPDB = ON,
    MAXDOP = 4,
    FILLFACTOR = 80
  );
END;
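After the migration completes, a quick sanity check such as the sketch below confirms the new column type and the presence of both indexes. A max_length of 1000 bytes corresponds to NVARCHAR(500).

```sql
-- Verify the column type: max_length should be 1000 (NVARCHAR(500) = 1,000 bytes).
SELECT c.name, t.name AS type_name, c.max_length
FROM sys.columns AS c
JOIN sys.types AS t ON t.user_type_id = c.user_type_id
WHERE c.object_id = OBJECT_ID('dbo.CaseProperties')
  AND c.name = 'Value';

-- Verify that both indexes exist.
SELECT name
FROM sys.indexes
WHERE object_id = OBJECT_ID('dbo.CaseProperties')
  AND name IN ('IX_CaseProperty_LifecyclePropertyID',
               'IX_CaseProperties_LifecyclePropertyID_Value');
```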