World of Whatever

A blog about SQL Server, SSIS, C# and whatever else I happen to be dealing with in my professional life.


Wednesday, February 21, 2018

Pop quiz - altering column types


Given the following DDL

CREATE TABLE dbo.IntToTime
(
    CREATE_TIME int
);

What will be the result of issuing the following command?

ALTER TABLE dbo.IntToTime ALTER COLUMN CREATE_TIME time NULL;

Clearly, if I'm asking, it's not what you might expect. How can an empty table not allow you to change data types? It seems time and datetime2 are special cases, as they'll raise errors of the form

Msg 206, Level 16, State 2, Line 47 Operand type clash: int is incompatible with time

If you're in this situation and need to get the type converted, you'll need to make two hops: first to varchar and then to time.

ALTER TABLE dbo.IntToTime ALTER COLUMN CREATE_TIME varchar(10) NULL;
ALTER TABLE dbo.IntToTime ALTER COLUMN CREATE_TIME time NULL;
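If you want proof the second hop landed, ask the catalog views:

```sql
-- Confirm CREATE_TIME really did end up as time
SELECT
    C.name AS ColumnName
,   T.name AS DataTypeName
FROM
    sys.columns AS C
    INNER JOIN
        sys.types AS T
        ON T.user_type_id = C.user_type_id
WHERE
    C.object_id = OBJECT_ID('dbo.IntToTime');
```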

Thursday, January 25, 2018

What are all the functions and their parameters?


File this one under: I wrote it once, may I never need it again

In my ever-expanding quest to get all the metadata, how could I determine the metadata for all my table-valued functions? No problem, that's what sys.dm_exec_describe_first_result_set is for.

SELECT *
FROM sys.dm_exec_describe_first_result_set(N'SELECT * FROM dbo.foo(@xmlMessage)', N'@xmlMessage nvarchar(max)', 1) AS DEDFRS;

Except, I need to know parameters. And I need to know parameter types. And order. Fortunately, sys.parameters and sys.types make this easy. The only ugliness comes from the double invocation of row rollups.


SELECT 
    CONCAT
    (
        ''
    ,   'SELECT * FROM '
    ,   QUOTENAME(S.name)
    ,   '.'
    ,   QUOTENAME(O.name)
    ,   '('
        -- Parameters here without type
    ,   STUFF
        (
            (
                SELECT 
                    CONCAT
                    (
                        ''
                    ,   ','
                    ,   P.name
                    ,   ' '
                    )
                FROM
                    sys.parameters AS P
                WHERE
                    P.is_output = CAST(0 AS bit)
                    AND P.object_id = O.object_id
                ORDER BY
                    P.parameter_id
                FOR XML PATH('')
            )
        ,   1
        ,   1
        ,   ''
        )

    ,   ') AS F;'
    ) AS SourceQuery
,   (
        STUFF
        (
            (
                SELECT 
                    CONCAT
                    (
                        ''
                    ,   ','
                    ,   P.name
                    ,   ' '
                    ,   CASE 
                        WHEN T2.name LIKE '%char' THEN CONCAT(T2.name, '(', CASE P.max_length WHEN -1 THEN 'max' ELSE CAST(P.max_length AS varchar(4)) END, ')')
                        WHEN T2.name = 'time' OR T2.name ='datetime2' THEN CONCAT(T2.name, '(', P.scale, ')')
                        WHEN T2.name = 'numeric' THEN CONCAT(T2.name, '(', P.precision, ',', P.scale, ')')
                        ELSE T2.name
                    END
                    )
                FROM
                    sys.parameters AS P
                    INNER JOIN
                        sys.types AS T2
                        ON T2.user_type_id = P.user_type_id
                WHERE
                    P.is_output = CAST(0 AS bit)
                    AND P.object_id = O.object_id
                ORDER BY
                    P.parameter_id
                FOR XML PATH('')
            )
        ,   1
        ,   1
        ,   ''
        )
    ) AS ParameterList
FROM
    sys.schemas AS S
    INNER JOIN
        sys.objects AS O
        ON O.schema_id = S.schema_id
WHERE
    O.type IN ('FT','IF', 'TF');

How you use this is up to you. I plan on hooking it into the Biml Query Table Builder to simulate tables for all my TVFs.

Monday, January 22, 2018

Staging Metadata Framework for the Unknown


That's a terrible title but it's the best I got. A client would like to report out of ServiceNow some metrics not readily available in the PowerBI App. The first time I connected, I got a quick look at the Incidents and some of the data we'd be interested in but I have no idea how that data changes over time. When you first open a ticket, maybe it doesn't have a resolved date or a caused by field populated. And since this is all web service stuff and you can customize it, I knew I was looking at lots of iterations to try and keep up with all the data coming back from the service. How can I handle this and keep sane? Those were my two goals. I thought it'd be fun to share how I solved the problem using features in SQL Server 2016.

To begin, I created a database called RTMA to perform my real-time metrics analysis.

CREATE DATABASE RTMA;

With that done, I created a schema within my database.

USE RTMA;
GO
CREATE SCHEMA ServiceNow AUTHORIZATION dbo;

Next, we need a table to hold our discovery metadata.

CREATE TABLE 
    ServiceNow.ColumnSizing
(
    EntityName varchar(30) NOT NULL
,   CollectionName varchar(30) NOT NULL
,   ColumnName varchar(30) NOT NULL
,   ColumnLength int NOT NULL
,   InsertDate datetime NOT NULL
    CONSTRAINT DF_ServiceNow_ColumnSizing_InsertDate DEFAULT (GETDATE())
);

CREATE CLUSTERED COLUMNSTORE INDEX
    CCI_ServiceNow_ColumnSizing
    ON ServiceNow.ColumnSizing;
The idea for this metadata table is that we'll just keep adding more information in for the entities we survey. All that matters is the largest length for a given combination of Entity, Collection, and Column.

In the following demo, we'll add 2 rows into our table. The first batch will be our initial sizing and then "something" happens and we discover the size has increased.

INSERT INTO
    ServiceNow.ColumnSizing
(
    EntityName
,   CollectionName
,   ColumnName
,   ColumnLength
,   InsertDate
)
VALUES
    ('DoesNotExist', 'records', 'ABC', 10, current_timestamp)
,   ('DoesNotExist', 'records', 'BCD', 30, current_timestamp);

Create a base table for our DoesNotExist. What columns will be available? I know I'll want my InsertDate and that's the only thing I'll guarantee to begin. And that's ok because we're going to get clever.

DECLARE @Entity nvarchar(30) = N'DoesNotExist'
,   @Template nvarchar(max) = N'DROP TABLE IF EXISTS ServiceNow.Stage<Entity/>;
    CREATE TABLE
        ServiceNow.Stage<Entity/>
    (
    <Columns/>
    InsertDate datetime CONSTRAINT DF_ServiceNow_Stage<Entity/>_InsertDate DEFAULT (GETDATE())
    );
    CREATE CLUSTERED COLUMNSTORE INDEX
        CCI_ServiceNow_Stage<Entity/>
    ON
        ServiceNow.Stage<Entity/>;'
,   @Columns nvarchar(max) = N'';

DECLARE @Query nvarchar(max) = REPLACE(REPLACE(@Template, '<Entity/>', @Entity), '<Columns/>', @Columns);
EXECUTE sys.sp_executesql @Query, N'';

We now have a table with one column so let's look at using our synthetic metadata (ColumnSizing) to augment it. The important thing to understand in the next block of code is that we'll use FOR XML PATH('') to concatenate rows together and the CONCAT function to concatenate values together.

See more here for the XML PATH "trick"
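A minimal sketch of the trick in isolation (contrived values, not our metadata): because the CONCAT expression has no column name, FOR XML PATH('') emits each row as bare text and the rows collapse into one delimited string; STUFF then knocks out the leading delimiter.

```sql
SELECT
    STUFF
    (
        (
            SELECT CONCAT(',', V.name)
            FROM (VALUES ('ABC'), ('BCD')) AS V(name)
            FOR XML PATH('')
        )
    ,   1
    ,   1
    ,   ''
    ) AS ConcatenatedNames;
-- Returns ABC,BCD
```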

If we're going to define columns for a table, it follows that we need to know what table needs what columns and what size those columns should be. So, let the following block be that definition.

DECLARE @Entity varchar(30) = 'DoesNotExist';

SELECT
    CS.EntityName
,   CS.CollectionName
,   CS.ColumnName
,   MAX(CS.ColumnLength) AS ColumnLength
FROM
    ServiceNow.ColumnSizing AS CS
WHERE
    CS.ColumnLength > 0
    AND CS.ColumnLength =  
    (
        SELECT
            MAX(CSI.ColumnLength) AS ColumnLength
        FROM
            ServiceNow.ColumnSizing AS CSI
        WHERE
            CSI.EntityName = CS.EntityName
            AND CSI.ColumnName = CS.ColumnName
    )
    AND CS.EntityName = @Entity
GROUP BY
    CS.EntityName
,   CS.CollectionName
,   CS.ColumnName;

We run the above query and that looks like what we want so into the FOR XML machine it goes.

DECLARE @Entity varchar(30) = 'DoesNotExist'
,   @ColumnSizeDeclaration varchar(max);

;WITH BASE_DATA AS
(
    -- Define the base data we'll use to drive creation
    SELECT
        CS.EntityName
    ,   CS.CollectionName
    ,   CS.ColumnName
    ,   MAX(CS.ColumnLength) AS ColumnLength
    FROM
        ServiceNow.ColumnSizing AS CS
    WHERE
        CS.ColumnLength > 0
        AND CS.ColumnLength =  
        (
            SELECT
                MAX(CSI.ColumnLength) AS ColumnLength
            FROM
                ServiceNow.ColumnSizing AS CSI
            WHERE
                CSI.EntityName = CS.EntityName
                AND CSI.ColumnName = CS.ColumnName
        )
        AND CS.EntityName = @Entity
    GROUP BY
        CS.EntityName
    ,   CS.CollectionName
    ,   CS.ColumnName
)
SELECT DISTINCT
    BD.EntityName
,   (
        SELECT
            CONCAT
            (
                ''
            ,   BDI.ColumnName
            ,   ' varchar('
            ,   BDI.ColumnLength
            ,   '),'
            ) 
        FROM
            BASE_DATA AS BDI
        WHERE
            BDI.EntityName = BD.EntityName
            AND BDI.CollectionName = BD.CollectionName
        FOR XML PATH('')
) AS ColumnSizeDeclaration
FROM
    BASE_DATA AS BD;

That looks like a lot, but it's not. Run it and you'll see we get one row with two elements: "DoesNotExist" and "ABC varchar(10),BCD varchar(30),". That trailing comma is going to be a problem; that's generally why you see people either use a leading delimiter and STUFF to remove it, or, in the case of a trailing delimiter, trim it off with LEFT and LEN - 1.
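For completeness, both removal tricks look like this (contrived strings, not our generated output):

```sql
DECLARE @trailing varchar(50) = 'ABC varchar(10),BCD varchar(30),'
,   @leading varchar(50) = ',ABC varchar(10),BCD varchar(30)';

-- Trailing delimiter: LEFT with LEN - 1
SELECT LEFT(@trailing, LEN(@trailing) - 1) AS TrailingRemoved
-- Leading delimiter: STUFF the first character into oblivion
,   STUFF(@leading, 1, 1, '') AS LeadingRemoved;
```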

But we're clever and don't need such tricks. If you look at the declaration for @Template, we assume there will *always* be a final column of InsertDate, which doesn't have a comma preceding it. Always define the rules to favor yourself. ;)

Instead of the static table declaration we used, let's marry our common table expression, CTE, with the table template.

DECLARE @Entity nvarchar(30) = N'DoesNotExist'
,   @Template nvarchar(max) = N'DROP TABLE IF EXISTS ServiceNow.Stage<Entity/>;
    CREATE TABLE
        ServiceNow.Stage<Entity/>
    (
    <Columns/>
    InsertDate datetime CONSTRAINT DF_ServiceNow_Stage<Entity/>_InsertDate DEFAULT (GETDATE())
    );
    CREATE CLUSTERED COLUMNSTORE INDEX
        CCI_ServiceNow_Stage<Entity/>
    ON
        ServiceNow.Stage<Entity/>;'
,   @Columns nvarchar(max) = N'';

-- CTE logic patched in here

;WITH BASE_DATA AS
(
    -- Define the base data we'll use to drive creation
    SELECT
        CS.EntityName
    ,   CS.CollectionName
    ,   CS.ColumnName
    ,   MAX(CS.ColumnLength) AS ColumnLength
    FROM
        ServiceNow.ColumnSizing AS CS
    WHERE
        CS.ColumnLength > 0
        AND CS.ColumnLength =  
        (
            SELECT
                MAX(CSI.ColumnLength) AS ColumnLength
            FROM
                ServiceNow.ColumnSizing AS CSI
            WHERE
                CSI.EntityName = CS.EntityName
                AND CSI.ColumnName = CS.ColumnName
        )
        AND CS.EntityName = @Entity
    GROUP BY
        CS.EntityName
    ,   CS.CollectionName
    ,   CS.ColumnName
)
SELECT DISTINCT
    @Columns = (
        SELECT
            CONCAT
            (
                ''
            ,   BDI.ColumnName
            ,   ' varchar('
            ,   BDI.ColumnLength
            ,   '),'
            ) 
        FROM
            BASE_DATA AS BDI
        WHERE
            BDI.EntityName = BD.EntityName
            AND BDI.CollectionName = BD.CollectionName
        FOR XML PATH('')
) 
FROM
    BASE_DATA AS BD;

DECLARE @Query nvarchar(max) = REPLACE(REPLACE(@Template, '<Entity/>', @Entity), '<Columns/>', @Columns);
EXECUTE sys.sp_executesql @Query, N'';

Bam, look at it now. We took advantage of the new DROP IF EXISTS (DIE) syntax to drop our table and we've redeclared it, nice as can be. Don't take my word for it though, ask the system tables what they see.

SELECT
    S.name AS SchemaName
,   T.name AS TableName
,   C.name AS ColumnName
,   T2.name AS DataTypeName
,   C.max_length
FROM
    sys.schemas AS S
    INNER JOIN
        sys.tables AS T
        ON T.schema_id = S.schema_id
    INNER JOIN
        sys.columns AS C
        ON C.object_id = T.object_id
    INNER JOIN
        sys.types AS T2
        ON T2.user_type_id = C.user_type_id
WHERE
    S.name = 'ServiceNow'
    AND T.name = 'StageDoesNotExist'
ORDER BY
    S.name
,   T.name
,   C.column_id;
Excellent, we now turn on the actual data storage process and voila, we get a value stored into our table. Simulate it with the following.
INSERT INTO ServiceNow.StageDoesNotExist
(ABC, BCD) VALUES ('Important', 'Very, very important');
Truly, all is well and good.

*time passes*

Then, this happens

WAITFOR DELAY ('00:00:03');

INSERT INTO
    ServiceNow.ColumnSizing
(
    EntityName
,   CollectionName
,   ColumnName
,   ColumnLength
,   InsertDate
)
VALUES
    ('DoesNotExist', 'records', 'BCD', 34, current_timestamp);
Followed by
INSERT INTO ServiceNow.StageDoesNotExist
(ABC, BCD) VALUES ('Important','Very important, yet ephemeral data');
To quote Dr. Beckett: Oh boy

Friday, December 29, 2017

Python Azure Function requestor's IP address


I'm working on an anonymous-level Azure Function in Python and couldn't find where they stored the IP address of the caller, if applicable. It's in the request headers, which makes sense, but I didn't figure that out until I'd spent far too much time looking in all the wrong places. A minimal reproduction looks something like

import os
iptag = "REQ_HEADERS_X-FORWARDED-FOR"
ip = "Tag name:{} Tag value:{}".format(iptag, os.environ[iptag])
print(ip)

Now, something to note is that it will return not only the IP address but also the port the call came in through. Thus, I see a value of 192.168.1.200:33496 instead of just the IPv4 value.
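If you only want the address portion, splitting on the final colon is enough for IPv4 values like the one above (IPv6 addresses contain colons themselves and would need more care). A small sketch:

```python
def strip_port(forwarded_for):
    """Return the address portion of an X-Forwarded-For style value
    that may carry a trailing :port, e.g. '192.168.1.200:33496'."""
    address, separator, _port = forwarded_for.rpartition(":")
    # rpartition returns ('', '', value) when no colon is present
    return address if separator else forwarded_for

print(strip_port("192.168.1.200:33496"))  # 192.168.1.200
print(strip_port("192.168.1.200"))        # 192.168.1.200
```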

Knowing where to look, I can see that the heavy lifting had already been done by the most excellent HTTPHelper but as a wise man once said: knowing is half the battle.

import os
from AzureHTTPHelper import HTTPHelper
http = HTTPHelper()
#Notice the lower casing of properties here and the trimming of the type (REQ_HEADERS)
iptag = "x-forwarded-for"
ip = "Tag name:{} Tag value:{}".format(iptag, http.headers[iptag])
print(ip)

Yo Joe!


Thursday, November 9, 2017

What's my transaction isolation level


That's an easy question to answer - StackOverflow has a fine answer.

But, what if I use sp_executesql to run some dynamic sql - does it default the connection isolation level? If I change isolation level within the query, does it propagate back to the invoker? That's a great question, William. Let's find out.

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

SELECT CASE transaction_isolation_level 
WHEN 0 THEN 'Unspecified' 
WHEN 1 THEN 'ReadUncommitted' 
WHEN 2 THEN 'ReadCommitted' 
WHEN 3 THEN 'Repeatable' 
WHEN 4 THEN 'Serializable' 
WHEN 5 THEN 'Snapshot' END AS TRANSACTION_ISOLATION_LEVEL 
FROM sys.dm_exec_sessions 
where session_id = @@SPID;

DECLARE
    @query nvarchar(max) = N'-- Identify iso level
SELECT CASE transaction_isolation_level 
WHEN 0 THEN ''Unspecified'' 
WHEN 1 THEN ''ReadUncommitted'' 
WHEN 2 THEN ''ReadCommitted'' 
WHEN 3 THEN ''Repeatable'' 
WHEN 4 THEN ''Serializable'' 
WHEN 5 THEN ''Snapshot'' END AS TRANSACTION_ISOLATION_LEVEL 
FROM sys.dm_exec_sessions 
where session_id = @@SPID;

SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

-- Test iso level
SELECT CASE transaction_isolation_level 
WHEN 0 THEN ''Unspecified'' 
WHEN 1 THEN ''ReadUncommitted'' 
WHEN 2 THEN ''ReadCommitted'' 
WHEN 3 THEN ''Repeatable'' 
WHEN 4 THEN ''Serializable'' 
WHEN 5 THEN ''Snapshot'' END AS TRANSACTION_ISOLATION_LEVEL 
FROM sys.dm_exec_sessions 
where session_id = @@SPID'

EXECUTE sys.sp_executesql @query, N'';

SELECT CASE transaction_isolation_level 
WHEN 0 THEN 'Unspecified' 
WHEN 1 THEN 'ReadUncommitted' 
WHEN 2 THEN 'ReadCommitted' 
WHEN 3 THEN 'Repeatable' 
WHEN 4 THEN 'Serializable' 
WHEN 5 THEN 'Snapshot' END AS TRANSACTION_ISOLATION_LEVEL 
FROM sys.dm_exec_sessions 
where session_id = @@SPID;

I begin my session in read uncommitted aka "nolock". I then run dynamic sql which identifies my isolation level, still read uncommitted, change it to a different level, confirmed at read committed, and then exit and check my final state - back to read uncommitted.

Finally, thanks to Andrew Kelly (b|t) for answering the #sqlhelp call.

Thursday, October 12, 2017

Temporal table maker


This post is another in the continuing theme of "making things consistent." We were voluntold to help another team get their staging environment set up. Piece of cake, SQL Compare made it trivial to snap the tables over.

Oh, we don't want these tables in Custom schema, we want them in dbo. No problem, SQL Compare again and change owner mappings and bam, out come all the tables.

Oh, can we get this in near real-time? Say every 15 minutes. ... Transaction replication to the rescue!

Oh, we don't know what data we need yet so could you keep it all, forever? ... Temporal tables to the rescue?

Yes, temporal tables is perfect. But don't put the history table in the same schema as the table, put in this one. And put all of that in its own file group.

And that's what this script does. It

  • generates a table definition for an existing table, copying it into a new schema while also adding in the start/stop columns for temporal tables.
  • creates the clustered columnstore index command
  • creates a non-clustered index against the start/stop columns and the natural key(s)
  • Alters the original table to add in our start/stop columns with defaults and the period
  • Alters the original table to turn on versioning

    How does it do all that? It finds all the tables that exist in our source schema and don't yet exist in the target schema. I build out a SELECT * query against each table and feed it into sys.dm_exec_describe_first_result_set to identify the columns. And since sys.dm_exec_describe_first_result_set so nicely brings back the data type with length, precision, and scale specified, we might as well use that as well. By specifying a value of 1 for the browse_information_mode parameter, we get the key columns defined for us, which is handy when we want to make our non-clustered index.

    DECLARE
        @query nvarchar(4000)
    ,   @targetSchema sysname = 'dbo_HISTORY'
    ,   @tableName sysname
    ,   @targetFileGroup sysname = 'History'
    
    DECLARE
        CSR CURSOR
    FAST_FORWARD
    FOR
    SELECT ALL
        CONCAT(
        'SELECT * FROM '
        ,   s.name
        ,   '.'
        ,   t.name) 
    ,   t.name
    FROM 
        sys.schemas AS S
        INNER JOIN sys.tables AS T
        ON T.schema_id = S.schema_id
    WHERE
        1=1
        AND S.name = 'dbo'
        AND T.name NOT IN
        (SELECT TI.name FROM sys.schemas AS SI INNER JOIN sys.tables AS TI ON TI.schema_id = SI.schema_id WHERE SI.name = @targetSchema)
    
    ;
    OPEN CSR;
    FETCH NEXT FROM CSR INTO @query, @tableName;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- do something
        SELECT
            CONCAT
        (
            'CREATE TABLE '
        ,   @targetSchema
        ,   '.'
        ,   @tableName
        ,   '('
        ,   STUFF
            (
                (
                SELECT
                    CONCAT
                    (
                        ','
                    ,   DEDFRS.name
                    ,   ' '
                    ,   DEDFRS.system_type_name
                    ,   ' '
                    ,   CASE DEDFRS.is_nullable
                        WHEN 1 THEN ''
                        ELSE 'NOT '
                        END
                    ,   'NULL'
                    )
                FROM
                    sys.dm_exec_describe_first_result_set(@query, N'', 1) AS DEDFRS
                ORDER BY
                    DEDFRS.column_ordinal
                FOR XML PATH('')
                )
            ,   1
            ,   1
            ,   ''
            )
            ,   ', SysStartTime datetime2(7) NOT NULL'
            ,   ', SysEndTime datetime2(7) NOT NULL'
            ,   ')'
            ,   ' ON '
            ,   @targetFileGroup
            ,   ';'
            ,   CHAR(13)
            ,   'CREATE CLUSTERED COLUMNSTORE INDEX CCI_'
            ,   @targetSchema
            ,   '_'
            ,   @tableName
            ,   ' ON '
            ,   @targetSchema
            ,   '.'
            ,   @tableName
            ,   ' ON '
            ,   @targetFileGroup
            ,   ';'
            ,   CHAR(13)
            ,   'CREATE NONCLUSTERED INDEX IX_'
            ,   @targetSchema
            ,   '_'
            ,   @tableName
            ,   '_PERIOD_COLUMNS '
            ,   ' ON '
            ,   @targetSchema
            ,   '.'
            ,   @tableName
    
            ,   '('
            ,   'SysEndTime'
            ,   ',SysStartTime'
            ,   (
                    SELECT
                        CONCAT
                        (
                            ','
                        ,   DEDFRS.name
                        )
                    FROM
                        sys.dm_exec_describe_first_result_set(@query, N'', 1) AS DEDFRS
                    WHERE
                        DEDFRS.is_part_of_unique_key = 1
                    ORDER BY
                        DEDFRS.column_ordinal
                    FOR XML PATH('')
                    )
            ,   ')'
            ,   ' ON '
            ,   @targetFileGroup
            ,   ';'
            ,   CHAR(13)
            ,   'ALTER TABLE '
            ,   'dbo'
            ,   '.'
            ,   @tableName
            ,   ' ADD '
            ,   'SysStartTime datetime2(7) GENERATED ALWAYS AS ROW START HIDDEN'
            ,   ' CONSTRAINT DF_'
            ,   'dbo_'
            ,   @tableName
            ,   '_SysStartTime DEFAULT SYSUTCDATETIME()'
            ,   ', SysEndTime datetime2(7) GENERATED ALWAYS AS ROW END HIDDEN'
            ,   ' CONSTRAINT DF_'
            ,   'dbo_'
            ,   @tableName
            ,   '_SysEndTime DEFAULT DATETIME2FROMPARTS(9999, 12, 31, 23,59, 59,9999999,7)'
            ,   ', PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime);'
            ,   CHAR(13)
            ,   'ALTER TABLE '
            ,   'dbo'
            ,   '.'
            ,   @tableName
            ,   ' SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = '
            ,   @targetSchema
            ,   '.'
            ,   @tableName
            ,   '));'
    
        )
    
    FETCH NEXT FROM CSR INTO @query, @tableName;
    END
    CLOSE CSR;
    DEALLOCATE CSR;

    Lessons learned

    The examples I cobbled together from MSDN were great, until they weren't. Be wary of anyone who doesn't specify lengths: one example used datetime2 for the start/stop columns, the other specified datetime2(0). The default precision for datetime2 is 7, which is very much not 0. Those data type differences were incompatible between the temporal table and its history table.

    Cleaning up from that mess was ugly. I couldn't drop the start/stop columns until I dropped the PERIOD. One doesn't simply drop a PERIOD though; one has to DROP PERIOD FOR SYSTEM_TIME.

    I prefer to use the *FromParts methods where I can, so that's in my default instead of casting strings. Out, ambiguity of internationalization!

    This doesn't account for tables with bad names, or potentially without primary/unique keys defined. My domain was clean, so beware of treating this as a general purpose temporal table maker.

    Improvements

    How can you make this better? My hard-coded dbo should have been abstracted out to a @sourceSchema variable. I should have used QUOTENAME for all my entity names. I could have either stuffed all those commands into a table or invoked them directly with a sys.sp_executesql call. I should have abused CONCAT more. Wait, that's done. That's very well done.

    Finally, you are responsible for the results of this script. Don't run it anywhere without evaluating and understanding the consequences.

Thursday, October 5, 2017

    Broken View Finder


    Shh, shhhhhh, we're being very very quiet, we're hunting broken views. Recently, we were asked to migrate some code changes and after doing so, the requesting team told us we had broken all of their views, but they couldn't tell us what was broken, just that everything was. After a quick rollback to snapshot, thank you Red Gate SQL Compare, I thought it'd be enlightening to see whether anything was broken before our code had been deployed.

    You'll never guess what we discovered </clickbait>

    How can you tell a view is broken

    The easiest way is SELECT TOP 1 * FROM dbo.MyView; but then you need to figure out all of your views.

    That's easy enough, SELECT * FROM sys.schemas AS S INNER JOIN sys.views AS V ON V.schema_id = S.schema_id;

    But you know, there's something built into SQL Server that will actually test your views: sys.sp_refreshview. That's much cleaner than running sys.sp_executesql with our SELECT TOP 1s.

    -- This script identifies broken views
    -- and at least the first error with it
    SET NOCOUNT ON;
    DECLARE
        CSR CURSOR
    FAST_FORWARD
    FOR
    SELECT
        CONCAT(QUOTENAME(S.name), '.', QUOTENAME(V.name)) AS vname
    FROM
        sys.views AS V
        INNER JOIN
            sys.schemas AS S
            ON S.schema_id = V.schema_id;
    
    DECLARE
        @viewname nvarchar(776);
    DECLARE
        @BROKENVIEWS table
    (
        viewname nvarchar(776)
    ,   ErrorMessage nvarchar(4000)
    ,   ErrorLine int
    );
    
    OPEN
        CSR;
    FETCH
        NEXT FROM CSR INTO @viewname;
    
    WHILE
        @@FETCH_STATUS = 0
    BEGIN
    
        BEGIN TRY
            EXECUTE sys.sp_refreshview
                @viewname;
        END TRY
        BEGIN CATCH
            INSERT INTO @BROKENVIEWS(viewname, ErrorMessage, ErrorLine)
            VALUES
            (
                @viewname
            ,   ERROR_MESSAGE()
            ,   ERROR_LINE()
            );
            
        END CATCH
    
        FETCH
            NEXT FROM CSR INTO @viewname;
    END
    
    CLOSE CSR;
    DEALLOCATE CSR;
    
    SELECT
        B.*
    FROM
        @BROKENVIEWS AS B
    

    Can you think of ways to improve this? Either way, happy hunting!