World of Whatever

A blog about SQL Server, SSIS, C# and whatever else I happen to be dealing with in my professional life.


Monday, October 26, 2015

Biml - Script Component Source

What is the Biml to create an SSIS Script Component source? This is a very simplistic demo, but you'll see the magic is distilled to two sections. The first is the output buffer definition (the OutputBuffer element below). In this case, I name the buffer DemoOutput and provide a Columns collection with a single column, SourceColumn.

The second bit of magic is in the CreateNewOutputRows method. There I use the buffer defined above to inject a single row with a value of "Demo". Nothing fancy; everything is static from a Biml perspective, but I needed to know the syntax before I could try something a little more advanced.

Biml Demo Script Component Source

Using this is a simple matter of adding a new Biml file to an existing SSIS project and pasting in the following code. What results from right-clicking on the file and selecting Generate SSIS Packages is a single SSIS package, BasicScriptComponentSource, with a Data Flow Task, "DFT Demo Source Component".

The data flow "DFT Demo Source Component" consists of our new Script Component, SCR Demo Source, and a Derived Column, DER Placeholder, so you can attach a data viewer if need be.

Use the following Biml to generate your package and feel free to tell me in the comments how you adapted it to solve a "real" problem.

<Biml xmlns="http://schemas.varigence.com/biml.xsd">
    <ScriptProjects>
        <ScriptComponentProject Name="SC_Demo">
            <AssemblyReferences>
                <AssemblyReference AssemblyPath="Microsoft.SqlServer.DTSPipelineWrap" />
                <AssemblyReference AssemblyPath="Microsoft.SqlServer.DTSRuntimeWrap" />
                <AssemblyReference AssemblyPath="Microsoft.SqlServer.PipelineHost" />
                <AssemblyReference AssemblyPath="Microsoft.SqlServer.TxScript" />
                <AssemblyReference AssemblyPath="System.dll" />
                <AssemblyReference AssemblyPath="System.AddIn.dll" />
                <AssemblyReference AssemblyPath="System.Data.dll" />
                <AssemblyReference AssemblyPath="System.Xml.dll" />
            </AssemblyReferences>
            <OutputBuffers>
                <!--    
                Define what your buffer is called and what it looks like
                Must set IsSynchronous as false. Otherwise it is a transformation
                (one row enters, one row leaves) and not a source.
                -->
                <OutputBuffer Name="DemoOutput" IsSynchronous="false">
                    <Columns>
                        <Column Name="SourceColumn" DataType="String" Length="50" /> 
                    </Columns>                    
                </OutputBuffer>                                 
            </OutputBuffers>
            <Files>
                <File Path="Properties\AssemblyInfo.cs">
using System.Reflection;
using System.Runtime.CompilerServices;
[assembly: AssemblyTitle("SC_Demo")]
[assembly: AssemblyDescription("Demonstrate Script Component as source")]
[assembly: AssemblyConfiguration("")]
[assembly: AssemblyCompany("billinkc")]
[assembly: AssemblyProduct("SC_Demo")]
[assembly: AssemblyCopyright("Copyright @ 2015")]
[assembly: AssemblyTrademark("")]
[assembly: AssemblyCulture("")]
[assembly: AssemblyVersion("1.0.*")]
</File>
<File Path="main.cs">
<![CDATA[
using System;
using System.Data;
using Microsoft.SqlServer.Dts.Pipeline.Wrapper;
using Microsoft.SqlServer.Dts.Runtime.Wrapper;

    /// <summary>
    /// Demonstrate how to generate a Script Component Source in SSIS
    /// </summary>
    [Microsoft.SqlServer.Dts.Pipeline.SSISScriptComponentEntryPointAttribute]
    public class ScriptMain : UserComponent
    {

        public override void CreateNewOutputRows()
        {
            DemoOutputBuffer.AddRow();
            DemoOutputBuffer.SourceColumn = "Demo";
        }
    }
]]>
                </File>
            </Files>
        </ScriptComponentProject>
    </ScriptProjects>

    <Packages>
        <Package Name="BasicScriptComponentSource" ConstraintMode="Linear">
            <Tasks>
                <Dataflow Name="DFT Demo Source Component">
                    <Transformations>
                        <ScriptComponentSource Name="SCR Demo Source">
                            <ScriptComponentProjectReference
                                ScriptComponentProjectName="SC_Demo">
                                
                            </ScriptComponentProjectReference>
                        </ScriptComponentSource>
                        <DerivedColumns Name="DER Placeholder" />
                    </Transformations>
                </Dataflow>
            </Tasks>
        </Package>
    </Packages>    
</Biml>

Tuesday, September 22, 2015

SSISDB Delete all packages and environments

SSISDB tear down script

For my Summit 2015 presentation, 2014 SSIS Project Deployment Model: Deployment and Maintenance, I needed to revise my SSISDB tear down script. When I first built it, it removed all the projects and then removed all the folders, which was great. But as I've noted elsewhere, a folder can also contain environments, and those too need to be accounted for; otherwise, the catalog.delete_folder operation will fail because the folder is not empty.

Running the following code will remove everything in your SSISDB. This is the nuclear option so be certain you really want to clean house. You can uncomment the WHERE clause and selectively remove folders for a tactical nuclear option.

How it works is simple: I query catalog.folders to get a list of folders, then look in catalog.projects to find all the projects contained within each folder and delete those. I then repeat the process, looking in catalog.environments to identify and remove all the SSIS environments.

USE [SSISDB]
GO

DECLARE
    @folder_name nvarchar(128)
,   @project_name nvarchar(128)
,   @environment_name nvarchar(128);

DECLARE Csr CURSOR
READ_ONLY FOR 
SELECT
    CF.name AS folder_name
FROM
    catalog.folders AS CF
--WHERE
--    CF.name IN ('');
;

OPEN Csr;
FETCH NEXT FROM Csr INTO
    @folder_name;
WHILE (@@fetch_status <> -1)
BEGIN
    IF (@@fetch_status <> -2)
    BEGIN

        -------------------------------------------------------------
        -- Drop any projects
        -------------------------------------------------------------
        DECLARE FCsr CURSOR
        READ_ONLY FOR 
        SELECT
            CP.name AS project_name 
        FROM
            catalog.projects AS CP
            INNER JOIN
                catalog.folders AS CF
                ON CF.folder_id = CP.folder_id
        WHERE
            CF.name = @folder_name;

        OPEN FCsr;
        FETCH NEXT FROM FCsr INTO
            @project_name;
        WHILE(@@FETCH_STATUS = 0)
        BEGIN
            EXECUTE catalog.delete_project
                @folder_name
            ,   @project_name;

            FETCH NEXT FROM FCsr INTO
                @project_name;
        END
        CLOSE FCsr;
        DEALLOCATE FCsr;

        -------------------------------------------------------------
        -- Drop any environments
        -------------------------------------------------------------
        DECLARE ECsr CURSOR
        READ_ONLY FOR 
        SELECT
            E.name AS environment_name
        FROM
            catalog.environments AS E 
            INNER JOIN
                catalog.folders AS CF
                ON CF.folder_id = E.folder_id
        WHERE
            CF.name = @folder_name;

        OPEN ECsr;
        FETCH NEXT FROM ECsr INTO
            @environment_name;
        WHILE(@@FETCH_STATUS = 0)
        BEGIN
            EXECUTE catalog.delete_environment             
                @folder_name
            ,   @environment_name;

            FETCH NEXT FROM ECsr INTO
                @environment_name;
        END
        CLOSE ECsr;
        DEALLOCATE ECsr;

        -------------------------------------------------------------
        -- Finally, remove the folder
        -------------------------------------------------------------
        EXECUTE [catalog].[delete_folder]
            @folder_name;

    END
    FETCH NEXT FROM Csr INTO
        @folder_name;

END

CLOSE Csr;
DEALLOCATE Csr;

Caveat

The one thing I haven't investigated yet is cross-folder dependencies. Imagine folders Configurations and Projects. Configurations has an environment called Settings. Projects has a project called AWShoppingCart, which has a reference to the environment Settings. I expect I will be able to delete the Configurations folder and the environment just fine, leaving the project AWShoppingCart broken until I reconfigure it. But the environment delete operation could just as easily fail if the environment's reference count is non-zero.
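If you want to check for such cross-folder dependencies before pulling the trigger, a sketch like the following (not part of the tear down script itself) against the catalog.environment_references view would surface every project-to-environment reference:

```sql
USE [SSISDB];

-- List every project that references an environment, along with the
-- folder the referencing project lives in. A NULL environment_folder_name
-- indicates a relative reference (environment in the project's own folder).
SELECT
    CF.name AS project_folder_name
,   CP.name AS project_name
,   ER.environment_folder_name
,   ER.environment_name
,   ER.reference_type
FROM
    catalog.environment_references AS ER
    INNER JOIN
        catalog.projects AS CP
        ON CP.project_id = ER.project_id
    INNER JOIN
        catalog.folders AS CF
        ON CF.folder_id = CP.folder_id;
```

Any row where environment_folder_name differs from project_folder_name is a cross-folder dependency that could complicate the delete order.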

Monday, September 21, 2015

Testing SQL Server Alerts

a.k.a. why the heck aren't my Alerts alerting?

I'm learning Power BI by building a sample dashboard for database operational analytics. One of the metrics we wanted to track was whether any SQL Alerts had fired in the reporting time frame.

Seems reasonable enough, so I defined a few alerts on my machine and did some dumb things that should have fired them. And they didn't. At first, I thought I must be looking in the wrong place, but I watched Profiler and saw the SSMS GUI was calling "EXECUTE msdb.dbo.sp_help_alert", which at its core uses msdb.dbo.sysalerts. All of that looked right, but by golly, the last_occurrence_date fields all showed zeros.

I took to Twitter asking what else I could do to invoke errors, and y'all had some great ideas:

  • https://twitter.com/crummel4/status/645764451724095489
  • https://twitter.com/SirFisch/status/645763600116617216
but the one that kept coming up was RAISERROR. There I can specify whatever severity I'd like, so here's my alert tester for severity 15.
DECLARE 
    @DBID int
,   @DBNAME nvarchar(128)
,   @severity int;

SELECT 
    @DBID = DB_ID()
,   @DBNAME = DB_NAME()
,   @severity = 15;

RAISERROR
(
    N'The current database ID is:%d, the database name is: %s.'
,   @severity -- Severity.
,   1 -- State.
,   @DBID -- First substitution argument.
,   @DBNAME
);
I run it and verify SSMS shows "The current database ID is:23, the database name is: tempdb." followed by "Msg 50000, Level 15, State 1, Line 29". And yet sp_help_alert still shows 0 for the occurrence date and time. What the heck? I raised an error of the appropriate severity and had SQL Alerts defined for that severity. It's peanut butter and chocolate; they go together. But this was more like mayonnaise and peanut butter.

A quick trip around the interblag raised some interesting possibilities: maybe I needed to restart SQL Agent, maybe I was tracking the wrong thing, plus some other links I have since closed. None of these bore fruit.

Resolution

I scaled up my RAISERROR call to enumerate all the severity levels just to get something to work, and it wasn't until I hit 19 that I found my mistake: "Error severity levels greater than 18 can only be specified by members of the sysadmin role, using the WITH LOG option."

Once I tacked on the "WITH LOG" option, my alerts began firing.

DECLARE 
    @DBID int
,   @DBNAME nvarchar(128)
,   @severity int;

SELECT 
    @DBID = DB_ID()
,   @DBNAME = DB_NAME();

DECLARE Csr CURSOR
FOR
-- Sev 20 and above breaks the current connection
SELECT TOP 20
    CAST(ROW_NUMBER() OVER(ORDER BY (SELECT NULL)) AS int) AS Severity
FROM
    sys.objects AS O;

OPEN Csr;

FETCH NEXT FROM Csr
INTO @severity;

WHILE(@@FETCH_STATUS = 0)
BEGIN

    RAISERROR
    (
        N'The current database ID is:%d, the database name is: %s.'
    ,   @severity -- Severity.
    ,   1 -- State.
    ,   @DBID -- First substitution argument.
    ,   @DBNAME
    )
    WITH LOG;

    FETCH NEXT FROM Csr
    INTO @severity;
END

CLOSE Csr;
DEALLOCATE Csr;

Resources

Glenn Berry and Brent Ozar Unlimited both have great posts with some alerts you should have turned on for all of your servers.

Wednesday, August 5, 2015

Tracking bad queries aka Finally an Extended Event that is useful

Dessert first a.k.a. how useful would this be?

Imagine running a query and finding out all the bad queries that have been submitted. Like this:

The client restored a database but neglected to fix permissions for those first three. Look at that fourth one though! I only enabled this trace on the 3rd. Any takers on how many months that error has been swallowed by their application framework? Exactly. This is definitely something I'm adding into my utility belt. Read on for how to set up Extended Events to do just this.
We now resume your regularly scheduled blog post, already in progress.

Finally, an extended event that was useful.

I'm sure that's heresy among the pure DBAs, but I'm not a DBA; I'm just a developer, among many other things. At my last client, one of the developers was hell bent on building procedures that called procedures that built code to call more procedures: dynamic code to build statements to an untenable nesting level. Which was cool in theory, but unfortunately they had no way to debug it when it built bad queries. Which it did. Over and over again. Thousands per second. It could have been marketed as Errors as a Service. The real problem, though, was that it took them several hours to recreate the error scenario. What we needed was a way to log the statements across all those levels of nested stored procedures, and the values they sent.

Enter Kendra Little and this "magic" script. It's magic because it was the missing link for me finally "getting" XEvents. I remember asking at SQL Saturday #91 (Omaha) back in 2011 "where would I, as a developer, use extended events?" Four years later, I had a scenario where I needed it.

I had meant to blog about this in April when I first used it, but then shiny squirrels, and this was lost in the TODO list. Fast forward to today. At my current client, we've built a new website and the database to manage part of their business. Most of the website runs off stored procedures, but part of it is SQL embedded in the web pages. As much as the developer would like to just make the account the database owner, I'd rather not give up on security. I created a custom database role, added the service account to it, and granted it execute permissions on the schema. All is well and good except for those pesky direct queries we couldn't turn into procedures. When those queries hit, the "database is down." At least, that's the depth of the email I get. You know what would be handy here? That same bad code tracker!

Implementation

My implementation creates an extended event session that logs to an asynchronous file target (first, do no harm). Where my approach differs from Kendra's is that parsing all those files is dog slow, at least for the Errors as a Service scenario. So instead, I create a heap to hold the parsed bits I care about, as well as the raw XML from the extended event. That way, if I determine I want some other attribute extracted, I still have the original data available. Oh, and finally, I clean up after myself, which is important when you're a consultant. You might roll off an engagement, someone forgets to watch the drive, and suddenly you've filled a drive with log data.

You'll notice I create the table in tempdb. Basically, I don't care if I lose that table; it's just for my monitoring purposes, and in fact, putting it in tempdb assures me that at some point it'll get wiped and recreated. If I were blessed to be on a 2014+ platform, I'd look hard at whether a memory-optimized table might serve the same need, as long as it's declared with a durability of SCHEMA_ONLY. I don't think LOB types are supported there yet, but I suspect it's a matter of time before they are.
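For the curious, a SCHEMA_ONLY variant might look something like the following. This is an untested sketch, not part of the actual implementation, and it assumes SQL Server 2014 with a MEMORY_OPTIMIZED_DATA filegroup already added to the database. Note that the nvarchar(max) and xml columns have to go since, as mentioned, LOB types aren't supported in memory-optimized tables on that version:

```sql
-- Hypothetical memory-optimized version of the capture table.
-- SCHEMA_ONLY durability: the data vanishes on restart, which is
-- acceptable for throwaway monitoring data.
CREATE TABLE dbo.ExtendedEventErrorCaptureInMemory
(
    capture_id int IDENTITY(1, 1) NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 131072)
,   err_timestamp datetime2(7) NULL
,   err_severity bigint NULL
,   err_number bigint NULL
,   err_message nvarchar(512) NULL
,   sql_text nvarchar(4000) NULL    -- nvarchar(max) not allowed in 2014 in-memory tables
,   session_name sysname NOT NULL
,   username nvarchar(512) NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
```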

This idempotent script creates our table, dbo.ExtendedEventErrorCapture, if it does not exist. It then creates an extended event session, what_queries_are_failing, in which we capture any error above severity 10 and stream it out to files in F:\XEventSessions. If you don't have an F drive with that folder, you need to either update the script or talk to your storage vendor, as they will happily supply one. Finally, we turn on the trace/session.

USE tempdb;

--------------------------------------------------------------------------------
-- Create the table to store out the XE data
--------------------------------------------------------------------------------
IF NOT EXISTS
(
    SELECT
        *
    FROM
        sys.schemas AS S
        INNER JOIN 
            sys.tables AS T
        ON T.schema_id = S.schema_id
    WHERE
        T.name = 'ExtendedEventErrorCapture'
        AND S.name = 'dbo'
)
BEGIN
    CREATE TABLE 
        dbo.ExtendedEventErrorCapture
    (
        err_timestamp datetime2(7) NULL
    ,   err_severity bigint NULL
    ,   err_number bigint NULL
    ,   err_message nvarchar(512) NULL
    ,   sql_text nvarchar(MAX) NULL
    ,   event_data xml NULL
    ,   session_name sysname
    ,   username nvarchar(512)
    )
    -- This is only valid for a Developer/Enterprise edition license
    WITH (DATA_COMPRESSION = PAGE);
END

--------------------------------------------------------------------------------
-- Create the extended event to keep track of bad sql queries
--------------------------------------------------------------------------------
IF NOT EXISTS
(
    SELECT
        *
    FROM
        sys.server_event_sessions AS SES
    WHERE
        SES.name = N'what_queries_are_failing'
)
BEGIN
    --Create an extended event session
    CREATE EVENT SESSION
        what_queries_are_failing
    ON SERVER
    ADD EVENT sqlserver.error_reported
    (
        ACTION (sqlserver.sql_text, sqlserver.tsql_stack, sqlserver.database_id, sqlserver.username)
        WHERE ([severity] > 10)
    )
    ADD TARGET package0.asynchronous_file_target
    (set filename = 'F:\XEventSessions\what_queries_are_failing.xel' ,
        metadatafile = 'F:\XEventSessions\what_queries_are_failing.xem',
        max_file_size = 512,
        increment = 16,
        max_rollover_files = 5)
    WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);

END

--------------------------------------------------------------------------------
-- Turn on the extended event
--------------------------------------------------------------------------------
IF NOT EXISTS
(
    -- When a XE is active, then there is an entry
    -- in sys.dm_xe_sessions
    SELECT
        *
    FROM
        sys.dm_xe_sessions AS DXS
        INNER JOIN 
            sys.server_event_sessions AS SES
            ON SES.name = DXS.name
    WHERE
        SES.name = N'what_queries_are_failing'
)
BEGIN
    -- Start the session
    ALTER EVENT SESSION what_queries_are_failing
    ON SERVER STATE = START;
END

--------------------------------------------------------------------------------
-- Wait for errors
-- PROFIT!
--------------------------------------------------------------------------------

That's not so bad. Sure, it looks like a lot of code but you've got 1-2 lines of code to change (page compression and where to log to).

Where's the beef?

Once I think I've captured enough data, I turn off the session and parse the files into the table. As I am not a demon robot, I am grateful to Kendra for writing the XQuery that parses the meaningful data out of the trace files. I am a developer, but not that kind of developer.

USE tempdb;

--------------------------------------------------------------------------------
-- Turn off our extended event
--------------------------------------------------------------------------------
IF EXISTS
(
    -- When a XE is active, then there is an entry
    -- in sys.dm_xe_sessions
    SELECT
        *
    FROM
        sys.dm_xe_sessions AS DXS
        INNER JOIN 
            sys.server_event_sessions AS SES
            ON SES.name = DXS.name
    WHERE
        SES.name = N'what_queries_are_failing'
)
BEGIN
    -- Stop the session
    ALTER EVENT SESSION what_queries_are_failing
    ON SERVER STATE = STOP;
END

--------------------------------------------------------------------------------
-- Extract data from our XE
--------------------------------------------------------------------------------
;
WITH events_cte AS
(
    SELECT
        DATEADD(mi,
        DATEDIFF(mi, GETUTCDATE(), CURRENT_TIMESTAMP),
        xevents.event_data.value('(event/@timestamp)[1]', 'datetime2')) AS [err_timestamp],
        xevents.event_data.value('(event/data[@name="severity"]/value)[1]', 'bigint') AS [err_severity],
        xevents.event_data.value('(event/data[@name="error_number"]/value)[1]', 'bigint') AS [err_number],
        xevents.event_data.value('(event/data[@name="message"]/value)[1]', 'nvarchar(512)') AS [err_message],
        xevents.event_data.value('(event/action[@name="sql_text"]/value)[1]', 'nvarchar(max)') AS [sql_text],
        xevents.event_data,
        xevents.event_data.value('(event/action[@name="username"]/value)[1]', 'nvarchar(512)') AS [username],
        'what_queries_are_failing' AS session_name
    FROM sys.fn_xe_file_target_read_file
    (
        'F:\XEventSessions\what_queries_are_failing*.xel'
    ,   'F:\XEventSessions\what_queries_are_failing*.xem'
    ,   NULL
    ,   NULL) AS fxe
    CROSS APPLY (SELECT CAST(event_data as XML) AS event_data) AS xevents
)
INSERT INTO
    dbo.ExtendedEventErrorCapture
(
    err_timestamp
,   err_severity
,   err_number
,   err_message
,   sql_text
,   event_data
,   session_name
,   username
)
SELECT
    E.err_timestamp
,   E.err_severity
,   E.err_number
,   E.err_message
,   E.sql_text
,   E.event_data
,   E.session_name
,   E.username
FROM
    events_cte AS E;

Mischief managed

Clean up, clean up, everybody clean up! As our final step, since we're good citizens of this server, we remove our extended event session and, once that's done, leverage xp_cmdshell to delete the trace files.

USE tempdb;

--------------------------------------------------------------------------------
-- Get rid of our extended event
--------------------------------------------------------------------------------
IF EXISTS
(
    SELECT
        *
    FROM
        sys.server_event_sessions AS SES
    WHERE
        SES.name = N'what_queries_are_failing'
)
BEGIN
    -- Clean up your session from the server
    DROP EVENT SESSION what_queries_are_failing ON SERVER;
END

--------------------------------------------------------------------------------
-- Get rid of our extended event files only if the XE is turned off
-- or no longer exists
--------------------------------------------------------------------------------
IF NOT EXISTS
(
    SELECT
        1
    FROM
        sys.dm_xe_sessions AS DXS
        INNER JOIN 
            sys.server_event_sessions AS SES
            ON SES.name = DXS.name
    WHERE
        SES.name = N'what_queries_are_failing'

    UNION ALL
    SELECT
        1
    FROM
        sys.server_event_sessions AS SES
    WHERE
        SES.name = N'what_queries_are_failing'

)
BEGIN
    -- Assumes you've turned on xp_cmdshell
    EXECUTE sys.xp_cmdshell 'del F:\XEventSessions\what_queries_are_failing*.xe*';
END

Now what?

Query our table and find out what's going on. I use a query like the following to help me identify what errors we're getting, whether they're transient, etc.
SELECT
    EEEC.err_message
,   MIN(EEEC.err_timestamp) AS FirstSeen
,   MAX(EEEC.err_timestamp) AS LastSeen
,   COUNT_BIG(EEEC.err_timestamp) AS ErrorCount
FROM
    tempdb.dbo.ExtendedEventErrorCapture AS EEEC
GROUP BY
    EEEC.err_message
ORDER BY
    COUNT_BIG(EEEC.err_timestamp) DESC;

You've already seen the results, but I'm very happy to have found a use case for XE in my domain. In the next post, I'll wrap this implementation up with a nice little bow. Or a SQL Agent job. Definitely one of the two.
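If the SQL Agent route wins out, the skeleton might look something like this. Everything here (job name, step command, hourly schedule) is a hypothetical sketch using the standard msdb job procedures, not the finished job:

```sql
USE msdb;

-- Create a job that periodically runs the parse-and-insert script.
EXECUTE dbo.sp_add_job @job_name = N'Harvest XE error capture';

EXECUTE dbo.sp_add_jobstep
    @job_name = N'Harvest XE error capture'
,   @step_name = N'Parse XE files into table'
,   @subsystem = N'TSQL'
,   @database_name = N'tempdb'
,   @command = N'/* paste the stop/parse/insert script from above here */';

-- Run once an hour, every day.
EXECUTE dbo.sp_add_jobschedule
    @job_name = N'Harvest XE error capture'
,   @name = N'Hourly'
,   @freq_type = 4            -- daily
,   @freq_interval = 1
,   @freq_subday_type = 8     -- hours
,   @freq_subday_interval = 1;

-- Target the local server.
EXECUTE dbo.sp_add_jobserver @job_name = N'Harvest XE error capture';
```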

Monday, June 29, 2015

Biml - Unpivot transformation

I had cause today to use the Unpivot transformation in SSIS. My source database was still in 2000 compatibility mode (don't laugh), so I couldn't use the UNPIVOT operator and I was too lazy to remember the CASE approach. My client records whether a customer uses a particular type of conveyance. For simplicity's sake, we'll limit this to whether they own a car or a truck. Some customers might own both, only one, or neither. Part of my project is to normalize this data into a more sane data structure.
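For reference, the CASE approach I was too lazy to remember looks roughly like this. It's a sketch against a hypothetical dbo.CustomerVehicles table shaped like the sample data below, and it works even in 2000 compatibility mode:

```sql
-- Unpivot without the UNPIVOT operator: cross join against a two-row
-- numbers table and pick the column with CASE.
SELECT
    CV.CustomerName
,   CASE N.n WHEN 1 THEN 'Car' ELSE 'Truck' END AS Vehicle
,   CASE N.n WHEN 1 THEN CV.OwnsCar ELSE CV.OwnsTruck END AS SourceValue
FROM
    dbo.CustomerVehicles AS CV
    CROSS JOIN
    (SELECT 1 AS n UNION ALL SELECT 2) AS N;
```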

Source data

The following table approximates the data but there are many more bit fields to be had.
CustomerName   OwnsCar   OwnsTruck
Customer 1        1          1
Customer 2        1          0
Customer 3        1          0
Customer 4        1          0
Customer 5        1          0
Customer 6        1          0
Customer 7        1          1
Customer 8        1          0
Customer 9        1          0
Customer 10       1          0
Customer 11       1          0
Customer 12       1          0
Customer 13       0          1
Customer 14       0          0
Customer 15       0          0
Customer 16       0          0
Customer 17       0          0
Customer 18       0          0

SSIS Package

The package is rather simple - we have the above source data fed into an Unpivot component and then we have a Derived Column serving as an anchor point for a data viewer.

Unpivot

To no great surprise to anyone who's worked with Biml, the code is not complex. We need to provide specifics about how the pivot key column should work and then the detailed mapping of what we want to do with our columns. Here we're going to keep our CustomerName column but we want to merge OwnsCar and OwnsTruck columns into a single new column called SourceValue. The PivotKeyValue we supply will be the values associated to our pivot. Since we specified an Ansi string of length 20, the values we supply of Car and Truck must map into that domain.

Unpivot Biml

<Biml xmlns="http://schemas.varigence.com/biml.xsd">
    <Connections>
        <OleDbConnection ConnectionString="Provider=SQLOLEDB;Data Source=localhost\dev2014;Integrated Security=SSPI;Initial Catalog=tempdb" Name="CM_OLE" />
    </Connections>
    <Packages>
        <Package ConstraintMode="Linear" Name="Component_Unpivot">
            <Variables>
                <Variable Name="QuerySource" DataType="String">
                    <![CDATA[SELECT
    'Customer ' + CAST(ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS varchar(2)) AS CustomerName
,   *
FROM
    (
        VALUES
            (CAST(1 AS bit))
        ,   (CAST(1 AS bit))
        ,   (CAST(0 AS bit))
    ) S(OwnsCar)
    CROSS APPLY
    (
        VALUES
            (CAST(1 AS bit))
        ,   (CAST(0 AS bit))
        ,   (CAST(0 AS bit))
        ,   (CAST(0 AS bit))
        ,   (CAST(0 AS bit))
        ,   (CAST(0 AS bit))
    ) F(OwnsTruck);
]]></Variable>
            </Variables>
            <Tasks>
                <Dataflow Name="DFT Unpivot">
                    <Transformations>
                        <OleDbSource 
                            ConnectionName="CM_OLE" 
                            Name="OLESRC Unpivot Source">
                            <VariableInput VariableName="User.QuerySource" />
                        </OleDbSource>

                        <Unpivot
                            Name="UPV Vehicle types"
                            PivotKeyValueColumnName="Vehicle"
                            PivotKeyValueColumnDataType="AnsiString"
                            PivotKeyValueColumnCodePage="1252"
                            PivotKeyValueColumnLength="20"
                            AutoPassThrough="false"
                            >
                            <Columns>
                                <Column SourceColumn="CustomerName" IsUsed="true" />
                                <Column SourceColumn="OwnsCar" TargetColumn="SourceValue" PivotKeyValue="Car" />
                                <Column SourceColumn="OwnsTruck" TargetColumn="SourceValue" PivotKeyValue="Truck" />
                            </Columns>
                        </Unpivot>

                        <DerivedColumns Name="DER Placeholder" />
                    </Transformations>
                </Dataflow>
            </Tasks>
        </Package>
    </Packages>
</Biml>

Monday, May 4, 2015

Biml breaking change - ServerExecutionID is repeated within scope

This is a brief post, but I wanted to note that with the 1.7 release of BIDS Helper and the 4.0 release of Mist, Biml code you've found around the Internet that deals with the Project Deployment Model may no longer build. Specifically, you might run into:

The name ServerExecutionID is repeated within scope. Eliminate duplicate names in scope

In your Variables collection, your Biml likely has an entry like

<Variable Name="ServerExecutionID" DataType="Int64" IncludeInDebugDump="Include" Namespace="System">0</Variable>

The approach of explicitly declaring the ServerExecutionID was required in BIDS Helper 1.6.6 and prior to be able to access that System scoped Variable. With the newer release and the ability to natively emit 2014+ packages, this is no longer a requirement. Not only is it not a requirement, it breaks validation with the above error message.

The resolution is simple: delete that declaration.

Saturday, May 2, 2015

Biml - Adding external assemblies

This blog post is coming to you live from SQL Saturday Dallas BI Edition. We were discussing horrible things you can do in SSIS, and I mentioned that you can do anything the .NET library supports. That got me wondering whether I could do the same awful things within Biml. The answer is yes, assuming you supply sufficient force.

As a simple demonstration, I wanted to pop up a message box whenever I generated or checked my Biml for errors. However, the MessageBox class lives in an assembly that isn't loaded by default, and to get that assembly to import, I needed to add it as a reference for the Biml compiler. The last two lines of this show me adding the assembly and then importing the namespace.

<Biml xmlns="http://schemas.varigence.com/biml.xsd">
    <#
    MessageBox.Show("This works");
    #>
</Biml>
<#@ import namespace="System.Windows.Forms" #>
<#@ assembly name= "C:\Windows\Microsoft.NET\Framework\v4.0.30319\System.Windows.Forms.dll" #>