SQL

Uploading Multiple RDL files to SSRS

My QA folks have a problem; development has been working on migrating a bunch of reports from a legacy 3rd-party application to SQL Server Reporting Services. Development has finished their “sprint” (*cough* waterfall), handed over 52 reports to QA, and told them to load the reports into their QA server. The problem? The web interface only loads one report at a time.

Me, being the DevOps guy that I am, thought “well, that’s just silly; there has to be a way to upload all of the reports at once, rather than clicking on some silly buttons over and over again”. I googled, and the best explanation I found was at this blogpost: https://basic-ssrs.blogspot.com/2015/12/upload-multiple-rdl-files-from-local.html (Kudos; this is a VERY well written walk-through).

Unfortunately, I did exactly what the blog suggested, and ran into two problems. The first? Running the .bat file opened and closed a command window in seconds. No feedback. Nothing. I went back and made a minor change to the .bat file; namely, I added the PAUSE command to the end:

"E:\Program Files (x86)\Microsoft SQL Server\130\Tools\Binn\rs.exe" -i C:\report_upload\Upload_Multiple_RDL_files.rss -s "http://localhost/reportserver" -v FILE_NAME="C:\report_upload\ConfigurationFile.txt"
pause

This allowed me to discover the second (more important) issue; I ran the .bat file, and now I get the following error:

Unhandled exception:
The definition of this report is not valid or supported by this version of Reporting Services. The report definition may have been created with a later version of Reporting Services, or contain content that is not well-formed or not valid based on Reporting Services schemas. Details: '.', hexadecimal value 0x00, is an invalid character. Line 2563, position 10.

Ummmmm, what?

Back to the Bing. I found another article that describes basically the same process, but at the bottom of the comment section, there was this little gem:

Hello, I was getting the invalid content error:

“….or contain content that is not well-formed or not valid based on Reporting Services schemas. Details: ‘.’, hexadecimal value 0x00, is an invalid character….”

Made some research and found that arrays in vb are created as N+1 long size, and filled with 0x00, so you need to change the script to:

definition = New [Byte](stream.Length - 1) {}

 to get it working.

https://www.mssqltips.com/sqlservertip/3255/sql-server-reporting-services-ssrs-rsexe-utility/

BINGO! The underlying issue: in VB, the number in an array declaration is the upper bound, not the element count, so New [Byte](stream.Length) allocates one extra trailing element that stays 0x00 and makes the RDL's XML invalid. Went back to the .rss file and changed the code block below to

' Deploying the Reports
Try
    Dim stream As FileStream = File.OpenRead(fileFullPath)
    Dim NameWithExt As String = fileFullPath.Replace(path, "")
    Dim NameOnly As String = NameWithExt.Replace(".rdl", "")
    definition = New [Byte](stream.Length - 1) {}
    stream.Read(definition, 0, CInt(stream.Length))

Ran the .bat, and in just a few minutes, I had a ton of reports loaded to my directory. Now I just gotta fix the datasources.
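Side note on those datasources: you can see what you're up against by querying the report catalog directly. Assuming the default catalog database name (ReportServer), and with the caveat that dbo.Catalog and dbo.DataSource are undocumented tables (so treat this as a sketch), something like this lists which uploaded reports reference which shared data sources:

-- Sketch only: dbo.Catalog and dbo.DataSource are undocumented SSRS tables.
-- Type = 2 is a report; a NULL SharedDataSourcePath means the reference is embedded or broken.
USE ReportServer;

SELECT c.Path AS ReportPath
, c.Name AS ReportName
, ds.Name AS DataSourceName
, shared.Path AS SharedDataSourcePath
FROM dbo.Catalog c
JOIN dbo.DataSource ds ON ds.ItemID = c.ItemID
LEFT JOIN dbo.Catalog shared ON shared.ItemID = ds.Link
WHERE c.Type = 2
ORDER BY c.Path;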

**Edit** – based on a comment below, I’ve embedded the entire revised script.

' Script Starting Point
' Script to deploy report to report server
' EXECUTE VIA COMMAND LINE
' Original script: https://basic-ssrs.blogspot.com/2015/12/upload-multiple-rdl-files-from-local.html
' Upload_Multiple_RDL_files.rss

Dim definition As [Byte]() = Nothing
Dim warnings As Warning() = Nothing

Public Sub Main()

' Variable Declaration
        Dim TextLine As String = ""
        Dim LocalDir As String = ""
        Dim ServerDir As String = ""
        Dim definition As [Byte]() = Nothing
        'Dim warnings As Warning() = Nothing

' Reading Configuration Text file and assigning the values to variables
        If System.IO.File.Exists(FILE_NAME) = True Then
            Dim objReader As New System.IO.StreamReader(FILE_NAME)
            Do While objReader.Peek() <> -1
                TextLine = objReader.ReadLine()
                Dim parts As String() = TextLine.Split(New Char() {","c})
                'TextLine & objReader.ReadLine() '& vbNewLine
                LocalDir = parts(0)
                ServerDir = parts(1)

                Dim path As String = LocalDir
                Dim fileEntries As String() = Directory.GetFiles(path)
                Dim fileFullPath As String = ""
                For Each fileFullPath In fileEntries


 ' Deploying the Reports
                    Try
                        Dim stream As FileStream = File.OpenRead(fileFullPath)
                        Dim NameWithExt As String = fileFullPath.Replace(path, "")
                        Dim NameOnly As String = NameWithExt.Replace(".rdl", "")
                        definition = New [Byte](stream.Length-1) {}
                        stream.Read(definition, 0, CInt(stream.Length))
                        stream.Close() ' release the file handle once the bytes are buffered

      warnings = rs.CreateReport(NameOnly, ServerDir, True, definition, Nothing)

      If Not (warnings Is Nothing) Then
        Dim warning As Warning
        For Each warning In warnings
        Console.WriteLine(warning.Message)
        Next warning
      Else
       Console.WriteLine("Report: {0} PUBLISHED!", NameOnly)
      End If

     Catch e As IOException
      Console.WriteLine(e.Message)
     End Try
    Next fileFullPath
   Loop
        Else
            Console.WriteLine("File Does Not Exist: " & FILE_NAME)
        End If
End Sub

'End of the Script
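One more sanity check I found useful: after the batch run, confirm the reports actually landed by querying the catalog (again, assuming the default ReportServer database name; the Catalog table is undocumented, so this is a sketch):

-- Sketch: most recently created items in the SSRS catalog (Type = 2 is a report)
SELECT TOP (60) Path, Name, CreationDate
FROM ReportServer.dbo.Catalog
WHERE Type = 2
ORDER BY CreationDate DESC;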

Oh, The Places You’ll Go! – #SQLSeuss #SQLPASS

Last week, I had the privilege to speak at the annual PASS Summit; I got to present two different sessions, but the one I’m the most proud of was my Lightning Talk: Oh, the Places You’ll Go! A Seussian Guide to the Data Platform. I bungled the presentation a bit (sorry to those of you who want to listen to it), but I feel pretty good about the content. I’ve presented it below, with the slides that I used for the talk.

The goal of this presentation was to explore the Microsoft Data Platform from the perspective of a SQL Server professional; I found this great conceptual diagram of the platform from this website a while back, and wanted to use it as a framework. I figured the best way to teach a subject was the same way I teach my 3-year-old: a little bit of whimsy.

Enjoy.

You have brains in your head

And SQL Skills to boot

You’ll soar to great heights

On the Data Platform too

You’re on your own, and you know what you know,

And YOU are the one who’ll decide where to go.

You’ve mastered tables, columns and rows, OHHHHH MYYYY

You may even have dabbled in a little B.I.

You’re a data professional, full of zest,

But now you’re wondering “What comes next?”

Data! It’s more than just SQL,

And there’s a slew of it coming, measured without equal.

Zettabytes, YotaBytes, XenoBytes and more

All coming our way, faster than ever before.

So what should we do? How should we act?

Should we rest on our laurels? Should we lie on our backs?

Do we sit idly by, while the going gets tough?

No… no, we step up our game and start learning new stuff!


Oh, the places you’ll go!

ARCHITECTURE

Let’s start with the Theories,

The things you should know

Designing systems as services,

Are the route you might go.

Distributed, scalable

Compute on Demand

The Internet of Things

And all that it commands.

Infrastructure is base,

Platform is in line

Software and data

Rest on top of design

Once you’ve grasped this

Once you’ve settled in

You’ve embraced cloud thinking

Even while staying on-prem.

But beyond the cloud, there’s data itself.

Structured, polyschematic, binary, and log

Centralized or on the edge,

Some might say “in the fog”

Big Data, Fast Data, Dark, New and Lost

All of it needs management, all at some cost

There’s opportunity there to discover something new

But it will take somebody, somebody with skills like you.

Beyond relational, moving deep into insight

We must embrace new directions, and bring data to life

And there’s so many directions to go!

ADMINISTRATORS

For those of you who prefer administration

System engineering and server calibration

You need to acknowledge, and you probably do

You’ll manage more systems, with resources few.

Automation and scripting are the tools of the trade

Learn PowerShell to step up your game.

Take what you know about managing SQL

And apply it to more tech; you’ll be without equal

Besides the familiar: disk, memory, CPU

There’s virtualization and networking too

In the future you might even manage a zoo,

Clustering elephants, and a penguin or two.


But it all hinges on answering things

Making servers reliable and performance tuning,

Monitoring, maintenance, backup strategies

All of these things you do with some ease.

And it doesn’t matter if the data is relational

Your strategies and skills will make you sensational

All it takes is some get up, and little bit of go

And you’re on your way, ready to know.

So start building a server, and try something new

SQL Server is free, Hadoop is too.

Tinker and learn in your spare time

Let your passions drive you and you’ll be just fine

DEVELOPERS

But maybe you’re a T-SQL kind of geek,

And it’s the languages of data that you want to speak

There’s lots of different directions for you

Too many to cover, but I’ll try a few

You could talk like a pirate

And learn to speak R

Statistics, and Science!

I’m sure you’ll go far

Additional queries for XML and JSON

Built in SQL Server, the latest edition.

You can learn HiveQL, if Big Data’s your thing

And interface with Tez, Spark, or just MapReducing

U-SQL is the language of the Azure Data Lake

A full-functioned dialect; what progress you could make!

There’s LINQ and C-Sharp, and so many more

Ways to write your code against the datastores

You could write streaming queries against StreamInsight

And answer questions against data in flight.

And lest I overlook, or lest I forget,

There’s products and processes still to mention yet.

SSIS, SSAS, In-memory design

SSRS, DataZen, and Power BI

All of these things, all of these tools

Are waiting to be used, are waiting for you.

You just start down the path, a direction you know

And soon you’ll be learning, your brain all aglow

And, oh, the places you’ll go.

And once you get there, wherever you go.

Don’t forget to write, and let somebody know.

Blog, tweet, present what you’ve mastered

And help someone else get there a little faster.

Feel free to leave a comment if you like, or follow me on Twitter: @codegumbo

#DevDataBhm inaugural event

I’m presenting today at DevDataDay in Birmingham.  First time in a while for me, but I’m excited about the opportunity to go to sessions again and learn something (more on that in a bit).  I realize this blog has been a bit dusty for a while, but I’m gonna try to do better about that.

TOPIC:   Theory of Cloud Database Administration
SPEAKER: Stuart Ainsworth
ABSTRACT: 
The cloud is more than just a marketing term; it’s a model for designing scalable, distributed systems and assigning ownership to various components of those systems.  Data is a crucial part of that model, and there are lots of challenges ahead.  This presentation will explore the history of cloud computing, the current state of data in the cloud (using SQL Server 2016), and the impact on career choices for data people.

#BigData is coming; what should SQL Server people do about it?

I’ve been presenting a lot on Big Data (specifically Hadoop) from the perspective of a SQL Server DBA, and I’ve made a couple of recent observations.  I think most people are aware of the fact that data generation is growing at a staggering rate, with some estimates as high as 44 zettabytes by the year 2020; what I think is lacking in the SQL Server community is a rapid movement among database professionals to expand their skills to highly scalable Big Data platforms (like Hadoop) or streaming technologies.  Don’t get me wrong; I think there are people out there who have made the transition (like Michelle Ufford: SQLFool, now Hadoopsie), and they’re willing to share their knowledge, but by and large, I think most SQL Server professionals are accustomed to working with our precious relational system.

Why is that?  I think it boils down to three reasons:

  1.  The SQL Server platform is a complex product, with ever increasing opportunities to learn something new.  SQL 2016 is about to drop, and it’s a BIG release; I expect most SQL Server people to wrap themselves up in new features and learn something new soon.  There’s always going to be a need for deep expertise, and as the product continues to mature and grow, it requires deeper knowledge.
  2. Big Data tools are vast, untamed, and very organic.  Those of us accustomed to the Microsoft development cycle are used to having a single official product drop every couple of years; Big Data tools (like Hadoop) are open-source, prone to various forks, and very rapidly developed.  It’s like drinking from a firehose.
  3. It’s not quite clear how it all fits together.  We know that Microsoft has presented some interesting data technologies as of late, but it’s not quite clear how the pieces all work together; should SQL Server pros learn Azure, HDInsight, Hadoop?  What’s this about U-SQL?  StreamInsight, Spark, Cortana Analytics?

The first two reasons aren’t easily solved; they require a willingness to learn and a commitment to study (both of which are difficult resources to commit).  The third issue, however, can be easily addressed by the following graphic.

This is Microsoft’s generic vision of a complete end-to-end analytics platform; for the data professional, it’s a roadmap of skills to learn.  Note that relational engines (and their BI cousins) remain a part of the vision, but they’re only small pieces in an ever-increasing ecosystem of database tools.

So here’s the question for you; what should SQL Server people do about it?  Do we continue to focus on a very specific tool set, or do we push ourselves (and each other) to learn more about the broader opportunities?  Either choice is equally valid, but even if you choose to become an expert on a single platform in lieu of transitioning to something new, you should understand how other tools interact with the relational system.

What are you going to learn today?

Reason 1234 for attending #SQLPass: The stretch

I’ve done a lousy job of encapsulating my thoughts about the Professional Association for SQL Server’s Summits for the last few years, but here’s a quick thought about one of the reasons I love this conference: the stretch.

What do I mean by stretch?  It’s the exposure to new ideas, new concepts, things I may never use.  It’s easy for technical people to get obsessed with our struggles of the day; we invest a lot of mental energy into figuring out problems that require immediate solutions, so we often pick educational sessions that fit into our current solution requirements (e.g., I’m having issues with ETL, so let me go pick up a few sessions on BIML).  That’s satisfying, but it’s not a stretch.

The stretch is thinking about issues and problems I don’t yet see ahead.  It’s the creative detours, the opportunities to explore ideas that are just, well, fun to think about.  There are two benefits to this; first, by allowing the mind to wander, it actually gives me a chance to have an inspirational moment about my current issues (the “aha” moment).  Second, by learning a little bit about something foreign to me, it gives me an advantage if the situation ever arises where I need to know something about that topic (e.g., we don’t use Azure now, but we might in the future).

What’s your stretch at Summit?  I’m going to squeeze in a class on R, and maybe some U-SQL.

Where does my index live? YouTube edition–#SQLServer

I was recently contacted by Webucator, an online training services provider, and asked if they could turn a recent post of mine (#SQLServer – Where does my index live?) into a video.  They are promoting their SQL Server classes by doing a free series called SQL Server Solutions from the Web using (with permission) different blog posts from around the web.   Wish I’d thought of this sooner; enjoy!

#SQLServer – Where does my index live?

Today, I got asked by one of my DBAs about a recently deployed database that seemed to have a lot of filegroups with only a few tables.  He wanted to verify that one of the tables was correctly partition-aligned, as well as learn where all of the indexes for these tables were stored.  After a quick search of the Internets, I was able to fashion the following script to help.  The script below will find every index on every user table in a database, and then determine if it’s partitioned or not.  If it’s partitioned, the scheme name is returned; if not, the filegroup name.  The final column provides an XML list of filegroups (because schemes can span multiple filegroups) and file locations (because filegroups can span multiple files).

WITH C AS (
    SELECT ps.data_space_id
         , f.name
         , d.physical_name
    FROM sys.filegroups f
        JOIN sys.database_files d ON d.data_space_id = f.data_space_id
        JOIN sys.destination_data_spaces dds ON dds.data_space_id = f.data_space_id
        JOIN sys.partition_schemes ps ON ps.data_space_id = dds.partition_scheme_id
    UNION
    SELECT f.data_space_id
         , f.name
         , d.physical_name
    FROM sys.filegroups f
        JOIN sys.database_files d ON d.data_space_id = f.data_space_id
)
SELECT [ObjectName] = OBJECT_NAME(i.[object_id])
     , [IndexID] = i.[index_id]
     , [IndexName] = i.[name]
     , [IndexType] = i.[type_desc]
     , [Partitioned] = CASE WHEN ps.data_space_id IS NULL THEN 'No'
                            ELSE 'Yes'
                       END
     , [StorageName] = ISNULL(ps.name, f.name)
     , [FileGroupPaths] = CAST(( SELECT name AS "FileGroup"
                                      , physical_name AS "DatabaseFile"
                                 FROM C
                                 WHERE i.data_space_id = C.data_space_id
                                 FOR XML PATH('')
                               ) AS XML)
FROM [sys].[indexes] i
    LEFT JOIN sys.partition_schemes ps ON ps.data_space_id = i.data_space_id
    LEFT JOIN sys.filegroups f ON f.data_space_id = i.data_space_id
WHERE OBJECTPROPERTY(i.[object_id], 'IsUserTable') = 1
ORDER BY [ObjectName], [IndexName]
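As for the partition-alignment question that started all of this: an index is aligned when it is partitioned on the same scheme as its base table, so a quick (if rough) check is to compare partition counts across a table's indexes. Here's a sketch, with a hypothetical table name:

-- Compare partition counts per index on one table; aligned indexes
-- should report the same count as the clustered index or heap.
-- 'dbo.MyPartitionedTable' is a placeholder; substitute your own table.
SELECT OBJECT_NAME(p.[object_id]) AS TableName
     , ISNULL(i.[name], 'HEAP') AS IndexName
     , COUNT(*) AS PartitionCount
FROM sys.partitions p
    JOIN sys.indexes i ON i.[object_id] = p.[object_id]
                      AND i.index_id = p.index_id
WHERE p.[object_id] = OBJECT_ID('dbo.MyPartitionedTable')
GROUP BY p.[object_id], i.[name];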

Hadoop for the SQL Server DBA – Initial Challenges

I’ve been intrigued by the whole concept of Big Data lately, and have started actually presenting a couple of different sessions on it (one of which was accepted for PASS Summit 2014).  Seems only right that I should actually *gasp* blog about some of the concepts in order to firm up some of my explanations.  Getting started with Hadoop can be quite daunting, especially if you’re used to relational databases (especially the commercial ones); I hope that this series of posts can help clear up some of the mystery for the administrative side of the house.  Before we dive in, I think it’s only fair to lay out some of the initial challenges with discussing Big Data in general, and Hadoop specifically.  Depending on your background, some of these may be more challenging than others.

Rapid Evolution

Welcome to the wild, wild west.  If you come from a commercial database background (like SQL Server), you’re probably accustomed to a mature product.  For Microsoft SQL Server, a new version gets released on what appears to be a 2-4 year schedule (SQL 2005 -> 2008 -> 2012 -> 2014); of course, there’s always the debate as to what constitutes a major release (2008 R2?), but in general, the core product gets shipped with new functionality, and there’s some time before additional new functionality is released.

Hadoop’s approach to the release cycle is much looser; in 2014 alone, there have been two “major” releases with new features and functionality included.  Development for the Hadoop engine is distributed, so the release and packaging of new functions may vary within the ecosystem (more on that in a bit).  For developers, this is exciting; for admins, this is scary.   Depending on how acceptable change is within your operational department, the concept of rolling out an upgraded database engine every 3-4 months may be daunting.

Ecosystems, not products

Hadoop is an open-source product, so if you’re experienced with other open-source products like Linux, you probably already understand what that means; open-source licensing means that vendors can package the core product into their product, as long as they allow open access to the final package.  This usually means that commercial providers will either bundle an open-source product with their own proprietary side-by-side software (“we interface with MySQL” or “we run on Linux”), or they release their modified version of the software in a completely open fashion and earn revenue from a support contract (e.g., Red Hat).  In either case, it’s an ecosystem, not a canned product.

Hadoop technically consists of four modules:

  • Hadoop Common: The common utilities that support the other Hadoop modules.
  • Hadoop Distributed File System (HDFS™): A distributed file system that provides high-throughput access to application data.
  • Hadoop YARN: A framework for job scheduling and cluster resource management.
  • Hadoop MapReduce: A YARN-based system for parallel processing of large data sets.

However, take a look at the following framework from Hortonworks (the Microsoft partner for Hadoop):

Lots of stuff in there that’s being developed, but isn’t officially Hadoop.  It could become part of the official stack at some point, or it may not.  Other vendors may adopt it, or they may not.  Each of these components has its own update schedule (again, change!), but there is some flexibility in this approach (you can upgrade individual components on their own); it does make the road map complex compared to traditional database platforms.

Big Data doesn’t always mean Big Data.

Perhaps the hardest thing to embrace about Big Data in general (not just Hadoop) is that the nomenclature doesn’t necessarily line up with the driving factors; a Big Data approach may be the best approach for smaller data sets as well.   In essence, data can be described in terms of the 4 V’s:

  1. Volume – The amount of data held
  2. Velocity – The speed at which the data should be processed
  3. Variety – The variable sources, processing mechanisms and destinations required
  4. Value – The amount of data that is non-redundant, unique, and actionable

A distributed approach (like Hadoop) is usually appropriate when tackling more than one of these four V’s; if your data is just large, but low in velocity, variety, and value, a single installation of SQL Server (with a lot of disk space) may be appropriate.  However, if your data has a lot of variety and a lot of velocity, even if it’s small, a Big Data approach may yield considerable efficiency.  The point is that sheer volume alone is not necessarily the impetus for using Hadoop at all.

Summary

Big Data & Hadoop are complex topics, and they’re difficult to understand if you approach them from a traditional RDBMS mentality.  However, understanding the fundamentals of how Big Data approaches are evolving, disparate, and generally applicable to more than just volume can lay a foundation for tackling the platforms.


Determining the Primary Key of a table without knowing its name

So, I’ve been trying to find more technical fodder for blog posts lately in order to get into the habit of blogging on a regular basis, and I thought I would explore a few of my higher-ranked answers on StackOverflow and provide a little more detail than that site allows.  This is my highest-rated answer (sad, I know, compared to some posters on the site):

Determine a table’s primary key using TSQL

I’d like to determine the primary key of a table using TSQL (stored procedure or system table is fine). Is there such a mechanism in SQL Server (2005 or 2008)?

My answer:

This should get you started:

SELECT *
    FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS tc
        JOIN INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE ccu ON tc.CONSTRAINT_NAME = ccu.CONSTRAINT_NAME
    WHERE tc.CONSTRAINT_TYPE = 'PRIMARY KEY'


Besides the obvious faux pas of using SELECT * in a query, you’ll note that I used the INFORMATION_SCHEMA views; I try to use these views wherever possible because of the portability factor.  In theory, I should be able to apply this exact same query to any database that implements the ANSI-standard INFORMATION_SCHEMA views (like MySQL or PostgreSQL), and get similar results from the system schema.

The added benefit of this query is that it allows you to discover other things about your PRIMARY KEYs, like looking for system named keys:

SELECT  tc.TABLE_CATALOG, tc.TABLE_SCHEMA, tc.TABLE_NAME, ccu.CONSTRAINT_NAME, ccu.COLUMN_NAME
FROM    INFORMATION_SCHEMA.TABLE_CONSTRAINTS tc
        JOIN INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE ccu ON tc.CONSTRAINT_NAME = ccu.CONSTRAINT_NAME
WHERE   tc.CONSTRAINT_TYPE = 'PRIMARY KEY'
    AND ccu.CONSTRAINT_NAME LIKE 'PK%/_/_%' ESCAPE '/'

or finding PRIMARY KEYs which have more than one column defined:

SELECT  tc.TABLE_CATALOG, tc.TABLE_SCHEMA, tc.TABLE_NAME, ccu.CONSTRAINT_NAME
FROM    INFORMATION_SCHEMA.TABLE_CONSTRAINTS tc
        JOIN INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE ccu ON tc.CONSTRAINT_NAME = ccu.CONSTRAINT_NAME
WHERE   tc.CONSTRAINT_TYPE = 'PRIMARY KEY'
GROUP BY tc.TABLE_CATALOG, tc.TABLE_SCHEMA, tc.TABLE_NAME, ccu.CONSTRAINT_NAME
HAVING COUNT(*) > 1
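And going the other direction, the same views can flag tables that have no PRIMARY KEY at all; here's a sketch, which makes a handy cleanup list:

SELECT  t.TABLE_CATALOG, t.TABLE_SCHEMA, t.TABLE_NAME
FROM    INFORMATION_SCHEMA.TABLES t
        LEFT JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS tc
            ON  tc.TABLE_SCHEMA = t.TABLE_SCHEMA
            AND tc.TABLE_NAME = t.TABLE_NAME
            AND tc.CONSTRAINT_TYPE = 'PRIMARY KEY'
WHERE   t.TABLE_TYPE = 'BASE TABLE'
    AND tc.CONSTRAINT_NAME IS NULL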

Error 574 Upgrading SQL Server 2012 to SP1

This blog post is way overdue (check the dates in the error log below), but I promised our sysadmin that I would write it, so here it is.  Hopefully, it’ll help some of you with this aggravating issue.  During an upgrade of our SQL cluster, we ran into the following error as we attempted to upgrade one of the instances:

2014-04-15 23:50:14.45 spid14s     Error: 574, Severity: 16, State: 0.

2014-04-15 23:50:14.45 spid14s     CONFIG statement cannot be used inside a user transaction.

2014-04-15 23:50:14.45 spid14s     Error: 912, Severity: 21, State: 2.

2014-04-15 23:50:14.45 spid14s     Script level upgrade for database 'master' failed because upgrade step 'msdb110_upgrade.sql' encountered error 574, state 0, severity 16. This is a serious error condition which might interfere with regular operation and the database will be taken offline. If the error happened during upgrade of the 'master' database, it will prevent the entire SQL Server instance from starting. Examine the previous errorlog entries for errors, take the appropriate corrective actions and re-start the database so that the script upgrade steps run to completion.

2014-04-15 23:50:14.45 spid14s     Error: 3417, Severity: 21, State: 3.

2014-04-15 23:50:14.45 spid14s     Cannot recover the master database. SQL Server is unable to run. Restore master from a full backup, repair it, or rebuild it. For more information about how to rebuild the master database, see SQL Server Books Online.

Google wasn’t helpful; there are apparently lots of potential fixes for this, none of which helped.  The closest match we found was that we had some orphaned users in a few databases (not system databases), which we corrected; the upgrade still failed.  We eventually had to contact Microsoft support and work our way up to a second-level technician.  Before I reveal the fix, let me give a little more background on how we got orphaned users.

You see, shortly after we upgraded to SQL 2012 (about a year ago), we did what many companies do; we phased out a service offering.  That service offering that we phased out required several database components, including a SQL login associated with users in the database, and several maintenance jobs that were run by SQL Agent.  When we phased out the service, those jobs were disabled, but not deleted.  Our security policy tracks the last time a login was used; if a login isn’t used within 60 days, it’s disabled.  30 days after that (if no one notices), the login is deleted.  Unfortunately, our implementation of this process missed two key steps:

  1. The associated user in each database was not dropped with the login (leaving an orphan), and
  2. any job that was owned by that login was also not dropped or transferred to a sysadmin.

The latter was the key to our particular situation; the upgrade detected an orphaned job even though that job was disabled, and blocked the upgrade from going forward.  Using trace flag -T902, we were able to start the server instance and delete the disabled job.  We then restarted the server without the trace flag, and the upgrade finished successfully.
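If you'd rather catch this before an upgrade does, queries along these lines (a sketch, not the exact scripts support had us run) will surface both problems: jobs whose owner login no longer exists, and orphaned SQL users in the current database:

-- Jobs whose owner SID no longer maps to a server login
SELECT j.name AS JobName, j.[enabled]
FROM msdb.dbo.sysjobs j
WHERE SUSER_SNAME(j.owner_sid) IS NULL;

-- Orphaned SQL-authenticated users in the current database
SELECT dp.name AS OrphanedUser
FROM sys.database_principals dp
    LEFT JOIN sys.server_principals sp ON sp.[sid] = dp.[sid]
WHERE dp.[type] = 'S'              -- SQL user
    AND dp.authentication_type = 1 -- mapped to an instance-level login
    AND sp.[sid] IS NULL;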


Resources:

Find and fix all orphaned users for all databases.

Brent Ozar Unlimited’s sp_blitz will find jobs that are owned by users other than sa.