
Uploading Multiple RDL files to SSRS

My QA folks have a problem; development has been working on migrating a bunch of reports from a legacy 3rd-party application to SQL Server Reporting Services. Development has finished their “sprint” (*cough* waterfall), handed over 52 reports to QA, and told them to load them onto their QA server. The problem? The web interface only loads 1 report at a time.

Me, being the DevOps guy that I am, thought “well, that’s just silly; there has to be a way to upload all of the reports at once, rather than clicking on some silly buttons over and over again”. I googled, and the best explanation I found was at this blogpost: https://basic-ssrs.blogspot.com/2015/12/upload-multiple-rdl-files-from-local.html (Kudos; this is a VERY well written walk-through).

Unfortunately, I did exactly what the blog suggested, and ran into two problems. The first? Running the .bat file opened and closed a command window in seconds. No feedback. Nothing. I went back and made a minor change to the .bat file; namely, I added the PAUSE command to the end:

"E:\Program Files (x86)\Microsoft SQL Server\130\Tools\Binn\rs.exe" -i C:\report_upload\Upload_Multiple_RDL_files.rss -s "http://localhost/reportserver" -v FILE_NAME="C:\report_upload\ConfigurationFile.txt"
pause

This allowed me to discover the second (more important) issue; I ran the .bat file, and now I get the following error:

Unhandled exception:
The definition of this report is not valid or supported by this version of Reporting Services. The report definition may have been created with a later version of Reporting Services, or contain content that is not well-formed or not valid based on Reporting Services schemas. Details: '.', hexadecimal value 0x00, is an invalid character. Line 2563, position 10.

Ummmmm, what?

Back to the Bing. I found another article that describes basically the same process, but at the bottom of the comment section, there was this little gem:

Hello, I was getting the invalid content error:

“….or contain content that is not well-formed or not valid based on Reporting Services schemas. Details: ‘.’, hexadecimal value 0x00, is an invalid character….”

Made some research and found that arrays in vb are created as N+1 long size, and filled with 0x00, so you need to change the script to:

definition = New [Byte](stream.Length - 1) {}

 to get it working.

https://www.mssqltips.com/sqlservertip/3255/sql-server-reporting-services-ssrs-rsexe-utility/

BINGO! I went back to the .rss file and changed the relevant code block to:

' Deploying the Reports
Try
    Dim stream As FileStream = File.OpenRead(fileFullPath)
    Dim NameWithExt As String = fileFullPath.Replace(path, "")
    Dim NameOnly As String = NameWithExt.Replace(".rdl", "")
    definition = New [Byte](stream.Length - 1) {}
    stream.Read(definition, 0, CInt(stream.Length))

Ran the .bat, and in just a few minutes, I had a ton of reports loaded into my target folder. Now I just gotta fix the data sources.
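Since I haven't tackled that part yet, here's a rough sketch of how I'm thinking about it (untested; the folder path and data source names below are placeholders): use PowerShell's New-WebServiceProxy against the ReportService2010 endpoint to point every uploaded report at an existing shared data source.

# Rough sketch (untested); "/QA Reports", "MainDataSource", and "/Data Sources/SharedDS" are placeholders
$rs = New-WebServiceProxy -Uri "http://localhost/reportserver/ReportService2010.asmx?wsdl" -UseDefaultCredential
$ns = $rs.GetType().Namespace

# Build a DataSource object that points at an existing shared data source;
# the Name must match the data source name referenced inside the RDL
$dataSource = New-Object ("$ns.DataSource")
$dataSource.Name = "MainDataSource"
$reference = New-Object ("$ns.DataSourceReference")
$reference.Reference = "/Data Sources/SharedDS"
$dataSource.Item = $reference

# Repoint every report in the target folder at the shared data source
foreach ($item in $rs.ListChildren("/QA Reports", $false) | Where-Object { $_.TypeName -eq "Report" }) {
    $rs.SetItemDataSources($item.Path, @($dataSource))
}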

**Edit – based on a comment below, I’ve embedded the entire revised script.

' Script Starting Point
' Script to deploy reports to a report server
' EXECUTE VIA COMMAND LINE (rs.exe)
' Original script: https://basic-ssrs.blogspot.com/2015/12/upload-multiple-rdl-files-from-local.html
' Upload_Multiple_RDL_files.rss

Dim definition As [Byte]() = Nothing
Dim warnings As Warning() = Nothing

Public Sub Main()

    ' Variable declaration
    Dim TextLine As String = ""
    Dim LocalDir As String = ""
    Dim ServerDir As String = ""
    Dim definition As [Byte]() = Nothing
    'Dim warnings As Warning() = Nothing

    ' Read the configuration text file and assign the values to variables;
    ' FILE_NAME is passed in on the rs.exe command line via -v
    If System.IO.File.Exists(FILE_NAME) = True Then
        Dim objReader As New System.IO.StreamReader(FILE_NAME)
        Do While objReader.Peek() <> -1
            ' Each line is a comma-separated pair: local folder, target server folder
            TextLine = objReader.ReadLine()
            Dim parts As String() = TextLine.Split(New Char() {","c})
            LocalDir = parts(0)
            ServerDir = parts(1)

            Dim path As String = LocalDir
            Dim fileEntries As String() = Directory.GetFiles(path)
            Dim fileFullPath As String = ""
            For Each fileFullPath In fileEntries

                ' Deploying the Reports
                Try
                    Dim stream As FileStream = File.OpenRead(fileFullPath)
                    Dim NameWithExt As String = fileFullPath.Replace(path, "")
                    Dim NameOnly As String = NameWithExt.Replace(".rdl", "")
                    ' Size the buffer by upper bound (Length - 1); using stream.Length
                    ' pads the array with a trailing 0x00 byte, which breaks the upload
                    definition = New [Byte](stream.Length - 1) {}
                    stream.Read(definition, 0, CInt(stream.Length))
                    stream.Close()

                    warnings = rs.CreateReport(NameOnly, ServerDir, True, definition, Nothing)

                    If Not (warnings Is Nothing) Then
                        Dim warning As Warning
                        For Each warning In warnings
                            Console.WriteLine(warning.Message)
                        Next warning
                    Else
                        Console.WriteLine("Report: {0} PUBLISHED!", NameOnly)
                    End If

                Catch e As IOException
                    Console.WriteLine(e.Message)
                End Try
            Next fileFullPath
        Loop
    Else
        Console.WriteLine("File Does Not Exist")
    End If
End Sub

' End of the Script
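For reference, the configuration file the script reads (ConfigurationFile.txt in my .bat) is just one comma-separated pair per line: the local folder holding the .rdl files, then the target folder on the report server. A hypothetical line would look like C:\report_upload\reports,/QA Reports (your paths will differ).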

Checking for #RedGate SCA databases

We’re starting a large-scale conversion of our development processes with some very old database servers. Some of these databases were developed over 10 years ago, and they were never checked into source control. As our team develops new features, they’ve been using Red Gate’s SQL Change Automation to develop database projects alongside their applications; it’s an impressive tool, and we’re getting ready to tackle older databases in an effort to improve our code base.

But first, we need to determine which databases are already in source control and which ones are not. Most of these databases and applications are minimally documented, so most of our development team doesn’t know what’s actually in production. To discover which databases are already part of SCA, I created a simple SQL script. It looks for the existence of the [dbo].[__MigrationLog] table, an artifact of the SCA process. I can then tell when a snapshot was last deployed, and what version was deployed.

CREATE TABLE #tmp (dbname varchar(1000), completedate datetime, version varchar(200))

DECLARE @SQL varchar(MAX)

SET @SQL = 'USE ?;
IF EXISTS (SELECT * FROM sys.tables t WHERE t.name = ''__MigrationLog'')
SELECT db_name() dbName, MAX(complete_dt) completeDate, MAX(version) version
FROM dbo.__MigrationLog
'

INSERT INTO #tmp
EXEC sp_msforeachdb @SQL

SELECT *
FROM #tmp

DROP TABLE #tmp
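Since we've got more than one server to check, the same idea extends nicely to PowerShell. Here's a hedged sketch (not part of the original script) using Invoke-Sqlcmd from the SqlServer module; the server names below are placeholders.

# Sketch: run the __MigrationLog check against every user database on a list of servers.
# Server names are placeholders; assumes the SqlServer module is installed.
Import-Module SqlServer

$servers = @("SQLPROD01", "SQLPROD02")
$check = @"
IF OBJECT_ID('dbo.__MigrationLog') IS NOT NULL
    SELECT @@SERVERNAME AS serverName, DB_NAME() AS dbName,
           MAX(complete_dt) AS completeDate, MAX(version) AS version
    FROM dbo.__MigrationLog;
"@

$results = foreach ($server in $servers) {
    # skip the system databases (database_id 1-4), then run the check in each user database
    $dbs = Invoke-Sqlcmd -ServerInstance $server -Query "SELECT name FROM sys.databases WHERE database_id > 4"
    foreach ($db in $dbs) {
        Invoke-Sqlcmd -ServerInstance $server -Database $db.name -Query $check
    }
}
$results | Format-Table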

#NeverStopLearning 10 Minutes of Code: PS Array .Add

One of the scripts I’ve been working on in my spare time identifies which servers I can’t reliably use PowerShell on remotely. We manage a large domain of legacy servers, and it’s been sliced into smaller segments with some complex routing; it’s a bit of a nightmare, but I figured I could probe each computer and run some basic PS commands to see if it responds.

However, we have a LOT of servers, and some of them fail even a basic ping test. Why waste resources testing PS commands against servers I can’t even reach? My solution? Run a ping test first, and build two arrays: one for servers that respond to ping, and one for those that don’t.

# $computers is assumed to be populated earlier in the script (e.g., from a server list or AD query);
# the two collections need to be ArrayLists so the .Add method is available
$aNoPing    = New-Object System.Collections.ArrayList
$aComputers = New-Object System.Collections.ArrayList

Foreach ($computer in $computers)
{
    # ping test first; if it fails, add it to the $aNoPing array with the .Add method
    if (-not (Test-Connection -ComputerName $computer.Name -Count 1 -Quiet))
    {
        [void]$aNoPing.Add($computer)
    }
    else # it pinged; add it to the $aComputers array for the rest of the tests
    {
        [void]$aComputers.Add($computer)
    }
}
# generate some output; these boxes need to be looked at.
Write-Output ""
Write-Output "The following servers cannot be reached by ping."
Write-Output "------------------------------------------------"
$aNoPing
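The next step (my own sketch, not in the original script) is to take the $aComputers list and check whether PowerShell remoting actually works, using the same ArrayList pattern; Test-WSMan is a cheap way to see if WinRM is answering before trying anything heavier.

# Sketch: probe WinRM on the servers that did respond to ping
$aNoRemoting = New-Object System.Collections.ArrayList
$aRemoting   = New-Object System.Collections.ArrayList

foreach ($computer in $aComputers)
{
    # Test-WSMan errors out if WinRM isn't reachable; SilentlyContinue turns that into $null
    if (Test-WSMan -ComputerName $computer.Name -ErrorAction SilentlyContinue)
    {
        [void]$aRemoting.Add($computer)
    }
    else
    {
        [void]$aNoRemoting.Add($computer)
    }
}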

#NeverStopLearning: 10 Minutes of Code – PS Get-ChildItem

In my last post, I started working on a very simple script to explore files in a directory using PowerShell. Get-ChildItem finds the files and folders in the current user’s directory, but what if I want to start somewhere else? That’s where the -Path parameter comes in.

Get-ChildItem -Path C:\

gives me a listing of all of the files and folders at the root of my C: drive. Lots more to come; this is just the beginning of my journey.
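A couple of other parameters I expect to lean on as this series continues (my example below, not part of the original post): -Filter and -Recurse, which narrow the search and walk subdirectories.

# Hypothetical example: find every .rdl file anywhere under a folder
Get-ChildItem -Path C:\report_upload -Filter *.rdl -Recurse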

#Azure DataFest Atlanta #ADFATL – Call for Speakers Now Open!

As I’ve mentioned on Twitter (what, you don’t follow me?), I’ve been involved with a new conference that focuses on the Microsoft Data Platform – Azure DataFest. It’s still very much in the works, but there have been a few events around the country so far, and we’re bringing one to Atlanta in August (as well as working on a standardized national presence). If you want to help build a community of data professionals who are passionate about the next generation of analytics and data science, please feel free to submit a topic. Text for the CFP is below, but the actual call for speakers is here: https://sessionize.com/atlanta-2018-azure-datafest-microsoft.

More details to come (after I get through Atlanta SQLSaturday).

Atlanta 2018 Azure DataFest: Microsoft Azure Advanced Analytics and Big Data Conference

This is a call for speakers for the inaugural Atlanta Azure DataFest: Microsoft Azure Advanced Analytics and Big Data Conference, a 2-day event to be held August 16-17, 2018, from 9:00AM to 5:00PM at the Microsoft Technology Center, 8000 Avalon Boulevard Suite 900, Alpharetta, GA 30009.

We are looking for 10-12 speakers to present on the following Azure Advanced Analytics and Big Data topics:

  • Azure Data Services
  • Azure Data Warehouse
  • Power BI
  • Cosmos DB
  • Azure Analysis Services
  • HDInsight
  • Machine Learning
  • Stream Analytics
  • Cognitive Services
  • Azure Bot Services
  • Data Lake Analytics
  • Data Lake Store
  • Data Factory
  • Power BI Embedded
  • Data Catalog
  • Log Analytics
  • Apache Spark for Azure
  • Dynamics 365 for Customer Insights
  • Custom Speech Service  APIs
  • Spark

Planned Schedule (Thursday, August 16)        

We plan on delivering a keynote and three sessions to the at-large audience, then breaking into tracks after lunch.

8:00AM – 9:00AM – check-in/breakfast/networking

9:00AM – 9:50AM – Keynote – Room # All/Combined

10:00AM – 10:50AM – Session 1 – Room # All/Combined

11:00AM – 11:50AM – Session 2 – Room # All/Combined

12:00PM – 12:50PM – Partner/Sponsor Lunch and Learn – Room # All/Combined

1:15PM – 2:15PM – Breakout sessions

2:30PM – 3:30PM – Breakout sessions

3:45PM – 4:45PM – Breakout sessions

Sessions should be 1 hour in duration, level 300 or higher. You can use best practices, case studies, demos, chalk talks, etc.

Planned Schedule (Friday, August 17)
The second day is intended to build on the first day with workshops, allowing attendees to have hands-on experiences with the applications.

8:00AM – 9:00AM – check-in/breakfast/networking

9:00AM – 11:50AM – Workshops

12:00PM – 12:50PM – Lunch – Networking

1:00PM – 3:50PM – Workshops

Workshop sessions should be 3 hours in length, and relate to material covered in the sessions on day one.  If you would like to submit a workshop session,  please ALSO submit a single-hour session for the first day.

The session submission deadline is Friday, July 13, 2018.  We will announce the speaker list and alternates on Monday, July 16, 2018.     

If you have questions, please contact stuart.ainsworth@azuredatafest.com

Oh, The Places You’ll Go! – #SQLSeuss #SQLPASS

Last week, I had the privilege to speak at the annual PASS Summit; I got to present two different sessions, but the one I’m the most proud of was my Lightning Talk: Oh, the Places You’ll Go! A Seussian Guide to the Data Platform. I bungled the presentation a bit (sorry for those of you who want to listen to it), but I feel pretty good about the content. I’ve presented it below, with the slides that I used for the talk.

The goal of this presentation was to explore the Microsoft Data Platform from the perspective of a SQL Server professional; I found this great conceptual diagram of the platform from this website a while back, and wanted to use it as a framework. I figured the best way to teach a subject was the same way I teach my 3-year-old: a little bit of whimsy.

Enjoy.

You have brains in your head

And SQL Skills to boot

You’ll soar to great heights

On the Data Platform too

You’re on your own, and you know what you know,

And YOU are the one who’ll decide where to go.

You’ve mastered tables, columns and rows, OHHHHH MYYYY

You may even have dabbled in a little B.I.

You’re a data professional, full of zest,

But now you’re wondering “What comes next?”

Data! It’s more than just SQL,

And there’s a slew of it coming, measured without equal.

Zettabytes, YotaBytes, XenoBytes and more

All coming our way, faster than ever before.

So what should we do? How should we act?

Should we rest on our laurels? Should we lie on our backs?

Do we sit idly by, while the going gets tough?

No… no, we step up our game and start learning new stuff!

 

Oh, the places you’ll go!

ARCHITECTURE

Let’s start with the Theories,

The things you should know

Designing systems as services,

Are the route you might go.

Distributed, scalable

Compute on Demand

The Internet of Things

And all that it commands.

Infrastructure is base,

Platform is in line

Software and data

Rest on top of design

Once you’ve grasped this

Once you’ve settled in

You’ve embraced cloud thinking

Even while staying on-prem.

But beyond the cloud, there’s data itself.

Structured, polyschematic, binary, and log

Centralized or on the edge,

Some might say “in the fog”

Big Data, Fast Data, Dark, New and Lost

All of it needs management, all at some cost

There’s opportunity there to discover something new

But it will take somebody, somebody with skills like you.

Beyond relational, moving deep into insight

We must embrace new directions, and bring data to life

And there’s so many directions to go!

ADMINISTRATORS

For those of you who prefer administration

System engineering and server calibration

You need to acknowledge, and you probably do

You’ll manage more systems, with resources few.

Automation and scripting are the tools of the trade

Learn PowerShell to step up your game.

Take what you know about managing SQL

And apply it to more tech; you’ll be without equal

Besides the familiar, disk memory CPU

There’s virtualization and networking too

In the future you might even manage a zoo,

Clustering elephants, and a penguin or two.

 

But it all hinges on answering things

Making servers reliable and performance tuning,

Monitoring, maintenance, backup strategies

All of these things you do with some ease.

And it doesn’t matter if the data is relational

Your strategies and skills will make you sensational

All it takes is some get up, and little bit of go

And you’re on your way, ready to know.

So start building a server, and try something new

SQL Server is free, Hadoop is too.

Tinker and learn in your spare time

Let your passions drive you and you’ll be just fine

DEVELOPERS

But maybe you’re a T-SQL kind of geek,

And it’s the languages of data that you want to speak

There’s lots of different directions for you

Too many to cover, but I’ll try a few

You could talk like a pirate

And learn to speak R

Statistics, and Science!

I’m sure you’ll go far

Additional queries for XML and JSON

Built in SQL Server, the latest edition.

You can learn HiveQL, if Big Data’s your thing

And interface with Tez, Spark, or just MapReducing

U-SQL is the language of the Azure Data Lake

A full-functioned dialect; what progress you could make!

There’s LINQ and C-Sharp, and so many more

Ways to write your code against the datastores

You could write streaming queries against StreamInsight

And answer questions against data in flight.

And lest I overlook, or lest I forget,

There’s products and processes still to mention yet.

SSIS, SSAS, In-memory design

SSRS, DataZen, and Power BI

All of these things, all of these tools

Are waiting to be used, are waiting for you.

You just start down the path, a direction you know

And soon you’ll be learning, your brain all aglow

And, oh, the places you’ll go.

And once you get there, wherever you go.

Don’t forget to write, and let somebody know.

Blog, tweet, present what you’ve mastered

And help someone else get there a little faster.

Feel free to leave a comment if you like, or follow me on Twitter: @codegumbo

#DevDataBhm inaugural event

I’m presenting today at DevDataDay in Birmingham.  First time in a while for me, but I’m excited about the opportunity to go to sessions again and learn something (more on that in a bit).  I realize this blog has been a bit dusty for a while, but I’m gonna try to do better about that.

TOPIC:   Theory of Cloud Database Administration
SPEAKER: Stuart Ainsworth
ABSTRACT: 
The cloud is more than just a marketing term; it’s a model for designing scalable, distributed systems and assigning ownership to various components of those systems.  Data is a crucial part of that model, and there are lots of challenges ahead.  This presentation will explore the history of cloud computing, the current state of data in the cloud (using SQL Server 2016), and the impact on career choices for data people.

#BigData is coming; what should SQL Server people do about it?

I’ve been presenting a lot on Big Data (specifically Hadoop) from the perspective of a SQL Server DBA, and I’ve made a couple of recent observations.  I think most people are aware of the fact that data generation is growing at a staggering rate, with some estimates as high as 44 zettabytes by the year 2020; what I think is lacking in the SQL Server community is a rapid movement among database professionals to expand their skills to highly scalable Big Data platforms (like Hadoop) or streaming technologies.  Don’t get me wrong; I think there’s people out there who have made the transition (like Michelle Ufford; SQLFool, now Hadoopsie), and are willing to share their knowledge, but by and large, I think most SQL Server professionals are accustomed to working with our precious relational system.

Why is that?  I think it boils down to three reasons:

  1.  The SQL Server platform is a complex product, with ever increasing opportunities to learn something new.  SQL 2016 is about to drop, and it’s a BIG release; I expect most SQL Server people to wrap themselves up in new features and learn something new soon.  There’s always going to be a need for deep expertise, and as the product continues to mature and grow, it requires deeper knowledge.
  2. Big Data tools are vast, untamed, and very organic.  Those of us accustomed to the Microsoft development cycle are used to having a single official product drop every couple of years; Big Data tools (like Hadoop) are open-source, prone to various forks, and very rapidly developed.  It’s like drinking from a firehose.
  3. It’s not quite clear how it all fits together.  We know that Microsoft has presented some interesting data technologies as of late, but it’s not quite clear how the pieces all work together; should SQL Server pros learn Azure, HDInsight, Hadoop?  What’s this about U-SQL?  StreamInsight, Spark, Cortana Analytics?

The first two reasons aren’t easily solved; they require a willingness to learn and a commitment to study (both of which are scarce resources).  The third issue, however, can be easily addressed by the following graphic.

This is Microsoft’s generic vision of a complete end-to-end analytics platform; for the data professional, it’s a roadmap of skills to learn.  Note that relational engines (and their BI cousins) remain a part of the vision, but they’re only small pieces in an ever-increasing ecosystem of database tools.

So here’s the question for you; what should SQL Server people do about it?  Do we continue to focus on a very specific tool set, or do we push ourselves (and each other) to learn more about the broader opportunities?  Either choice is equally valid, but even if you choose to become an expert on a single platform in lieu of transitioning to something new, you should understand how other tools interact with the relational system.

What are you going to learn today?

Reason 1234 for attending #SQLPass: The stretch

I’ve done a lousy job of encapsulating my thoughts about the Professional Association for SQL Server’s Summits for the last few years, but here’s a quick thought about one of the reasons I love this conference: the stretch.

What do I mean by stretch?  It’s the exposure to new ideas, new concepts, things I may never use.  It’s easy for technical people to get obsessed with our struggles of the day; we invest a lot of mental energy into figuring out problems that require immediate solutions, so we often pick educational sessions that fit into our current solution requirements (e.g., I’m having issues with ETL, so let me go pick up a few sessions on BIML).  That’s satisfying, but it’s not a stretch.

The stretch is thinking about issues and problems that I don’t yet see ahead.  It’s the creative detours, the opportunities to explore ideas that are just, well, fun to think about.  There are two benefits to this; first, by allowing the mind to wander, it actually gives me a chance to have an inspirational moment about my current issues (the “aha” moment).  Second, by learning a little bit about something foreign to me, it gives me an advantage if the situation ever arises that I need to know something about that topic (e.g., we don’t use Azure now, but we might in the future).

What’s your stretch at Summit?  I’m going to squeeze in a class on R, and maybe some U-SQL.

Coding naked.

Me, circa 1992. This is not the most embarrassing picture of me from that time period.

True story: when I was in college, I was the lead singer (if you can call it that) for a goth rock Christian rave band called Sandi’s Nightmare.  I had zero musical talent, and relied solely on the strengths of my band-mate (Brad; you still rock); however, while my singing ability was dubious at best, I was pretty theatrical.  We had a total of 3 awesome shows before we moved on to other things, but it was a lot of fun while it lasted.  Our biggest hit was a cover of Lust Control’s Dancing Naked, a fantastic little number about David’s celebration of the ark returning; his passion and excitement overwhelmed his embarrassment.

I promise, I’m not making this stuff up; it was pretty edgy for the Bible belt of  Northeast Louisiana in the early 90’s.  I mean, I had an earring and everything.   The point of this post (besides reveling in my musical abilities and tastes of 25 years ago) is that sometimes we just need to be naked.

Let me explain; I made choices that in hindsight are difficult to defend (like my haircut). However, there was a reason for the choice at the time, and even though it may not have been the greatest idea, I’ve learned something from every experience I had.  While there are things that I would do differently if given the choice and knowledge I have today, I don’t regret many things.  That’s what the metaphor of nakedness is all about; being open and honest about not only your successes, but your choices along the way.  You can’t change the past, but you can learn from it.

Becoming a Naked Developer

I’ve spent most of my professional career as a SQL developer working for one company (13 years in November);  I was the first developer hired by that company, and my job was to build an integration platform transforming syslog data into reports, all on a shoestring budget.  I strung something together in a couple of weeks, and in the first month, I took a process that normally took two weeks to do by hand down to a day; my boss threatened to kiss me (I was flattered, but declined).

Was it pretty (the code, not the boss)?  No.  Was it successful?  Yes.

Fast forward a couple of years to 2004: I’m a member of a small team building a solution to analyze more data than I’ve ever seen in my life (terabytes of data from firewalls and Windows machines); we had to import and semantically normalize data in near-real-time.  We built a solution using DTS, and then moved it to .NET and MSMQ.

Was it pretty? No.  Was it successful?  Yes.  Part of that code is still being used today, and my company has thrived off of it.  We were acquired by a larger company 8 years ago, and we’ve kept moving forward.

Every few years, we’d talk about doing a rewrite, and we’d bring in some outside expertise to evaluate what we’ve done and help us come up with a migration plan.  Some consultant would come in, look at what we did, and shake their heads at us.  Each time, we (the remaining original developers) would attend meeting after meeting as our code was dissected and judged; we sat naked (metaphorically) in front of a team of experts who told us everything that was wrong with the choices that we made.  At the end of each visit, we’d regroup and complain about the agony of judgement. In all honesty, we would have done things differently, given our current experience (and modern technology).  However, the simple truth is: we won the day.  We continue to win the day.

During some of these initial code reviews, the consultants were jerks.  They’d spend a lot of time bemoaning how horrid everything was; I’d be embarrassed and try to explain what I would have done differently if only I had the time.  The consultants would eventually propose some complex solution that would cost hundreds of thousands of dollars to implement.  Our company would consider the offer, stick it in a file somewhere, and we’d keep on keeping on.   Clunky code that works is better than expensive code with the same outcome.

Over time, I learned to live with the embarrassment (as you can tell from this post); I started listening to what the consultants would say and could quickly figure out which ones had good advice, and which ones were there to just be seagulls.  People that focused on solving the immediate problems were much easier to talk to than the people who wanted to rip out everything and start over.  Don’t get me wrong; sometimes you need to start over, but there is  a cost associated with that decision.  It’s usually much easier to start where you are, rather than trying to reinvent the wheel.

What’s my point?

First and foremost, don’t dwell on your past mistakes.  If you can honestly say that you built something that worked within the constraints of the time period, no matter how clunky or silly it seems now, then it was the right decision.  Working software works.  Laugh at it, learn from it, and let it go; repair what you can, when you can, but keep moving forward.

Second, learn to code naked.  Whatever it is that you’re trying to do at the moment, share it.  Open yourself up to code review, to criticism; exposure brings experience.  Granted, you’ll meet some seagulls along the way, but eventually, you’ll figure out who’s there to help you grow as a developer.  Foster those relationships.

Third, encourage people to grow; recognize that the same grace you desire for your coding mistakes is applicable to others as well.  Be kind, be gentle, but be honest; seek to understand why they made the design choices they did, and be sensitive to the fact that if it works, it works.  Sometimes we get caught up on trying to figure out the right way to do something, and ultimately end up doing nothing because it’s too difficult to implement it according to the textbook.

Life is too short to not enjoy what you do; try to share that joy without embarrassment.

THIS is the most embarrassing picture of me from that time period.  I thought being an actor was the best way to meet girls… Yes, those are tights I am wearing…