
Oh, The Places You’ll Go! – #SQLSeuss #SQLPASS

Last week, I had the privilege to speak at the annual PASS Summit; I got to present two different sessions, but the one I’m most proud of was my Lightning Talk: Oh, the Places You’ll Go! A Seussian Guide to the Data Platform. I bungled the presentation a bit (sorry to those of you who want to listen to it), but I feel pretty good about the content. I’ve presented it below, with the slides that I used for the talk.

The goal of this presentation was to explore the Microsoft Data Platform from the perspective of a SQL Server professional; I found this great conceptual diagram of the platform from this website a while back, and wanted to use it as a framework. I figured the best way to teach a subject was the same way I teach my 3-year-old: a little bit of whimsy.

Enjoy.

You have brains in your head

And SQL Skills to boot

You’ll soar to great heights

On the Data Platform too

You’re on your own, and you know what you know,

And YOU are the one who’ll decide where to go.

You’ve mastered tables, columns and rows, OHHHHH MYYYY

You may even have dabbled in a little B.I.

You’re a data professional, full of zest,

But now you’re wondering “What comes next?”

Data! It’s more than just SQL,

And there’s a slew of it coming, measured without equal.

Zettabytes, Yottabytes, Xenobytes and more

All coming our way, faster than ever before.

So what should we do? How should we act?

Should we rest on our laurels? Should we lie on our backs?

Do we sit idly by, while the going gets tough?

No… no, we step up our game and start learning new stuff!

 

Oh, the places you’ll go!

ARCHITECTURE

Let’s start with the Theories,

The things you should know

Designing systems as services,

Is the route you might go.

Distributed, scalable

Compute on Demand

The Internet of Things

And all that it commands.

Infrastructure is base,

Platform is in line

Software and data

Rest on top of design

Once you’ve grasped this

Once you’ve settled in

You’ve embraced cloud thinking

Even while staying on-prem.

But beyond the cloud, there’s data itself.

Structured, polyschematic, binary, and log

Centralized or on the edge,

Some might say “in the fog”

Big Data, Fast Data, Dark, New and Lost

All of it needs management, all at some cost

There’s opportunity there to discover something new

But it will take somebody, somebody with skills like you.

Beyond relational, moving deep into insight

We must embrace new directions, and bring data to life

And there’s so many directions to go!

ADMINISTRATORS

For those of you who prefer administration

System engineering and server calibration

You need to acknowledge, and you probably do

You’ll manage more systems, with resources few.

Automation and scripting are the tools of the trade

Learn PowerShell to step up your game.

Take what you know about managing SQL

And apply it to more tech; you’ll be without equal

Besides the familiar: disk, memory, CPU

There’s virtualization and networking too

In the future you might even manage a zoo,

Clustering elephants, and a penguin or two.

 

But it all hinges on answering things

Making servers reliable and performance tuning,

Monitoring, maintenance, backup strategies

All of these things you do with some ease.

And it doesn’t matter if the data is relational

Your strategies and skills will make you sensational

All it takes is some get up, and a little bit of go

And you’re on your way, ready to know.

So start building a server, and try something new

SQL Server is free, Hadoop is too.

Tinker and learn in your spare time

Let your passions drive you and you’ll be just fine

DEVELOPERS

But maybe you’re a T-SQL kind of geek,

And it’s the languages of data that you want to speak

There’s lots of different directions for you

Too many to cover, but I’ll try a few

You could talk like a pirate

And learn to speak R

Statistics, and Science!

I’m sure you’ll go far

Additional queries for XML and JSON

Built into SQL Server, the latest edition.

You can learn HiveQL, if Big Data’s your thing

And interface with Tez, Spark, or just MapReducing

U-SQL is the language of the Azure Data Lake

A full-functioned dialect; what progress you could make!

There’s LINQ and C-Sharp, and so many more

Ways to write your code against the datastores

You could write streaming queries against StreamInsight

And answer questions against data in flight.

And lest I overlook, or lest I forget,

There’s products and processes still to mention yet.

SSIS, SSAS, In-memory design

SSRS, DataZen, and Power BI

All of these things, all of these tools

Are waiting to be used, are waiting for you.

You just start down the path, a direction you know

And soon you’ll be learning, your brain all aglow

And, oh, the places you’ll go.

And once you get there, wherever you go,

Don’t forget to write, and let somebody know.

Blog, tweet, present what you’ve mastered

And help someone else get there a little faster.

Feel free to leave a comment if you like, or follow me on Twitter: @codegumbo

One (Last) Trip to the Emerald City for #SQLPASS

On Monday, I’m flying out to the Emerald City (Seattle, WA) for the annual gathering of Microsoft database geeks known as the PASS Summit; as always, I’m excited to see friends and learn new stuff. However, this will probably be my last Summit. Over the last few years, my career trajectory has taken me away from database development and administration, and it’s time that I start investing in the things that now interest me (technology management and operational culture). My goals for the next year are to attend conferences like the DevOps Enterprise Summit and SREcon; I want and need to learn more about making IT efficient and managing large-scale applications.

I’m not entirely disconnecting from the SQL community; I still plan to stay active and involved in our local chapter (AtlantaMDF), and to remain part of the organizing committee for our SQL Saturdays. I still want to be a data-driven professional; I’m just not a data professional. That’s a subtle distinction, but it’s important to me. I’ll still sling code part-time and for hobbies, but I’m really trying to home in on what I enjoy these days, and that’s process, procedures, management, and cultural change in IT (all IT, not just SQL Server).

So, this year will be different for me; instead of trying to network and schmooze and elevate my own SQL skillset, I’m going to hang out in sessions like “Overcoming a Culture of FearOps by Adopting DevOps”, “Agile Development Fundamentals: Continuous Integration with SSDT”, and “Fundamentals of Tech Team Leadership”. I may visit some courses on Cloud development and Analytics, but mostly, I want to enjoy spending some time with folks that I may not see again for a while.

I truly hope to see you there; I owe a lot to all of you, so I’m probably going to have a huge bar tab after buying rounds. Should be an exciting week.

#DevOps “We are all developers”

https://youtu.be/RYMH3qrHFEM

While thinking about the Implicit Optimism of DevOps, I started running through some of the cultural axioms of DevOps; I’m not sure if anyone has put together a comprehensive list, but I have a few items that I think are important. “Be good at getting better” is my new mantra, and now, I’m fond of saying “We are all developers”. I remember eating lunch at SQL Saturday Atlanta 2016 listening to a database developer describing this perspective to a DBA, and hearing how strongly the DBA objected to that label. I tentatively agreed with the developer, but recently, I’ve gotten more enamored with that statement.

Having worked as both a developer and an administrator, I get it; there’s an in-group mentality. The two sides of the operational silo are often working toward very different goals; developers are tasked with promoting change (new features, service packs, etc.). DBAs are tasked with maintaining the stability of the system; change is the opposite of stability. Most technical people I know are very proud of their work, which means that there’s often a desire for accuracy in the work we do. If a DBA is trying to make a system stable, and you call them a developer (think: change instigator), then it could be perceived as insulting.

It’s not meant to be.

Efficient development (to me) revolves around the three basic principles of:

  1. Reduce – changes should be highly targeted, small in scope, and touch only what’s necessary.
  2. Reuse – any process that is repeated should be repeated consistently; and
  3. Recycle – code should be shared with stakeholders, so that inspiration can be shared.

From that perspective, there are lots of opportunities to apply development principles to operational problems. For my DBA readers (all three of you), think about all the jobs you’ve written to automate maintenance. Think about the index changes you’ve suggested and/or implemented. Think about the reports you’ve written to monitor the performance of your systems. Any time you’ve created something to help you perform your job more efficiently, that’s development.
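
To put that in perspective, even a routine maintenance job is code. Here’s a minimal sketch (my illustration, not from the post); the instance name is hypothetical, and it assumes the SqlServer PowerShell module that provides Invoke-Sqlcmd:

```powershell
# A minimal sketch: loop over the user databases on an instance and run an
# integrity check. "PRODSQL01" is a hypothetical instance name.
Import-Module SqlServer   # assumes the SqlServer module (Invoke-Sqlcmd) is installed

$instance  = "PRODSQL01"
$query     = "SELECT name FROM sys.databases WHERE database_id > 4;"   # skip system databases
$databases = Invoke-Sqlcmd -ServerInstance $instance -Query $query

foreach ($db in $databases) {
    # DBCC CHECKDB against each user database; no timeout so large databases can finish
    Invoke-Sqlcmd -ServerInstance $instance -Database $db.name -Query "DBCC CHECKDB WITH NO_INFOMSGS;" -QueryTimeout 0
    Write-Output "Integrity check completed for $($db.name)"
}
```

Whether that runs from a SQL Server Agent job or a scheduled task, it’s a small, targeted, repeatable change; in other words, development.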

DevOps is built on the principle of infrastructure as code, with an emphasis on giving developers the ability to build the stack as needed. Google calls its implementation of DevOps principles Site Reliability Engineering, and characterizes it as “what you get when you treat operations as if it’s a software problem”. Microsoft is committed to DevOps as part of its application lifecycle management (although it’s notably cloud-focused). When dealing with large-scale implementations, operations can benefit from the application of the principles of efficient development.
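
To make “infrastructure as code” a little more concrete, here’s a minimal sketch (my illustration, not something from the post) using PowerShell Desired State Configuration; the configuration name, feature, and folder path are hypothetical:

```powershell
# A minimal sketch: the server's desired state is declared as code,
# versioned, and applied -- not configured by hand.
Configuration BasicAppServer {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node "localhost" {
        WindowsFeature IIS {
            Name   = "Web-Server"
            Ensure = "Present"
        }
        File AppFolder {
            DestinationPath = "C:\Apps\Reporting"   # hypothetical path
            Type            = "Directory"
            Ensure          = "Present"
        }
    }
}

# Compile the configuration to a MOF document and apply it to the node.
BasicAppServer -OutputPath "C:\DSC\BasicAppServer"
Start-DscConfiguration -Path "C:\DSC\BasicAppServer" -Wait -Verbose
```

The specific tool matters less than the idea: the server’s desired state lives in a versioned file that can be reviewed and re-applied like any other code.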

We are all developers; most of us have always been developers. We just called it something else.

The Implicit Optimism of #DevOps

One of my favorite podcasts lately is DevOps Café; John Willis and Damon Edwards do a great job of talking about the various trends in IT management, and have really opened my eyes to a lot of different ways of thinking about problems in enterprise systems administration. On a recent podcast, John interviewed Damon about his #DOES15 presentation, “DevOps Kaizen: Practical Steps to Start & Sustain a Transformation”. During that conversation, Damon mentioned a phrase that really resonated with me: Be Good at Getting Better.

At the heart of the DevOps philosophy is the desire to improve delivery of services through removal of cultural blockages. Success isn’t measured by the amount of code pushed out the door or the number of releases; it’s the ability to continuously improve over time. Companies that experiment (even with ideas that don’t work) learn a different way to approach any problem that they face. The freedom to experiment means that failure is not an outcome; it’s a method of improvement.

The optimism of that appeals to me; I think if you’re focusing on continuous improvement, then you’ve implicitly accepted two fundamental principles of optimism:

  1. Change is necessary for growth, and
  2. Things CAN improve (you just need to figure out how).

There’s some beauty in that; if you’re an organization facing overwhelming technical debt, it’s not uncommon to sink into a spiral of despair, where changes are infrequent for fear of breaking something. Mistrust breeds, as organizations point fingers at other teams for “failing to deliver”. You quit working toward solutions, and instead focus on fighting fires and maintaining some sort of desperate last stand.

You’re better than that.

DevOps is a cultural change; it’s an optimistic philosophy focused on changing IT culture while being open to different strategies for doing so. If you can commit to Be Good at Getting Better, you can change. It may be slow, it may be frustrating, but every day is an opportunity to incrementally move the ball forward in delivering quality business services. The trick is not to focus on where to begin, but simply to begin.


Where’s your slack?

I’ve been rereading the book Slack: Getting Past Burnout, Busywork, and the Myth of Total Efficiency recently.  As I alluded to in my last post, my life has been rough for the last few months.  My nephew’s passing took the wind out of an already saggy sail; I’ve spent a great deal of time just trying to balance work, family, and life in general.  Some people turn to counselors; I turn to project management books.

The premise of the book is that change requires free time, and that free time (slack) is the natural enemy of efficiency.  This is a good thing; if you are 100% efficient, you have no room to effect change.  Zero change means zero growth.  I’ve been a proponent of slack for a while (less successfully than I’d like); it makes sense to allow people some down time to grow.  Just to be clear, slack isn’t wasted time; it’s an investment in growth.  Slack tasks include:

  • Research into interesting projects.  Lab work allows you to experience the unexpected, which gives you time to prepare for the unexpected in production
  • Building relationships. Teams are built on trust, and trust is earned through building relationships.  Teams that like each other are more likely to be successful when it comes to problem solving.
  • Shadow training.  Allow team members to work in other teams for a while; learn how the rest of the company operates.

In short, slack is necessary in order to promote growth; if you want your organization to stay ahead of its competition, cutting resources in the name of efficiency is a sure-fire plan for losing.  The best advice for slack time is the 80/20 rule: run your team at 80% capacity, and leave 20% for slack.  In an emergency, slack time can be temporarily sacrificed, but it’s the responsibility of management to return to normal work levels as soon as possible.

So what does this mean for me personally?  In the name of efficiency, I let slack time go.  I work a full-time job, a couple of different consulting gigs, act as a chapter leader for AtlantaMDF, and am an active father.  I have no hobbies, and suck at exercise.  I love to travel, but trips are planning exercises in and of themselves.  In short, I have zero slack to deal with emergencies.  When something goes wrong and time gets compromised, I immediately feel guilty because I’ve robbed Peter to pay Paul in terms of time.  That’s not living.

I’m done with that.

Change is incremental, so I’m not planning on upsetting the apple cart just yet, but I am trying to figure out ways to make my slack time more of a recharge time.  Don’t get me wrong; I waste time.  I sit and stare at Facebook like the rest of the modern world; I binge on Netflix when new series drop.  That’s not slack, and it doesn’t recharge me. Slack is using free time to grow, to change.  My goal is to find an hour a week for growth-promoting free time.  I’ll let you know how I’m doing.

 

(Personal) Kanban Myths: The Myth of The Important Task

Continuing in my efforts to chronicle myths of kanban utilization, I thought I would tackle the second biggest misconception I see surrounding kanban boards.  As I discussed in my previous post, many people mistake kanban to be a process for task management, when in reality, it’s a visualization of some other process.  The key takeaway is that you should spend some time making your board match your process; a kanban board should emulate your workflow, not the other way around.

So you’ve invested the time, and you now have a complex board that accurately reflects how you do work.  You’re humming along, getting things done.  Life is good, right?

Almost.  If you’re just using a kanban board to visualize a process, there’s a temptation to accept the following:

Myth 2: Kanban is a visualization tool primarily focused on (important) task management.

This is partially true; in industrial kanban, workers may use a kanban board to keep track of individual issues as they move through the workflow.  Managers, however, should primarily use the tool to look for opportunities to continuously improve their processes.  Once your kanban board matches your process, it becomes easy to understand where bottlenecks occur (whether from resource allocation or from unnecessary process steps).  Tuning workflow is a critical part of kanban utilization.

For personal kanban, however, managing resource allocation becomes a bit of a challenge; how do you manage yourself?  You’re already too busy working through your pile of stuff.  Unless you can recruit other friends or family members (the Tom Sawyer approach), it’s unlikely you’ll be able to adjust resource allocation.  You can, however, begin to look for opportunities to tune processes.  How?

This is where the conversation has to drift away from kanban a bit; as a tool, a board allows you to visualize workflow and focus on improvement, but the measure of improvement isn’t built into the board itself.  In other words, you can see how things work, but there’s no built-in visualization for determining if something has room to improve.  You have to decide what that method of improvement will be.  To improve your processes, you must define the metrics for improvement.  Those metrics are known more commonly as goals.

Goals are a critical component of a successful kanban implementation.  For example, if you have a personal goal of “I want to lose 50 pounds in the next year”, that goal should influence your decision on which tasks to pursue (and how those tasks are prioritized).  In other words, if your kanban board shows that you’re getting a lot done, but no tasks are associated with the goal of losing weight, you’ve got some room to improve your processes.

So, in summary:

  1. Spend some time making your board match your processes (at least 30 days).
  2. Define your goals (your metrics for improvement).
  3. Take some time to tweak your processes to align them with your goals.

Minor incremental adjustments are more likely to be adopted than sudden and swift changes (see my management notes about the change curve).  Kanban is a long-term tool, but it can be highly effective at improving workflow.

 

2014 Year In Review

Finally finding some time to sit down and write this post; of course I’m squeezing it in after work, and before my wife and son come home, so there’s no telling how far I’ll get. This post is probably best treated as a stream of consciousness effort, rather than my usual agonizing over every word. 2014 was a mixed bag of a year; lots of good stuff, and lots of not-so-good stuff; I’ll try to start with the good:

2014 Professional Highs

As I’ve mentioned before, I was promoted to management in my day job a few years ago; in October of 2014, my kingdom expanded. Instead of managing a team of SQL Server DBAs, my department was consolidated with another small group, and I now manage the IT infrastructure for our Product Group. It’s not a huge jump, but it is an opportunity for me to get involved with more than just SQL Server and databases; I’m now managing a team of sysadmins as well, so I’m getting a crash course on virtualization, server administration, and networking. It’s been fun, but a bit challenging.

I haven’t neglected my SQL Server roots, however; I presented to over a dozen user groups & SQL Saturdays last year (which is a lot for a full-time desk jockey). I delivered two killer presentations at Summit in November, which boosted my confidence tremendously after 2013’s less-than-stellar performance. Blogging was steady for me (23 posts on my blog), but I did have a chance to write a piece on Pinal Dave’s blog (Journey to SQL Authority); that was a great opportunity, and one I hope to explore more. In addition to blogging and community activity, I also finally passed the second test in the MCSA: SQL Server 2012 series (Administration; 70-462). I’m studying for the last test (Data Warehouse; 70-463), and then I need to start getting some virtualization certs under my belt.

Finally, a big professional step forward for me was that I became a Linchpin (part-time); I’ve had a great deal of respect for this team of SQL Server professionals over the years, and I was very blessed to be able to step in and help on a few projects this year. I’m hoping for more. It’s a great way to test the waters, even if I’m not ready to dive into full-time consulting yet.

2014 Professional Lows

I got nominated for Microsoft MVP (twice); I didn’t get it (twice).

 

2014 Personal Highs

Big year for travel for my wife and me; we went to Jamaica and Vancouver, as well as Nashville, Chattanooga, St. Louis, Charlotte, Seattle, Hilton Head, Myrtle Beach, and Ponte Vedra. We saw two killer shows: George Strait and Fleetwood Mac; I also got to see one of my favorite bands, The Old 97’s. Our son turned a year old, and it’s been a lot of fun watching him grow and discover new things. 2014 was a year of joy in a lot of ways….

2014 Personal Lows

2014 was also a year of sorrow for me; if you follow me on Facebook, you know how proud I am of my son. What’s less well known is that I have two teenage daughters from my previous marriage; they turned 17 and 15 this year. In September of 2013, my daughters decided that they didn’t want to spend as much time with me and their stepmother. Over the last year, I’ve had to come to terms with the fact that my daughters aren’t planning on changing that any time soon, and they have no desire to have a relationship with their brother. That’s a pain that I’ll never get over; I love all of my children, and all I can do is pray that someday things will change. The only reason I feel compelled to mention it publicly is that I don’t want them to become invisible; I have three children, even if I don’t get to see two of them very often. I also feel like I’ve reached a turning point; I was VERY depressed last year because of this situation, and I’m ready to move forward in 2015.

 

Summary

2014 was more good than bad, but I’m looking forward to 2015. I’ve always believed that you should play the hand you’re dealt, and make the most of it. Life is good, and it’s only getting better.

 


#SQLPASS–Who’s Making It Rain?

 

As promised in my previous post (#SQLPASS–Good people, bad behavior…), I’d like to start diving into some of the controversies that have cropped up in the last year and critically analyze what I consider to be “bad decisions”.  This first one is complex, so let me try to sum up the players involved first (with yet another post to follow about the actual decision).  Please note that I am NOT a fan of conspiracy theories (no evil masterminds plotting to rule the SQL Server community), so I’m trying to avoid inferring too much about motive, and instead focusing on observable events.

A lot of the hubbub over the last couple of weeks about the Professional Association for SQL Server wasn’t just about the election or the password controversy, but about the decision to become simply PASS in all marketing materials (gonna need a new hashtag for twitter). So much controversy, in fact, that Tom LaRock, current Board President, wrote an excellent blog post about building a bigger umbrella for Mike.  I applaud Tom for doing this; it’s a vision, and that’s a great thing to have.  However, I wanted to take this metaphor, and turn it on its side; if we need umbrellas, then who’s making it rain?  Let’s take a look at the pieces of the puzzle.

 

Community as Commodity

To figure out the rainmakers, we need to define what the value of the Professional Association for SQL Server is.  If you’re reading this post, I bet you can look in a mirror and figure it out.  It’s you.  Your passion, your excitement, your interest in connecting and learning about SQL Server is the commodity provided by the organization.  We (the community) have reached a certain maturity in our growth as a commodity; we recruit new members through our enthusiasm, and we contribute a lot of free material to the knowledge base for SQL Server.  At this point, it’s far easier to grow our ranks than it would be to start over.   

However, the question I would ask is: what do YOU get out of membership?  For most of us, it’s low-to-no cost training (most of which is provided by other community members).   The association provides a conduit to connect us.   The value to you increases when you grow. Exposure to new ideas, new topics, a deeper understanding of the technology you use; all of these are fuel for growth.  In short, as individuals, community members profit most from DEPTH of knowledge.

The more active you are in the community, the more likely you’ll be able to forage for valuable insight; how many of you are active in the Professional Association for SQL Server?  According to this tweet from the official twitter account, 11,305 people have active profiles with the organization.  While that’s not a great metric for monitoring knowledge seekers, it does provide some baseline measure of the people who care enough to change their profiles when prompted.

 

Microsoft Needs A New Storm

The Professional Association for SQL Server was founded to build a community of database professionals with an interest in learning more about Microsoft SQL Server; the founding members of the organization were Microsoft and Computer Associates, who obviously saw the commodity in building a community of people excited about SQL Server.  The more knowledge about SQL Server in the wild, the more likely that software licenses and training will increase.  Giving away training and knowledge at a low cost yields great dividends in the end.

This is not a bad thing at all; it’s exciting to have a vendor that gives away free stuff like training.  However, it appears that Microsoft is making a slight shift away from a focus on SQL Server.  What makes me think this?

  • It’s getting cloudy (boy, I could stretch this rain metaphor): software as a service (including SQL as a service) is a lot more profitable in the long run than software licensing.  By focusing more on cloud services (Azure), Microsoft is positioning itself as a low-to-no administration provider.  
  • Electricity (Power BI, Power Query): Microsoft is focusing pretty heavily on the presentation layer of traditional business intelligence, and touting how simple it is to access and analyze data from anywhere in Excel “databases”.  Who needs SQL Server when your data is drag-and-drop?
  • The rebranding of SQL Server Parallel Data Warehouse: Data warehouse sounds like a database; Analytics Platform System sounds sexier, implying that your data structures are irrelevant.  Focus on what you want to do, not how to do it.

The challenge that Microsoft faces is that it has access to a commodity of SQL Server enthusiasts who don’t exactly fit the model of software-as-a-service; those of us that are comfortable with SQL Server on premises haven’t exactly made the leap to the cloud.  Also, many DBAs dabble in Excel; they’re not Analytics practitioners.  In short, Microsoft has Joe DBA, but is looking for Mike Rosoft (see what I did there?), the Business Analyst.  Mike uses Microsoft tools to do things with data, not necessarily databases.  The problem?  Mike doesn’t have a home.  In order to maximize profits, Microsoft needs to invest in the growth of a larger and more diverse commodity.  In short, Microsoft wants a BROADER audience, but they want them to be excited and passionate about their technology.

Rain Dancing With C&C

The Professional Association for SQL Server has been managed by Christianson & Company since 2007.  While the Professional Association for SQL Server Board of Directors is made up of community volunteers, C&C is a growing corporation with the traditional goal of any good for-profit company: to make money.  How does C&C make money? They grow and sell a commodity.  If the Professional Association for SQL Server grows as an organization, C&C’s management of a larger commodity increases in value.   As far as I can tell, the Professional Association for SQL Server is C&C’s only client that is managed in this way.

The community gets free/low-cost training; C&C helps manage that training while diverting the cost to other players (i.e., Microsoft and other sponsors).  If Microsoft is looking for a broader commodity, C&C will be most successful if they can serve that BROADER audience.  The Professional Association for SQL Server’s website claims to serve a membership of 100,000+; that number includes every email address that has ever been used to register for any form of training from the association, including SQLSaturdays, 24HOP, and Summit.  Bigger numbers mean increased value when trying to build a bigger umbrella.

Yet, this 100,000+ membership is rarely reflected in anything other than marketing material.  Only 11,305 of them are eligible to vote; fewer still (1,570) actually voted in the last election.  5,000 members are estimated to attend Summit 2014.  Perhaps the biggest measure of activity is the number of attendees at SQLSaturdays (18,362).  Any way you slice it, it seems to me that the number of people who are actively seeking DEEPER interactions is far smaller than the BROAD spectrum presented as members.  Furthermore, it would seem that reaching more than 100,000 members is challenging; if only 11,000 members are active in the community, and they’re the ones recruiting new members, how do you keep growing?  You reach out to a different audience.

 

Summary

I feel like it’s important to understand the commercial aspect of community building.  In short:

  • Microsoft needs to reach a broader audience by shifting focus from databases to simply data;
  • Christianson & Company will be able to grow as a company if they can help the Professional Association for SQL Server grow as a commodity;
  • The community has reached critical mass; it’s far easier to add to our community than it would be to build a new one.
  • The association has reached a large swath of the community (100,000+ members); far fewer of them are active (11,305).

Where am I going with this?  That’s coming up in my next post.  While I don’t deny the altruism in the decision by the Board of Directors to reach out to a broader audience, I also think we (the commodity) should understand the financial benefits of building a bigger umbrella.

Managing a Technical Team: Building Better

Heard a great podcast the other day from the team at Manager Tools, entitled “THE Development Question”.  I’m sorry to say that I can’t find it on their website, but it did show up in Podcast Addict for me, so hopefully you can pick it up and give it a listen.  I’ll sum up the gist here, but it’s really intended to be a starting point for this blog post.  In essence, Manager Tools says that when a direct approaches you (the manager) with a question, one of the best responses you can offer is another question:

“What do you think we should do?”

Their point is not that management is a game of Questions Only, but that leaders want to develop others, and development comes through action; employees have lots of reasons for asking questions, but a good manager should realize that employees need to be empowered and able to take action in most situations.  If an employee is constantly waiting on approval from the manager, then the manager becomes the bottleneck.

Mulling this over for a couple of days made me realize that there’s a potential hazard for most new technical managers related to the issue of employee development; are we doing enough to make our employees better engineers than we were?  Let me walk you through my thinking:

  1. Most new technical managers were promoted to their position from within their company, and it was usually because they were the best operator (i.e., someone who was skilled at their job as an engineer).
  2. Most new technical managers have a tough time separating themselves from their prior responsibilities, particularly if those prior responsibilities were very hands-on with a product/service/effort that’s still in use today (e.g., as a developer, John wrote most of the code for the current application; as a manager, John still finds himself supporting that code).
  3. If you were the best at what you did, that means that the people you now manage weren’t.  Actual skill level is debatable, but most of us take a lot of pride in what we do.  Pride can overemphasize our own accomplishments, while downplaying the accomplishments of others.

This is a problem for technical management, because the goal of a good manager is NOT to solve problems, but rather to increase efficiency.  Efficiency is best achieved by distribution; in other words, you as a technical manager could learn how to improve your own technical skills by 10%, but if your employees don’t grow, your team’s not really making progress. On the other hand, if you invest in your directs’ growth and each of them improves their technical skills by 10%, it’s a bigger bang for your buck (unless you only have one employee; if that’s the case, polish your resume).

Here’s the kicker: sacrificing your technical skills while building the skills of your employees will pay off more in the long run than continuing to build your own technical knowledge alone.  You WANT your employees to be better engineers than you were, because you gain the advantage of their increased skills, distributed and magnified by the number of employees you have.  I’m not saying that you should completely give up your passion for technology; it’s helpful for managers to understand the challenges their employees face without necessarily being an expert (that’s a fundamental principle of Lean Thinking: “go see” management).  However, you should strive to be the least technical person on your team by encouraging the growth of the rest of your team.

So let me ask you: “What are you doing to develop your employees today?”

Hadoop for the SQL Server DBA – Initial Challenges

I’ve been intrigued by the whole concept of Big Data lately, and have started presenting a couple of different sessions on it (one of which was accepted for PASS Summit 2014).  Seems only right that I should actually *gasp* blog about some of the concepts in order to firm up some of my explanations.  Getting started with Hadoop can be quite daunting, especially if you’re used to relational databases (especially the commercial ones); I hope that this series of posts can help clear up some of the mystery for the administrative side of the house.  Before we dive in, I think it’s only fair to lay out some of the initial challenges with discussing Big Data in general, and Hadoop specifically.  Depending on your background, some of these may be more challenging than others.

Rapid Evolution

Welcome to the wild, wild west.  If you come from a commercial database background (like SQL Server), you’re probably accustomed to a mature product.  For Microsoft SQL Server, a new version gets released on what appears to be a 2-4 year schedule (SQL 2005 -> 2008 -> 2012 -> 2014); of course, there’s always the debate as to what constitutes a major release (2008 R2?), but in general, the core product gets shipped with new functionality, and there’s some time before additional new functionality is released.

Hadoop’s approach to the release cycle is much looser; in 2014 alone, there have been two “major” releases with new features and functionality included.  Development for the Hadoop engine is distributed, so the release and packaging of new functions may vary within the ecosystem (more on that in a bit).  For developers, this is exciting; for admins, this is scary.   Depending on how acceptable change is within your operational department, the concept of rolling out an upgraded database engine every 3-4 months may be daunting.

Ecosystems, not products

Hadoop is an open-source product, so if you’re experienced with other open-source products like Linux, you probably already understand what that means; open-source licensing means that vendors can package the core product into their own offerings, subject to the terms of the license.  This usually means that commercial providers will either bundle an open-source product with their own proprietary side-by-side software (“we interface with MySQL” or “we run on Linux”), or they release their modified version of the software in a completely open fashion and earn revenue from a support contract (e.g., Red Hat).  In either case, it’s an ecosystem, not a canned product.

Hadoop technically consists of four modules:

  • Hadoop Common: The common utilities that support the other Hadoop modules.
  • Hadoop Distributed File System (HDFS™): A distributed file system that provides high-throughput access to application data.
  • Hadoop YARN: A framework for job scheduling and cluster resource management.
  • Hadoop MapReduce: A YARN-based system for parallel processing of large data sets.

However, take a look at the following framework from Hortonworks (the Microsoft partner for Hadoop):

[Hortonworks Data Platform stack diagram]

There’s a lot of stuff in there that’s being developed but isn’t officially Hadoop.  It could become part of the official stack at some point, or it may not.  Other vendors may adopt it, or they may not.  Each of these components has its own update schedule (again, change!), but there is some flexibility in this approach (you can upgrade only the individual components); it does make the road map complex compared to traditional database platforms.
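
If you want to poke at the distributed file system without standing up a full Hadoop client, here’s a minimal sketch (my illustration, not from the post) that calls the WebHDFS REST API from PowerShell; the namenode host and HDFS path are hypothetical, and it assumes WebHDFS is enabled on the default Hadoop 2.x HTTP port (50070):

```powershell
# A minimal sketch: list an HDFS directory over the WebHDFS REST API.
# "hdp-namenode01" and "/user/demo" are hypothetical; adjust for your cluster.
$nameNode = "hdp-namenode01"
$path     = "/user/demo"

# WebHDFS exposes HDFS operations as plain HTTP calls (no Hadoop client install needed).
$uri  = "http://$($nameNode):50070/webhdfs/v1$($path)?op=LISTSTATUS"
$resp = Invoke-RestMethod -Uri $uri -Method Get

# Each FileStatus entry describes a file or directory under the requested path.
$resp.FileStatuses.FileStatus |
    Select-Object pathSuffix, type, length, replication
```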

Big Data doesn’t always mean Big Data.

Perhaps the hardest thing to embrace about Big Data in general (not just Hadoop) is that the nomenclature doesn’t necessarily line up with the driving factors; a Big Data approach may be the best approach for smaller data sets as well.   In essence, data can be described in terms of the 4 V’s:

  1. Volume – The amount of data held
  2. Velocity – The speed at which the data should be processed
  3. Variety – The variable sources, processing mechanisms and destinations required
  4. Value – The amount of data that is non-redundant, unique, and actionable

A distributed approach (like Hadoop) is usually appropriate when tackling more than one of these four V’s; if your data is just large, but low in velocity, variety, or value, a single installation of SQL Server (with a lot of disk space) may be appropriate.  However, if your data has a lot of variety and a lot of velocity, even if it’s small, a Big Data approach may yield considerable efficiency.  The point is that sheer volume alone is not necessarily the impetus for using Hadoop at all.

Summary

Big Data & Hadoop are complex topics, and they’re difficult to understand if you approach them from a traditional RDBMS mentality.  However, understanding that Big Data approaches are rapidly evolving, built from disparate components, and applicable to more than just volume can lay a foundation for tackling the platforms.