A large portion of my career has been spent fighting against organizational disarray. Over the years, I’ve become very accustomed to being dropped into an established culture with the intent and promise of doing everything I could to improve it in some way. Sometimes I was alone, and sometimes I was even accompanied by my closest friends and colleagues. While I’ve both succeeded and failed to varying degrees in both situations — and occasionally, even, at the same time — one thing has not changed: this is not an easy task. We were cucumbers, being dropped into jars with the intent to somehow, against the forces of nature, un-pickle pickles.
Around 2001, I was laid off. It was still early in my career, so it was very upsetting. Those who were in IT at the time probably remember the situations that led to mass layoffs (the Dot-Com bubble burst; 9/11). I felt that I was a valuable employee, and had feedback as such, so it was confusing to my still-naive sense of fair play that I should be handed a pink slip. After the fact, on a phone call with a VP who had left of his own accord around that time, he shared a tidbit that has stuck with me ever since: “Managers never lay themselves off.” The truth of it was undeniable, and it led to a thought experiment that has always nagged at me: if it turned out I was part of the problem, and not part of the solution, would I have the courage to lay myself off for the betterment of the company?
I have, on many occasions, been a terrible technical lead. For years, I had no idea why people continued to promote me into these positions, even across different jobs. It’s not like I wanted to be a technical lead, they would just walk up and tell me that would be my new job. I was a terrible technical lead, because I could never work out what the job entailed. To become a technical lead, you had to be the most skilled of any of the other engineers — at least that’s the idea. However, to stay skilled you need to code constantly — that’s the “technical” part. The “lead” part, however, never seemed to fit, because to lead, you have to do more than sit at your desk and code all day with your headphones on. How can you stay technical and still lead?
A few years back, I interviewed a woman for a Java software engineering position, and noticed that she had “extensive experience with Hibernate” on her résumé. I remember thinking to myself, “well, what an odd thing to mention having ‘extensive experience’ with. This person must really adore Hibernate.” I mean, it’s not like she put “has extensive experience building applications using Test-Driven Development,” or “has extensive experience modeling transactional domains.” Nope, “extensive experience” with a very specific framework devoted to a very specific task. Great. Let’s talk about that, I guess.
We’ve all heard the phrase, “What gets measured gets done,” but what happens when there’s nothing to measure? Recently, I was involved in a project where there was a noticeable delay in getting the numbers behind various performance indicators. As it turns out, the stall was due to the fact that there was so little activity to report that people were concerned that showing “no measurements” equated to showing “no work,” which I suspect was never the intention of the Management By Objectives (MBO) process.
Look, I don’t want you to be offended, but I just don’t want to pair with you. No, I fully understand the benefit of pair programming, and have paired successfully quite a bit with other people. The problem isn’t with pair programming as a concept — the problem is you. I just don’t want to pair with you. You know how people say, “Don’t take it personally?” This isn’t one of those times — you should take this personally. Fundamentally, I am less productive as an engineer and less happy as a person when I pair with you. As a result, I’m not going to.
Ever heard this one: “Come on everyone! If we’re going to make the deadline, we’re going to have to put in some hours!” or “Jimmy is really working hard — look at all the hours he’s putting in!” Yes, Jimmy is putting in a lot of hours. Yes, Jimmy comes in at 9:00 am, and leaves at 11:00 pm. Jimmy is putting in hours. Jimmy is also writing crap that has to be re-written, missing requirements, and pumping out more bugs than functional, production-ready code. But boy, he sure is putting in hours.
Not a lot of companies do Agile well. I know some very smart people who think Agile is an utter hoax — a gimmick, used to make buckets of money by selling snake oil to gullible organizations that are just struggling to stay relevant. I don’t think it’s because Agile hasn’t been effectively evangelized. I certainly don’t think it’s because there is some fatal flaw with Agile methodologies that makes them abstruse or impractical. I think it’s because switching to Agile is painful, and people are opposed to pain.
Obviously, if you want a high quality product with a clean codebase and low defect counts, you need to be very careful and methodical in your approach to developing software. Ready-Fire-Aim is a sure road to disaster, leading to chaos and confusion. Not only that, but it speaks to a general immaturity and lack of professionalism of the development team and its members. That is, unless you iterate rapidly and continuously improve.
Poorly written code throws exceptions. A bad deployment extends a service outage. An SLA is violated due to extreme acts of nature. Let’s face it, bad things happen as part of life, and also because people are human and mistakes are made along the way. An analogy may be made to going to a restaurant and ordering some food: the entire dinner experience depends on a chain of events, starting from the freshness of the ingredients, to the expertise in the kitchen, to something as simple as hospitality and greetings. When something goes awry at the restaurant… someone on the staff will take ownership and assume responsibility. Often with that comes an apology and some immediate offer of amelioration. We’ve come to expect that as part of customer service. The same analogy quickly breaks down when applied to the technology industry. Ironically, when things go badly, we tend to blame the technology.
It’s Monday morning, and you walk up to your desk, cubicle, office, whatever, and set your things down. You open up your laptop or wake up your computer, take a sip of your $4.00 coffee, and look around the floor. Wait… something isn’t right. You notice 6 people you’ve never before seen huddled together with laptops. They’re drawing on whiteboards and pointing and arguing and laughing. Well now, don’t they just seem right at fucking home? Something’s not right, here — you can feel it. Relax. It’s much worse than you think.
The psychology of the average software engineer is fascinating. Not only do they develop a sense of entitlement towards management, as well as an attitude of elitism toward Product and QA, they will also seek to segment themselves from other software engineers through the selection of their programming language. Following a fraternal instinct, the overwhelmingly male software engineering community clumps into programming language cliques, and once the bond is established it can last their entire career. Sadly, much like college frat-boys attempting to define themselves through Greek letters, defining yourself by your programming language is both pathetic and myopic.
NFC [Near Field Communication] is a clever outgrowth of RFID. This group of short-range wireless communication standards will enable all kinds of conveniences previously unattainable with our so-called mobile devices. Mostly phones, but practically any unwired NFC-capable device will, eventually, be able to communicate with other NFC-capable devices, giving consumers a wide range of features for commerce, information exchange, and ad-hoc authentication. Unfortunately, NFC continues the trend in which design for security came in as an afterthought, rather than a primary focus going in. Perhaps it’s not fair to place the security burden on NFC; after all, it’s merely a low-level transport mechanism. What’s the worst that could happen?
Ok. Your team is backed into a corner. You know what the right decision is — technical or political — but your organization, presumably, adamantly disagrees with you. In spite of knowing what you’re in for, you want to take a stand. You need to do two things, quickly and effectively. First, you need to credentialize yourself in front of everyone involved at the same time. Second, you need to focus on the contrasting effect each decision will have on the company, both in the immediate future and in the long term.
School has conditioned us to think that someone who does not know the right answer off the top of their head is an idiot. At this very moment, all around the world, students are called on by their teachers to answer a question, and are publicly humiliated when they don’t know the correct answer. Their peers are taught that it’s OK to jeer at someone who doesn’t know the right answer — even if they don’t know the right answer themselves. As adults in a technology profession, we see this same mentality manifested as interviews that go badly because of one wrong answer (“They should have known that!”) and first impressions that are forever stained (“That guy doesn’t know what he’s talking about”). If only it were that simple.
I have been fortunate enough to work with highly competent people for most of my professional life, whether they were software engineers, system administrators, project managers, business analysts, or executives. Each has been smart, savvy, extremely skilled in their craft, and, very often, a great leader. In cases where they were my peers, they were great partners in accomplishing whatever challenging task was at hand. That a group of people were successful, and continued to be successful, can be attributed to the fact that we trusted one another to do the right thing, and to provide feedback, understanding, and support when needed. Trust is the hallmark and the foundation of a great team.
So your organization isn’t responding to reason. What started out as a team that would attempt to exemplify the way your organization wanted to do product development has become a micro-managed monstrosity, full of stakeholders, and absent of accountability. You’ve decided that in the face of this blame-storm you keep hearing referred to as a “team effort” (which you’re starting to believe!), you have to take a stand. Here’s what you’re in for:
A closing interview question I sometimes like to ask is, “Is writing software more like stacking bricks, or playing high speed chess?” Asked my own question, I might answer, “It’s like playing high speed chess in order to figure out which bricks to craft so that I can stack them.” Stacking bricks, with no caveat to explain their origin, couldn’t be a more incorrect metaphor for developing software. It is monotonous, predictable, and not mentally taxing in the slightest. Playing high speed chess, on the other hand, is a grueling intellectual process, and one that cannot be sustained indefinitely.
One of my most visible faults early on in my career (and something I still struggle with to this day) was my tendency to rigidly represent my own (or my team’s) interests, regardless of how large or small. I say rigidly, because I would tend to hold the same position in spite of the political or emotional erosion to external relationships that might be caused. In short, I didn’t really give a damn about any external team’s preferences — I knew what my team wanted, and that was that. Over the years, I’ve learned that this is not the best way to do business.
At a glance, it may not be immediately clear how ops engineers and software engineers would see eye to eye on very many things. For the sake of uptime, ops tends to dislike change, while developers create and enhance by continuously introducing refinements (“change”) — two seemingly opposing ends of the spectrum. However, the part that causes one group to resemble the other is the inconsistency of people. When it comes to being human, one type of engineer may be indistinguishable from the other. The reliance on people to “do the right thing” repeatedly is, ironically, the greatest threat in most organizations when it comes to productivity and efficiency. It is the people who are most likely to break down. You, the people, are the weakest link.
In the English language we don’t say, “They made so many mistakes in the past that they know how to avoid them in the future.” Instead we say, “They’re experienced.” Making mistakes is what gives you experience. If you come out of college and, for your whole career, never make one mistake, then only one of two things has happened: 1) you are bull[sh*t]ing yourself, or 2) you have never tried anything outside of your comfort zone. “Comfort zone” refers to things you already know how to do well. When you step outside your comfort zone, you will make mistakes, and the question then is how to recover.
There is a trend in my posts about software design: I write a lot about how it is a creative process. This is largely because, for years, software design was considered to be busy work, and I take extreme issue with that philosophy. I find producing consistently effective software designs to be a very difficult task, and I deeply respect those who are good at it. I also find that those who are good at it share one thing in common: they are all inspired workers.
I once created an engineer performance grading system to help management understand when an engineer was ready to be promoted from junior, to full, to senior, and through to principal. There were seven criteria, one of which was “Grace Under Pressure.” As you become more experienced, your ability to deal with the pressure of tight deadlines, fluctuating requirements, and idiot co-workers should increase as you find creative ways to cope with the stress. Dealing with stress becomes the hallmark of someone with a lot of experience. One of the best ways to deal with stress, I think, is humor.
It’s a simple scenario to envision. As a prospective shopper crosses the brick-and-mortar threshold, between embedded RFID and NFC, what’s inside my wallet is quickly scanned. This isn’t your grandparents’ era of business analytics based on years and years of historical data warehouses; this is near-real-time data gathering and heuristic algorithmic triggers that may sample the cologne I’m wearing, track my line-of-sight eye movements between the aisles, and possibly even process bits of my DNA left behind. Am I describing some scene from Minority Report? Hardly. These are active pursuits by some of today’s top mobile consumer technology companies. Remember last holiday season’s attempts to track supposedly anonymized GPS signals inside of malls?
One of the most common points of extreme contention I find between Agile and Waterfall practitioners involves a heated debate about the appropriate time in the project’s life cycle to be making important decisions about the software’s architecture. In one corner, we have the Waterfall approach of Big Design Up Front (BDUF), and in the other corner we have the Agile approach of Emergent Design. I’d like to outline some of the key differences between the two design philosophies because I think the right choice becomes obvious when they’re contrasted appropriately.
In the software industry, there are always jobs. Sure, the larger economy might tank and unemployment might skyrocket, but there are *always* jobs when it comes to writing software. This is one of the reasons why people go into writing software in the first place — guaranteed employability. If you’re an out-of-work software developer and think I’m being unkind, try this: drop your asking price below market – they’ll hire you. The problem is, a job is one thing, but finding the right job is another entirely.
In 1995, there were approximately 5 million mobile connections. A short 15 years later, that number is closer to 5 billion, a number that’s rapidly approaching, and will handily surpass, the actual population of the planet. Look around, and look on your person: it’s likely you’ll find a smartphone, maybe a tablet, or even an older MP3 player. That isn’t even counting such antiquated computing resources as the laptop, and it’s easy to see why the calculation is going to average over four such devices per person. To say there will be growth in mobile is a bit of an understatement, once you consider all the other “widgets” yet to come — from embedding into appliances to health-related devices, like glucose meters, heart monitors, or even plain old thermometers. My point is, lots of mobile platforms invite lots of mobile apps. It seems like daily I hear about someone becoming a mobile application developer.
Speed and quality are universally contentious. Engineers always choose quality, but claim to value speed. Business owners always choose speed, but claim to value quality. Each side is skeptical of the other’s intentions when their actions seem to contradict their claims. Why does such an ironic and contentious relationship exist between the funder and the implementer? This post is not a referendum on either side: it’s a bi-directional concession letter, between Competent Business Owner and Competent Engineer.
With every new job, there is a short but finite honeymoon period. It’s called that because, similar to marriage, there is an initial rush of adrenaline and endorphins and, obviously, the promise of new opportunities (if there were no promise, why bother leaving one position for another?), and everyone basks in that glow. In time, those feelings might change, and reality will gradually come back into focus. Familiarity will erode the novelty, and the real challenges of the role will become apparent. Some employees already recognize this, but many are not fully cognizant of it: your first ninety days on the job hold the greatest indicator of nearly all your future success in that role.
Held captive in a status meeting, your moment of shame approaches. Vulnerable and exposed, like a naked baby laid before encroaching doom, you await your turn. Your peers’ bold claims, naive honesty, and feeble excuses are recorded for posterity by a stone faced facilitator. They work their way around the room, thinly masking disdain at any answer that is not a crisply delivered “I’m done”. Finally, the time of judgement is upon you. As you fumble to explain your lack of completion, you are interrupted by a question that has struck down even the strongest among us: “When will you be done?” Fighting back a swelling tide of emotion, you try desperately to think of what you can say that you haven’t already. Step aside my child, let me handle this.
One of the most difficult challenges I face in my professional life is maintaining a healthy working relationship with people who I believe are deeply incompetent. Incompetency is, for me, extremely difficult to stomach — far more difficult than, say, laziness or apathy, because whereas those might point to an attitude problem, incompetency reveals that the basic skills necessary to effectively perform daily tasks are missing. To further exacerbate the issue, one of my [many] personal character flaws is that I find it extremely difficult to relate to or to support someone I do not respect. Because of this, I’ve not only become acutely aware of the truth behind the Peter Principle, but I’ve also picked up on an even more dangerous corollary that’s become more and more prevalent in the workplace. For the sake of discussion, I’ll refer to it as the Napier Principle.
I love information technology. I’ve been fortunate in that I’ve always known what I wanted to do [professionally] and have been able to pursue it with vigor and passion. Over time, as I’ve moved up and through my career ladder, I’ve deliberately aligned myself with people who’ve garnered my respect and, conversely, people who recognize my values. As such, I’ve worked for a series of companies fitting a certain profile. Until recently… with some changes in my [personal] life, I suddenly found myself craving something different, something outside of what I’ve known. Making career changes is not the simplest of tasks, and it required that I exercise some skills I’ve not used in some time.
Want to find out if an engineer performance tested their application? Tell them to do one thing for you: make it crash. That’s right, bring the network traffic to a crawl; force it to run out of memory; take away disk space. Have them demonstrate that they know the limits of their application so well that they can make it fall over at will. If they’re badass, they’ll whip out a test script and start it up. In a few minutes, the application should crash in a predictable way — predictable if the application is well designed. But really, my point here is that you should try to make it crash, and not be able to.
While on site at a client the other day, I heard one of the FTEs make the statement about their brand new legacy application that “there was too much technical debt, so it needed to be rewritten”. I cringed a little bit, but I considered that I was just being pedantic, and decided not to nit-pick at the misuse of the term “technical debt”. Looking back, I’m not sure I should have let that statement go. I probably should have shed light on the important distinction that should be made between accruing technical debt and creating a mess.
This week, an incident happened at Knight Capital, where their “trading algorithms” allegedly cost the firm hundreds of millions of dollars. Within hours of the story breaking, various camps were quick to denounce the algorithms and the automation that was supposed to save the day. Of course the problem is the automation, because automation is supposed to reduce errors and prevent outages and boost security and mitigate risks and improve the bottom line and make ice cream sundaes with a cherry on top. Except when it doesn’t, and now this latest mishap only adds to the argument against such automated practices.
Corporate culture is obsessed with the term “leadership”. Companies pride themselves on being “leaders”, and on fostering “leadership” within their organizations. We even have the idea that everyone is a “leader” in his or her own right, each taking personal accountability for what they do every day and “leading” in their own special way. I glaze over when people talk about leadership. I fully expect that you’re glazing over just reading the opening of this article. It’s old. It’s stale. We’ve heard it all before. At this point it’s boring. Someone who takes ownership of cobbling together a PowerPoint before a big presentation and the people who led troops to storm Normandy ain’t the same type of leader. They ain’t even in the same ballpark. The latter didn’t want to lead; they were drafted into it and rose to the occasion. Those are the people the rest of us want to call leaders.
“So, hey, you’ve got that report ready for review at 4 o’clock, right?”
“So, uh…I thought I had until Monday for that…”
“‘SO-A?’? No, not on SOA, on ‘World Domination and All Things Evil.’”
“So…no, I said, ‘so, uh‘…”
“…So…I don’t really know what you’re saying to me, but if you could have that report ready by 4, it would be great…”
If you’re a senior-or-above talent, you know how to work longer hours without it significantly impacting your work product. Here, I’m talking about the 40-50 hour range. Most people start to fall off in the 50-60 hour range, and we all start producing crap around 60-80 hours. Beyond 80 hours, it’s probably time to look for a new job. Having said this, let’s focus on the 40-50 hour range. If your employer suggests or requires that you work these extra hours, it’s probably not going to destroy your home life or wreck your sense of well-being, so the impact should be minimal, if not negligible. Why not work these extra hours? Because you’re not getting paid for them.
Mobile is changing our lives. We now have a nearly undreamt-of amount of computing resources in our pockets, and it has deeply enriched our day-to-day experiences, especially for the consumption-centric lifestyles we currently lead. But companies seem resistant to adopting the same outlook, and are, in fact, not embracing the BYOD movement. MP3 players were OK, but corporate IT seems awfully resistant to hooking up your Samsung Galaxy Nexus for email, or adding you to the WiFi network. Don’t they get the value? That’s just the business being slow and monolithic and missing out on wonderful opportunities, right? Not so fast.
What do you suppose the average person thinks of when they hear the words “software engineer”? Perhaps they think of a famous (or infamous) entrepreneur who’s been forcing them to restart their PC for years. They might even think of a hip, well-built hacker with an earring who assembles software “worms” by racing to reconstruct some antagonistic digital Rubik’s Cube. But I would bet that the majority of the population would think of a middle-aged man with high blood pressure and vitamin D deficiency. Well folks, I’m here to tell you that while that last group may have been right for years, the Age of Milton is over.
In yesteryear, in the time of our coding forefathers, ancient project managers would gauge a developer’s productivity by measuring the lines of code added per unit of time. This was a convenient and intuitive metric, as there was seemingly a correlation between the number of lines coded and the number of features added. When companies then started rewarding developers according to this metric, it led to a rash of bad programming habits – most notably, copy-and-paste coding. In modern times, we consider ourselves enlightened, and claim to have realized that not only is there no direct correlation, but that attempting to measure developer productivity in this way is both meaningless and destructive. Yet, if you corner the average development manager and ask them to compare developer productivity, many of them would not be able to resist the urge to pull metrics from source control. This is due to a lack of awareness of a complete reversal in the way in which lines of code reflect productivity.
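Part of why the urge is so hard to resist is that the numbers are trivially easy to pull. A sketch of the kind of per-author tally a manager might run over `git log --numstat` output (the `--format='--%an'` invocation and this little parser are my own illustrative assumptions, not any standard tool):

```python
# Hypothetical sketch: tally lines added/removed per author from output
# shaped like `git log --format='--%an' --numstat`, where each commit is
# a "--<author>" header followed by "<added>\t<removed>\t<path>" lines.

def tally_numstat(log_text):
    """Return {author: (lines_added, lines_removed)} from numstat-style text."""
    totals = {}
    author = None
    for line in log_text.splitlines():
        if line.startswith("--"):
            author = line[2:].strip()  # commit header names the author
        elif line.strip() and author:
            parts = line.split("\t")
            # binary files show "-" instead of counts, so check for digits
            if len(parts) == 3 and parts[0].isdigit() and parts[1].isdigit():
                added, removed = int(parts[0]), int(parts[1])
                a, r = totals.get(author, (0, 0))
                totals[author] = (a + added, r + removed)
    return totals

sample = "--alice\n10\t2\tsrc/foo.py\n--bob\n300\t0\tsrc/copypasta.py\n"
print(tally_numstat(sample))
```

By this tally, bob "wins" by a factor of thirty, which is exactly the problem: the metric can't distinguish a careful ten-line fix from three hundred pasted lines.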
The blogosphere is lit up with the latest tale of woe, which has befallen a prominent writer in the technology space. It is a terrible thing to experience this particular invasion in one’s life, not to mention the loss of not just one’s sense of security but of invaluable data, especially pictures of children that cannot be replaced or retaken. While I empathize with his plight as he reconstructs his life, and appreciate the journalistic approach of soliciting insight from the alleged hackers… I’m not entirely in agreement with the finger-pointing at Amazon and Apple. At least not in the way it’s being discussed. Yes, two-factor authentication is desirable. Yes, it’s a good lesson in security vs. ease of use. Yes, entrusting information to others exposes oneself to risk when the other party isn’t demonstrating the same diligence at managing your information. However, the failure somewhat overlooked, once again, is that social engineering was the attack vector — similar to the CloudFlare incident from a few months ago — and no amount of technology will solve this no-tech grandfather challenge.
Nobody wants to manually test software. “That’s not true! I do!” you say, missing my point. If you think you want to manually test software, then you need to dig deeper into what that entails: To do your job well, every time any developer changes anything to any part of the system you need to re-test everything. Anything less and you’re not doing your job. If you’ve heard different then they’re lying to you. *That* is what Quality Assurance is – the assurance of quality. The only way you can be sure that some developer didn’t break something is to test the whole system again no matter how minuscule the change. On a simple mobile app, that’s not so bad; on an enterprise system, you may end up throwing yourself down a flight of stairs just for a change of pace.
Yesterday morning, along with hundreds of thousands of others online, I watched the live HD video feed of the Mars Science Laboratory Curiosity successfully touching down at Gale Crater, as planned and designed. It is a significant milestone, and quite the hallmark of success for a complicated mission. I couldn’t help but think about the dichotomy that is NASA — the rare mix of size, bureaucracy, and performance. Over the years, we’ve witnessed their triumphs and their failures, as well as some epic recoveries from the disastrous missteps that plague the largest of enterprises. One observation became clear to me as I listened to the debriefing panel: if you’re not making something better, then you’re not relevant.
I have a predictable threshold for Macy’s. Not too long ago, I was shopping there with a girl — which is to say that she was shopping, and I was serving dual life-sentences back-to-back for a crime I didn’t commit. The likeliest explanation for my unjust, albeit fashionable, incarceration was that she was a heartless shrew who had no concern for my long-term psychological or emotional well-being. In the unlikely event, however, that she was not pure evil, but merely that the allure of fragrances and cosmetics had acutely undermined her ability to feel specific empathy for the opposite sex — the scents having temporarily transmuted her into a sort of sociopathic shopaholic — I felt it would be in my best interest to point out that not only was Macy’s not where I would like to spend 84 hours of a Saturday afternoon, but also that it was utterly absurd to spend $1.4M — which is only a slight exaggeration — on eye shadow.
I’ve already proposed an approach that will encourage Ops to avoid doing more work. Now, I’m going to expand on that less-effort trajectory, and share the following fortune-cookie wisdom: “Doing nothing is better than doing something…” although you have to add “…stupid” to the end of that to truly glean this particular gem. Let’s face it, if the smartest and brightest people were always at the helm, there’d be a real dearth of topics for discussion here at feyn.com. Because there are under-qualified decision makers in the mix, who often measure performance with misapplied KPIs or other misguided metrics, there is a constant push to demonstrate value by doing something. That is probably the worst combination when it comes to operational soundness and security — doing something for the sake of doing, especially when there is little likelihood of doing something smart.
In 2005, socioeconomic researchers at Harvard, in conjunction with the local police force, performed a social experiment in Lowell, Massachusetts. After identifying over 30 of the highest crime rate areas in Lowell, the police forces were instructed to clean up litter from the roads and sidewalks, replace broken street lights, and implement a series of other small “cleanup” initiatives for 15 of the 30 areas. In the other 15, the police were instructed to continue operating normally. The result was an almost immediate 20% drop in police calls in the cleaned up areas. This is an interesting case-study for Broken Windows Theory.
First, let’s define “fired”: at some point, you were informed that your company has made the decision that they no longer want you working there. There are three broad categories for which you can get fired. The first is a violation of company policy, which includes absenteeism, tardiness, poor personal hygiene, threatening a fellow employee, discrimination, and sexual harassment. If you’ve been fired for any of these, you deserved it, and you’d better get your life right before you look for another job. The second is incompetence, though most companies require extreme incompetence over a long period of time, even after their attempts to train you have failed, before they fire you. If you’ve been fired for that, well, you may not be smart enough or hard-working enough for this line of work. Blame your parents for not making you study and do your chores. Finally, there is getting fired for standing up for what you believe in, when what you believe in is not in line with what the company wants.
Software engineers have an uncommonly difficult job. Not only are they tasked with the monumental challenge of accurately modeling a business in digital interactions, they also need to be constantly permuting the different ways they could be solving their problems, and the pros and cons of each possible implementation. To make matters more difficult, they must constantly juggle the ever-evolving needs of the client and an employer who thinks having something done yesterday is still late. You’d think that with all of this, we’d find little time to resist the progressive development philosophies and methodologies that have proven themselves for years as they entrench themselves in our day-to-day lives.
No organization would say they strive for mediocrity, yet so often they inadvertently reward employees for mediocrity and punish them for excellence. This slippery slope begins by attempting to define a job by the lowest common denominator of skills and responsibilities. Indeed, we hire for this lowest common denominator, and upon hiring we inform our new employees that they will have their performance measured, with an emphasis on executing in accordance with their new job’s requirements – job requirements that were determined through an examination of lowest-common-denominator expectations. There is often a hint at how an employee can exceed expectations, but no matter how this is presented, it typically traces back to the number of hours worked above those required. Having hired for mediocrity, set expectations for mediocrity, and offered incentives for longer hours that will lead to an eventual reduction in overall performance, we have successfully created a drone – a fungible resource that can be replaced or scaled at a moment’s notice should the need arise. This, in essence, is the role of middle management – to recruit and groom drones incapable of stepping outside of a company’s documented expectations.
It still amazes me, sometimes, that SQL Injection ever came into vogue, becoming one of the poster children of web application vulnerability. It’s outright jaw-dropping that in 2012, an iconic web company would fall victim to this technique. I could go on and on about the number of things that went awry and/or should’ve been done. But this appears to be a chronic failure, with each generation of software engineering re-inventing the same bug all over again, like an endless nightmare of unlearned remedial lessons. SQL injection is a variation on one of the oldest no-no’s of secure computing: the implicit granting of permissions.
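The fix has been known as long as the bug: never let user input become part of the SQL text itself. Here is a minimal sketch in Python with sqlite3 (the table and the payload are hypothetical, chosen only to make the contrast visible):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"  # a classic injection payload

# Vulnerable: the input is spliced into the SQL text, so the quoting in
# the payload rewrites the query itself -- the implicit grant in action.
vulnerable = "SELECT secret FROM users WHERE name = '%s'" % user_input
leaked = conn.execute(vulnerable).fetchall()

# Safe: the driver passes the input as data, never as SQL, so the
# permission to alter the query is never granted.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)
).fetchall()

print(len(leaked))  # 1 -- the payload matched every row
print(len(safe))    # 0 -- nobody is literally named "' OR '1'='1"
```

The parameterized form is not extra work; it is usually fewer characters than the string formatting it replaces.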
Look, I don’t see what the big [flippin] deal is. We all curse – all the time. Stop [bull pooing] about it – you do, they do, we all do. When you’re around your friends having a drink, you curse like a [mother flippin] sailor on shore leave. So why is it not allowed in the work place? Because it offends people. Well you know what? It offends me that I can’t curse! Now I have to spend all my [gosh darn] time tiptoeing around your [flipped] up [bull poo] instead of telling you like it is! That’s not only frustrating, it’s a waste of valuable company resources!
It’s a well known fact among software professionals that the moment your friends and family discover your technical competency, you will become their indentured technical support servant for life. Whether it’s re-installing Windows or replacing ink cartridges, no task is too demeaning: if it involves a computer, you’re the first phone call they make. And why not? This is your fault, after all.
I started out titling this as The Challenge With Mobile, but the thoughts that keep me awake and go bump in the night are really troubling. I wonder and worry if people even recognize this brave new, slightly dystopian, world of technology we have created for ourselves — one in which the phone is never off. An always-on and always-connected digital frontier, full of irresponsible citizens who fail to exercise their civic responsibility of minding their own perimeter defense… and, as a result, endanger my co-existence within that space. If the initial wave of personal computers joining the Web unleashed a wave of malice and destruction, you ain’t seen nothing yet.
A regular critique I hear of engineers who are new to projects or codebases is that they provide almost immediate negative feedback about what they see, and are in no position to do so. The feedback tends to be received as being premature, unsolicited, and out of context, and is therefore undesired. To further complicate things, the engineer typically genuinely wants to improve things, and isn’t merely interested in bitching. I’ve personally received this feedback multiple times, and frankly, I’ve had just about enough of it. Rather than meet this constructive criticism with hostility, I’d like to propose a different approach for the receivers of the feedback. I’d also like to offer a peaceful compromise.
As I watch the flight attendant go through the pre-flight safety speech, I cannot help but wonder how many people are paying attention, and more importantly, whether in a “real” emergency people will actually find their nearest exits. That’s not just a problem plaguing airline passengers. I routinely observe managers, developers and engineers ignore smart practices and safety procedures, and head blindly into tasks unplanned, ill-informed, or worse yet… motivated by fear. It’s no wonder they, along with their code and systems, end up in a prison of their own creation — the kind of legacy scenario we retell like ghost stories. Nonetheless, people continue not to heed this information. Knowing where the exits are will help you avoid getting trapped in your burning jail cell.
It’s human nature to be trusting. We don’t want to think people are out to get us, because we don’t want to live in constant fear. I get that. As a normal human being, you can’t walk through life being afraid of your shadow, paranoid that someone is out to get you. However, as a software developer writing internet-deployed code, that’s exactly how you have to think. Even if you are constantly vigilant, do everything right, and cross all your t’s and dot all your i’s, you will still introduce vulnerabilities without knowing it. Sometimes, the attack will come in ways that will blow your mind… like, say, a camera phone in a coffee shop.
Let’s briefly take a step back. Yes, we’re all very concerned with how best to drive software development. We love our methodologies and our philosophies — our little software calendar platitudes and acronyms. We love Martin Fowler, Kent Beck, Eric Evans, Bob Martin, so on and so forth. But really, what is the point? This article is for everyone who has ever been subjected to my ongoing ramblings and critiques of software, processes, and team dynamics. I’d like to try to offer you a moment — if ever so brief — of catharsis. These were my intentions. Here was my point.
Fresh out of college, you have no idea what you’re doing. This fact is true no matter how much you paid for your college education, where you went, what your GPA was, or what Latin title was bestowed upon graduation. You have no clue, because you’ve never had a real job – this is your first. The number one gauge of being good at a job is experience in doing that job, and you have none. You may think that your college education is an indicator of how quickly you will master your job, but there is no correlation. It might actually work against you, because if you believe you already know everything you won’t be open to learning anything. If you want to get good at a job and you have no experience, you’ll need to apprentice under someone who can show you the ropes.
Since I’ve already started the fire for more Agile in operations, it makes sense to actually discuss what exactly is involved in doing just that. After all, this isn’t just envy of my fellow software development brethren — then again, who wouldn’t want to be a hip and Agile developer? — these are real methodologies, enlightenment gleaned through blood, sweat and tears, and savvy Ops should outright borrow — no, steal — that hard-earned wisdom from the software teams. If nothing else, only to avoid doing any real work, so that we may continue to be the grumpy and misanthropic stonewalls that system administrators are known for. And play StarCraft.
“Good morning everyone. First, let me say how excited I am on this, the first day of our new project! The goal of the project is to build a single family home, and we have a month to build it. Working closely with our team of engineers and architects we have broken down all the individual requirements for the house, estimated how long they will take, and all we need to do today is prioritize them. Since building houses is intuitive to everyone, this shouldn’t take too long, but I’ve blocked out 2 hours just in case we need extra time. Let’s get started!”
In Working Effectively With Legacy Code, Michael Feathers defines “legacy code” as code without tests. I love this definition for two reasons: first, it provides a succinct, objective way to measure whether code is legacy or not; secondly — and this really is the main reason I love it — it asserts that any code, written by anyone, at any time, without accompanying tests, is legacy the moment it’s written.
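By that definition, staying out of legacy territory can cost as little as a few lines. A sketch (the function itself is hypothetical; the point is the test that travels with it):

```python
def clamp(value: int, lo: int, hi: int) -> int:
    """Constrain value to the inclusive range [lo, hi]."""
    return max(lo, min(hi, value))

# The accompanying test -- the handful of lines that, per Feathers'
# definition, keep clamp() from being legacy the moment it's written.
def test_clamp():
    assert clamp(5, 0, 10) == 5    # inside the range: unchanged
    assert clamp(-3, 0, 10) == 0   # below: pinned to lo
    assert clamp(42, 0, 10) == 10  # above: pinned to hi

test_clamp()
```

Three asserts written at the same sitting as the function, and the next person to touch it inherits a safety net instead of an archaeology project.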
Management loves, and loves to tout, training. Management distrusts training, because it highlights broken people, systems and processes. How can you love and distrust something at the same time? The dichotomy is a simple reflection that, as designed today, many if not most training programs are ineffective and are nothing short of last-ditch efforts at salvaging under-qualified employees. This is especially true within the technology sector, a group of professionals that may benefit the most from training yet reaps the least, given the mandate, motivation and momentum associated with training. There has to be a better way.
The 2000 Presidential Election was mired in controversy over which candidate had carried Florida. Whether due to faulty hardware, or a state-wide psychomotor-dysfunction epidemic, a large number of Florida ballots were unclear about which candidate they endorsed. “How could something as parochial and deterministic as aggregating multiple-choice answers lead to such a quagmire?”, you might ask, and indeed, it makes for an entertaining story.
How good are you – really? Are you a master of your craft, merely mediocre, or do you suck? How would you know? Asking other people tends to yield nothing more than polite indirection and flattery, or, at the other end of the spectrum, rudeness masquerading as honesty. Within your organization, there are annual reviews, but they only gauge your performance against set expectations, not how you stack up to your peers. Without true visibility into other organizations, much less how their employees perform, you’re left with how they describe their employees: typically infallible and elite. In truth, there is no yardstick to assess your own abilities compared to others, so what are you to do? How do you know if you need to improve, if you’re perfectly adequate, or if you’re so good that you need no further improvement?
Go to a developer-oriented gathering and you’ll hear this: “I have no interest in learning <xyz>”, where <xyz> represents some kind of operational task or knowledge. Why should they? System administration is not really essential to software engineering, and conversely, ops teams have a similar disinterest in writing code. Otherwise they would be doing each other’s jobs already. That doesn’t mean there aren’t lessons to be learned from one another. In fact, the emergence of devops reflects just that recognition. It’s time for operations to adopt and apply the same discipline and knowledge that their brethren in the software camp have gradually refined over the years. It’s time for agile operations.
Stop adding features. That one mantra — if followed consistently — would improve the majority of user interfaces. In terms of concrete evidence, it’s out there in droves, but I leave it to the reader to discover the truth of it. In essence, humans have a tough time finding what they need in a sea of what they don’t. The more features you add, the greater the likelihood that you’re adding what the user doesn’t need at a given point in time. This guideline, as simple and intuitive as it is, is an extremely difficult pill to swallow for businesses that make their money selling new features.
As problem solvers, we dream of a positive vision of the future, based upon the belief that the challenges and problems we face are solved by good design. OK, stop the playback. Back in the real world, the dominoes occasionally get knocked down in sequences un-imagined and, worse, in ways our complex and richly-integrated systems cannot address. Some of it will be act-of-god happenstance, some will be introduced by ever-fallible humans, while the rest is just malicious intent. It’s a harsh reality, but that is, and has always been, the Internet, in spite of funny cat GIFs. When you are creating a solution — from authentication to authorization — make sure the design takes all intents into consideration, not just the ones derived from planned and expected user stories.
The Boy Scouts of America have a rule, “Always leave the campground cleaner than you found it.” I was never a Boy Scout, but I found out about this rule when I read Bob Martin’s Clean Code. I thought it was such a splendid philosophy to apply to software engineering and codebases that I’ve probably referenced it a hundred times. Sure, it rolls off the tongue nicely, and teaches children to clean up after themselves, but I knew there was something subtle about it that I liked that wasn’t immediately apparent — something more fundamental, revolving around diligence, discipline, and dedication. It occurred to me that the underlying philosophy I was so attracted to was this revolutionary concept of giving a shit.
It’s not the easiest pill to swallow, and more importantly, imagine trying to convince the management team to not only implement an external security review program, but to provide incentives for the resulting discoveries — that’s right, we’re going to pay others for finding our mistakes. Yikes! Let’s face it, there will be bugs in software, despite development process, review and QA. Items will get missed, and unintended features or behaviors will creep into the code base. The systems have become complex. The interactions are not always planned, or even manageable, with 3rd parties. Likely, only non-developers won’t acquiesce to that simple truth. It’s an understandable sentiment by the business stakeholders, as their focus isn’t on the design or implementation of complex application logic. However, there should be one common ground within any organization, and that is the need to be diligent stewards of their customers, and by extension, their customers’ information. Security is the bedrock of excellent customer service.
The majority of meetings are a waste of time for the majority of people in them, yet we call them mercilessly whenever we feel there is the slightest communication need. Meetings have no cost that can be directly measured, but a benefit that can: People sat in a meeting for a period of time. Really, as stated, that can only be a good thing — especially if it’s the right people. With this in mind, great care is taken in the selection and culling of meeting attendees. After all, without the right people, how can it have the right outcome? This is where the concept of the meeting breaks down, as “the right outcome” is rarely understood much less articulated to attendees.
There’s a skill that I’ve rarely seen among recent college grads walking into enterprise codebases, and that’s maintaining legacy code. It must be shocking to expect to walk off campus and on to a green field project, only to be met with a “where the hell did these 1.5 million lines of code come from?” sentiment. I’ve observed three particularly difficult challenges that less-experienced engineers face when working with legacy codebases: rapid understanding, adapting design, and finally, implementing bug-free modifications.
I never get why people panic at any point during a deployment. When you’re doing a deployment, you’re in only one of two situations: 1) you’ve never deployed the app before; 2) you have. If you’ve never deployed the app, there’s no audience to complain if things go wrong, so take your time and figure them out – no need to panic. If you have deployed the app before, you damn sure better have a rollback ripcord in place. Fact is, the vast majority of deployments are done without even the semblance of a rollback plan. The best they have is to restore the database from dump files and build an earlier source control tag. That’s not a plan, that’s inviting disaster.
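For the record, a ripcord doesn’t have to be elaborate. One common shape is the atomic symlink swap, sketched here in Python (the releases/current directory layout is an assumption for illustration, not a prescription):

```python
import os

def deploy(base: str, version: str) -> None:
    """Atomically point base/current at base/releases/<version>."""
    target = os.path.join(base, "releases", version)
    tmp = os.path.join(base, "current.tmp")
    os.symlink(target, tmp)
    # rename() is atomic on POSIX: readers see the old release or the
    # new one, never a half-deployed state.
    os.replace(tmp, os.path.join(base, "current"))

def rollback(base: str, version: str) -> None:
    """The ripcord: repointing 'current' at the last-known-good
    release is the entire plan, and it takes milliseconds."""
    deploy(base, version)
```

The database half of the story still needs its own plan (backward-compatible migrations, so old code can run against the new schema); the symlink only saves the application tier.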
I have a few stories I’ve told over the years to explain my origins in software engineering. One of the more humorous ones is that I wrote down on a piece of paper “Doctor, Lawyer, Software Engineer”, and chose the one that didn’t legally require a college degree. While that story is actually true, the reason Software Engineer ended up on the paper to begin with was that I had an instant, lasting attraction to the notion of being able to model real world interactions for the cost of electricity. Potentially even more than that was the allure of creating something out of nothing.
The constant and rapid pace of technological innovations creates easy opportunities for advancement. In this era of fast adoption and, occasionally, fast expiration, it’s not easy to slow down and examine how the changes have affected our lives. While I love the utility available to me in this inter-connected world, I am not a fan of the dichotomy of providing free service and requiring business profitability that has emerged as the default playbook for achieving and measuring success as a company. Consumers have become increasingly naïve in their willingness to give up the power of purchase, and in turn, companies see the individual not as a customer, but simply another addition to the user base collection. When there is no price to pay, you are not a customer; you are just a product being sold.
If, as a developer, you care about security, you need to be constantly running pentests against your own code. Constantly – and I’m not talking about buying an off the shelf tool that will do the scanning for you. Those are important, but they’re something that QA or Operations can use to cross-check your work. What I mean is good, old fashioned, trying to break into the software you just wrote. This shouldn’t be too hard, you wrote it! You know where you usually slack off, so you’re in the best position to find vulnerabilities in your own code.
Terrorism comes in many forms. It’s commonly typified by leveraging hostage situations to undermine national policy and to facilitate foreign influence, or personal ulterior motives. An increasingly common incarnation of what I consider to be corporate terrorism manifests itself as so-called “knowledge anchors”: employees who have been with a company long enough to have acquired intimate knowledge about the business’ problem domain, and have subsequently outlasted anyone else who might also have acquired that knowledge.
A lot has been written about execution. Yet it clearly remains a challenge and an elusive goal to both individuals and teams. I could start espousing my own theories of execution, but I have no desire to add to the stream of management mumbo-jumbo regarding roles, responsibilities, metrics and results. Make no mistake, it is precisely the desire for results — the point B, in going from point A to point B — that leads us to examine and question execution. Too often, even capable people [and teams] focus on making history, when the focus should be on making impact.
Here’s a scenario: two women are at Starbucks, talking about how their respective weeks have been. One woman pauses, then exclaims, “I’m just so excited about parenthood! I can’t stop thinking about whether it’s going to be a girl or a boy!” “Oh my gosh! Are you pregnant?!”, the other woman responds, understandably shocked, given that her friend has yet to show any signs of a pregnancy. “Well, not completely. I’ve never really been sold on the whole pregnancy process – you know, the mood swings, the cravings, the added weight. It’s not really right for me; I’m just interested in the ‘having a child’ part. I guess you could say I’m mostly pregnant.”
Agile. Don’t roll your eyes. Yeah yeah yeah you say, and I appreciate that – at least you’re being honest. Agile ain’t panning out. All of these good things were supposed to come out of it, but it all just seems like a disorganized mess. People are complaining, work isn’t getting done, and all the while you’re reading about how good it is. You don’t get it. I understand. You wouldn’t get it, would you? You’re doing it all wrong. You made the common mistake of thinking that somehow Agile was a relief from the rigors of Waterfall methodologies, and that you could be on a perpetual process vacation.
Work in software development long enough, and one day you too will hear this phrase uttered by someone: “I used to code.” The intention may be one of empathy or solicitation — the identification of a supposedly shared past meant to build bonds. Inevitably, though, this utterance almost always forms a divisive line between those who write software and those who’ve stopped writing software to perform another role. The simple observation is this: developers seem not to respect career paths past the immediate creation of code, while management — be it project, or product, or even executive — usually resorts to this declaration as some form of critique on effort, resourcefulness and, most likely, timeliness of delivery.
Deploying sophisticated software onto the internet is not easy. It involves lots of fine, intricate steps being executed in exactly the right sequence, and if any one of a thousand steps is not done in precisely the right order, the whole thing will fail. Knowing this, what do we do? We put a bunch of people on standby just in case anything goes wrong, and make sure that everyone is on a conference call and in a too-massive-to-communicate-effectively chat room at 2:00 am so that they can be Johnny-on-the-spot if something goes wrong. Here’s a tip: if you are so nervous that something will go wrong that you need dozens of sleep-drunk people around, ready to fix something they might have broken (or harass someone else who may have broken it), then you have too many damn humans involved in a process that needs to be fully automated.
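The alternative to the 2:00 am chat room is a runner that executes the steps the same way every time. A toy sketch (the step names and the abort-on-first-failure policy are illustrative, not a prescription):

```python
def run_pipeline(steps):
    """Execute (name, fn) steps in strict order; abort on first failure.

    Returns the completed step names and an error string (or None).
    Unlike a room full of sleep-drunk humans, the runner never skips a
    step, reorders two of them, or forgets where it was.
    """
    completed = []
    for name, fn in steps:
        try:
            fn()
        except Exception as exc:
            return completed, "failed at '%s': %s" % (name, exc)
        completed.append(name)
    return completed, None

def broken_migration():
    raise RuntimeError("lock timeout")  # simulated mid-deploy failure

# Dummy steps standing in for the thousand fiddly real ones.
steps = [
    ("build artifact", lambda: None),
    ("push to hosts", lambda: None),
    ("migrate schema", broken_migration),
    ("restart workers", lambda: None),  # never reached
]
done, error = run_pipeline(steps)
```

Here `done` records exactly how far the deploy got before it stopped, which is precisely the information the conference call spends an hour reconstructing.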
Sleek UI design and smooth user experience have become the norm, and a whole generation of users have grown up without knowing and understanding the risks of being online. Who could blame them? Being conscious and aware takes effort, and the marketing machines routinely churn out the chorus of “let us take care of it for you.” I mean, who would want to be concerned with virus/malware, that’s so… “PC” in this post-Apple world. A sea of [Mac] users have been groomed for the easy, hands-off, existence. Their complacency is to be expected. And ripe for exploitation.
I’ve never received a compliment for making something simple seem complex. I’ve never had a conversation that began “Hey John, I think you’re great, as evidenced by the fact that I never know how the $@%# to begin attempting to use your code!” That would, of course, be absurd. Yet, for the longest time, I rarely thought about my fellow engineers when designing my solutions that they would, inevitably, one day maintain — or refactor.
True operation mavens know that downtime is inevitable. It’s going to happen, despite your best efforts. A blip, a stumble, some cable will get cut. Increasing the “nines” carries quite the price tag, and may not be the best way to maximize ROI. The plans for disaster recovery need to be balanced, so that focus isn’t solely on the prevention of catastrophes. Equally important is the rapid recovery for business continuance. Because that is the true goal of uptime — to serve pages, apps and data, to provide for the customers, and continue the revenue stream. This is no longer an insurmountable task, given the resources and knowledge at hand.
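The price tag of each extra nine is easy to put a number on. A back-of-the-envelope sketch (assumes a flat 365-day year with no scheduled-maintenance carve-outs; real SLAs define their own measurement windows):

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_budget_minutes(nines: int) -> float:
    """Allowed minutes of downtime per year at N nines of availability.

    99.9% (3 nines) leaves a 0.1% budget, 99.99% leaves 0.01%, etc.
    """
    return MINUTES_PER_YEAR * 10 ** -nines

# Each extra nine shrinks the yearly budget tenfold:
# 3 nines (99.9%)   -> ~525.6 minutes
# 4 nines (99.99%)  -> ~52.6 minutes
# 5 nines (99.999%) -> ~5.3 minutes
```

At five nines, a single mishandled incident can blow the whole year’s budget, which is exactly why rapid recovery often buys more than another round of prevention.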
When we see a crappy user interface we know it. We can’t articulate it, but we know it. We look at it, have that sinking feeling in our gut, and try to figure out a way to say something about it. But we can’t. We’re afraid that we’ll offend someone, or worse yet have the wrong opinion. And really, that’s all it is – your opinion. That’s really the problem. If your assessment of a user interface is only your opinion, how can it be valid? It’s valid because you *know* it’s crappy. You don’t need a degree in human-computer interaction, industrial design, or usability engineering to know it’s crappy – it just is.
As much as I value and protect my own privacy, when the roles are reversed, I like to be Big Brother at every step of the way. Perhaps that is why I go to some extremes when it comes to protecting my personal information: I’m very aware of the kind of “Big Data” collection going on, and of what mining every aspect of people’s daily habits will yield. As it turns out, defending the one is not sufficient, because you cannot police the entire [social] network.
It’s extremely difficult to find talented software engineers — especially in today’s market. I have my own theories as to why that is, ranging from the fact that software engineering is still a relatively young profession, all the way to deeper philosophies about how competent people are at their jobs, in general. What’s more, companies that lack talented engineers typically have a very difficult time finding talented engineers, and this is not coincidental.
Well, we’re all writing services now. This is a service, that is a service. We need a persistence service, a web service, a service-generating service, and services that service other services. Please stop. Put the word “service” down and think about what you’re doing and saying. Everything you’re writing ain’t a service. “Service” ain’t the new term we’re using for “code”. You’re not always writing a “service”; sometimes you’re just writing a class. When you’re writing a service, it’s got a higher purpose and meaning than your average piece of software. It’s got to service something, or someone.
The security wires are still buzzing about the LinkedIn compromise. Again, as I’ve stated recently, a good post-mortem takes time, and it’s best to ignore all the hype and speculation until most — if not all — of the facts can be established. What is surprising is how much coverage there is about LinkedIn’s problem, as compared to the near-complete silence on Verisign’s management not being made aware of breaches dating back to 2010 that only came to light in 2012. That news is scary. This story is just irritating because of the number of opportunities LinkedIn had to perform this upgrade without having its hand forced.
Hand me your credit card, you idiot. No, when things start running slowly don’t just blame the hardware. Yes, hardware can affect performance, but if you just did a release, and at the same time you started to see a spike in system resources without a spike in load, the problem is in your software. “But hardware is cheap! We can just buy more hardware!” or the more contemporary, “Cloud instances are cheap! We can just spin up more instances!” You twit.
All the software and audit and compliance in the world is useless when a single person opens the door for the Big Bad Wolf to waltz in. Yes, code review is important. Absolutely, audit is essential. And without a question, process can save lives. None of that matters if the person entrusted with the key is readily duped by conversation. Social Engineering is the granddaddy of security risks. Sadly, technology has yet to come up with a panacea for stupidity. Just look at what happened to CloudFlare.
Maintenance sucks. I don’t care who you are, or what you’re into, maintaining legacy code sucks. You spend your days getting asked to fix other people’s bugs in a labyrinth of crappy code that was clearly written by angry monkeys who could inexplicably hurl feces at a keyboard in a manner that would yield compiling code. In fact, it’s not called “maintenance” if the code is good — that’s just normal development. If you’re on maintenance, then you have done something very wrong at some point in your life — or you’re just starting out. Much like anyone new to anything, you’ve got to pay your dues. In software development, that’s working maintenance.
You need a plan. You’ve just been told by your boss that the highly visible, habitually failing flagship project – yep, the one that’s a year behind schedule – just became your problem. “Don’t worry, you can only come out of this looking good,” you hear, among other things, like “hand-picked” and “Swat Team” – each one making you cringe more than the last. You’ve been here before, and while the stakes may be different, one thing is certain – You. Are. Fucked.
Successful organizations thrive by their talent [people], yet it’s mostly a losing battle because so many hiring decisions are simply… bad. Over the years, pundits and experts have offered up many theories and philosophies on how to recruit, and then retain, superior personnel. It really comes down to just three things — Proficiency, Passion and Personality. That’s the secret. No more. No less. Interviews, tests and profiles are but the tools to establish how a candidate measures up in each area.
The people who know me best are rarely shocked when I take a new job. After all, I work in a highly volatile industry, and it’s never long before something potentially more enticing than my current situation shows itself. I like to think this is true for everyone, with the possible difference being that I tend to react to these new opportunities, whereas many people do not.
Shockingly, the word “arrogant” has haunted me my whole career – and really my whole life. I’ve looked up the definition of “arrogant” more times than I care to admit, trying to contrast it with “confidence” and “humility”. I’ve been told more than once that I needed to work on being “humble”, and have genuinely tried to be so. Tried and failed. In terms of personal soul searching, “arrogance” has occupied a disproportionate amount of my staring-at-the-ceiling-trying-to-fall-asleep time. I don’t want to be arrogant – I don’t like the very idea of it, and am ashamed and confused that I seem to attract the description with alarming frequency. Ironically, an aspect of arrogance is an abundance of unwarranted confidence, yet I have no confidence that I can conquer the label of arrogance.
The question is simple — do I trust the entity behind a particular website? The answer is less so, unfortunately. Misguided efforts at [micro]managing cookies, User Agent IDs and IP proxies betray the simple fact that I cannot hide from being myself. This was a slightly painful realization, once I had a glimpse behind the curtains and saw that the Wizard is not only great and powerful, He is everywhere, and rightly so. In a world of constant vigilance, even the ones casting no shadows are as visible as the endless tweeting of teeth brushers.
Would quality be better if there were no Quality Assurance people? Let’s face it, most engineers abuse QA. They write a bunch of crap they’ve never tested, declare it’s done to much fanfare, and throw it over the wall to QA while they sit back knowing it’s going to take QA at least a day or two to start writing up bugs – more than enough time to get in some solid games of foosball. Furthermore, once they have *said* they are done, the pressure is off of them and on QA to find the bugs, at which point the engineer can sit back and pick and choose what they feel like fixing under the guise of “triage”.
When your boss comes demanding you scale your team, don’t immediately fire up all the recruiters you know and start sifting through resumes looking for the perfect hire. First, give some thought as to what you were just asked to do. You weren’t really asked to hire more people (that’s expensive and risky), you were asked to boost productivity. The first place to look for more productivity is at your current team – especially at the novices. Most teams have some novice to average players who could use improvement, but we tend to ignore them assuming they can’t get better at what they do.