Tag Archives: Information Technology

Infrastructure-as-Code Is Still *CODE*

After working in a DevOps environment for over a year, I’ve become an automation acolyte. The future is here and I’ve seen the benefits when you get it right: improved efficiency, better control and fewer errors. However, I’ve also seen the dark side with Infrastructure-as-Code (IaC). Bad things happen because people forget that it’s still code and it should be subject to the same types of security controls you use in the rest of your SDLC.

That means including automated or manual reviews, threat modeling and architectural risk assessments. Remember, you’re not only looking for mistakes in provisioning your infrastructure or opportunities for cost control. Some of this code might introduce vulnerabilities that could be exploited by attackers. Are you storing credentials in the code? Are you calling scripts or homegrown libraries and has that code been reviewed? Do you have version control in place? Are you using open source tools that haven’t been updated recently? Are your security groups overly permissive?
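If I were automating a first pass over that checklist, even a crude lint would catch the worst offenders. Here's a rough Python sketch; the regexes and the Terraform-ish snippet are invented for illustration, and a real scanner (tfsec, Checkov, git-secrets and friends) goes much deeper:

```python
import re

# Illustrative patterns only, not an exhaustive rule set.
SECRET_PATTERN = re.compile(
    r'(password|secret|access_key|token)\s*=\s*"[^"]+"', re.IGNORECASE)
OPEN_CIDR_PATTERN = re.compile(r'cidr_blocks\s*=\s*\[\s*"0\.0\.0\.0/0"')

def lint_iac(source: str) -> list[str]:
    """Return human-readable findings for an IaC snippet."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if SECRET_PATTERN.search(line):
            findings.append(f"line {lineno}: possible hardcoded credential")
        if OPEN_CIDR_PATTERN.search(line):
            findings.append(f"line {lineno}: security group open to 0.0.0.0/0")
    return findings

# A made-up snippet with two classic mistakes.
snippet = '''
resource "aws_db_instance" "app" {
  password = "hunter2"
}
resource "aws_security_group_rule" "ssh" {
  cidr_blocks = ["0.0.0.0/0"]
}
'''
for finding in lint_iac(snippet):
    print(finding)
```

Wire something like this into the same CI pipeline that deploys the code, alongside human review. It won't replace a threat model, but it keeps the obvious mistakes from shipping.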

IaC is CODE. Why aren’t you treating it that way?

[Image: DevOps Borat]


NTP Rules of the Road

There’s nothing more infuriating than watching organizations screw up foundational protocols and NTP seems to be one of the most commonly misconfigured. For some reason, people seem to think the goal is to have “perfect” time, when what is really needed is consistent organizational time. You need everything within a network to be synchronized for troubleshooting and incident management purposes. Otherwise, you’re going to waste a lot of energy identifying root causes and attacks.

Best practice is to synchronize a few systems or devices at your network perimeter against a public stratum one server, and even that only if you don’t have your own stratum zero GPS reference with a stratum one server attached. I can’t tell you how many times I’ve seen a network team go to the trouble of setting this up while the systems people still point everything at ntp.org.

Everything inside a network should cascade from those perimeter devices, which is usually a router, Active Directory system or stratum one server.  This design reduces the possibility of internal time drift, the load on public NTP servers and your firewalls, and the organizational risk of opening up unnecessary ports to allow outgoing traffic to the Internet. Over the last few years, some serious vulnerabilities have been identified in the protocol and it can also be used as a data exfiltration port by attackers.
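Once the hierarchy is in place, monitoring for internal drift is simple arithmetic. A minimal sketch, assuming you already collect per-host offset samples from your own telemetry (the hostnames and the 100 ms tolerance here are invented placeholders):

```python
# Flag hosts whose clock offset from organizational time exceeds a
# tolerance. The threshold is a made-up example; pick one that matches
# your troubleshooting and incident-management needs.
TOLERANCE_MS = 100.0

def flag_drift(samples: dict[str, float], tolerance_ms: float = TOLERANCE_MS) -> list[str]:
    """Return hosts whose absolute offset (in ms) exceeds the tolerance."""
    return sorted(host for host, offset in samples.items()
                  if abs(offset) > tolerance_ms)

samples = {"web01": 12.5, "db01": -480.0, "app02": 3.1, "legacy07": 2500.0}
print(flag_drift(samples))  # → ['db01', 'legacy07']
```

The point isn't the code, it's the habit: measure drift against your own reference, not against whatever public server each box happens to favor.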

In addition to the IETF’s draft on NTP “best practices,” the SEI also has an excellent guidance document.

While it’s not realistic to have your own stratum zero device in the cloud, AWS recommends using the designated NTP pool specified in its documentation.

Oh, and for the love of all that is holy, please use UTC. I cannot understand why I’m still having this argument with people.
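Since I keep having the argument: storing and logging time in UTC costs nothing, and even the NTP-to-Unix conversion is a fixed offset (NTP counts seconds from 1900-01-01, Unix from 1970-01-01). A quick sketch:

```python
from datetime import datetime, timezone

# Log and store timestamps in UTC; convert to local time only at display.
def utc_now_iso() -> str:
    return datetime.now(timezone.utc).isoformat(timespec="seconds")

# The NTP and Unix epochs differ by exactly 2,208,988,800 seconds.
NTP_UNIX_DELTA = 2_208_988_800

def ntp_to_unix(ntp_seconds: int) -> int:
    return ntp_seconds - NTP_UNIX_DELTA
```

If every log line in the enterprise carries a UTC timestamp, correlating events across time zones stops being a debate and becomes subtraction.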


Apple, I Wish I knew How to Quit You

My baptism into the world of computers occurred the Saturday my dad brought home an Apple IIc when I was a kid. He was always fascinated by electronic gadgets and this seemed to be the latest device that would sit in a room (and eventually gather dust) next to his ham radio and photography equipment.  The damn thing didn’t do much, but it still felt magical every time I had the opportunity to play games on it; that funky beige box with a square hunk of plastic on a leash moving an arrow around a screen. At the time, it didn’t feel like anything more than a fancy toy to me. I certainly had no idea that this interaction would foreshadow my own love-hate relationship with the entire technology industry.

Years later, I turned into an Apple acolyte while working in the IT department of an East Coast university. I was an MCSE and had become thoroughly exhausted battling Windows drivers and the BSOD circle of Hell. After I moved to Unix administration, the first version of OS X felt like a relief after long days of command line combat. While it had an underlying CLI that worked similarly to the Unix platforms I was used to, Apple provided a more stable desktop that was painless for me to use. Unlike Windows, the hardware and the OS generally seemed to work. Sure, I didn’t have all the software packages I needed, but virtualization solved that problem. And though I lost some of my expertise in building and fixing personal computers, I no longer had much interest in tinkering with them anyway. During a mostly effortless, decade-long relationship with Apple, I would jokingly tell friends, “Buy a Mac, it will make you stupid.” I became an evangelist: “Technology is just a tool. Don’t love the hammer, love what you can make with it.”

But as often happens in romances, this one has hit a rocky patch. Lately, trusting the quality of Apple products feels akin to believing in Santa Claus, honest politicians or a truly benevolent God.

The relationship started to flounder about six months ago, when I was unlucky enough to have the hard drive fail in my 2013 27″ iMac. It had been a workhorse, but the warranty had expired a few months before and I was in the unwelcome position of choosing between options ranging from bad to worse. I could try to replace the drive myself, but after looking at the iFixit teardown, that idea was quickly nixed. The remaining choices were to pay Apple or buy a new system between release cycles. After running the cost/benefit analysis in my head, the last option made the most sense, since the warranties on both of my laptops had also expired. With some trepidation, I purchased the newly redesigned 2016 MacBook Pro with an external LG monitor. And here’s where my struggles began.

Within a few days of setting up the laptop, I noticed some flakiness with USB-C connections. Sometimes external hard drives would drop after the system was idle. In clamshell mode, I often couldn’t get the external monitor to wake from sleep. I researched the issues online, making recommended tweaks to the configuration. I upgraded the firmware on my LG monitor, even though the older MacBook Pro provided by my workplace never had any issues. Then I did what any good technology disciple would do and opened a case with Apple, dutifully sending in my diagnostic data, trusting that Support would cradle me in their beneficence. The case was escalated to Engineering with confirmation that it was a known timing issue to be addressed by a future software or firmware patch. In the meantime, I refrained from using the laptop in clamshell mode and got used to resetting the monitor connection by unplugging it or rebooting the laptop. I also bought a dock that solved my problems with external hard drives. Then I waited.

Like the rest of the faithful, I watched last week’s WWDC with bated breath. I was euphoric after seeing the new iMac Pro and already imagining my next purchase. But a few days later, after having to reset my external monitor connection four times in a 12-hour period, I emailed the Apple Support person I had been working with, Steve*, for a status update on the fix for the issue with my MacBook Pro. I happened to be in Whole Foods when he called me back and the poetic justice of arguing over a $3000.00 laptop in the overpriced organic produce aisle wasn’t lost on me.

Support Steve: Engineering is still working on it. 

Mrs. Y.: What does that mean? Do you know how long this case has been open? I think dinosaurs still roamed the earth.

Support Steve: The case is still open with Engineering.

Mrs. Y: If you’re releasing a new MacBook Pro, why can’t you fix mine? Is this a known issue with the new laptops as well? If not, can’t you just replace mine?

Support Steve: That isn’t an option.

Mrs. Y: Please escalate this case to a supervisor.

Support Steve: There’s no supervisor to escalate to. Escalation goes to engineering, I only have a supervisor for administrative purposes.

Mrs. Y: I want to escalate this case to the person you report to, because I’m not happy with how it’s being handled.

Then the line went dead.

While Support Steve never raised his voice, for the record, a polite asshole is still an asshole.

This experience has done more than sour me on Apple, it’s left me in a full-fledged, technology-induced existential crisis. How could this happen to me, one of the faithful? I follow all the rules, I always run supported configurations and patch my systems. I buy Apple Care!

I even considered going the Hackintosh route. But I don’t know if I can go back to futzing with hardware, praying that a software update won’t make my system unusable. And Apple’s slick hardware still maintains an unnatural, cult-like hold over me.

So, like a disaffected Catholic, I continue to hold out hope that Apple will restore my faith, because it’s supposed to “Think Different” and be the billion-dollar company that cares about its customers.  But I can’t even get anyone to respond on the Apple Support Twitter account, leaving me in despair over why my pleas fall on deaf ears.

[Image: Brokeback Apple]

*I’m not making this up, the Apple Support person’s name is actually “Steve.” But maybe that’s what they call all their support people as some kind of homage to the co-founder.


Booth Babe Shame

Yesterday on Twitter I came across the following in my feed:

[Screenshot of the vendor’s tweet, 2016-06-23]

I was horrified and angry, responding with the following:

[Screenshot of my response, 2016-06-23]

I also posted something similar on Linkedin, Facebook and Google+.  I never heard from the vendor, but I did hear back from some pretty offended men and women, both technologists and non-technologists.

[Screenshot of responses, 2016-06-23]

Why are we still having this conversation? Why would ANY vendor think it’s appropriate to make the staff at a technical conference dress like employees at Hooters?

There’s a bitter irony in reading article after article about improving the diversity in STEM fields and then seeing this. I’m getting pretty tired of feeling like a second class citizen in this industry. I’d like to be able to go to a conference and know that I’ll be treated with the respect due a senior-level technologist, not minimized because I happen to have a uterus. Clearly, I’m not the only person who feels this kind of behavior is no longer acceptable and shouldn’t be tolerated. I encourage anyone equally appalled to call out the vendor in question. If we can’t appeal to their morals, maybe they’ll think twice if it impacts their bottom line.

UPDATE: I’ve been corrected regarding the venue. This event took place at a Vegas night club, but I’m STILL offended. I don’t see any Booth Bros. Why did this vendor feel the need to have women promote their product this way?

UPDATE 6/28: Looks like people at Nutanix are embarrassed, but I’m not sure if this will actually translate to any awareness on their part.

“In a blog post Friday, Julie O’Brien, vice president of corporate marketing at Nutanix, apologized for the stunt while also offering an explanation of what went wrong.”

This was in my feed this morning: Nutanix CMO After Risque Conference Stunt: I Will Resign If It Happens Again.



Fixing a Security Program

I’m still unsettled by how many security programs are so fundamentally broken. Even those managed and staffed by people with impressive credentials. But when I talk to some of these individuals, I discover the key issue. Many seem to think the root cause is bad tools. This is like believing the only thing keeping you from writing the Next Great American novel is that you don’t have John Steinbeck’s pen or Dorothy Parker’s typewriter.

In reality, most of the problems found in security programs are caused by inferior processes, inadequate policies, non-existent documentation and insufficient standards. If buying the best tools actually fixed security problems, wouldn’t we already be done? The truth is that too many people employed in this field are in love with the mystique of security work. They don’t understand the business side, the drudgery, the grunt work necessary to build a successful program.

For those people, here’s my simple guide.  I’ve broken it down to the following essential tasks:

  1. Find your crap. Everything. Inventory and categorize your organization’s physical and digital assets according to risk. If you don’t have classification standards, then you must create them.
  2. Document your crap. Build run books. Make sure you have diagrams of networks and distributed applications. Create procedure documents such as IR plans. Establish SLOs and KPIs. Create policies and procedures governing the management of your digital assets.
  3. Assess your crap. Examine current state, identify any issues with the deployment or limitations with the product(s). Determine the actual requirements and analyze whether or not the tool actually meets the organization’s needs. This step can be interesting or depressing, depending upon whether or not you’re responsible for the next step.
  4. Fix your crap. Make changes to follow “best practices.” Work with vendors to understand the level of effort involved in configuring their products to better meet your needs. The temptation will be to replace the broken tools, but these aren’t $5 screwdrivers. Your organization made a significant investment of time and money, and if you want to skip this step by replacing a tool, be prepared to provide facts and figures to back up your recommendation. Only after you’ve done this can you go to step 6.
  5. Monitor your crap. If someone else knows your crap is down or compromised before you do, then you’ve failed. The goal isn’t to be the Oracle of Delphi or some fully omniscient being, but simply to be more proactive. And you don’t need all the logs. Identify the logs that are critical and relevant and start there: Active Directory, firewalls, VPN, IDS/IPS.
  6. Replace the crap that doesn’t work. But don’t make the same mistakes. Identify requirements, design the solution carefully, build out a test environment. Make sure to involve necessary stakeholders. And don’t waste time arguing about frameworks, just use an organized method and document what you do.
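To make step 1 concrete, even a toy inventory beats a blank page. This sketch assumes a made-up classification standard; the categories, scoring and asset names are placeholders for whatever your organization actually defines:

```python
from dataclasses import dataclass

# A toy inventory for step 1: score assets by data sensitivity and
# exposure. Replace the categories and weights with your own standard.
@dataclass
class Asset:
    name: str
    data_class: str   # "public", "internal", "confidential"
    internet_facing: bool

RISK_BY_CLASS = {"public": 1, "internal": 2, "confidential": 3}

def risk_score(asset: Asset) -> int:
    score = RISK_BY_CLASS[asset.data_class]
    if asset.internet_facing:
        score += 1  # exposure bumps the score one notch
    return score

inventory = [
    Asset("marketing-site", "public", True),
    Asset("hr-db", "confidential", False),
    Asset("payments-api", "confidential", True),
]
# Triage highest-risk assets first.
for a in sorted(inventory, key=risk_score, reverse=True):
    print(a.name, risk_score(a))
```

Crude? Absolutely. But a crude, written-down classification you actually maintain beats the elegant one that lives in someone’s head.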

Now you have the foundation of any decent information security program. This process isn’t easy and it’s definitely not very sexy. But it will be more effective for your organization than installing new tools every 12 months.



Splunk Funk

Recently, I was asked to evaluate an organization’s Splunk deployment. This request flummoxed me, because while I’ve always been a fan of the tool’s capabilities, I’ve never actually designed an implementation or administered it. I love the empowerment of people building their own dashboards and alerts, but this only works when there’s a dedicated Splunk-Whisperer carefully overseeing the deployment and socializing the idea of using it as a self-service, cross-functional tool. As I started my assessment, I entered what can only be called a “dark night of the IT soul,” because my findings have led me to question the viability of most enterprise monitoring systems.

The original implementer recently moved on to greener pastures and (typically) left only skeletal documentation. As I started my investigation, I discovered  a painfully confusing distributed deployment built with little to no understanding of  “best practices” for the product. With no data normalization and almost non-existent data input management, the previous admin had created the equivalent of a Splunk Wild West, allowing most data to flow in with little oversight or control. With an obscenely large number of sourcetypes and sources, the situation horrified Splunk support and they told me my only option was to rebuild, a scenario that filled me with nerd-angst.
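For the curious, sourcetype sprawl is easy to quantify once you export the metadata. A sketch of the kind of audit that horrified everyone, with invented event data standing in for the real mess:

```python
from collections import Counter

# Hypothetical (sourcetype, source) pairs pulled from indexed events.
# Sourcetypes that appear only once are a common symptom of ad hoc,
# unmanaged data inputs.
events = [
    ("access_combined", "/var/log/httpd/access_log"),
    ("access_combined", "/var/log/nginx/access.log"),
    ("syslog", "/var/log/messages"),
    ("custom_app_2016_tmp", "/tmp/debug.out"),
    ("johns_test_type", "/home/john/test.log"),
]

counts = Counter(sourcetype for sourcetype, _ in events)
singletons = sorted(st for st, n in counts.items() if n == 1)
print(f"{len(counts)} sourcetypes, {len(singletons)} used only once")
print(singletons)
```

When the singleton list is longer than the managed list, you don’t have a monitoring platform, you have a landfill with a search bar.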

In the past, I’ve written about the importance of using machine data for infrastructure visibility. It’s critical for security, but also performance monitoring and troubleshooting. Log correlation and analysis is a key component of any healthy infrastructure and without it, you’re like a mariner lost at sea. So imagine my horror when confronted by a heaping pile of garbage data thrown into a very expensive application like Splunk.

Most organizations struggle with a monitoring strategy because it simply isn’t sexy. It’s hard to get business leadership excited about dashboards, pie charts and graphs without contextualizing them in a report. “Yeah baby, let me show you those LOOOOW latency times in our web app.” It’s a hard sell, especially when you see the TCO for on-premise log correlation and monitoring tools. Why not focus on improving a product that could bring in more customer dollars or a new service to make your users happier?  Most shops are so focused on product delivery and firefighting, they simply don’t have cycles left for thinking about proactive service management. So you end up with infrastructure train wrecks, with little to no useful monitoring.

While a part of me still believes in using the best tools to gain intelligence and visibility from an infrastructure, I’m tired of struggling. I’m beginning to think I’d be happy with anything, even a Perl script, that works consistently with a low LOE. I need that data now and no longer have the luxury of waiting until there’s a budget for qualified staff and the right application. Lately, I’m finding it pretty hard to resist the siren song of SaaS log management tools that promise onboarding and insight into machine data within minutes, not hours. Just picture it: no more agents or on-premise systems to manage, just immediate visibility into data.  Most other infrastructure components have moved to the cloud, maybe it’s inevitable for log management and monitoring. Will I miss the flexibility and power of tools like Splunk and ELK? Probably, but I no longer have the luxury of nostalgia.



The Endpoint Is Dead To Me

Another day, another vulnerability. This month the big non-event was Badlock, following the recent trend of using a splashy name to catch the media’s attention so they could whip the management meat puppets into a paranoid frenzy. Many in the security community seem unperturbed by this latest bug, because let’s face it, nothing can surprise us after the last couple of really grim years.

But then more Java and Adobe Flash bugs were announced, some new iOS attack using Wi-Fi and NTP that can brick a device, followed by an announcement from Apple that they’ve decided to kill QuickTime for Windows instead of fixing a couple of critical flaws. All of this is forcing me to admit the undeniable fact that trying to secure the endpoint is a waste of time. Even if you’re using a myriad of management tools, patching systems, vulnerability scanners, and DLP agents, you will fail, because you can never stay ahead of the game.

The endpoint must be seen for what it truly is: a limb to be isolated and severed from the healthy body of the enterprise at the first sign of gangrene. It’s a cootie-ridden device that while necessary for our users, must be isolated as much as possible from anything of value. A smelly turd to be flushed without remorse or hesitation. No matter what the user says, it is not valuable and nothing of value to the organization should ever be stored on it.

This doesn’t mean I’m advocating removing all endpoint protection or patching agents. But it’s time to get real and accept the fact that most of this corporate malware is incapable of delivering what the vendors promise. Moreover, most often these applications negatively impact system performance, which infuriates our users. But instead of addressing this issue, IT departments layer on more of this crapware on the endpoint, which only encourages users to find ways to disable it. One more security vulnerability for the organization. A dedicated attacker will figure out a way to bypass most of it anyway, so why do we bother to trust software that is often more harmful to business productivity than the malware we’re trying to block?  We might as well give out “Etch A Sketches” instead of laptops.

[Image: Google on an Etch A Sketch]


Mindful Security

Recently, I was invited to participate in a podcast on the topic of mindfulness. A friend and former editor of mine thought it would be helpful to discuss how the practice could potentially help stressed-out and oversubscribed IT professionals. I’ve actually had a meditation practice on and off for the last eight years and sometimes it seems like it’s the only thing keeping me sane. Especially in the high-pressure realm of information security. While I’m not an expert on meditation, I’ve spent a considerable amount of time studying and trying to understand its impact. Hopefully, people will find the information useful as an alternative method of reducing the inevitable stress and anxiety we’re all feeling in this industry lately.


The Question of Technical Debt

Not too long ago, I came across an interesting blog post by the former CTO of Etsy, Kellan Elliott-McCrea, which made me rethink my understanding and approach to the concept of technical debt. In it, he opined that technical debt doesn’t really exist and it’s an overused term. While specifically referencing code in his discussion, he makes some valid points that can be applied to information security and IT infrastructure.

In the post, he credits Peter Norvig with the quote, “All code is liability.” This echoes Nicholas Carr’s argument that as infrastructure technology becomes more pervasive and non-proprietary, the advantage it confers shrinks while the risk it creates grows.

When a resource becomes essential to competition but inconsequential to strategy, the risks it creates become more important than the advantages it provides. Think of electricity. Today, no company builds its business strategy around its electricity usage, but even a brief lapse in supply can be devastating…

Over the years, I’ve collected a fair amount of “war stories” about less than optimal application deployments and infrastructure configurations. Too often, I’ve seen things that make me want to curl up in a fetal position beneath my desk. Web developers failing to close connections or set timeouts to back-end databases, causing horrible latency. STP misconfigurations resulting in network core meltdowns. Data centers built under bathrooms or network hub sites using window unit air conditioners. Critical production equipment that’s end-of-life or not even under support. But is this really technical debt or just the way of doing business in our modern world?

Life is messy and always a “development” project. Maybe the main reason DevOps has gathered such momentum in the IT world is because it reflects the constantly evolving, always shifting, nature of existence. In the real world, there is no greenfield. Every enterprise struggles to find the time and resources for ongoing maintenance, upgrades and improvements. As Elliott-McCrea so beautifully expresses, maybe our need to label this state of affairs as atypical is a cop-out. By turning this daily challenge into something momentous, we make it worse. We accuse the previous leadership and engineering staff  of incompetence. We come to believe that the problem will be fully eradicated through the addition of the latest miracle product. Or we invite some high-priced process junkies in to provide recommendations which often result in inertia.

We end up pathologizing something which is normal, often casting an earlier team as bumbling, a characterization that easily returns to haunt us.

When we take it a step further and turn these conflations into a judgement on the intellect, professionalism, and hygiene of whomever came before us we inure ourselves to the lessons those people learned. Quickly we find ourselves in a situation where we’re undertaking major engineering projects without having correctly diagnosed what caused the issues we’re trying to solve (making recapitulating those issues likely) and having discarded the iteratively won knowledge that had allowed our organization to survive to date.

Maybe it’s time to drop the “blame game” by information security teams when evaluating our infrastructures and applications. Stop crying about technical debt, because there are legitimate explanations for the technology decisions made in our organizations and it’s generally not because someone was inept. We need to realize that IT environments aren’t static and will always be changing and growing. We must transform with them.


Why You Shouldn’t Be Hosting Public DNS

As a former Unix engineer who managed my share of critical network services, one of the first things I do when evaluating an organization is to validate the health of infrastructure components such as NTP, RADIUS, and DNS. I’m often shocked by what I find. Although most people barely understand how these services work, when they break, it can create some troublesome technical issues or even a full meltdown. This is especially true of DNS.

Most problems with DNS implementations are caused by the fact that so few people actually understand how the protocol is supposed to work, including vendors. The kindest thing one can say about DNS is that it’s esoteric. In my IT salad days, I implemented and was responsible for managing the BIND 9.x infrastructure at an academic institution. I helped write and enforce the DNS request policy, cleaned up and policed the namespace, built and hardened the servers, compiled the BIND binaries and essentially guarded the architecture for over a decade. I ended up in this role because no one else wanted it. I took a complete mess of a BIND 4.x deployment and proceeded to untangle a ball of string the size of New Zealand. The experience was an open source rite of passage, helping to make me the engineer and architect I am today. I also admit to being a BIND fangirl, mostly because it’s the core software of most load-balancers and IPAM systems.
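That namespace policing, incidentally, is mostly mechanical. Here’s a sketch of the sort of pre-flight check you can run before anything touches a zone file, validating names against the letter-digit-hyphen rules of RFC 952/1123 (the hostnames below are just examples):

```python
import re

# Each label: 1-63 chars, letters/digits/hyphens, no leading or
# trailing hyphen. Whole name: at most 253 chars.
LABEL = re.compile(r"^(?!-)[A-Za-z0-9-]{1,63}(?<!-)$")

def valid_hostname(name: str) -> bool:
    if len(name) > 253:
        return False
    labels = name.rstrip(".").split(".")
    return all(LABEL.match(label) for label in labels)

print(valid_hostname("www.example.edu"))          # True
print(valid_hostname("-bad.example.edu"))         # False
print(valid_hostname("under_score.example.edu"))  # False
```

Note that underscores are legal in some DNS records (SRV, DKIM) but not in hostnames, which is exactly the kind of distinction a request policy has to spell out.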

This history makes what I’m about to recommend even more shocking. Outside of service providers, I no longer believe that organizations should run their own public DNS servers. Most enterprises get along fine using Active Directory for internal authentication and name resolution, while relying on a DNS provider such as Neustar, Amazon or Akamai to resolve external services. They don’t need to take on the risk associated with managing external authoritative DNS servers or even load-balancing most public services.

The hard truth is that external DNS is best left to the experts who have time for the care and feeding of it. One missed security patch, a mistyped entry, a system compromise; any of these could have a significant impact to your business. And unless you’re an IT organization, wouldn’t it be better to have someone else deal with that headache? Besides, as organizations continue to move their services to the cloud, why would you have the name resolution of those resources tied to some legacy, on-premise server? But most importantly, as DDoS attacks become more prevalent, UDP-based services are an easy target, especially DNS. Personally, I’d rather have a service provider deal with the agony of DDoS mitigation. They’re better prepared with the right (expensive) tools and plenty of bandwidth.

I write this with great sadness and it even feels like I’m relinquishing some of my nerd status. But never fear, I still have a crush on Paul Vixie and will always choose dig over nslookup.

