Category Archives: Tech

Installing Spotify on Fedora 24

I’ve recently been using Spotify on PS4 and wanted to get it on my desktop. Unfortunately, the official instructions from Spotify are for Ubuntu and other Debian-based distros.

Thankfully, there’s a way around that using RPM Fusion. Below are the commands you’ll need to run to get it installed.

$ sudo dnf install --nogpgcheck http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm http://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm
$ sudo dnf install lpf-spotify-client
$ lpf update

Installing Pi-Hole on Fedora 24

A few months ago I decided to move away from Windows 10 and go back to a Linux-based desktop. I chose Fedora 24, based on my Red Hat experience. Regardless, I’ve been wanting to install Pi-Hole for a while and finally got around to it. There are a couple of caveats, but overall, it’s a pretty seamless install.

First, ensure that you are completely up to date before you begin:

$ sudo dnf update -y

Once you are all up to date, simply open a terminal and run the following command:

$ curl -L https://install.pi-hole.net | bash

Pi-Hole will run through the script and tell you everything is groovy, but that’s not actually the case on Fedora. There are a couple of things you still need to do.

Step 1: Symlink the pihole binary to /usr/sbin/pihole:

# ln -s /usr/local/bin/pihole /usr/sbin/pihole

Step 2 (only if you were previously DHCP):

Remove the duplicate settings in your network script. The Pi-Hole installer attempts to make your IP configuration static if it wasn’t already, which can leave you with duplicate entries (for example, two ONBOOT lines) in the interface file. Keep a single copy of each setting and remove the duplicates.
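
For reference, the cleaned-up static configuration in /etc/sysconfig/network-scripts/ifcfg-<interface> might end up looking something like this (a sketch only; the interface name, gateway, and upstream DNS below are placeholders for your environment, and 192.168.1.20 is the Pi-Hole box from this post):

DEVICE=enp3s0
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.1.20
PREFIX=24
GATEWAY=192.168.1.1
DNS1=8.8.8.8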

Step 3: Update your lists:

$ sudo /opt/pihole/gravity.sh

Now, you can log in to your router and point its DNS servers at your Pi-Hole box. In my case, Pi-Hole is running on my desktop at 192.168.1.20, so I logged into my router and set the DNS server to 192.168.1.20.
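
If you want to sanity-check that the Pi-Hole box is actually answering DNS queries before (or after) you change your router, a quick query from any machine on the LAN should return an answer (dig comes from the bind-utils package on Fedora):

$ dig example.com @192.168.1.20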

Now, all of the traffic on my network will be protected from ads, removing the need to install an ad-blocker on every browser/device. I’m also shielded from ads on devices that can’t install ad-blockers, such as my PS4.

I hope this was informative and helps someone out there who had the same issue I had.

Thus far, I’ve only been using it for 10 minutes and already 2% of my traffic has been blocked as ads. Hallelujah!

Cheers!

Phish Food: Chumming the Waters on Social Media

Most of us are willing to help others in times of need. We want to trust in others to do the same and generally want to see the best for others. Perhaps the innate desire to trust in and help others is an evolutionary trait humans developed to help us survive, or perhaps we do it simply because of our internal convictions. Either way, more often than not, we want to help others when asked. This is precisely why social engineering attacks are extremely successful methods of infiltrating companies. Whether it’s a phone call to the front desk of an organization trying to get information about those who work there, or an email with an attachment claiming to be an unread fax, most of us let our trust get the best of us, and that can end up costing the company.

One of the most famous and successful hackers, Kevin Mitnick, relied on social engineering to carry out the majority of his hacks. To this day, many hackers are able to gain access to networks and sensitive information utilizing very similar techniques. In fact, many penetration testers will tell you that the easiest way into a network is to simply ask for credentials. This can be in the form of a phishing email, a phony website, or, in my experience, even a spoofed phone call from a ‘help desk’ employee.

Before social media, it was sometimes quite difficult to gather enough information about a target to craft a convincing phishing campaign. However, with the advent of Facebook, LinkedIn, and the multitude of other social media sites, it is now much easier (no more dumpster diving!). Typically, an attacker will utilize Open Source Intelligence (OSINT) tools to profile an organization. These tools use multiple techniques to scour the Internet for any information pertaining to the target individual and organization. Because many of us today are so apt to share everything on social media, and because these OSINT tools are free and easy to use, the profiling process is much quicker and yields a lot of valuable data.

After an attacker has gathered information about their target, they craft a convincing phishing email. Perhaps the email is spoofed to look like it’s coming from the CEO of the company, asking for their password to be sent to them because they are out of the office and it’s extremely urgent. Or, perhaps the attacker has stood up a website that looks like the victim’s webmail access portal. The attacker then convinces the victim that it’s an “upgraded portal with more functionality” and the victim has been “specially selected out of a handful of people to help test it.” No matter the vector of the attack, the end goal is the same: steal credentials.

Let’s stop making it easy for attackers and start utilizing the same tools the hackers use to get a better idea of what’s out there. First and foremost, if at all possible, do not share any information about the organization you work for on social media (this includes LinkedIn). Also, try to avoid listing your corporate email address anywhere on the Internet. Go through each of your social media profiles and beef up the privacy settings to ensure that none of your details are available publicly. Then, Google yourself and see what comes up. Many times you will find data that was published years ago and that you simply forgot about; make sure to go back and clean it up. Finally, after you feel that you have sufficiently erased yourself from the Internet, run some OSINT tools on yourself and your organization. The three tools that I like to use are Maltego, FOCA Pro, and Shodan.
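
As a simple starting point for that self-audit, even a basic search-engine query like the following (the domain is a placeholder for your own) will surface pages outside your company’s own site that mention your corporate email domain:

"@yourcompany.com" -site:yourcompany.com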

While social media is a great tool to keep us connected in both our personal and professional lives, it can also be a tool used by attackers if your privacy settings aren’t configured properly. Let’s stop chumming the waters for phishing campaigns and be more cognizant of what we are sharing online. By over-sharing, we are putting both our personal and corporate assets at risk.

Who Watches the Watchmen?

In the healthcare industry, those practicing in the field must take the Hippocratic oath and swear to uphold specific ethical standards. This standard helps promote the idea of “do no harm” and healthcare practitioners take this oath very seriously. But what about the Information Technology industry? How do we ensure that those we give ultimate power to in our organizations are not abusing their power and are acting in the best interest of the company?

There is nothing similar to the Hippocratic oath for systems administrators, network engineers, or security analysts. Although we would like to hope that throughout our interview processes and background checks, we are hiring morally upstanding folks, there really is no overall ethical oath that professionals in Information Technology subscribe to. The majority of us will respect ourselves enough to hold high ethical standards, but there is also a minority of those who won’t. So, how can we ensure that our employees are not abusing their access?

You may argue that you have logs and a SIEM, and that if anything were to happen, you could pore through your logs and build a forensic timeline to answer the who, what, when, where, and how. However, this is after the fact, after the damage has already been done. Wouldn’t it make more sense to put safeguards in place to prevent the damage from occurring in the first place?

Think of it in terms of your primary bank. You have both your checking and savings accounts in there. Maybe you have some CDs, a HELOC, and a mortgage with them as well. Would you be comfortable if any of the tellers could simply walk into the vault by themselves with no surveillance and no safeguards and do whatever they like under the presumption of trust? Would it be acceptable for the same teller to be able to modify your account information on your mortgage, HELOC, etc. without a second set of eyes and an approval process? I would hope your answer is a resounding “No.”

Thus, we need to start putting surveillance, approval processes, and alerting around privileged access in our environments. Think about how many individuals in your environment have some type of elevated access. This is not only in the scope of your Domain Administrators, but also the DBAs in your environment who have access to the databases that hold your company’s most sensitive records. How do you know that the folks you’ve hired, and trust with your data, are not abusing their power? Especially when the same individual who can access that “Finance” directory on your file share can, at the same time, disable logging and remove all evidence of any nefarious activity?

Here’s an example of how abuse of privileged access could play out: ACME, Inc. just hired a brand new Systems Administrator and, as part of the onboarding process for that team, the new hire has been given a secondary account with domain admin privileges. ACME has a SIEM, and all of their servers have agents on them to forward logs to the SIEM. One night, after a long troubleshooting session, the new hire feels underappreciated and wants to know the salaries of other employees at ACME. Knowing full well about the SIEM, they first stop the services of the log-forwarding agents on the file servers and domain controllers. They then grant themselves access to the directory in which the salary data is stored and make a local copy to a USB drive. They then delete all logs on the file server(s) and domain controllers for the timeframe in which they were snooping. Finally, they enable the agents again and walk out the door with a USB drive full of ACME’s salary data. Honestly, are your SIEM administrators going to notice that the next day? More than likely not.

Imagine the same scenario, but with a Privileged Access Management (PAM) solution in place. The new Systems Administrator has no idea what their password is to their domain admin account, and they must check it out prior to escalating privileges. Once checked out, anything the Systems Administrator does with their RDP/SSH session is recorded and stored in an encrypted format. That alone would deter most from continuing with the efforts of scraping salary data; however, if one were feeling foolhardy and decided to continue, the session recording would have full evidence of everything they did. To take it a step further, one could even configure the PAM solution to require an approval for checkout. Thus, management would get a request for escalation of privilege and must approve before the new Systems Administrator can move forward. These policies would go a long way to deter privilege abuse, and the session recording features are just another tool in your forensics arsenal.

Think about how many admins you have in your organization who at any time could carry out the first scenario I described without any checks and balances. Are your corporate data and reputation something you are willing to risk on the hope that everyone will always be happy and always do right by the company? To take that a step further, do you trust the contractors and vendors who come in for one-time installs enough to leave their activity uncontrolled and unmonitored? It comes down to due diligence, and in the realm of privileged accounts, we need to control, verify, and monitor access to those accounts. Otherwise, at any moment, one of your employees could be wreaking havoc, planting logic bombs, or stealing sensitive company information, and you wouldn’t know until the damage has already been done.

You Can’t Secure What You Don’t Know About

Do you know how many assets are on your network right now? I’m talking exactly how many, without a shadow of a doubt. Don’t feel bad if you can’t answer this with a confident “of course I can!” If you can’t, you are in the (unfortunate) majority. At first glance you might think “of course I can answer that, my agent-based software-update tool tells me I have 2,000 workstations and 1,000 servers” or “of course I can answer that, there are 3,457 computer objects in Active Directory,” but in reality, it’s not that easy.

The unfortunate truth is that many organizations have no idea how many devices are on their network, or what individuals are doing on those devices. For example, how many organizations disable all unused network ports throughout their building and only enable them upon request? Furthermore, of those who answer “yes, I do that,” how many secure the ports that are in use so that they automatically disable when a different MAC address shows up on the port? If these safeguards are not implemented on physical network ports, any visitor (or employee) can connect whatever network device they want. Think of all the open physical wall jacks in your building: if they are all hot, every one of those ports is a way for folks to start infiltrating your network.
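
As one concrete example of the MAC-address safeguard mentioned above, Cisco switches offer a port-security feature that shuts a port down when an unexpected MAC address appears (a sketch only; the interface name is a placeholder, and other vendors use different syntax):

interface GigabitEthernet0/1
 switchport mode access
 switchport port-security
 switchport port-security maximum 1
 switchport port-security mac-address sticky
 switchport port-security violation shutdown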

I’ve personally seen an employee plug a Linksys router into an open port in a conference room because they wanted WiFi for their phone. Think about that: a completely open access point plugged directly into your internal network. The only way it was discovered was an access point/WiFi reconnaissance scan conducted by the security team. Who knows how long it had been on the network, who had connected to it, or what those individuals did while on the network. So, with all that said, are you really comfortable relying on Active Directory or your agent-based A/V or agent-based patching solution to tell you what’s on your network?

You cannot secure what you do not know about; thus, finding the unknowns in your network and maintaining a complete asset inventory is the first step in building a secure network. A great way to start identifying what is in your network is by conducting asset discovery scans. Typically, asset discovery is bundled in with vulnerability scanning that is conducted during hours of low network activity. By scanning daily, and alerting on any new assets that appear on your network segments, you ensure that you are not only building a list of known assets, but also discovering any unknowns that may be on your network. By taking this first step of discovering both your assets and the vulnerabilities on those assets, you are building the foundation for a secure network.
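
If you don’t have a dedicated discovery or vulnerability scanner yet, even a scheduled ping sweep with a free tool like nmap gives you a baseline you can diff from day to day (a rough sketch; the subnet and file names are placeholders). The first command records which hosts answered today, and the second compares it against a previous day’s list to spot new or unknown devices:

$ sudo nmap -sn 192.168.1.0/24 -oG - | awk '/Up$/{print $2}' > assets-$(date +%F).txt
$ diff assets-2016-11-04.txt assets-2016-11-05.txt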

Malware and How to Deal with It

The information security landscape is evolving daily and it seems like there are just as many products out there as there are exploits. For example, there are intrusion detection systems (IDS), intrusion prevention systems (IPS), file integrity monitoring (FIM), data loss prevention (DLP), next-generation firewalls, anti-virus, anti-malware, email gateways, honeypots, and even products that use artificial intelligence and behavioral analytics to find threats on your networks.

With all the potential solutions out there, it’s easy to get overwhelmed when all you really want to do is protect your network and ensure maximum availability. I’ve seen many approaches to securing a network, from the easy-but-extreme example of air-gapping a network, to going overboard and purchasing as many security products as possible. In security, as with anything, you must strike a comfortable balance between securing your network and not interrupting your users’ workflow while still balancing budgetary concerns and time constraints.

I saw a need for someone in security to strike that balance and cut through the FUD (Fear, Uncertainty, and Doubt), which is why I submitted my article “Malware and How to Deal With It” to the ISSA Journal. It was selected for publication in the Journal’s July 2015 issue, as well as being chosen as one of the four best articles of 2015 for the December issue. I hope you enjoy reading it as much as I enjoyed writing it.

Article

How to Update Tripod

My wife is a brilliant photographer and uses the Tripod WordPress theme to run her site. The issue is that the theme’s documentation is kind of lacking, so I figured I would document the process in hopes of helping others.

It’s actually pretty simple; it’s just not described in their documentation as an “Update” procedure.

  1. Make a backup! SFTP to your server and copy your entire directory down to your local machine. You will also want to log in to phpMyAdmin to make a backup of your database (see the command sketch after this list for a command-line alternative).
  2. Update your WordPress to the latest version.
  3. Optional: Install Maintenance Mode plugin and set your site in maintenance mode.
  4. Download the latest version of the theme from Envato Market/themeforest.com. If you download the full version plus documentation, it will come as a .zip; unzip it.
  5. SFTP to your site and delete the tripod theme (../../../wp-content/themes/tripod).
  6. Go to Appearance > Themes > Add New > Upload Theme > Choose File and choose the File named tripod_installable_theme_v_x.x.zip and click Install Now.
  7. Activate theme and make sure everything looks good.
  8. Optional: Disable Maintenance Mode.
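
For step 1, if you also have SSH access to the server, mysqldump is a quick command-line alternative to phpMyAdmin for the database backup (a sketch only; the database name and user below are placeholders, taken from your wp-config.php):

$ mysqldump -u wp_user -p wp_database > wp-backup-$(date +%F).sql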

Hopefully this helps you out. If you need assistance, just comment and I’ll do my best to answer your questions.

Use Java Whitelisting To Further Secure Your Organization

If you were to ask any sysadmin what their biggest vulnerability is at the desktop level, they are most likely to say “Java.” In fact, Java has such a bad reputation for being exploited that it’s the butt of many IT jokes. Typically, if you mention Java to a sysadmin, you will very quickly hear a disappointed sigh in response.

Unfortunately, it’s almost impossible to get rid of across the organization because so many business processes rely on Java-based applications. Thankfully, starting in Java 7u40, Oracle has allowed sysadmins to control Java with a whitelist. The whitelist acts almost like a firewall in that the default rule is deny-all. You can whitelist Java applets based on domain or signing key.

This new capability allows sysadmins to secure their organization with a few .vbs scripts, some GPO, and a .jar file.

Software Prerequisites: Java JDK, Java JRE

Disable Java Cache

Throughout my testing, it has become apparent that disabling Java cache yields the best results. I would recommend doing this prior to starting the implementation of the Java whitelist. Below are User Login .vbs scripts that will disable Java Cache for XP and Windows 7. I am by no means a .vbs expert, so you may be able to tweak this into one script to handle both operating systems.

Link to XP

Link to 7

Generate a Code-Signing Certificate & Java Keystore

You will need to sign your whitelist .jar file in order for it to be processed by Java. I was able to generate a code-signing cert from our local CA in our Windows Domain. Getting that cert into a Java keystore is a little tedious, but not difficult. Here’s what I did:

  1. Use Certificates MMC Snap-In to export certificate with private key (Example: C:\Users\username\cert-and-key.pfx)
  2. Open cmd.exe, navigate to the JDK bin directory (C:\Program Files\Java\jdk1.7.0_45\bin)
  3. Import .pfx to Java keystore with the following command
    1. keytool -importkeystore -srckeystore C:\Users\username\cert-and-key.pfx -srcstoretype pkcs12 -destkeystore C:\Users\username\mykeystore.jks -deststoretype JKS
    2. Make note of the alias (something like le-codesigningcertificate-*) and copy it to a safe place.

Create the whitelist (DeploymentRuleSet.jar)

Now comes the fun part: creating the actual whitelist. The whitelist is basically a .xml file, packed into a .jar file (think tarball) and signed with your certificate. Oracle has a great example of said .xml file here: https://blogs.oracle.com/java-platform-group/entry/introducing_deployment_rule_sets

I’ve found it easiest to create the ruleset.xml file in the JDK bin directory to avoid any issues with absolute paths in the following commands. Once you have your ruleset.xml created, it’s just a matter of creating the .jar file, signing it, and sticking it in the right place. The following commands assume you are in the JDK bin directory and that your ruleset.xml is also in that directory.
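
For reference, a minimal ruleset.xml might look something like the following (a rough sketch based on Oracle’s post linked above; the intranet URL is a placeholder): allow applets from one internal domain and block everything else with a friendly message.

<ruleset version="1.0+">
  <!-- allow applets from the internal portal (placeholder URL) -->
  <rule>
    <id location="https://intranet.example.com" />
    <action permission="run" />
  </rule>
  <!-- default rule: block everything else -->
  <rule>
    <id />
    <action permission="block">
      <message>Blocked by the corporate Java whitelist. Contact the help desk to have a site approved.</message>
    </action>
  </rule>
</ruleset>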

  1. Create the .jar file by issuing the following command:
    1. jar -cvf DeploymentRuleSet.jar ruleset.xml
  2. Sign the .jar file (you will need the alias from the previous section for this step):
    1. jarsigner -verbose -keystore C:\Users\username\mykeystore.jks -signedjar DeploymentRuleSet.jar DeploymentRuleSet.jar <paste alias from previous section here>
  3. Copy whitelist to correct location:
    1. Windows: copy DeploymentRuleSet.jar C:\Windows\Sun\Java\Deployment\
    2. Mac: cp DeploymentRuleSet.jar /etc/.java/deployment/

Confirm whitelist is being applied

The whitelist should get applied as soon as the .jar file is copied to the correct location. To test this, open the Java Control Panel and navigate to the Security Tab. You can then click on the “View the active Deployment Rule Set” link to see what whitelist is taking effect.

Further Notes

This took me a couple of tries to get right, so don’t get discouraged if you don’t get it right the first time. One very valuable piece of the Java whitelist is that it allows the sysadmin to specify which version of Java to run on each site. Therefore, if your organization has an application that is limited to a specific Java version, you can now lock down that version of Java to that specific application, while allowing all other applications to run the latest version of Java.
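
Building on the ruleset.xml sketch above, a rule like the following (hypothetical URL) pins a legacy internal app to the 1.7.0_45 runtime it requires, while the other rules let everything else run the latest version:

  <rule>
    <id location="https://legacyapp.example.com" />
    <action permission="run" version="1.7.0_45" />
  </rule>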

sudo /usr/bin/vim: Not a Good Idea

I’ve seen this configuration in /etc/sudoers before, but I wanted to explain a little more about why it is not a good idea. First of all, you are editing your sudoers file with visudo, right? (RIGHT?!) If not, you should be. The reason is that visudo does a syntax check on the /etc/sudoers file before committing. If you have the syntax bunked up, it will let you know and allow you to fix the problem before you commit (you always want to fix before you commit). If you simply edit the /etc/sudoers file directly, bunk up the syntax, and commit it anyway, there’s a good chance that NONE of your sudo config will work.

Now, let’s get on with putting /usr/bin/vim in the sudoers file. I can see why one would do this: perhaps you have web admins who don’t use a code repository and simply make backups on the dev box and edit the configs on the machine. Probably not the best idea, but it happens every day. You likely have a group in /etc/group called something like “webdevs” populated with the names of your web developer accounts:

webdevs:x:599:jsmith,plawrence,ljames,mpayne

Thus, your sudoers file might have a line in it similar to the following (this assumes you have a host alias for the development web servers set to DEVWEB):

%webdevs    DEVWEB = /usr/bin/vim /docroot/index.html

This seems like an innocent thing right? I mean, how much damage can they do? You’ve locked them down to just being able to edit the index.html file in /docroot, right?

WRONG!

The funny thing about vim is that you can press the escape key, then type:

:shell

And it will drop you to a shell. When vim is run under sudo, that’s not just any shell, it’s a root shell. Now everyone in your webdevs group can get a root shell!

So, how do we fix this? Well, we use the program “/usr/bin/sudoedit” in place of /usr/bin/vim. Now, if the user tries the same :shell trick, it drops them to a non-elevated shell.
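
Continuing the earlier example, the sudoers line would look something like this (sudoers recognizes sudoedit as a built-in command name):

%webdevs    DEVWEB = sudoedit /docroot/index.html

The web devs then run sudoedit /docroot/index.html; sudoedit copies the file to a temporary location, opens the editor as the invoking user, and only uses elevated privileges to write the result back.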

tl;dr:

  • Use visudo when editing /etc/sudoers
  • If users are allowed to utilize /usr/bin/vim with sudo privs, you’ve just given them a root shell
  • Use /usr/bin/sudoedit in place of /usr/bin/vim when attempting to allow users to edit privileged files