Sunday, December 20, 2009

Cybercriminals Bypassing Two-Factor Authentication

Two-factor authentication -- used to protect online bank accounts with both a password and a computer-generated one-time passcode -- is supposed to be more secure than relying on a single password. But Gartner Research VP Avivah Litan warns that cybercriminals have had success defeating two-factor authentication systems in Web browsing sessions using Trojan-based man-in-the-middle attacks. A Gartner Research note written by Litan explains that in the past few months, Gartner has heard from many banks around the world that rely on one-time-password authentication systems. Accounts at these banks have been compromised by man-in-the-middle attacks -- the report uses the term "man-in-the-browser" -- despite the use of two-factor security.

One technique that the fraudsters have been using to bypass security controls is call forwarding.

"[B]anks that rely on voice telephony for user transaction verification have seen those systems and processes compromised by thieves who persuade telecom carriers to forward legitimate user phone calls to the thief's cell phone," the report says. "These targeted attacks have resulted in theft of money and/or information, if the bank has no other defenses sufficient to prevent unauthorized access to their applications and customer accounts."

Tuesday, October 13, 2009

BIND 9.5's new features

BIND, which originally stood for Berkeley Internet Name
Daemon, is a suite of DNS (domain name system) software
that provides a DNS server, DNS resolver library, and various
DNS-related tools.

BIND dates back to the early 1980s, when it was designed
to serve the needs of distributed computing communities and to
be compatible with the naming service planned for the DARPA
Internet. Since the mid-1990s, BIND has been maintained and
developed by Internet Systems Consortium (ISC), which has become
well known for its support of many open source projects,
its funding and development of BIND and other open source
software, and its design and advocacy of many Internet standards.
BIND was rewritten for version 9, which was released in September
2000.

According to the Infoblox 2007 DNS Survey, 70% of the Internet's
estimated 11.7 million name servers ran BIND. (Microsoft's
DNS Server ran on 2.7%.) BIND is provided in the default
installations of the NetBSD, FreeBSD, OpenBSD, and DragonFly
operating systems, and it is the DNS server most frequently
recommended on Linux distributions and the various
Unix flavours.

For over a year, ISC has been developing and testing many new
features for BIND 9.5. This article will quickly summarize some of
the significant new features:
• GSS-TSIG support
• DHCID
• Statistics support for named via XML
• UDP Socket Pool
• Handling EDNS timeouts
• O(1) ACL processing

GSS-TSIG, or the Generic Security Service Algorithm for
Secret Key Transaction Authentication for DNS, is documented
in RFC 3645. It is an update to Secret Key Transaction
Authentication (TSIG).

GSS-TSIG is the authentication mechanism of choice for DNS
dynamic update in Microsoft Active Directory.

It is potentially useful for other things, said Rob Austein of
ISC, but the big push for BIND 9.5 was to allow named (the
BIND DNS server) to act as the DNS server for an Active Directory
zone.

GSS-TSIG is a composite of GSSAPI and TSIG – a wrapper
layer built on top of a wrapper layer. It is insanely general, said
Austein, but the common usage is DNS wrapping TSIG wrapping
GSSAPI wrapping SPNEGO wrapping Kerberos 5 – thus for
practical purposes it is a mechanism for using Kerberos 5 to authenticate
DNS.

BIND added the new DHCID Resource Record (RR) type
to keep up with standards. The DHCID RR encodes
DHCP client-identity information; DHCP servers and clients use it
to associate a DHCP client with a DNS name, with the aim of reducing
conflicts in the use of fully-qualified domain names. The record data is
a one-way SHA-256 hash. More details are in RFCs
4701 and 4703.
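
In a zone file, a DHCID record looks much like any other record type. A sketch is below; the owner name is hypothetical and the base64 string merely stands in for a real digest computed per RFC 4701:

client.example.com. IN DHCID AAIBY2/AuCccgoJbsaxcQc9TUapptP69lOjxfNuVAA2kjEA=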

BIND 9.5 adds an experimental HTTP server and statistics
support for the DNS server via XML. It is not a web-based
configuration interface, but a statistics feed that happens to use the HTTP
protocol for delivery because it is flexible and very well-supported,
said Evan Hunt of ISC.
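
Enabling the feed is a matter of adding a statistics-channels block to named.conf. A minimal sketch follows; the loopback address and port 8053 are arbitrary choices for illustration:

statistics-channels {
inet 127.0.0.1 port 8053 allow { 127.0.0.1; };
};

The XML document can then be fetched with any HTTP client, for example curl http://127.0.0.1:8053/.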

Also, BIND 9.5 makes it a bit harder to play games with insecure
DNS via brute-force attack on the 16-bit DNS ID space,
said Austein. The server provides a pool of UDP sockets for queries
to be made over; using eight ports instead of one, for example,
in effect adds three more bits to the search space.

BIND 9.5 makes fallback from EDNS to plain DNS due to timeouts
more visible. EDNS (Extension Mechanisms for DNS)
has been available for around eight years and many servers (including
all root servers) support it.

The problem is that some firewalls do not support EDNS by
default, said Mark Andrews of ISC. There are also some authoritative
servers that fail to respond when they see an EDNS query,
rather than returning an error code as required, said Andrews. Timeouts
may mean network problems, dead servers, broken middleboxes,
or broken authoritative servers.

Falling back to plain DNS will help with the latter, said Andrews,
but it has a negative impact on DNSSEC (which requires
EDNS), especially when there are overloaded links causing
packet loss.

On timeouts, named retries EDNS with a 512-octet UDP size
(which usually allows EDNS to get through a firewall, as the response
is generally not fragmented and is within the sizes allowed by
plain DNS) and then tries plain DNS if still needed. The server
logs this to draw attention to the issue and to get any non-RFC-compliant
boxes replaced or reconfigured, said Andrews.

Andrews said that at some point soon, BIND will not fall back from
EDNS to plain DNS on timeout. He suggests the following to BIND administrators
preparing for EDNS:
• Firewalls and NAT boxes need to handle fragmented
responses, both in and out of order.
• Firewalls need to handle EDNS responses.
• Broken authoritative servers need to be replaced or upgraded,
which first means they need to be identified (the dig test below can help).
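
A quick way to spot an EDNS-hostile path or server is to send an EDNS query by hand and compare it with a plain one. A sketch using dig, with placeholder server and zone names:

dig +bufsize=4096 +dnssec @ns1.example.com example.com SOA

A healthy path returns an answer carrying an OPT pseudo-record; a timeout here, when the same query without the EDNS options succeeds, points at a broken middlebox or authoritative server.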

Also, BIND 9.5 introduces a new ACL-processing engine.
Instead of storing ACLs (i.e., allow-query, allow-recursion,
et cetera) as linear lists that have to be searched every time
a query comes in, it now stores them as modified radix trees. There
should not be any change in the way things are configured,
said Evan Hunt of ISC, but sites with ACLs containing more
than one or two addresses should see an uptick in
queries per second.

More details about BIND 9.5 features can be found in the
BIND Administrator Reference Manual and manual pages.

Monday, October 5, 2009

File Access Permissions on Linux

File protection with chmod
chmod 400 file To protect a file against accidental overwriting.
chmod 500 dir To protect yourself from accidentally removing, renaming or moving files from this directory.
chmod 600 file A private file only changeable by the user who entered this command.
chmod 644 file A publicly readable file that can only be changed by the issuing user.
chmod 660 file Users belonging to your group can change this file; others don't have any access to it at all.
chmod 700 file Protects a file against any access from other users, while the issuing user still has full access.
chmod 755 dir For files that should be readable and executable by others, but only changeable by the issuing user.
chmod 775 file Standard file sharing mode for a group.
chmod 777 file Everybody can do everything to this file.

Special modes sticky bit
• sticky bit
– chmod +t
• when set on
– file: historically, the program text was kept in swap after execution for faster restarts; modern systems ignore the sticky bit on regular files
– directory: users can delete or rename files in this directory only if they own the file (or the directory), regardless of the directory's write permission; see /tmp

Special modes set id
• set user id bit (SUID)
– chmod u+s
• set group id bit (SGID)
– chmod g+s
• when set on
– binary file: when run, it runs with the user and/or group of the file, not the user/group of the person running it.
– directory: (SGID only) every file created in the directory takes the same group as the directory, not the creator's group.
(note: existing and copied files keep their group id)

Special modes numeric (octal) representation
0 setuid, setgid, sticky bits are cleared
1 sticky bit is set
2 setgid bit is set
3 setgid and sticky bits are set
4 setuid bit is set
5 setuid and sticky bits are set
6 setuid and setgid bits are set
7 setuid, setgid, sticky bits are set
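
Numerically, the special-mode digit is simply prepended to the usual three permission digits. A few shell examples (the file and directory names are hypothetical):

chmod 4755 /usr/local/bin/myprog    # setuid + rwxr-xr-x; ls -l shows -rwsr-xr-x
chmod 2775 /srv/shared              # setgid + rwxrwxr-x; ls -ld shows drwxrwsr-x
chmod 1777 /var/tmp/scratch         # sticky + rwxrwxrwx; ls -ld shows drwxrwxrwt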

Special modes textual representation
• SUID: If set, replaces "x" in the owner permissions with "s" if the owner has execute permissions, or with "S" otherwise. Examples:
-rws------ both owner execute and SUID are set
-r-S------ SUID is set, but owner execute is not set
• SGID: If set, replaces "x" in the group permissions with "s" if the group has execute permissions, or with "S" otherwise. Examples:
-rwxrws--- both group execute and SGID are set
-rwxr-S--- SGID is set, but group execute is not set
• Sticky bit: If set, replaces "x" in the others permissions with "t" if others have execute permissions, or with "T" otherwise. Examples:
-rwxrwxrwt both others execute and sticky bit are set
-rwxrwxr-T sticky bit is set, but others execute is not set

Monday, September 28, 2009

Patch Release Timing for Red Hat and Microsoft

Both vendors have adopted a defined patch-release policy. Microsoft's policy groups patches on a monthly cycle, whereas Red Hat bundles "errata" together when possible but often releases security patches as needed.

Specifically, Microsoft's policy is to release patches on the second Tuesday of every month. Red Hat states that patches are released in groups, but gives no indication of any fixed date or release cycle that would let system administrators schedule patch-management cycles.

Wednesday, September 16, 2009

Monitoring

MIS need to monitor security mailing lists, review vendor notifications and Web sites, and research specific public Web sites for the release of new patches. Monitoring will include, but not be limited to, the following:
• Scanning network to identify known vulnerabilities.
• Identifying and communicating identified vulnerabilities and/or security breaches to chief information security officer (CISO) and CIO.
• Monitoring CERT, Microsoft, Symantec and other cybersecurity notifications, as well as the Web sites of all vendors that have hardware or software operating on the network.
Review and evaluation
Once alerted to a new patch, MIS will download and review the new patch within four hours of its release. MIS will categorize the criticality of the patch according to the following:
• Emergency -- an imminent threat to network
• Critical -- targets a security vulnerability
• Not Critical -- a standard patch release update
• Not Applicable -- does not apply to the environment
Regardless of platform or criticality, all patch releases will follow a defined process for patch deployment that includes assessing the risk, testing, scheduling, installing, and verifying.
Risk assessment and testing
MIS will assess the effect of a patch to the corporate infrastructure prior to its deployment. The department will also assess the affected patch for criticality relevant to each platform (e.g., servers, desktops, printers, etc.).
If MIS categorizes a patch as an Emergency, the department considers it an imminent threat to the network and systems, and therefore assumes greater risk by waiting to test the patch than by implementing it immediately.
Patches deemed Critical or Not Critical will undergo testing for each affected platform before release for implementation. MIS will expedite testing for critical patches. The department must complete validation against all images (e.g., Windows, UNIX, etc.) prior to implementation.
Notification and scheduling
MIS' management must approve the schedule prior to implementation. Regardless of criticality, each patch release requires the creation and approval of a request for technical change (RTC) prior to releasing the patch. CISO will decide when notifying staff is necessary.
Implementation
MIS will deploy Emergency patches within eight hours of availability. As Emergency patches pose an imminent threat to the network, the release may precede testing. In all instances, the department will perform testing (either pre- or post-implementation) and document it for auditing and tracking purposes.
For new network devices, each platform will follow established hardening procedures to ensure the installation of the most recent patches.
Auditing, assessment, and verification
Following the release of all patches, MIS staff will verify the successful installation of the patch and that there have been no adverse effects.
User responsibilities and practices
It is the responsibility of each user -- both individually and within the organization -- to ensure prudent and responsible use of computing and network resources.

Wednesday, September 9, 2009

A Secure Nagios Server


Introduction

Nagios is monitoring software designed to let you know about problems on your hosts and networks quickly. You can configure it for use on any network. Setting up a Nagios server on any Linux distribution is a very quick process; making the setup secure, however, takes some work. This article will not show you how to install Nagios, since there are plenty of guides out there, but it will show you in detail ways to improve your Nagios security.
You may be wondering why you should think about securing your Nagios server. Well, consider the amount of information an attacker can get by compromising it.

All the examples below assume you are using Ubuntu. However, these examples will help any user running a Nagios server make it more secure, since the concepts still apply.

Web interface

If you installed Nagios following one of the quick-start guides out there, chances are you set up the web interface. Since Nagios displays it through Apache, Apache's security options are available to us.
Below is an example Apache configuration for a Nagios web interface:


<Directory "/usr/local/nagios/sbin">
Options ExecCGI
AllowOverride None
Order allow,deny
Allow from all
AuthName "Nagios Access"
AuthType Basic
AuthUserFile /usr/local/nagios/etc/htpasswd.users
Require valid-user
</Directory>


The 'Allow from' option is used to grant access only to certain IP addresses and/or networks; the example above allows any IP address to access the web interface. The other options deal with authentication. 'AuthType' defines the type of authentication used; there are two types to choose from, Basic or Digest. Basic authentication transmits your username and password as clear text, while with Digest the password is transmitted as an MD5 digest, which is more secure than clear text.

After making these security improvements, we get the following:


<Directory "/usr/local/nagios/sbin">
Options ExecCGI
AllowOverride None
Order allow,deny
Allow from 192.168.4.
AuthName "Nagios Access"
AuthType Digest
AuthDigestFile /usr/local/nagios/etc/htpasswd.users
Require valid-user
</Directory>


Now only computers on the 192.168.4.0 network have access to the web interface, and we are using Digest authentication instead of the insecure Basic method.

Now we need to add users and passwords to allow access to the web interface. To add a new user with digest authentication, use the command below:

# htdigest -c /usr/local/nagios/etc/htpasswd.users realm username

Digest is more secure than Basic authentication, but the best way to keep your usernames and passwords safe is to use SSL.

Make sure that you restart Apache if you make any configuration changes.

# /etc/init.d/apache2 restart

Best Practices
This section lists some security best practices for setting up a Nagios server.

Don't Run Nagios As Root There should be a normal user called nagios. If Nagios runs as root and is compromised, the attacker can do anything they want to your system.
Lock Down The Check Result Directory Make sure that only the nagios user has read/write access to the check result directory; otherwise an attacker can submit fake host and service checks. This directory is normally /usr/local/nagios/var/spool/checkresults.
Use Full Paths In Command Definitions When defining commands, specify the full path, not a relative one, to any scripts or binaries you're executing.
Secure Remote Agents Examples include NRPE, NSClient, and SNMP. Below we will look at steps to secure the NRPE remote agent.


Secure Remote agents
In this section we will look at ways to make NRPE more secure. This remote agent is used to execute programs on a remote host for checks such as load or disk usage. Since we don't want arbitrary programs or users executing commands on our remote machines, it is important to spend some time making NRPE more secure.

Since NRPE comes with support for TCP wrappers, we can define which hosts have access to it.

Example /etc/hosts.allow

nrpe:192.168.1.91

This allows only 192.168.1.91 to use the remote agent on this host; replace this with the IP address of the host that runs your NRPE checks (normally the Nagios server). Note that this should be configured on both your Nagios server and client.
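
With the agent restricted, you can verify that the Nagios server can still reach it using the check_nrpe plugin. A quick manual test might look like the following; the plugin path matches a default source install, 192.168.1.90 stands in for the monitored host, and check_load is assumed to be defined in its nrpe.cfg:

# /usr/local/nagios/libexec/check_nrpe -H 192.168.1.90 -c check_load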

NRPE should never run as root or any other superuser; it should run only as the nagios user in the nagios group. In /etc/nagios/nrpe.cfg you can check whether or not it is running as nagios.

Example part of /etc/nagios/nrpe.cfg

nrpe_user=nagios
nrpe_group=nagios

Another part of NRPE that can be a security hole is allowing command arguments. We don't want attackers sending malicious arguments that could compromise our system. Sometimes we need to allow Nagios to send command arguments, but if you don't need them enabled -- and most of the time they are not needed -- you should definitely disable them.

To disable them edit /etc/nagios/nrpe.cfg and make sure that you have the below line:

dont_blame_nrpe=0

Make sure you restart NRPE if you make any changes to nrpe.cfg. For more information on how to secure NRPE, please read the file called SECURITY in the package's source distribution.

Secure Communication channels
Any time you communicate over a network, you should be thinking about how to make it more secure. This is where SSL is needed.

NRPE allows you to enable SSL, but your package must have been configured with the --enable-ssl option. Note that if NRPE is configured to use SSL, both the client and the server instance must have it enabled for it to work.

Next, we should also configure SSL so that we don't send our web interface passwords in clear text.

# openssl genrsa -des3 -out server.3des-key 1024
# openssl rsa -in server.3des-key -out server.key
# openssl req -new -key server.key -x509 -out server.crt -days 365
# chmod 600 server.key
# rm server.3des-key
# mv server.crt /etc/ssl/
# mv server.key /etc/ssl/private/

Now that we have generated our certificate we need to tell Apache to use them.
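
That usually means pointing mod_ssl at the files we just moved. A minimal sketch using standard mod_ssl directives; adjust to wherever your distribution keeps its SSL virtual host configuration:

SSLEngine on
SSLCertificateFile /etc/ssl/server.crt
SSLCertificateKeyFile /etc/ssl/private/server.key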


In your Apache configuration you will also need to add the SSLRequireSSL option, for example:


<Directory "/usr/local/nagios/sbin">
SSLRequireSSL
Options ExecCGI
AllowOverride None
Order allow,deny
Allow from 192.168.4.
AuthName "Nagios Access"
AuthType Digest
AuthDigestFile /usr/local/nagios/etc/htpasswd.users
Require valid-user
</Directory>


Remember to restart Apache.

# /etc/init.d/apache2 restart

Where to go from here?

Now you should feel confident that your Nagios server is more secure from attack. The next step is to just install security updates when they are released.

Tuesday, September 1, 2009

The growth of keyloggers

Enterprise security has traditionally been centered around gateway protection, preventing unauthorized access from the outside and securing access to the Internet from inside the company. The last few years have seen the industry's attention extending to include endpoint security, which shares some of the same security threats as the gateway, but also presents new concerns.

Enforcing endpoint security is made more difficult by attackers having physical access to the target machines. Besides negligent or disgruntled employees, an organization's computers can be completely accessible to outsiders. For example, bank PCs are often positioned on a teller's desk, with the computer's backside and wiring exposed to customers. Even PCs inside locked offices are accessible to outsiders during off hours.

Employees may also snoop around in corporate data which they shouldn't have access to, such as other employees' e-mails or financial records. Common methods include connecting USB flash drives and copying data or adding network access points (a practice known as ‘bridging,' where new connection points are opened into a previously isolated network). A subtler but more powerful attack entails leaving behind keyboard eavesdropping modules, known as keyloggers.


Keyloggers are software or hardware modules primarily meant to steal passwords and other sensitive inputs as they are typed into a terminal. They have evolved from easy-to-detect resident programs, to more powerful rootkit-style kernel components, and finally to small hardware plugs that are undetectable to the target system. Their use as a tool for industrial espionage is described in the Joseph Finder novel "Paranoia," in which an attacker installs keyloggers on target computers and collects them days or weeks later, with megabytes of sensitive data logged inside their flash memories.

Commercially available keyloggers may plug into USB or PS/2 keyboard ports, where they look like an ordinary keyboard adapter and go unnoticed unless the user searches for them. Installing one is extremely simple, requiring the same amount of technical knowledge as plugging in a keyboard. Other form factors allow surreptitiously installing keyloggers inside a keyboard, or inside the body of a laptop.

Keyloggers are hard to detect and lead to embarrassing break-ins, and because of these considerations most incidents go unreported. The incidents that do become public show that passwords stolen using keyloggers lead to large-scale attacks with huge losses.

Besides keyboard inputs, keyloggers can target credit card swipes, which usually share the same interfaces as keyboards. Once a magnetic card such as a credit card or an access card is swiped, a keylogger will record that data. A keylogger on a bank PC will obtain passwords entered using access cards, and a keylogger installed on a cashier's machine can gather thousands of valid credit card numbers per day. Cybercrime networks are willing to buy these records for hard cash, using them for unattended purchases over the Internet and telephone, or for creating replica credit and access cards. Until commercial organizations harden their requirements to include endpoint security, this threat will remain prevalent. Government and corporate regulation bodies, such as the Payment Card Industry Security Standards Council, need to address the issue by mandating a higher level of endpoint security.

An advanced keylogger, once installed, can run applications such as an Internet browser or an IM session, or run queries on a database. It can even mount itself as a flash drive and copy data from local and network storage to internal memory, or install malware, spreading infection across the internal network. Finally, the device can wait until the attacker collects it. As flash drive capacities increase every year, attackers can walk away with many gigabytes of sensitive information.


To summarize, keyloggers and other similar devices have not yet become the focus of attention for the security industry, but they have already caused severe security miscues with great costs and strong ramifications in the areas of retail and banking. Any industry which stores sensitive data on corporate networks - and today that's every industry - will eventually be forced to upgrade its infrastructure to defend against attacks on its endpoints.

Sunday, August 23, 2009

True Crypt



Free open-source disk encryption software for Windows Vista/XP, Mac OS X and Linux that allows you to:

- Create a virtual encrypted disk within a file and mount it as a real disk.
- Encrypt an entire partition or storage device, such as a USB flash drive or hard drive.
- Encrypt a partition or drive where Windows is installed (pre-boot authentication).

Encryption is automatic, real-time (on-the-fly) and transparent, and TrueCrypt provides two levels of plausible deniability in case an adversary forces you to reveal the password:

1) A hidden volume (steganography) and a hidden operating system.
2) No TrueCrypt volume can be identified (a volume cannot be distinguished from random data).

Encryption algorithms: AES-256, Serpent and Twofish. Mode of operation: XTS.

Saturday, August 15, 2009

Mozilla Recommends Upgrading from Firefox 3.0.x to 3.5.x

Over the next few days, users of the latest version of Firefox 3.0 will see an information pop-up advising them to upgrade to version 3.5 of the browser. According to a developer blog from Mozilla, the pop-up informs users that Firefox 3.5.2 is twice as fast as Firefox 3.0.13 and includes new features. Previously, in order to stumble upon the new version, Firefox 3.0 users needed to specifically search for updates.

The information pop-up offers users the option of downloading Firefox 3.5 immediately, downloading it later, or skipping it completely. Although the pop-up informs users of potential add-on incompatibilities, users will only find out whether updates for their installed add-ons are available after upgrading to the new version of Firefox. The Mozilla development team says that 90 percent of add-ons have either been updated for version 3.5 or had new versions created.

Tuesday, August 11, 2009

A new fascinating Linux kernel vulnerability

Source code for an exploit of a Linux kernel vulnerability has been posted by Brad Spengler (Brad is the author of grsecurity). I have to tell you right now – this was one of the most fascinating bugs I've read about lately.

Why is it so fascinating? Because a source code audit of the vulnerable code would never find this vulnerability (well, actually, it is possible but I assure you that almost everyone would miss it). However, when you add some other variables into the game, the whole landscape changes.

While the technical details are a bit complex, what happens can be explained fairly simply. The vulnerable code is located in the net/tun implementation. Basically, the developer initialized a variable (sk in the code snippet below) from a pointer (tun) that can be NULL. The developer correctly checked that pointer a couple of lines later and, if it is 0 (NULL), just returns an error. The code looks like this:

struct sock *sk = tun->sk; // initialize sk with tun->sk

if (!tun)
return POLLERR; // if tun is NULL return error

This code looks perfectly OK, right? Well, it is, until the compiler takes it into its hands. While optimizing the code, the compiler sees that tun has already been dereferenced and removes the if block (the check for tun being NULL) completely from the resulting compiled code. In other words, the compiler introduces into the binary a vulnerability that didn't exist in the source code. This causes the kernel to try to read/write data at 0x00000000, which the attacker can map to userland – and this finally pwns the box. There are some other highly technical details here, so check your favorite mailing list for them, or see a video of the exploit on YouTube at http://www.youtube.com/watch?v=UdkpJ13e6Z0. Brad was even able to bypass SELinux and LSM protections with this.

The fix is relatively easy: the check on tun has to be done before sk is assigned.
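A sketch of the corrected ordering, illustrating the idea rather than quoting the actual kernel patch:

struct sock *sk;

if (!tun)
return POLLERR; /* reject NULL before any dereference */

sk = tun->sk; /* safe: tun is known to be non-NULL here */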
Fascinating research that again shows how security depends on every layer, and how even a very expensive source code audit can miss vulnerabilities.

Monday, August 3, 2009

System Monitoring With Atop




Atop is a useful tool that displays system load information alongside process information in a similar style to top.

The top window shows system-level information and the bottom one process information.


The lines in the top window show:

* PRC: Total CPU time in system and user mode, total number of processes and of zombie processes, and the number of processes that exited during the polling interval. The default polling interval is 10 seconds. Use 'i' to change it interactively or 'z' to pause it.
* CPU and CPL: CPU utilization and load (averaged over 1, 5 and 15 minutes).
* MEM and SWP: Amount of memory and swap space that is available and where it's allocated. vmcom and vmlim show how much virtual memory space is committed and what the limit is.
* DSK: disk utilization. avio shows the average number of milliseconds per request.
* NET: Network utilization for the TCP layer ("transport"), the IP layer ("network") and each interface.

All of these use color to indicate if there are any problems.

The bottom window shows active processes (use 'a' to toggle showing all processes). 'g' shows the default process information, or use 'm' to show memory information. VGROW and RGROW on the memory information screen show the increase in virtual and memory usage during the polling interval; check the man page for further information about other columns. Note that you can also kill a process from here by hitting 'k'.
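
Atop can also be started with a different interval or directly in a given view. A couple of hedged invocation examples (flag behavior is worth verifying against your version's man page):

atop 2     # sample every 2 seconds instead of the default 10
atop -m    # start on the memory screen instead of the generic one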

There are various other top-like programs out there for other resources. Try iftop, for example, to take a look at your network interface statistics, or htop to get top information in colour and with scrolling.

Monday, July 20, 2009

Defense strategy against spyware

• Keep your operating system and browser patched.
• Use a hardware or software firewall.
• Use and regularly update quality anti-virus and anti-spyware programs.
• Read all end-user license agreements and privacy policies carefully. If in doubt, don't install the software.
• Beware of "free" software, often offered in exchange for accepting adware.
• Don't normally run as administrator. Set up a regular user account for day-to-day work and log on as administrator only to install patches, etc.
• Think about what software you actually need.
• Never click on links in email unless you know or can verify who sent it.
• Beware of pornography, online gaming, get-rich-quick and other high-risk web sites.

Monday, July 13, 2009

Spam control

Spam control is big business in organizations. Employees having to deal with unsolicited commercial/bulk mail not only reduces productivity but also eats into the company's bottom line.

Another thing that eats into the company's bottom line is the lost productivity and disturbance caused by Microsoft Windows, with its various vulnerabilities, viruses, worms, trap doors and other malware -- not to mention crashes, of course.


Spam control invariably falls under one of the following categories:
• Bayesian filtering and contextual analysis
• Heuristic filtering based on known keywords/bad words
• CRM114 Markovian-chain-based filtering
• Vipul's Razor / DCC (Distributed Checksum Clearinghouse) collaborative checksums with manual reporting – Gmail uses this approach heavily
• Greylisting, to stop spam right at the MTA level
• IP address blacklisting and e-mail address whitelisting
• TMDA – a cure worse than the disease (only approved senders can send mail)
• RBL lists, Spamhaus (politically sensitive spam control techniques)
• Sender Policy Framework (SPF) – not a bad idea per se, but it does not work well

Friday, July 10, 2009

New IDS/IPS technology

Recently, while perusing the intertubes, I ran across a new IDS/IPS technology, PHPIDS (http://www.php-ids.org). It is an interesting and simple concept that can add an additional layer of security to your web application(s).

Tuesday, July 7, 2009

Benefits of Firewalls

A firewall provides a leveraged choke point for network security. It allows the corporation to focus on a critically vulnerable point: where the corporation’s information system connects to the Internet. The firewall can control and prevent attacks from insecure network services. A firewall can effectively monitor all traffic passing through the system. In this manner, the firewall serves as an auditor for the system and can alert the corporation to anomalies in the system. The firewall can also log access and compile statistics that can be used to create a profile of the system.


Some firewalls permit only email traffic through them, thereby protecting the network against any attacks other than attacks against the email service. Other firewalls provide less strict protections, blocking only services that are known to be problems.


Generally, firewalls are configured to protect against unauthenticated interactive logins from the outside world. This, more than anything, helps prevent vandals from logging into machines on your network. More elaborate firewalls block traffic from the outside to the inside but permit users on the inside to communicate freely with the outside.


Firewalls are also important because they provide a single choke point where security and auditing can be imposed. Unlike a situation where a computer system is being attacked by someone dialing in with a modem, the firewall can act as an effective phone tap and tracing tool. Firewalls provide an important logging and auditing function; often they provide summaries to the administrator about the kinds and amounts of traffic that passed through, how many attempts there were to break in, and so on.

The following are the primary benefits of using a firewall:
• Protection from vulnerable services
• Controlled access to site systems
• Concentrated security
• Enhanced privacy
• Logging and statistics on network use and misuse
• Policy enforcement

Sunday, July 5, 2009

Wireshark 1.2 tutorial: Open source network analyzer's new features

Wireshark is a staple of any network administrator's toolkit, and it can be equally useful for any network solution providers or consultants who troubleshoot business networks. Most of the readers of this tutorial have probably used Gerald Combs' open source protocol analyzer for years. In this edition of Traffic Talk, I'd like to discuss a few new features of Wireshark as present in the 1.2 version released on June 15, 2009. I use Windows XP SP3 as my test platform.

To try Wireshark 1.2, I uninstalled Wireshark 1.0.8. I had no trouble replacing 1.0.8 with 1.2, and I allowed the installer to replace my old version of WinPcap with the newer WinPcap 4.1beta5 bundled with Wireshark 1.2.

I decided to try running Wireshark as a user with no administrative privileges. I relied on manually starting the WinPcap driver called "NPF" in order to give Wireshark the privileges required to sniff traffic on my laptop's wireless NIC. To start NPF manually, I ran the following:

C:\>runas /u:administrator "net start npf"
Enter the password for administrator:
Attempting to start net start npf as user "NEELY\administrator" ...

C:\>sc query npf

SERVICE_NAME: npf
TYPE : 1 KERNEL_DRIVER
STATE : 4 RUNNING
(STOPPABLE,NOT_PAUSABLE,IGNORES_SHUTDOWN)
WIN32_EXIT_CODE : 0 (0x0)
SERVICE_EXIT_CODE : 0 (0x0)
CHECKPOINT : 0x0
WAIT_HINT : 0x0

The "net start npf" command is sufficient to launch Wireshark with sniffing capabilities. I ran the "sc query npf" to show details on the NPF driver.

Now I was ready to start Wireshark, which I did using the desktop icon added during installation.

Wednesday, July 1, 2009

DenyHosts Installation and Configuration

DenyHosts is a script intended to be run by Linux system administrators to help thwart SSH server attacks (also known as dictionary based attacks and brute force attacks).

If you've ever looked at your ssh log (/var/log/secure on Red Hat, /var/log/messages on OpenSuSE, etc.) you may be alarmed to see how many hackers have attempted to gain access to your server. Hopefully none of them were successful (but then again, how would you know?). Wouldn't it be better to automatically prevent attackers from continuing to try to gain entry into your system?

DenyHosts attempts to address the above, and more, by monitoring invalid login attempts in the authentication log and blocking the originating IP addresses, adding entries to /etc/hosts.deny. DenyHosts will also inform Linux administrators about offending hosts, attacked users and suspicious logins.

Features include:
* Parses authentication log to find all login attempts and filters failed and successful attempts
* Synchronization mode allows DenyHosts daemons the ability to share data via a centralized server to proactively thwart attacks
* DenyHosts can be run from the command line, from cron or as a daemon
* Records all failed login attempts for the user and offending host
* For each host that exceeds a threshold count, records the evil host
* Keeps track of each non-existent user when a login attempt failed
* Keeps track of each existing user (eg. root) when a login attempt failed
* Keeps track of each offending host
* Keeps track of suspicious logins (that is, logins that were successful for a host that had many login failures)
* Keeps track of the file offset, so that it can reparse the same file (/var/log/secure) continuously (until it is rotated)
* When the log file is rotated, the script will detect it and parse from the beginning
* Appends /etc/hosts.deny and adds the newly banned hosts
* Optionally sends an email of newly banned hosts and suspicious logins
* Keeps a history of all user, host, user/host combo and suspicious logins encountered which includes the data and number of corresponding failed login attempts
* Maintains failed valid and invalid user login attempts in separate files, so that it is easy to see which valid user is under attack (giving you the opportunity to remove the account, change the password or change its default shell to something like /sbin/nologin)
* Upon each run, the script will load the previously saved data and re-use it to append new failures
* Resolves IP addresses to hostnames, if available
* /etc/hosts.deny entries can be expired (purge) at a user specified time

Installation: Use "1-click" installer to install DenyHosts
OpenSuSe 11.1 - Install DenyHosts
OpenSuSe 11.0 - Install DenyHosts

Configuration of DenyHosts:
The main configuration file is /etc/denyhosts.conf. Most of the defaults are fine for normal operation, but you can tweak them to suit your needs; see the comments in the file for details on each setting.
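
A few of the settings you are most likely to adjust; a hedged sketch (the option names follow the stock denyhosts.conf, the values here are only examples):

SECURE_LOG = /var/log/messages    # authentication log to watch (distribution-dependent)
HOSTS_DENY = /etc/hosts.deny      # where blocked hosts are recorded
BLOCK_SERVICE = sshd              # service name written to hosts.deny
DENY_THRESHOLD_INVALID = 5        # failed attempts allowed for non-existent users
DENY_THRESHOLD_VALID = 10         # failed attempts allowed for existing users
PURGE_DENY = 4w                   # expire blocked entries after four weeks
ADMIN_EMAIL = root@localhost      # where reports are mailed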

A few other important settings:
# vi /var/lib/denyhosts/allowed-hosts
# vi /etc/hosts.allow

Add to these two files the IP(s) you will use to connect to the system running DenyHosts, so that you aren't inadvertently denied access to your own system(s).

Starting the service and marking it to run on each system reboot:
# service denyhosts start; chkconfig --level 2345 denyhosts on

Tuesday, June 23, 2009

Defensible Network Architecture

A Defensible Network Architecture is an information architecture that is:


1. Monitored. The easiest and cheapest way to begin developing DNA on an existing enterprise is to deploy Network Security Monitoring sensors capturing session data (at an absolute minimum), full content data (if you can get it), and statistical data. If you can access other data sources, like firewall/router/IPS/DNS/proxy/whatever logs, begin working that angle too. Save the tougher data types (those that require reconfiguring assets and buying mammoth databases) until much later. This needs to be a quick win with the data in the hands of a small, centralized group. You should always start by monitoring first, as Bruce Schneier proclaimed so well in 2001.

2. Inventoried. This means knowing what you host on your network. If you've started monitoring, you can acquire a lot of this information passively. This is new to DNA 2.0 because I had previously assumed it would already have been done. Fat chance!

3. Controlled. Now that you know how your network is operating and what is on it, you can start implementing network-based controls. Take this any way you wish -- ingress filtering, egress filtering, network admission control, network access control, proxy connections, and so on. The idea is that you transition from an "anything goes" network to one where activity is authorized in advance, if possible. This step marks the first time where stakeholders might start complaining.

4. Claimed. Now you are really going to reach out and touch a stakeholder. Claimed means identifying asset owners and developing policies, procedures, and plans for the operation of that asset. Feel free to swap this item with the previous. In my experience it is usually easier to start introducing control before making people take ownership of systems. This step is a prerequisite for performing incident response. We can detect intrusions in the first step. We can only work with an asset owner to respond when we know who owns the asset and how we can contain and recover it.

5. Minimized. This step is the first to directly impact the configuration and posture of assets. Here we work with stakeholders to reduce the attack surface of their network devices. You can apply this idea to clients, servers, applications, network links, and so on. By reducing attack surface area you improve your ability to perform all of the other steps, but you can't really implement minimization until you know who owns what.

6. Assessed. This is a vulnerability assessment process to identify weaknesses in assets. You could easily place this step before minimization. Some might argue that it pays to begin with an assessment, but the first question is going to be: "What do we assess?" I think it might be easier to start disabling unnecessary services first, but you may not know what's running on the machines without assessing them. Also consider performing an adversary simulation to test your overall security operations. Assessment is the step where you decide if what you've done so far is making any difference.

7. Current. Current means keeping your assets configured and patched such that they can resist known attacks by addressing known vulnerabilities. It's easy to disable functionality no one needs. However, upgrades can sometimes break applications. That's why this step is last. It's the final piece in DNA 2.0.

Monday, June 22, 2009

The Virtualization Wars Heat Up: Experts Offer Four Predictions

Remember the Unix wars of 20 years ago? When the Unix wars began in the mid-1980s, it seemed clear which operating system would lead computing into the 21st century. Broadly speaking, UC Berkeley backed BSD, and AT&T backed System V.

Today, BSD still runs the occasional application. But when was the last time you heard of anyone using System V?

In fact, in this decade, as x86 virtualization adoption has accelerated, the operating system has become less important than the virtualization environment: the hypervisor and everything around it. As the virtualization field continues to develop toward a critical stage, it has clearly become this decade's equivalent of the Unix wars, especially in recent months.

Of course, there are key differences between the two battles (for example, the environments can coexist, and there was never any intent to cooperate). The striking similarity, though, is that both represent a shift in the fundamental architecture of computing infrastructure.

Schorschi Decker, editor of the "Right Virtual World" blog column on the ToutVirtual site, recently offered some views and predictions about the future of virtualization environments in a post titled "Retrospective: What is New is Old?". The post is worth a read. Here are some of his views on today's major environments:

Xen will not survive -- not because it is bad, but because market share decides everything. Citrix has not captured the low-end virtualization market, and now, as other competitors mature, companies like Citrix face serious threats. Xen is not free, nor is it as cheap as KVM and Hyper-V -- at least not as cheap as people would like.

Hyper-V has two more years of hard work ahead of it, because it is still weak; so far, being free is about the only thing that makes it acceptable. Microsoft will make Hyper-V a success, just as it beat Novell in the distributed server market: IT managers simply lack the nerve not to choose Microsoft.

On VMware, Decker notes that VMware is not cheap either. His view was reinforced at VMware's recent conference, which featured heated discussion of VMware's features and pricing. Frankly, he says, VMware is competent, but it is quite clear that VMware does not listen to its users; it has been around for five or six years and keeps adding features without strengthening the existing ones for enterprise scale and scope. Decker's post does not mention Oracle, even though Oracle has repeatedly positioned itself as a virtualization vendor. In mid-April, Oracle disclosed its plan to acquire Sun Microsystems, and last week it announced plans to acquire Virtual Iron; even before that, Oracle regarded itself as a major virtualization player. It is still too early, however, to cast Oracle in that role.

ServerWatch executive editor Amy Newman offers the following predictions for the virtualization wars:

Citrix will position Xen around the client, and most of its technical innovation will be what we see today. Citrix is likely to cooperate more deeply with Microsoft.

Hyper-V will show up in most enterprises, but will be used mainly in small and midsize environments. Hyper-V may not be free, but it is part of Windows Server 2008 and will probably remain part of later operating systems in some form. Enterprises will eventually need to upgrade, and the initial view will be that using what is already there is easy and cheap -- particularly if Microsoft's partners build an ecosystem around Hyper-V, which already seems to be happening.

Oracle and VMware will fight over the Fortune 500 market, with infrastructure tooling as the key differentiator. Both companies will focus on the segment willing to pay for the resources it needs; as long as the products actually deliver what they promise, these enterprises will pay.

If we take the Unix wars as a model, a fourth prediction follows: not every major player has appeared yet. Arguably the biggest influence on the Unix wars was the arrival of Linux, which was born as a graduate-school project in 1991, grew rapidly from the mid-1990s, and eventually matured to the point where it could replace Unix in compute-intensive environments.

Linux, however, was not really Unix. In the same way, virtualization's "killer app" may turn out to be something quite different from the virtualization environments we know today.

Monday, June 15, 2009

Secure alternative to telnet

Telnet is a protocol allowing you to connect to a remote system and run programs and commands on that system. It is very old and still very much in use today.

Unfortunately, Telnet, by default, does not encrypt any data sent over the connection (including passwords), and so it is often practical to eavesdrop on the communications and use the password later for malicious purposes; anybody who has access to a router, switch, hub or gateway located on the network between the two hosts where Telnet is being used can intercept the packets passing by and obtain login and password information (and whatever else is typed) with any of several common utilities like tcpdump and Wireshark.

On the other hand, a program called ssh exists that can replace both telnet and ftp in a secure, encrypted way.

Ssh stands for Secure Shell. It encrypts each connection with a random key, so that it is impossible, or at least very hard, for a third party to decrypt the connection and find the password, or spy on you.

Use PuTTY if you are on Windows, or use the "ssh" command (example shown below) from a Linux/UNIX box, to connect to the remote server.

# ssh 192.168.0.2
The authenticity of host '192.168.0.2 (192.168.0.2)' can't be established.
RSA key fingerprint is 2b:91:9b:c1:a7:57:91:dc:93:b3:04:50:c0:b9:bd:ba.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.0.2' (RSA) to the list of known hosts.
Password:
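
Ssh's companion tools replace ftp in the same way. For example (the remote path is hypothetical):

# scp report.txt user@192.168.0.2:/home/user/
# sftp user@192.168.0.2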

Thursday, June 11, 2009

Securing DNS Servers - Part 2

Last week's blog article discussed some of the security issues surrounding DNS servers. This article wraps up the topic by offering some tips on securing your server. The good news is that most of these problems have been solved and you can take steps to secure your name server and your zone data. Although there is no panacea (see Why the DNS is broken, in plain language), taking these steps will reduce the pool of problem name servers on the Internet.

1. Stay up to date

Make sure your name servers are running the latest version available. Many of the cache poisoning attacks have been fixed in later versions of DNS software. For example, recent versions of ISC BIND added relevancy checks for information in DNS replies, randomized source ports, etc.

2. DNS security

DNS Security Extensions (DNSSEC) add the ability to ensure authentication and integrity of DNS records through the use of signed DNS zones. With DNSSEC, each record in a zone is signed (and the absence of a record is detectable), and a chain of trust from the root name servers to the zone is used to authenticate replies as coming from an authorized name server. DNSSEC prevents your domain from being spoofed to DNSSEC-aware resolvers, eliminating cache poisoning attacks. The two drawbacks to DNSSEC are that it adds maintenance overhead for updating zones and keeping the signatures current, and that not all top-level domains (TLDs) provide for the DNSSEC chain of trust. The second issue has been addressed with techniques such as DNSSEC Look-aside Validation. A good getting-started guide is available: DNSSEC in 6 minutes

3. Turn off public recursive querying

Turning off public recursive querying will not only make you a good net citizen by taking you out of the pool of possible amplification-attack servers, it will also close off the theft of service problem described in part 1. To accomplish this, simply add access control over which IPs can query your server. First, you will want to limit general queries to only those IPs that should be able to use your server for DNS resolution (internal network, customers, etc.). If you are using ISC BIND, this can be done in named.conf's options section; note that the acl must be defined before it is referenced:

acl "trusted"
{
localhost;
192.168.0.0/16; // Internal IPv4
3ffe:470:1f01:642::/64; // Internal IPv6
130.215.24.0/24; // DMZ
};

options
{
...
allow-query
{
trusted;
};
allow-query-cache
{
trusted;
};
allow-recursion
{
trusted;
};
...
};

This limits the machines allowed to query your name server for any information to those listed in the "trusted" ACL. Second, you will want to open up queries for the zones your name server publicly publishes:

zone "example.com"
{
...
allow-query
{
any;
};
...
};

This allows queries to the example.com zone from any host on the Internet.

Once these restrictions are in place, test from both a non-trusted machine and a trusted machine. The trusted machine should get an accurate response. The non-trusted machine should get a response indicating that the query was refused and recursion was not available:

> dig A google.com @your-name-server.example.com
...
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 27272
...
;; WARNING: recursion requested but not available
...

4. Turn off AXFR

Finally, to prevent information leakage, you should limit zone transfers to only those machines that provide secondary service for your zones. To accomplish this, turn off zone transfers globally and then turn them on for each individual zone. For example, with ISC BIND:

options
{
...
allow-transfer
{
none;
};
...
};

zone "example.com"
{
...
allow-transfer
{
10.210.100.1;
3ffe:470:1f01:642::1;
};
...
};

This turns off all zone transfers for all zones served by your name server and then allows example.com to be transferred only by IPv4 address 10.210.100.1 and IPv6 address 3ffe:470:1f01:642::1.
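
As with recursion, it is worth testing the restriction from a host that is not a listed secondary; dig can request a transfer directly (the name server name is a placeholder):

> dig AXFR example.com @your-name-server.example.com
...
; Transfer failed.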

Sunday, June 7, 2009

Why is the Snort IDS still alive and thriving?

No one wants to simply "detect" intrusions. Everyone, quite rationally, wants to prevent intrusions. Leading up to 2003, IDS vendors claimed ever greater capabilities to detect intrusions, with supposedly lower false positive rates. Customers naturally asked the question, "If you can detect it, why can't you prevent it?" Companies selling so-called "intrusion prevention systems" answered "We can!" and dealt a body blow to the IDS market.

The undeniable fact of the matter, however, is that preventing a network-based intrusion requires detecting it. No one has built, or ever will build, a network-based (or host-based, or anything-else-based) system that performs 100% accurate detection, so that means 100% prevention is also impossible. What should you do with events that are not regarded with 100% confidence as being malicious? If you block them, you could deny legitimate business traffic. The sensible alternative is to alert on them and let a human analyst investigate the situation. Hence, we have returned to seeing IDS as a useful tool. IPS, incidentally, is quickly becoming another feature on the network firewall.

Thursday, June 4, 2009

Internet Information Services (IIS) sees big changes in Windows Server 2008

Over the years, Internet Information Services (IIS) -- Microsoft's flagship Web server product -- has received a lot of flak for being hacked and compromised. With the release of Windows Server 2008, however, Microsoft had the opportunity to move past those stereotypes and do something really great – and this time, the company came through. In fact, Microsoft and the IIS team went above and beyond what I had expected by completely redesigning and overhauling IIS's core functionality and design.

What's new with IIS?
Microsoft has taken the core functionality of IIS and broken it down into modules. You can take any one of these modules and plug it in, unplug it, extend it, or even rip the code out and not use it at all.

In other words, you can turn any module in IIS on or off whenever you want. For example, if you don't use basic authentication on your websites, you can simply remove that module, as sketched below. Likewise, if your application does not take advantage of the Common Gateway Interface (CGI), just remove that specific component.
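
On a running IIS 7.0 box this kind of surgery can be scripted with the appcmd utility; a hedged sketch (verify the exact module name with the list command first, as names vary by feature):

%windir%\system32\inetsrv\appcmd.exe list modules
%windir%\system32\inetsrv\appcmd.exe delete module BasicAuthenticationModule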

Now when you deploy a brand new Web server, you can choose what components you want and only run those components. This not only allows you to further secure IIS but it also provides a huge performance boost as IIS will run faster than ever before.

Another area that I am impressed with is ASP.NET integration. Previously, ASP.NET sat on top of IIS and complemented it very well. In version 7.0, IIS and ASP.NET are completely integrated. Included in this integration are the entire .NET Framework, ADO.NET and the next version of Microsoft's Web services platform, code-named Indigo.

Ease of use with IIS

So how does all this help you? Well, administrators now have one configuration point for all components as opposed to two or more, which should make life a lot easier on those using IIS.

Wednesday, June 3, 2009

Securing DNS Servers - Part 1

Without a doubt, the most critical infrastructure component on the Internet is DNS. Every other service, including e-mail, depends on it. Yet, surprisingly enough, a large percentage of DNS servers have yet to be secured. In this first of a two part posting, I’ll describe some of the issues surrounding DNS servers. Next week’s post will include steps you can take to secure your servers.

Outside of the usual implementation specific security bugs, there are four main reasons to improve the security of your Internet facing name servers:

1. Spoofing

Over time various methodologies of spoofing DNS results have surfaced using a technique called cache poisoning. This takes advantage of the DNS server’s desire to cache answers for future use in order to cut down on network traffic and reply latency. Initially this was done by providing poisoned data in the additional information returned with a legitimate reply. Lately, these attacks have tried to take advantage of weaknesses in the DNS protocol to poison the DNS server. To read more about cache poisoning and the techniques involved, I recommend the Illustrated Guide to the Kaminsky DNS Vulnerability.

2. Denial of service attacks

A popular method of performing a denial of service attack is attacking name servers. If a user can't translate www.google.com to an IP address, they can't reach the Google web site. A popular technique for knocking a target server offline is the DNS amplification attack, in which an attacker uses a set of DNS servers that are configured to respond to all recursive queries (regardless of source). In the attack, a relatively small query is broadcast with a spoofed sender IP address belonging to the intended victim. The recursive servers then reply to the victim with a much larger (amplified) response packet. By employing enough recursive servers, the victim, and at times the recursive servers themselves, are flooded with DNS packets to be processed. For more information on this type of attack, refer to the DNS Amplification Attacks paper by Randal Vaughn and Gadi Evron.

3. Information leakage

For the most part, DNS operates by answering specific questions with specific replies. However, to support synchronizing redundant secondary servers, most DNS servers also allow for a domain’s entire zone to be transferred to the secondary server. However, unless specifically protected, zone transfers are open to any host requesting the information. Typically, a zone file contains all of the publicly available hosts for your site and can be a wealth of information for an attacker.

4. Theft of service

As mentioned in the denial of service attacks section, name servers can be configured to answer recursive queries for any host on the Internet. Even if not used maliciously in attacks, this allows anyone to use your name server for handling DNS client resolution. This is much like the open ("promiscuous") relaying in e-mail that was common before spammers arrived on the scene. While it is nice to offer your services to the Internet at large, this can often lead to abuse and to bearing the burden of responsibility for the actions of others.

Next week, we will look at steps you can take to protect your servers. In the mean time, your homework assignment is to prepare by determining which hosts should be allowed to use your name server for recursive name resolution (i.e., as a DNS client), and, secondly, for each zone you publish on the Internet, which hosts should be allowed to perform a transfer of the zone (i.e., who are the secondaries for the domain).

Sunday, May 31, 2009

Detecting Malicious PDFs

Last night at the NE Ohio Information Security Forum I gave a presentation on Detecting Malicious PDFs. I'm still not sure if I'm going to release the presentation, but I am going to release a Snort signature that I've found useful for detecting evil PDFs.

alert tcp $EXTERNAL_NET $HTTP_PORTS -> $HOME_NET any (msg:"Potential Malicious PDF (OpenAction JavaScript)"; flow:from_server,established; content:"%PDF-"; content:"/OpenAction"; content:"/JS"; sid:1000001; rev:1;)

This signature looks for the PDF header (indicating we're dealing with a PDF), then an /OpenAction followed by /JS. Together these indicate that JavaScript will be executed as soon as the document is opened.
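The same heuristic is easy to apply to files already on disk; here is a crude sketch using standard shell tools (it only catches uncompressed, unobfuscated PDFs):

$ grep -l '/OpenAction' *.pdf | xargs -r grep -l '/JS'

Each file name printed contains both markers and deserves a closer look.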

Yes, I realize this signature can easily be bypassed with PDF obfuscation. However, I've found that attackers are not yet using obfuscation very much. Let me know if this is useful to you.

Tuesday, May 19, 2009

Reducing load on a web server by using a reverse proxy - Squid

Many large organizations use caching proxy servers to save on network bandwidth utilization (and costs) and to improve browsing response times. In fact, an entire industry has grown up around caching proxy appliances. But in the open source world, we've had one of the most advanced proxy servers for many, many years: Squid (http://www.squid-cache.org) is to caching proxy servers what Apache is to web servers.

A quick-win method of reducing load on a Web site is to use a reverse proxy, which intercepts requests from clients and then proxies those requests on to the Web server, caching the response itself as it sends it back to the client.

This is useful because, for static content, the proxy doesn't always have to contact the Web server; it can often serve the request from its own local cache, which in turn reduces the load on the Web server. This is especially valuable when the Web server also serves dynamic content, since the hardware can then be tuned less for static content (now cached by the front-end proxy) and more for serving dynamic content. It is also often the case that, although the Web server is generating pages dynamically, those pages remain cachable for a few seconds or even a few minutes, and a reverse proxy speeds up their delivery dramatically.

Reverse proxying in this manner can also be used alongside the simple load balancing system, where static and dynamic content are split across separate servers. Obviously the proxy would be used on only the static content Web server.

Squid Configuration for Reverse Proxy:
The reverse proxy has to intercept every request in order to compare it against its cache contents. Let's assume we have two machines:

* Web server serving http://www.example.net/ (192.168.0.1)
* squid.example.net (192.168.0.2)

In the squid.conf file, we begin with the IP addresses and tell Squid to listen for incoming requests on port 80.

# Listen on the proxy's public address and on loopback;
# vhost/vport put Squid in accelerator (reverse proxy) mode
http_port 192.168.0.2:80 vhost vport
http_port 127.0.0.1:80
# No ICP peering with other caches
icp_port 0
# Forward cache misses to the origin web server
cache_peer 192.168.0.1 parent 80 0 originserver default

A reverse proxy for a public Web server has to answer requests from everybody, so we need to add some ACLs.

acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
# Requests destined for our two machines are reverse proxy traffic
acl reverseproxy dst 192.168.0.1 192.168.0.2
http_access allow reverseproxy
# Allow the cache manager interface only from the local machine
http_access allow manager localhost
http_access deny manager
http_access deny all
# Send clients whose requests are denied to the main site
deny_info http://www.example.net/ all

You can of course adjust this configuration to suit your own needs.
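Once Squid is restarted with this configuration, you can verify from any client that the proxy answers for the site (this assumes curl is installed; the Host header matters if the origin serves several virtual hosts):

$ curl -s -o /dev/null -w "%{http_code}\n" -H "Host: www.example.net" http://192.168.0.2/

A 200 here means Squid accepted the request and served the page, either from its cache or from the origin server.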

Sunday, May 17, 2009

(R)ecovery (I)s (P)ossible Linux rescue system

Recovery Is Possible (RIP) is a Slackware-based CD boot/rescue/backup/maintenance system. It supports many filesystem types (ReiserFS, Reiser4, ext2/3, ISO 9660, UDF, XFS, JFS, UFS, HPFS, HFS, MINIX, MS-DOS, NTFS, and VFAT) and contains a bunch of utilities for system recovery. It also has IDE/SCSI/SATA, PCMCIA, RAID, LVM2, and Ethernet/DSL/cable/PPP/PPPoE network support.

RIPLinux comes with lots of tools:

* Programs like fetchmail, curl, wget, ssh/sshd, mutt, links, msmtp, tmsnc, slrn, lftp, and Firefox
* Packages like cdrwtool, mkudffs, and pktsetup for writing backups & files to optical media
* System monitoring: lshw, atop, htop, dmesg, dmidecode, and the mount utility (of course, these tools come with most Linux distros today, but they can be useful for detecting I/O errors, BIOS warnings, and damaged partitions)
* Partitioning: fdisk, cfdisk, Ghost For Linux, GParted, GRUB, Partimage, and TestDisk (the list of supported partition types includes ext4, Reiser4, and NTFS)
* fsck.reiserfs and fsck.reiser4 to check and repair ReiserFS and Reiser4 filesystems
* xfs_repair to repair an XFS filesystem
* jfs_fsck to check and repair a JFS filesystem
* e2fsck to check and repair an ext2 or ext3 filesystem
* ntfsresize to resize Windows NTFS partitions without losing data
* ntfs-3g to write to Windows NTFS partitions
* chntpw to view user information and reset passwords on Windows systems
* cmospwd to recover passwords stored in the CMOS/BIOS

Wednesday, May 13, 2009

How to set access restrictions on user logins

Time Based Restrictions
These examples will limit the login times of certain users. See /etc/security/time.conf for more information and examples. In order to place time restrictions on user logins, the following must be placed in /etc/pam.d/login:

account required /lib/security/pam_time.so

The remaining lines should be placed in /etc/security/time.conf.

1. Only allow user nikesh to login on weekdays between 7 am and 5 pm.

login;*;nikesh;Wd0700-1700

2. Allow users A & B to login every day between 8 am and 5 pm except Sunday.

login;*;A|B;AlSu0800-1700

If a day is specified more than once, it is unset. So in the above example, Sunday is specified twice (Al = All days, Su = Sunday). This causes it to be unset, so this rule applies to all days except Sunday.

Access Based Restrictions
/etc/security/access.conf can be used to restrict access by terminal or host. The following must be placed in /etc/pam.d/login in order for these examples to work:

account required /lib/security/pam_access.so

1. Deny nikesh login access on all terminals except for tty1:

-:nikesh:ALL EXCEPT tty1

2. Users in the group operator are only allowed to login from a local terminal:

-:operator:ALL EXCEPT LOCAL

3. Allow user A to login only from a trusted server:

-:A:ALL EXCEPT trusted.somedomain.com

Sunday, May 10, 2009

Fighting Spam mails

Spam is flooding the Internet with many copies of the same message, in an attempt to force the message on people who would not otherwise choose to receive it. Most spam is commercial advertising, often for dubious products, get-rich-quick schemes, or quasi-legal services. Spam costs the sender very little to send -- most of the costs are paid for by the recipient or the carriers rather than by the sender.

Spammers harvest your e-mail address from web pages, newsgroups, or domain records (if you have your own domain). There are individuals who use robots to extract the addresses, burn them onto CDs, and sell them very cheaply to other spammers. If you publish your e-mail address in clear text on your homepage today, where programs can extract it, you will have a major problem within a few months and you won't be able to stop it; the problem will grow every day.

Now let's discuss some common filtering techniques and how they work. I will not describe exactly how to configure them in each MTA; instead, I suggest you read the documentation that comes with the MTA you have installed. Postfix and Exim are well documented.

Realtime Block lists:
These are DNS-based lists: you check the IP address of the mail server that wants to deliver mail to your server against a blacklist of known spammers. A well-known provider of such lists is www.spamhaus.org. Don't be too enthusiastic about this technique, though, and choose your lists carefully: some block entire IP ranges simply because one spammer once used a dialup connection at that ISP.
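As one concrete example (a sketch for Postfix; zen.spamhaus.org stands in for whatever list you settle on), the check is a single restriction in main.cf:

smtpd_recipient_restrictions =
    permit_mynetworks,
    reject_unauth_destination,
    reject_rbl_client zen.spamhaus.org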

8-bit characters in the subject line:
About 30% of spam originates in China, Taiwan, or other Asian countries these days. If you are sure that you can't read Chinese, you can reject mail that has a lot of 8-bit (non-ASCII) characters in the subject. Some MTAs have a separate configuration option for this, but you can also use regular expression matching on the header:

/^Subject:.*[^ -~][^ -~][^ -~][^ -~]/

This will reject e-mail whose subject line contains four or more consecutive characters outside the ASCII range from space to tilde. Both Exim and Postfix can be compiled with Perl-compatible regular expression support. This method is quite good and keeps out 20-30% of spam.
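To wire the expression into Postfix, for instance, you could use a PCRE header_checks table (a sketch, assuming your Postfix was built with PCRE support; the file path is an example). In main.cf:

header_checks = pcre:/etc/postfix/header_checks

In /etc/postfix/header_checks:

/^Subject:.*[^ -~][^ -~][^ -~][^ -~]/ REJECT 8-bit characters in subject

Run postfix reload after editing; PCRE tables do not need postmap.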

Lists with "From" addresses of known spammers:
Forget it. This used to work back in 1997; spammers today use forged addresses or the addresses of innocent people.

Reject non-FQDN (fully qualified domain name) senders and unknown sender domains:
Some spammers use non-existent addresses in the "From" header. It is not possible to verify the complete address, but you can check its hostname/domain part by querying a DNS server.
This keeps out about 10-15% of spam, and you don't want these mails anyhow, because you would not be able to reply to them even if they were not spam.
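In Postfix, for example, both checks exist as ready-made restrictions:

smtpd_sender_restrictions =
    reject_non_fqdn_sender,
    reject_unknown_sender_domain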

IP address has no PTR record in the DNS:
This checks that the IP address from which you receive the mail can be reverse-resolved into a domain name. It is a very powerful option and keeps out a lot of mail, but I would not recommend it. It does not test whether the mail server's administrator is competent, only whether his backbone provider is: ISPs buy IP addresses from their backbone providers, who buy from bigger backbone providers, and everyone in that chain has to configure their DNS correctly for reverse resolution to work. If somebody in between makes a mistake, or simply doesn't bother, the lookup fails, and that says nothing about the individual mail server at the end of the chain.

Require HELO command:
When two MTAs (mail servers) talk to each other via SMTP, they first say who they are. Some spam software does not bother. Requiring this keeps out 1-5% of spam.

Require HELO command and reject unknown servers:
You take the name given in the HELO command and check in DNS whether it belongs to a correctly registered server. This is very effective, because a spammer using a temporary dialup connection will usually not have a valid DNS record configured for it.
This blocks about 70-80% of all spam, but it also rejects legitimate mail from sites with multiple mail servers where a sloppy system administrator forgot to put the hostnames of all the servers into DNS.
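In Postfix, for instance, requiring HELO and validating the name it carries looks roughly like this (parameter names vary between Postfix versions; older releases call these reject_invalid_hostname and friends):

smtpd_helo_required = yes
smtpd_helo_restrictions =
    reject_invalid_helo_hostname,
    reject_non_fqdn_helo_hostname,
    reject_unknown_helo_hostname

The last restriction is the strict one described above, and the one most likely to bounce mail from misconfigured but legitimate sites.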

Some MTAs have even more options, but the ones above are commonly available in any good MTA. The advantage of all these checks is that they are not CPU-intensive; you will usually not need to upgrade your mail server hardware to use them.

Sunday, May 3, 2009

AutoRun to be disabled in Windows 7, Vista, and XP

Microsoft is reportedly planning to remove part of the AutoRun functionality from its operating systems. In the near future, removable storage devices such as USB flash drives will no longer run automatically when plugged into a PC. The only place AutoRun will be kept is for optical media, since surveys indicate that discs are far less likely to carry infections than flash drives and similar media.

Microsoft has always insisted that its AutoRun feature is safe, yet malware has continually used it to spread aggressively. Although disabling AutoRun may inconvenience some novice users, Microsoft has now decided to disable the feature in Windows 7, Windows Vista, and Windows XP.

Monitoring Bandwidth Usage - iftop

iftop does for network usage what top(1) does for CPU usage. It listens to network traffic on a named interface and displays a table of current bandwidth usage by pairs of hosts.

Overview of IFTOP
* iftop (interface top) derives its name from the standard Unix top command: where top displays real-time CPU usage, iftop displays real-time network bandwidth usage.

* iftop displays the network usage of a specific interface on the host.

* Using iftop, you can identify which host is responsible for slowing down your network.

* To find out which process is causing the problem, note the port number shown by iftop and use netstat -p to identify the process.


Download the source code from the iftop website and compile/install iftop using the following commands:

# tar -zxvf iftop-0.17.tar.gz
# cd iftop-0.17
# ./configure
# make
# make install

Using:
Go to your console and run the command iftop to start monitoring bandwidth usage.
You can also specify a particular interface with the -i option: iftop -i eth1. Some other options:

* -p Enables promiscuous mode, so traffic on any interface (if there is more than one) is checked and counted

* -P Also shows the port each connection is using, on both sides

* -N Do not resolve port numbers to service names; with -P alone iftop shows service names such as :www, while -N makes it show numeric ports such as :80
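For example, to watch eth1 with the ports of every connection shown numerically on both ends, combine the options above:

# iftop -i eth1 -P -N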

Wednesday, April 29, 2009

How to check/repair (fsck) filesystem after crash or power-outage

At some point your system will crash, and you will need to perform a manual repair of your file system. A typical situation is losing power while you are working on the system. You reboot, and the system stops, indicating that you must perform a manual repair using fsck.

fsck (file system consistency check) is a command used to check Linux filesystems for consistency errors and repair them. This tool is important for maintaining data integrity, so it should be run regularly, and especially after an unforeseen reboot (crash, power outage).

Usage: fsck [-sACVRTNP] [-t fs-optlist] [filesystem] [fs-specific-options]

The filesystem can be given either as a device name (e.g. /dev/hda) or as its mount point. Run with no options, fsck checks all devices listed in /etc/fstab. It may be necessary to run fsck from single-user mode.

Note: You need to be root to use any of the commands below.

* Take the system down to runlevel one: # init 1

* Unmount the file system; for example, if it is the /home (/dev/sda2) file system, type:
# umount /home OR # umount /dev/sda2

* Now run fsck on the partition: # fsck /dev/sda2

* Specify the file system type using -t option: # fsck -t ext3 /dev/sda2 OR # fsck.ext3 /dev/sda2

fsck will check the file system and ask which problems should be fixed. If you don't want to type y every time, you can pass the -y option to fsck: # fsck -y /dev/sda2

Please note that if any files are recovered, fsck places them in the lost+found directory of that filesystem (here, /home/lost+found).

* Once fsck has finished, remount the file system: # mount /home

Read man page of fsck for more information.
Make sure you replace /dev/sda2 with your actual device name.

Sunday, April 19, 2009

Overall Sendmail Security

1.File and directory permissions

It is imperative that Sendmail's binaries and configuration files have appropriate permissions. Weak permissions on files and directories can easily result in system compromise. For instance:

Everyone who has write access to your sendmail.cf can use the program form of the F command, combined with setting DefaultUser to 0:0, to make sendmail execute an arbitrary script as root. If that script turns one of your installed shells (or a copy of a shell in /tmp, for instance) into a setuid binary, anyone with local access can get root access.

Attackers may also exploit group-writable .forward and :include: files to gain system access as the file owner.

Protecting the aliases file alone is not sufficient as that is merely a source file to generate the alias database, a db3(3) format file called aliases.db in /etc/mail.

Improper directory ownership can result in root-owned files being overwritten or directory owners being replaced.

To help prevent these situations, sendmail will check the permissions of all sendmail-related binaries, configuration files, and directories on the system. You can force an audit with the following command:

% sudo sendmail -v -d44.4 -bv postmaster


Observe the output closely and ensure your system does not fall prey to weak permissions. Once you have solidified the desired permissions on your system, you may want to employ some combination of file immutability and permissions auditing software like Tripwire, Osiris, or mtree(8).

2.Beware recipient programs

Most sendmail configuration files, including .forward files, :include: mailing lists, aliases, and the sendmail.cf configuration file itself, support the execution of arbitrary programs. We mentioned earlier that .forward and :include: mailing list files are parsed and acted upon in the user context. If you've been diligent, these files will be writable only by the owner, ensuring that the execution of programs is intentional. If you've not been careful, users could easily start running programs as other users.

Still, just the fact that these files point to arbitrary programs means you've got another problem to deal with. All of these programs have suddenly become a part of your mail system, and you'll have to audit them, too. Be especially wary of the aliases file: sendmail will take actions on this file in the daemon user context.

You might want to consider preventing users from passing incoming mail to programs by ensuring that the shell specified for them in the passwd file is not listed in /etc/shells. You can still allow such users to log in by giving them a valid shell that is not in /etc/shells: for example, create a /bin/allow-login shell that is a copy of /bin/tcsh, and ensure /bin/allow-login is not listed in /etc/shells.
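A minimal sketch of that last trick (the paths and the user name are examples; adjust for your system):

# cp /bin/tcsh /bin/allow-login
# grep allow-login /etc/shells    # must print nothing
# usermod -s /bin/allow-login nikesh

The user can still log in interactively, but sendmail will treat the account as one whose mail must not be piped to programs.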

Thursday, April 16, 2009

What is PacketFence?

PacketFence is a free and open source network access control (NAC) system. It is actively maintained and has been deployed in numerous large-scale institutions over the past few years. It can be used to secure networks effectively, from small ones to very large heterogeneous ones, and has run in production environments with thousands of users. Among the markets it serves are:

* banks
* colleges and universities
* engineering companies
* manufacturing businesses
* school boards (K-12)

If your network is a breeding ground for worms, PacketFence is for you. If you have no idea who connects to your network and who owns a particular computer, PacketFence is for you. If you have no way of mapping a network policy violation to a user, PacketFence is for you.

Thursday, April 9, 2009

Encrypt-Decrypt file using OpenSSL

The OpenSSL Project is a collaborative effort to develop a robust, commercial-grade, full-featured, and Open Source toolkit implementing the Secure Sockets Layer (SSL v2/v3) and Transport Layer Security (TLS v1) protocols as well as a full-strength general purpose cryptography library. The project is managed by a worldwide community of volunteers that use the Internet to communicate, plan, and develop the OpenSSL toolkit and its related documentation.

OpenSSL is based on the excellent SSLeay library developed by Eric A. Young and Tim J. Hudson. The OpenSSL toolkit is licensed under an Apache-style license, which basically means that you are free to get and use it for commercial and non-commercial purposes, subject to some simple license conditions.

To encrypt a file:

$ openssl des3 -salt -in file.log -out file.des3
enter des-ede3-cbc encryption password:
Verifying - enter des-ede3-cbc encryption password:

The above will prompt for a password, or you can supply one with the -k option, assuming you're on a trusted server.

To decrypt: openssl des3 -d -salt -in file.des3 -out file.txt -k mypassword
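The same enc front end supports other ciphers, so if you would rather not use triple DES, AES works identically (same password prompt, same -d flag to decrypt; file names here are examples):

$ openssl enc -aes-256-cbc -salt -in file.log -out file.aes
$ openssl enc -aes-256-cbc -d -salt -in file.aes -out file.txt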

Tuesday, April 7, 2009

Nmap



Nmap ("Network Mapper") is a free and open source utility for network exploration and security auditing. It:

* supports many techniques for mapping out networks filled with IP filters, firewalls, routers, and other obstacles;

* can be used to scan huge networks of hundreds of thousands of machines;

* supports most operating systems, including Linux, Microsoft Windows, FreeBSD, OpenBSD, Solaris, IRIX, Mac OS X, and HP-UX, and is easy to start out with;

* is available for free and comes with full source code that you may modify;

* has comprehensive and up-to-date man pages and tutorials;

* has won numerous awards.
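A typical first scan (run as root, against a network you are authorized to probe; the address range is a placeholder) combines a SYN scan with OS detection:

# nmap -sS -O 192.168.1.0/24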

Thursday, April 2, 2009

OpenSSL 1.0.0 beta 1 Released!

After many, many years of 0.9 status, the OpenSSL team has finally released a beta of version 1.0 of their software. The new version incorporates 107 documented changes and bugfixes to the toolkit, and the team asks that you download and test it as soon as possible. The full announcement follows:

OpenSSL version 1.0.0 Beta 1
============================

OpenSSL - The Open Source toolkit for SSL/TLS
http://www.openssl.org/

OpenSSL is currently in a release cycle. The first beta is now released.
The beta release is available for download via HTTP and FTP from the
following master locations (the various FTP mirrors you can find under
http://www.openssl.org/source/mirror.html):

o http://www.openssl.org/source/
o ftp://ftp.openssl.org/source/

The file names of the beta are:

o openssl-1.0.0-beta1.tar.gz
MD5 checksum: 49f265d9dd8dc011788b34768f63313e
SHA1 checksum: 89b4490b6091b496042b5fe9a2c8a9015326e446

The checksums were calculated using the following command:

openssl md5 < openssl-1.0.0-beta1.tar.gz
openssl sha1 < openssl-1.0.0-beta1.tar.gz

Please download and test them as soon as possible. This new OpenSSL
version incorporates 107 documented changes and bugfixes to the
toolkit (for a complete list see http://www.openssl.org/source/exp/CHANGES).

Reports and patches should be sent to openssl-bugs@openssl.org.
Discussions around the development of OpenSSL should be sent to
openssl-dev@openssl.org. Anything else should go to
openssl-users@openssl.org.

The best way, at least on Unix, to create a report is to do the
following after configuration:

make report

That will do a few basic checks of the compiler and bc, then build
and run the tests. The result will appear on screen and in the file
"testlog". Please read the report before sending it to us. There
may be problems that we can't solve for you, like missing programs.

Oh and to those who have noticed the date... the joke is that it
isn't a joke.

Yours,
The OpenSSL Project Team...

Wednesday, April 1, 2009

How to Check and repair mysql tables

mysqlcheck is the command-line program for checking and repairing MySQL tables.
It performs the same functions as the "check table" and "repair table" query statements.

Examples:
# mysqlcheck bugs
This checks all of the tables in the bugs database.

# mysqlcheck bugs flags groups
This checks the flags and groups tables in the bugs database.

Using the --repair option, you can repair tables using the same syntax as above.

Options to mysqlcheck to just check a table are:

--check-only-changed Same as "check table changed" query
--extended Same as "check table extended" query
--fast Same as "check table fast" query
--medium-check Same as "check table medium" query
--quick Same as "check table quick" query

Options to mysqlcheck to repair a table are:

--repair Same as "repair table" query
--repair --extended Same as "repair table extended" query
--repair --quick Same as "repair table quick" query
--repair --use-frm Same as "repair table use_frm" query
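To sweep every database at once, for example from a nightly cron job, mysqlcheck can be pointed at all databases and told to look only at tables that changed since the last check:

# mysqlcheck --all-databases --check-only-changed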

Tuesday, March 31, 2009

How To Block Porn Pictures And Images With SafeSquid Proxy Server

Administrators can use various methods to block access to pornographic websites: URL filters, URL blacklists, keyword filters, and so on. But many porn sites let users register their e-mail addresses and have the latest images and pictures delivered straight to their personal mail. So if a user is allowed access to his personal mail, he can enjoy himself without ever visiting a porn site. Such images also appear regularly as ads and banners on web pages that are not themselves pornographic.

The Pornographic Image Filter can analyze an image in real time and identify those that are pornographic in nature. It analyzes graphical content such as skin tone and contours to identify a pornographic image. It is a commercially distributed add-on plug-in and can be used with SafeSquid to block pornographic images. Although it is only about 85%-90% accurate, it acts as a good deterrent.

Follow the procedure below to install Pornographic Image Filter and use it with SafeSquid:

Download the trial Pornographic Image Filter Add-on Module from the SafeSquid Downloads Page and copy it to a directory on the SafeSquid Server, e.g. in /usr/local/src.

The add-on module is about 11MB in size, and although it is an expired trial version, you can install the files. Once the installation is complete, you can request fresh trial binary files by sending an e-mail to information@safesquid.net; simply replace the files you receive in the installation directory.

Change directory to /usr/local/src

cd /usr/local/src

Untar the file imagefilter.trial.20080730.tar.gz

tar -xzvf imagefilter.trial.20080730.tar.gz

Change directory to imagefilter.trial.20080730

cd imagefilter.trial.20080730

The directory contains the following files:

drwxr-xr-x 3 root root 4096 Sep 4 17:26 imgfilter
-rwxr-xr-x 1 root root 20168 Sep 4 17:28 imgfilter.so
-rw-r--r-- 1 root root 2087 Sep 4 17:30 imgfilter.xml
-rw-r--r-- 1 root root 872160 Sep 4 17:28 libimgfilter.so

Create 'imgfilter' directory in /opt/safesquid/modules/

mkdir /opt/safesquid/modules/imgfilter

Copy the files to /opt/safesquid/modules/imgfilter

cp imgfilter.so /opt/safesquid/modules/imgfilter
cp imgfilter.xml /opt/safesquid/modules/imgfilter
cp libimgfilter.so /opt/safesquid/modules/imgfilter
cp -r imgfilter /opt/safesquid/modules/imgfilter

Create symbolic links for loading image filter module

cd /opt/safesquid/modules
ln -fs /opt/safesquid/modules/imgfilter/imgfilter.so imgfilter.so
ln -fs /opt/safesquid/modules/imgfilter/imgfilter.xml imgfilter.xml
ln -fs /opt/safesquid/modules/imgfilter/libimgfilter.so /lib/libimgfilter.so

Restart SafeSquid

/etc/init.d/safesquid restart

Now when you access the SafeSquid Interface, you should find a new section 'Image filter' under 'Config'.

When you request a fresh trial copy, you will receive the file imgfilter.so.
Just copy this file to /opt/safesquid/modules/imgfilter to renew your trial period.

Now, to configure the Image filter section in the SafeSquid Interface, go to Config => Image filter.
You will see a screen with two global options, Library path and Default template, plus an Image filters subsection with an Add button. Image filtering will classify images based on their likelihood of containing pornographic material.

Click on Add under the Image filters subsection to add a new filter rule. Each rule has the fields Enabled, Comment, Profiles, Threshold, and Template.

Specify the Profiles to which you want to apply this rule; leave it blank to apply it to all.

You can configure the sensitivity of the filter with the Threshold option. The image filter assigns a score to every image it analyzes: -10.0 means the image is unlikely to be porn, whereas 0.0 means it is very likely. To start with, keep the Threshold value at 0.0; you can then fine-tune the filter by altering it. You can also create multiple rules with different Threshold limits for different Profiles.

When an image is blocked, it will appear as a blank box. You can also replace a blocked image with a custom image. For example, if you fill in the Default template value with checkeredgif, a custom image that comes with SafeSquid, the blocked image will be replaced with a checkered box.