Sunday, November 30, 2014

Automate Mysql with Puppet



In my experience, Puppet is a great technology that opens up new possibilities in DevOps and automation. It lets you describe and manage a complete IT infrastructure as code from a central location or from within a cloud infrastructure, and it has changed the way IT services are provided and managed. Features such as role-based access control and activity logging make it easier to define a stable management strategy.

To get started, I created a simple client/server Puppet scenario in which we will automate a MySQL server installation in a Linux environment. The Puppet master is configured to provide modules, manifests and classes to the agent servers across the infrastructure.

After installing the Puppet master, you can use the /etc/puppet/manifests folder to write the code that deploys services to the agent servers.

First, on the Puppet master we should download and install the MySQL module. You can locate it with sudo puppet module search mysql and then install it with:

sudo puppet module install puppetlabs-mysql

After a successful installation, verify that a mysql folder exists under /etc/puppet/modules; it contains the module source for the MySQL server.


Once the folder is there, the module source files are in place. Next we define a site.pp manifest inside /etc/puppet/manifests that tells the agent server to install the MySQL server. The content of the file should look like the following:

node 'agent_node01' { include mysql::server }

This line tells Puppet that the node agent_node01 should include the mysql::server class from the module installed on the Puppet master.
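
If you want more than the module's defaults, the class-style declaration in site.pp can also take parameters. A minimal sketch, assuming the puppetlabs-mysql syntax and purely placeholder credentials and database names, could look like this:

node 'agent_node01' {
  # mysql::server with a few explicit parameters instead of the defaults
  class { 'mysql::server':
    root_password           => 'StrongRootPass123',   # placeholder value
    remove_default_accounts => true,
  }

  # an example application database and user (names are placeholders)
  mysql::db { 'appdb':
    user     => 'appuser',
    password => 'AppUserPass123',
    host     => 'localhost',
    grant    => ['SELECT', 'INSERT', 'UPDATE', 'DELETE'],
  }
}

With the manifest saved, the next step is to connect to the agent node and start the puppet agent service: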

 sudo /etc/init.d/puppet start

After starting the service, I trigger a test run on the agent to force it to pull the catalog and the module from the Puppet master.
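
On the agent this is done with the following command; the --test flag applies the catalog immediately and prints verbose output:

sudo puppet agent --test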


As we can see, the catalog run finishes and the MySQL server service is up and running. The allow_virtual message can be ignored; it is just a deprecation warning shown by default.

Feel free to test and comment.


Saturday, September 27, 2014

Make linux process invisible with new Centos kernel



Processes carry out tasks within the operating system. A program is a set of machine code instructions and data stored in an executable image on disk and is, as such, a passive entity; a process can be thought of as a computer program in action.

After compiling the new 3.2 kernel on my CentOS box, I decided to test some of the security features that this version offers. Checking which kernel version you have installed is easy:

[root@centos01 ~]# uname -r
3.2.48
[root@centos01 ~]#

Like many other Linux servers, this one runs in a multi-user environment, which means every user shares the server's hardware and software resources. From a security standpoint, there is no reason for every user to see who owns which processes and what they are doing. To limit that information we are going to tweak the /proc filesystem a little. So if you have a 3.2+ kernel compiled and installed on your test or production machine, you can follow along.
The task is simple: we remount the /proc filesystem with a new security option so that process information can only be read by the owner of the process. The new option we are going to introduce is hidepid.

We have three options available:

hidepid=0 - anyone can read the /proc/PID directories (the default)
hidepid=1 - users cannot access /proc/PID directories other than their own, so background tasks owned by other users are no longer visible
hidepid=2 - everything in option 1, plus the /proc/PID directories of other users are hidden entirely, so an intruder cannot even list which processes exist

Before setting these security options, we had the usual situation where a local user could read information about all root and system processes.



To prevent this information leakage, run the following command:

mount -o remount,rw,hidepid=2 /proc
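
To confirm that the option is active, check the mount flags for /proc; the options column should now include hidepid=2:

mount | grep " /proc "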

To keep the configuration after a reboot, we have to update /etc/fstab:

vi /etc/fstab

And we have to add the following info to the file:

proc    /proc    proc    defaults,hidepid=2     0     0

Save and close the file.
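
One caveat: monitoring agents or admin tools that run as a non-root user may break once they can no longer see other users' processes. The proc filesystem also accepts a gid= option that exempts a single group from hidepid; a hedged example, assuming a monitoring group with GID 600, would be an fstab line like:

proc    /proc    proc    defaults,hidepid=2,gid=600     0     0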

That is all there is to it. Log in as a standard user and use the following command to list the processes:

ps -ef

As a standard user you should no longer be able to see processes belonging to other users and system services.

Feel free to test and comment.

Saturday, August 16, 2014

Using tcpdump with Linux



I find tcpdump to be a very powerful and useful tool for sniffing network traffic from a Linux box. It is independent of the distro you are working on and very easy to learn. Its simplicity lies in the command line, which also makes it very useful for remote troubleshooting of servers and desktops.


It is a built-in package on most distros and can capture received and transmitted packets for a whole network or a single host. The command takes a large set of options and switch flags.
I will demonstrate a couple of the ones I find most useful, with a short explanation:
  • -i any  listen on every interface available on the system
  • -n  do not resolve hostnames
  • -c  capture a given number of packets and then stop (useful for not collecting too much data)
  • -e  include the Ethernet header in the capture
  • -E  decrypt IPsec traffic, given the encryption key
Basic usage of tcpdump takes only a couple of command-line options; whether you want just an overview or the full details of each packet, it can all be displayed on the command line.

tcpdump -nS   basic view of the packets flowing on the network
tcpdump -nnvvS   more detailed packet view with extra verbosity
tcpdump -nnvvXS   a deeper look into each packet, including its payload in hex and ASCII

As an example, let us display only two TCP packets with a deeper view of their content. This simple command shows two packets with their headers and payload:

tcpdump -nnvXSs 0 -c2 tcp



When troubleshooting a specific problem, you usually do not want too much traffic scrolling by on the shell. Filtering out what you do not need keeps the picture clean and makes troubleshooting much easier.

To see only the traffic to or from a particular host, we can use the following command:

tcpdump host 192.168.1.100
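
Filters can also be combined with and, or and not to narrow things down further. For example, to watch only the web traffic of that same host (the address and port here are just placeholders):

tcpdump -nn host 192.168.1.100 and port 80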

Another useful option is writing a certain type of traffic to a capture file for later troubleshooting (note that -w produces a binary pcap capture regardless of the file extension, which you can open later with tcpdump -r or Wireshark):

tcpdump -s 1514 port 21 -w output.txt
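
The saved capture can then be replayed later with the -r switch, reusing the same display options:

tcpdump -nnvr output.txt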

And for the final capture command in this post, a simple filter shows only IPv6 traffic:

tcpdump ip6

This is only a small demonstration of this powerful tool. More can be found in the man pages or in any good sysadmin book. Feel free to share and comment.


Friday, May 23, 2014

Very Secure FTP on Centos Server



For this blog post I will be using vsftpd as a fast, reliable and very secure daemon for transferring data between clients and the server. One note up front: plain FTP is inherently insecure. If you must use FTP, consider securing the connection with SSL/TLS (FTPS); otherwise it is best to use SFTP, a secure alternative that runs over SSH.

For this example I have a basic CentOS 6.5 build, updated with the latest kernel and the important security packages. Once the server is installed and configured, you will need a client application to test the connection; I usually use FileZilla or WinSCP.

Log in to your Linux server via SSH and use the su command to become root, then start the package installation.


Next, issue a simple command to install the vsftpd package on your Linux server:

yum install vsftpd

Once the command completes, the package is installed.


For a public server you may also want the plain FTP client available, for testing or for clients that cannot use SFTP. In that case install the ftp package as well:

yum install ftp

This installs the command-line FTP client; the server itself, vsftpd, is now installed with its default configuration as a service on your CentOS machine.


BASIC VSFTPD CONFIGURATION

The configuration file for this service is located in the /etc/vsftpd folder. I will use the nano editor to change the settings:

nano /etc/vsftpd/vsftpd.conf

The first thing to configure is disabling anonymous logins (note that vsftpd options must not have spaces around the equals sign):

anonymous_enable=NO

Next, enable logins for local users:

local_enable=YES

The next very important setting to uncomment is the chroot option. It confines users to their own home directories on the server so they cannot traverse into other folders, which is good security practice.

chroot_local_user=YES

These are basically the most common and important settings needed to get the server up and running. You can fine-tune other settings, such as the listening port or SSL certificates, but that is beyond this basic demonstration of a secure server setup.
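
Pulled together, a minimal sketch of the relevant /etc/vsftpd/vsftpd.conf lines could look like the following (write_enable and xferlog_enable are already present in the stock CentOS file; remember that vsftpd rejects spaces around the equals sign):

# no anonymous logins
anonymous_enable=NO
# allow local system users to log in
local_enable=YES
# allow uploads for local users
write_enable=YES
# keep transfer logging on
xferlog_enable=YES
# lock each user into their own home directory
chroot_local_user=YES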

We should now restart the service and enable it at boot time on the CentOS server. We achieve this with two simple commands:

service vsftpd restart
chkconfig vsftpd on


That is all there is to it. Now we can test the connection with an SFTP or FTP client.


I used FileZilla over SFTP (which runs on top of the server's SSH service) and successfully connected to my server via a secure channel.


Up and ready to receive traffic. Your users can now enjoy security, performance and stability on the network.

Feel free to comment and suggest more topics.


Sunday, March 2, 2014

Flushing infected mail traffic from Postfix server



Postfix consists of a combination of server programs that run in the background, and client programs that are invoked by user programs or by system administrators.
The Postfix core consists of several dozen server programs that run in the background, each handling one specific aspect of email delivery. Examples are the SMTP server, the scheduler, the address rewriter, and the local delivery server. For damage-control purposes, most server programs run with fixed reduced privileges, and terminate voluntarily after processing a limited number of requests. To conserve system resources, most server programs terminate when they become idle.
Client programs run outside the Postfix core. They interact with Postfix server programs through mail delivery instructions in the user's ~/.forward file, and through small "gate" programs to submit mail or to request queue status information.
Other programs provide administrative support to start or stop Postfix, query status information, manipulate the queue, or to examine or update its configuration files.



If Postfix cannot deliver a message to a recipient, it is placed in the deferred queue. The queue manager scans the deferred queue to see if it can place mail back into the active queue; how often this happens is controlled by queue_run_delay. Postfix scans the incoming queue at the same time as the deferred queue so that neither takes all the resources and both keep moving messages.

The real question is: what is causing messages to be deferred? One of the major reasons is that your server places mail addressed to unknown recipients into the deferred queue when there is no legitimate user for it to go to.

The first thing to do when analyzing mails stuck in the queue is to run the mailq command. If the output shows a lot of mail in the queue, then something fishy is going on on your server. Just by looking at the entries, you should be able to recognize the domain that accounts for most of the queued mail. Once you have identified the domain, say example.com, the next step is to run a small bash script that deletes only the mails belonging to that domain.
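
If the culprit is not obvious just from eyeballing the output, a rough one-liner to summarize sender domains in the queue (the awk field position may differ slightly between Postfix versions and queue ID formats) is:

mailq | awk '/^[0-9A-F]/ {print $7}' | cut -d@ -f2 | sort | uniq -c | sort -rn | head

With the domain identified, the script below does the cleanup.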

#!/bin/bash
# Delete every deferred and active queue entry whose contents match the given pattern (e.g. a domain)
match="$1"
find /var/spool/postfix/deferred/*/ -type f -exec grep -l "$match" '{}' \; | xargs -n1 basename | xargs -n1 postsuper -d
find /var/spool/postfix/active/ -type f -exec grep -l "$match" '{}' \; | xargs -n1 basename | xargs -n1 postsuper -d

This simple script uses bash to find the deferred and active queue files that match the pattern given on the command line, and then runs postsuper -d on each matching queue ID to remove it from the queue.
match="$1" simply stores the first command-line argument, in our case the offending domain, which grep then uses as its search pattern.
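
Assuming the script is saved as, say, flush_domain.sh and made executable, it is run with the offending domain as its only argument:

chmod +x flush_domain.sh
./flush_domain.sh example.com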

After this, the infected mails should be gone from the queue and the server will have freed up some resources. A faster and simpler alternative, if the queued mail is not important at the moment, is to flush the entire deferred queue.

For this we have a simple command: postsuper -d ALL deferred

Feel free to comment..

Saturday, February 1, 2014

Server 2008 R2 as RADIUS for CISCO ASA VPN Clients



In every enterprise or private data-center network, you end up using a mesh of different IT systems to keep things secure. The other day I implemented a Cisco ASA 5520 failover scenario, and the main problem the users had was managing so many passwords for VPN, AD, mail and so on. So I thought: why not authenticate VPN users against Active Directory and simplify things.

It is an easy task and I will explain it as briefly as I can. The main goal is to make the Cisco ASA failover pair use Active Directory to authenticate users against the VPN policy.



The easiest way to configure the ASA quickly is the ASDM utility. I use the CLI only for the initial interface and http commands; after that everything is done from ASDM.



First we need to configure an object:
In the Firewall section, expand Objects and select IP Names. Click Add, give the RADIUS server a name and description, and enter the IP address of the domain controller on the intranet.

Next step is to define a AAA Radius group:
Click the Remote Access VPN section.
Expand AAA Setup and select AAA Server Groups.
Click the Add button to the right of the AAA Server Groups section.
Give the server group a name, like TEST-AD, and make sure the RADIUS protocol is selected.
Accept the default for the other settings. 
And click OK.

Next step is to add our RADIUS server to this created group:
Select the server group created in the step above.
Click the Add button to the right of Servers in the Select Group.
Under the Interface Name select the interface on the ASA that will have access to the RADIUS server, most likely inside.
Under Server Name or IP Address enter the IP Name you created for the RADIUS server above.
Skip to the Server Secret Key field and create a complex password. Make sure you document this as it is required when configuring the RADIUS server. Re-enter the secret in the Common Password field.
Leave the rest of the settings at the defaults and click Ok.
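
For reference, the roughly equivalent ASA CLI for the two steps above, assuming the group name TEST-AD, the inside interface, a domain controller at 192.168.1.10 and a placeholder shared secret, would look something like this:

aaa-server TEST-AD protocol radius
aaa-server TEST-AD (inside) host 192.168.1.10
 key Y0urSharedSecret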

To enable RADIUS on Server 2008 we must add a role:
Connect to the Windows Server 2008 server and launch Server Manager.
Click the Roles object and then click the Add Roles link on the right.
Click Next on the Before You Begin page.
Select the Network Policy and Access Services role and click Next.
Under Role Service select only the Network Policy Server service and click Next.
Click Install.

After launching the NPS tool right-click on the entry NPS(Local) and click the Register Server in Active Directory. Follow the default prompts.

We need to define a Radius CLIENT on Server 2008 for our ASA Cluster:
Right-click on RADIUS Clients and select New RADIUS Client.
Create a Friendly Name for the ASA device. I used “CiscoASA” but if you had more than one you might want to make it more unique and identifiable. Make sure you document the Friendly Name used as it will be used later in some of the policies created.
Enter the server secret key you created during the ASA configuration into both the Shared secret and Confirm shared secret fields.
Leave the default values for the other settings and click OK.


Connection Request Policy
Expand the Policies folder.
Right-click on the Connection Request Policies and click New.
Set the Policy Name to something meaningful. I used CiscoASA because this policy is geared specifically for that RADIUS client. Leave the Type of network access server as Unspecified and click Next.
Under Conditions click Add. Scroll down and select the Client Friendly Name condition and click Add…
Specify the friendly name that you used when creating the RADIUS Client above. Click OK and Next.
On the next two pages leave the default settings and click Next.
Under the Specify a Realm Name select the Attribute option on the left. From the drop down menu next to Attribute: on the right select User-Name. Click Next again.
Review the settings on the next page and click Finish.

Create a Network Policy
Right-click the Network Policy folder and click New.
Set the Policy Name to something meaningful. Leave the Type of network access server as Unspecified and click Next.
Under Conditions click Add.
Add a User Groups condition to limit access to a specific AD user group. You can use a generic group like Domain Users or create a group specifically to restrict access.
Add a Client Friendly Name condition and again specify the Friendly Name you used for your RADIUS client.
Click Next. Leave Access granted selected and click Next again.
(Important Step) On the authentication methods leave the default selection and add Unencrypted authentication (PAP, SPAP).
Accept the default Constraints and click Next.
Accept the default Radius Settings and click Next. Review the settings and click Finish.
Restart the Network Policy Server service.

The last thing left is to Test and Save the config.
If necessary re-launch the ASDM utility.
Return to Configuration -> Remote Access VPN -> AAA Setup -> AAA Server Groups.
Select the new Server Group you created.
From the Servers in the Selected Group section highlight the server you created. Click the Test button on the right.
Select the Authentication radio button. Enter the Username and Password of a user that meets the conditions specified in the Network Policy created above then click OK.
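
The same test can also be run from the ASA command line, using the group and server defined earlier and placeholder credentials:

test aaa-server authentication TEST-AD host 192.168.1.10 username testuser password TestPass123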


Feel free to comment.

Monday, January 6, 2014

Security ethical hacking checklist



An ethical hacking expert is someone with skills across many segments of information technology who, on behalf of the owners of the IT systems, attacks the organization as a whole. The main goal is to find the vulnerabilities that a malicious person would exploit to obtain sensitive information.


These tasks include penetration testing, risk assessment and intrusion testing. Many companies also involve code reviewers to scan their web application code. The full set of tasks involved in, for example, a penetration test is useful for finding weaknesses in the open-source code that is so often reused in application development. Developers are often too busy building applications and contributing to the open-source community, so security concerns do not always get the attention they deserve.
These situations call for experts who can look at the code objectively and verify that every phase needed for a secure application implementation has actually been completed.

Below I explain the basic steps in my general white-hat checklist, which anyone concerned about IT security should know and use.


RECONNAISSANCE

Reconnaissance is a military term for seeking out the intentions, capabilities and composition of the enemy by various methods. In the ethical hacking world, the word is used for gathering information about the target, with the aim of finding the weakest spot in the target's IT systems to exploit towards the final goal.
There is also another side to footprinting: it can be used to protect a system rather than attack it. The first of the basic information-gathering methods is:

PING the remote target system to gather basic IP information.
For example: Start - Run - CMD >  ping www.google.com
You can also ping a range of public IP addresses to see whether any hosts on the other side are alive. This is a good starting point for any information gathering.

PORT scanning of the services running on the target system. These TCP scans can target individual ports or a range of ports to identify different services. A great tool for this can be found at www.nmap.org
An example command-line scan:   nmap -T4 -A -v scanme.nmap.org

Public information about the target, such as company details, telephone numbers and email addresses, is very useful for building the big picture. It can be gathered via whois lookups on the company domain and from the DNS protocol. There are some great online sites for this, such as http://www.uwhois.com/

EMAIL tracking is a good way to analyze email headers, which reveal information about the IP addresses of the mail servers and other gateways along the path. A good application for this is EmailTracker, which can be found at http://www.emailtrackerpro.com/

Network connections from your computer or from the target systems can reveal persistent incoming and outgoing connections that matter to the target users. The built-in netstat utility is the quickest way to list them.
An example command line:   netstat -ano

Exploring internet archives to find out the history of a web page is sometimes important as well; a few such sites are useful in this phase of information gathering.

The company's physical locations can be found using Google Earth. This matters if the company has IT storage rooms in several countries and, from a network standpoint, it gives a clear picture of GeoIP locations.

Displaying the network nodes along the way shows the various paths we can take to the target system, which helps find the most efficient route for enumerating its services. A handy application for this is NeoTrace, available at http://neotrace-pro.en.softonic.com/

DNS Enumeration
By enumerating DNS it is possible to obtain important public (and sometimes private) information such as server names, server IP addresses and sub-domains.
A useful Perl script called dnsenum.pl can be found on this URL
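
Typical usage, once the script and its Perl dependencies are in place, is simply to point it at the target domain (a placeholder domain is shown here):

perl dnsenum.pl example.com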


SCANNING

In general we have three types of scanning:
  • Port scanning
  • Network scanning
  • Vulnerability scanning
Active information gathering produces more details about your network and helps you see your systems from an attacker's perspective. We can see which server systems are alive, what services they provide to the target users, and which operating systems and architectures they run.

Some of the common PORT scanning methods are:

- Vanilla: the scanner attempts to connect to all 65,535 ports
- Fragmented packets: the scanner sends packet fragments that get through simple packet filters in a firewall
- UDP: the scanner looks for open UDP ports
- Sweep: the scanner connects to the same port on more than one machine
- Stealth scan: the scanner blocks the scanned computer from recording the port scan activities

Network Scanning is the process of examining the activity on a network, which can include monitoring data flow as well as monitoring the functioning of network devices. Network Scanning serves to promote both the security and performance of a network. Network Scanning may also be employed from outside a network in order to identify potential network vulnerabilities.

Vulnerability scanning employs software that seeks out security flaws based on a database of known flaws, testing systems for the occurrence of these flaws and generating a report of the findings that an individual or an enterprise can use to tighten the network's security.

Subnet information, whether public or private, is very important for the more time-consuming security testing methods. A very useful tool for gathering it is the Angry IP Scanner, available at http://angryip.org/w/Home

Some useful tools for scanning target systems:

McAfee SuperScan
http://www.mcafee.com/us/downloads/free-tools/superscan.aspx

Network port scanning
Scan network ports with NetScanTools Pro or Nmap.

WUPS, a very fast and powerful UDP port scanner

Unicornscan is an attempt at a user-land distributed TCP/IP stack for information gathering and correlation.
The app can be found at http://www.unicornscan.org/


ENUMERATION


Using the information gathered so far, the attacker usually starts scanning the victim: port scanning, banner grabbing, vulnerability scanning, and hunting for usernames and email addresses. This is an active attack and may be detected by an IDS or blocked by firewalls.
Enumeration is the first attack on the target network. It is the process of gathering information about usernames, machine names, network resources, shares and services, and it establishes an active connection to the target system.

Null session - exploiting the Windows SMB networking protocols.
We can connect to a remote machine without any credentials using:   net use \\<ip address>\IPC$ "" /u:""
If we have credentials for even a single authenticated user, we can reuse this trick against many machines in the domain. A good tool for enumerating these weak spots is enum4linux.pl, found on this url

NetBIOS over TCP/IP can be queried with the built-in nbtstat tool, which displays protocol statistics and current TCP/IP connections, and also lets us add the MAC address to our information database.
Usage:   nbtstat -A 192.168.1.1

FTP Enumeration - a handy Perl tool on *nix, ftp-user-enum, is useful for probing TCP port 21 for information such as the server version and the list of users on the target system. Install its Perl dependency first:
# perl -MCPAN -e shell
 cpan> install Getopt::Std
Then run it against the target system:
Usage: ftp-user-enum.pl [options] (-u username|-U file-of-usernames) (-t host|-T file-of-targets)
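
A concrete, hypothetical invocation that checks a list of candidate usernames against a single target would be:

perl ftp-user-enum.pl -U users.txt -t 192.168.1.100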

TELNET to a service on various port numbers to see whether a service is running on the remote server.
Usage: telnet <IP or FQDN> <port>
A list of ports for common services can be found on this URL.

A list of some useful tools used for enumeration:

IP Tools

SoftPerfect Network Scanner is a free multi-threaded IP, NetBIOS and SNMP scanner with a modern interface and many advanced features. It is intended for both system administrators and general users interested in computer security. The program pings computers, scans for listening TCP/UDP ports and displays which types of resources are shared on the network, including system and hidden ones.

SomarSoft's DumpSec is a (free) security auditing program for Microsoft Windows NT/2000. It dumps the permissions (DACLs) and audit settings (SACLs) for the file system, registry, printers and shares in a concise, readable format, so that holes in system security are readily apparent. DumpSec also dumps user, group and replication information. DumpSec is a must-have product for Windows NT systems administrators and computer security auditors.

Enumerate devices such as routers, printers, servers and backup appliances that still use default passwords. Many default password lists can be found via Google searches, and one of them can be found on this URL.

Netcat is a simple networking utility which reads and writes data across network connections using the TCP/IP protocol. It's a wonderful tool for debugging all kinds of network problems. It allows you to read and write data over a network socket just as simply as you can read data from stdin or write to stdout. I have put together a few examples of what this can be used to accomplish.

Establishing a connection and getting some data over HTTP:
# nc example.com 80
GET / HTTP/1.0
<HTML>
<!-- site's code here -->
</HTML>
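
Beyond banner grabbing, netcat can also do a quick TCP port sweep; for example, to check ports 20 through 25 on a placeholder host (-z scans without sending data, -v adds verbose output; the exact range syntax depends on the netcat variant installed):

nc -zv 192.168.1.100 20-25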


HACKING

Once the steps above are done, the attacker starts exploiting all of the vulnerabilities found, which may lead to a compromised system or website.

There are four types of password attacks:

1. Passive online attack - man in the middle, sniffing and similar
2. Active online attack - password guessing
3. Offline attack - brute-force, dictionary and hybrid attacks
4. Non-technical attack - social engineering

A rootkit is a stealthy type of software, typically malicious, designed to hide the existence of certain processes or programs from normal methods of detection and to enable continued privileged access to a computer. The term rootkit is a concatenation of "root" (the traditional name of the privileged account on Unix operating systems) and the word "kit" (which refers to the software components that implement the tool).

Steganography is the art and science of encoding hidden messages in such a way that no one, apart from the sender and the intended recipient, suspects the existence of the message. It is a form of security through obscurity.

An ethical hacker should equip himself with databases and dictionaries of default passwords. Some useful URLs are a good starting point:

http://www.defaultpassword.com/?char=&action=dpl
http://www.cirt.net/passwords
http://www.virus.org/default-password

L0phtCrack is a useful tool for recovering passwords.
It can be found on this URL

Cracking a Windows Server administrator password is a powerful way of gaining access to target systems. Windows servers store password hashes in the SAM database, and there are many tools to attack them. One of them is the offline NT password recovery tool, which can be found on this URL

Keyloggers are software tools that log every keystroke a user types on the keyboard. These stealthy tools are useful for capturing credentials on target systems.
Free versions can be found on this URL. Even stealthier are hardware USB keyloggers, which hold the keylogger software on a USB stick that can be planted inside the organization and used to send credentials outside the corporate network. Useful software can be downloaded from this URL

Scapy is a powerful interactive packet manipulation program. It is able to forge or decode packets of a wide number of protocols, send them on the wire, capture them, match requests and replies, and much more.

OpenPuff is an open-source steganography tool that can hide scripts and applications inside common file types such as PDF and JPG. It can be found on this URL

Lynis is an auditing tool for Unix/Linux. It performs a security scan and determines the hardening state of the machine. Any detected security issues will be provided in the form of a suggestion or warning. Beside security related information it will also scan for general system information, installed packages and possible configuration errors.

Yersinia is a network hacking tool designed to take advantage of weaknesses in some network protocols. It presents itself as a framework for analyzing deployed networks and systems and implements a number of attacks for protocols such as STP, CDP, DTP, DHCP, VTP and ISL.

The Metasploit Framework is an open-source development platform for creating security tools and exploits. The framework is used to test systems, verify patch installations, and perform regression testing. The framework allows users to configure exploit modules and test systems against attack.

The PsExec tool allows white-hat testers to remotely execute applications and processes on target systems. It can launch interactive command prompts on remote computers.
Syntax:       psexec \\computer[,computer[,..]] [options] command [arguments]
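
For example, a hypothetical run that opens a remote command prompt on a single host with explicit credentials would look like:

psexec \\192.168.1.50 -u DOMAIN\administrator -p P@ssw0rd cmd.exe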

Core Impact is a penetration-testing tool for testing security threats. It allows systems administrators to test security patches, network infrastructure, and system upgrades before an attacker does. It is frequently updated, so it is likely to stay ahead of new exploits.

Ratproxy is a semiautomated and largely passive Web application security audit tool. It detects and annotates potential problems and security-relevant design patterns based on the observation of existing and user-initiated traffic. It does not generate a high volume of traffic, taking very little bandwidth.


In this post I have tried to put together a small checklist of the tools I use; some of them I have skipped for now (to be continued). I think this checklist methodology is a good starting point for anyone concerned about the security of their IT systems.

To be continued ...