Shell scripting question - log monitor

I am developing a small script which reads all logs (anything matching *.log) in a folder and, if it encounters a keyword (error, exception, etc.), writes the error to an errors.txt file and sends a mail about it. This will be a daemon script, running every 30 minutes. I also want to send only the errors/exceptions that occurred in the last 30 minutes, not earlier ones. Below is the script:

Code:
#!/bin/bash

# Create errors.txt if it does not exist
if [ ! -f ./errors.txt ]; then
    touch ./errors.txt
fi

# Find log files updated in the last 30 minutes in the /USCRDjobs/ folder and all sub-folders
names=(`find ./log/ -iname "*.log" -type f -mmin -30`)

# Look for keywords in these files
for (( i=0 ; i < ${#names[@]} ; i++ ))
do
    grep -i -m3 -H -f keywords.txt "${names[$i]}" | cut -d";" -f 3,8 >> ./errors.txt
    # Add a blank line between files
    echo >> ./errors.txt
done

# Mail only if errors.txt has been updated in the last 30 minutes
if test `find errors.txt -mmin -30`
then
    mailx -s "Alert Message" admin@gmail.com < ./errors.txt
    # Clear errors.txt
    #> ./errors.txt
else
    echo "No errors found"
fi

Now I have some issues with this. The log files tend to be quite large, so I need to read only the content added in the last 30 minutes (since the last run of the script). In other words, I need to start grep from the last line read. Is this possible, or is there another solution to this problem?
 
What is the nature of the log files? Are they simply appended to, written in circular mode with a fixed maximum size, rolled over to a new file after reaching some fixed maximum size, or both?

With grep, there is the "-n" option, which gives you the line number in the file where the keyword matched. You can create a simple two-column database text file holding each filename and the last line number where a match happened. Then, after the find command, if the file is already in the database, use the output of the sed command below as the input to grep, and update the database with the newly calculated last line number:

Code:
sed -n '<line number>,$p' <filename>
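
Putting it together, here is a rough sketch of that approach, assuming a state file called lastlines.txt (one "filename last-read-line" pair per line) and a keywords.txt with one pattern per line; the names and paths are placeholders only:

Code:
#!/bin/bash
# Sketch: read each *.log file only from the line after the previous run,
# remembering where we stopped in a "filename lastline" state file.
# (Assumes file paths without spaces or regex metacharacters.)

touch lastlines.txt errors.txt

for f in $(find ./log/ -iname "*.log" -type f -mmin -30)
do
	# Last line read on the previous run (0 if the file is new to us)
	last=$(awk -v f="$f" '$1 == f {print $2}' lastlines.txt)
	last=${last:-0}

	# Total number of lines now, so we can record where this run stopped
	total=$(wc -l < "$f")

	# Print only the new lines, then look for the keywords in them
	sed -n "$((last + 1)),\$p" "$f" | grep -i -f keywords.txt | cut -d";" -f 3,8 >> errors.txt

	# Replace (or add) this file's entry in the state file
	grep -v "^$f " lastlines.txt > lastlines.tmp
	echo "$f $total" >> lastlines.tmp
	mv lastlines.tmp lastlines.txt
done

If a file is ever truncated or rotated, the stored number will be larger than the file's current length, so you would also want to reset the entry whenever total is smaller than last.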
 
@kekerode the log files keep getting appended to and can thus grow to an insane size (a few GB of plain text).

I was thinking along the same lines as your solution. Thank you for the effort. I will try it and get back to you. Thanks again.
 
OK, so I am stuck again. As I am using grep to get the matching string and cut to get the necessary fields (timestamp and message), I am not sure how I can get the line numbers using -n. The cut command only keeps fields 3 and 8, delimited by ";". How can I store the line number in the errors.txt file?

Code:
 grep -i -m2 -H -f keywords.txt ${names[$i]} | cut -d";" -f 3,8 >> ./errors.txt
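
One way, assuming the file names and the first ";"-separated log field contain no stray ":" characters, is to let grep print the line number and split that prefix off with awk instead of cut (a rough sketch, not tested against your log format):

Code:
# grep -n -H prints "file:linenumber:matched line"; split the prefix apart
# and keep the line number together with fields 3 and 8 of the log record
grep -i -m2 -n -H -f keywords.txt "${names[$i]}" | awk -F";" '{
	split($1, a, ":")              # a[1] = file, a[2] = line number
	print a[2] ";" $3 ";" $8       # line number, timestamp, message
}' >> ./errors.txt

You could then carry the first column over into the two-column database file mentioned earlier.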
 
If a large log file is the problem, why not rotate it with logrotate at a fixed size, say 10 MB?

The best approach is to monitor the log file and see how much it grows over the period of an hour. You can then decide on the exact size threshold for logrotate.
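
For illustration only, a minimal logrotate stanza along those lines might look like this; the /USCRDjobs/log path and the 10 MB threshold are assumptions you would adjust:

Code:
# Drop this into /etc/logrotate.d/ (needs root): rotate each log once it
# passes 10 MB, keep 5 compressed copies, ignore missing or empty files.
cat > /etc/logrotate.d/uscrdjobs <<'EOF'
/USCRDjobs/log/*.log {
    size 10M
    rotate 5
    compress
    missingok
    notifempty
}
EOF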
 
@Compiler the log file implementation is not under my control. Also, I cannot do any real-time testing in production, so I have no way of knowing how much it will grow in a given duration. So I think the best approach is to work with some known facts. I think @kekerode's solution is workable, but I would really appreciate it if someone could provide pseudocode for it.
 
If you can pass on a sample file, it would definitely help.

Code:
#!/bin/sh

touch errors.txt

names=`find ./log/ -name "*.log" -type f -mmin -30`

if [ $? -ne 0 ]; then
	echo "Find failed"
	exit 1
fi

for F in ${names}
do
	# Look for the keywords and keep fields 3 and 8 of each match
	# (see the awk variant above if you also want the line number recorded)
	grep -i -m3 -H -f keywords.txt "$F" | cut -d";" -f 3,8 >> ./errors.txt
	echo "" >> ./errors.txt
done

if [ `find errors.txt -mmin -30` ]; then
	mailx -s "Alert Message" admin@gmail.com < ./errors.txt
else
	echo "No errors found"
fi
 
Hi Team,
I need a shell script:
1. We have multiple Linux servers.
2. There are multiple logs, such as system logs, web access logs, application logs, and server authentication logs.
3. We need a central server which continuously polls these logs from the different servers, checks for failed logins, application errors, system errors, etc., and alerts us via email.

Please help me, it's urgent.

Thanks
Zam
 
@zam
You need to configure syslog to redirect the logs to a central server. You can then modify the above script according to your requirements and keywords.
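
As a rough example, with rsyslog on each client you could forward everything to the collector like this; the host name "central-server" is a placeholder, and the central box needs the rsyslog TCP input (imtcp) enabled:

Code:
# Forward all facilities/priorities over TCP (@@ = TCP, @ = UDP) to port 514
# on the central server, then restart rsyslog to pick up the change
echo '*.* @@central-server:514' > /etc/rsyslog.d/90-forward.conf
service rsyslog restart    # or: systemctl restart rsyslog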
 
Hi,
I modified the script to grep for a particular IP from the keyword file, but the mail received contains only OS signatures and other IPs.

Example:
Linux i686
Linux i686
Linux i686
Linux i686
Linux i686
Linux i686
U

Android 4.1.1
Windows NT 6.1
Windows NT 6.0
Windows NT 6.0
Windows NT 6.1; .NET CLR 3.5.30729

Windows NT 6.0
Windows NT 6.0


Could you please help me?

Thanks
Zam
 
@zam
Something is wrong with how the keywords are being captured. First use grep on its own to identify the error lines, then use other commands such as sed or awk to trim the result.

You can test the grep on the command line by giving it the filename as input before you put it in the script.
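
For example, something along these lines, where sample.log stands in for a copy of one of the problem logs:

Code:
# Step 1: confirm the raw matches look right (line numbers help you inspect them)
grep -i -n -f keywords.txt sample.log | head -20

# Step 2: add the trimming step and compare the two outputs
grep -i -n -f keywords.txt sample.log | cut -d";" -f 3,8 | head -20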
 
To add to what has already been replied: send everything to syslog and then use Kibana with Logstash and Elasticsearch (the ELK stack), or consider Graylog, and you will be in log analysis heaven. Google is your friend for blog posts on how to set these up (hint: not too difficult).
 
Thanks for the reply :)

Can anyone please tell me exactly which software we could use for this requirement? We mainly need to monitor system logs, web access logs, application logs, and server authentication logs, and we also need a web interface with a dashboard.

Thanks
Zam
 
@zam
What have you done, and how far have you got so far?
Which tools have you tried, and where did they fall short of your expectations?
Are you able to get all the logs onto a single server?
If yes, are all the logs concatenated or stored separately?
 
We use HP OMi for monitoring the uptime/downtime of the servers; log monitoring can also be done with it easily. ITRS Geneos is another tool you can use to monitor infrastructure.

If you want to do it at the scripting level, the easiest way is to set up SSH connectivity (ideally key-based) between your central server and the other servers.
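
A rough sketch of that idea, assuming password-less (key-based) SSH is already set up; the host list and log path are placeholders only:

Code:
# Pull the last 1000 lines of a remote log from each server, filter them
# locally against keywords.txt, and mail the result if anything matched
for host in web01 app01 db01
do
	ssh -o BatchMode=yes "$host" "tail -n 1000 /var/log/auth.log" \
		| grep -i -f keywords.txt >> ./errors.txt
done

[ -s ./errors.txt ] && mailx -s "Alert Message" admin@gmail.com < ./errors.txt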
 