[Deploying Sakai] Sakai crashing

Benito J. Gonzalez bgonzalez2 at ucmerced.edu
Wed Sep 8 16:48:25 PDT 2010


Hi Steve,

Normal activity up until the crash.

I went through the hs_err_pid log files and found that PSPermGen was at 99% 
for all three crashing Tomcats.
What should we set our MaxPermSize to?
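[For reference, perm gen occupancy can be watched per Tomcat PID with jstat while the instances run, and the ceiling raised in the startup options. This is a sketch only; the 1g value and the CATALINA_OPTS variable name are assumptions, not figures recommended anywhere in this thread.]

```shell
# Watch GC utilization every 5 seconds for a running Tomcat (JDK 1.5+).
# The "P" column is perm gen used as a percentage of current capacity;
# a value pinned near 100 matches the PSPermGen exhaustion seen here.
jstat -gcutil <tomcat-pid> 5s

# Raise the perm gen ceiling in Tomcat's startup options.
# 1g is an assumed starting point, not a value from this thread.
CATALINA_OPTS="$CATALINA_OPTS -XX:MaxPermSize=1g"
```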

Thanks!

Benito J. Gonzalez
Manager, Enterprise Web Application Development
Information Technology Department
University of California, Merced
Desk: 209.228.2974
Cell: 209.201.5052
Email: bgonzalez2 at ucmerced.edu



Steve Swinsburg wrote:
> Hi Benito,
>
> What do your Tomcat logs say leading up to when the crash occurred? 
>
> cheers,
> Steve
>
>
>
>
> On 09/09/2010, at 9:27 AM, Benito J. Gonzalez wrote:
>
>   
>> We are having problems keeping Sakai running in production.  We have 
>> seen a crash today and yesterday.  About to dig into the dumps.
>>
>> Has anyone else seen similar issues?
>>
>> I was using JConsole at the time of the crashes.  Memory usage was 
>> between 800m and 1300m for both Tomcats that failed.  Thread counts were 
>> bouncing around 380, which is normal for us.  CPU usage did not spike.
>>
>> Our environment:
>> 2x Solaris: SunOS 5.10 Generic_125101-05, 16G phys mem, i86 CPU (not 
>> sure of details), JDK 1.5.0_20
>> One box is running Apache 2.0, 4x Tomcats with the following memory 
>> settings:
>> -d64 -server -XX:MaxNewSize=500m -XX:MaxPermSize=650m -XX:+UseParallelGC 
>> -Djava.awt.headless=true -Dhttp.agent=Sakai-News-Tool
>> -Xms4096m -Xmx4096m
>>
>> 3/4 Tomcats crashed (see log messages below).  The one that stayed up is 
>> an admin one that is not part of the load balancer.
>>
>> #
>> # An unexpected error has been detected by HotSpot Virtual Machine:
>> #
>> #  SIGBUS (0xa) at pc=0xfffffd7ff81813b0, pid=15481, tid=2335
>> #
>> # Java VM: Java HotSpot(TM) 64-Bit Server VM (1.5.0_20-b02 mixed mode)
>> # Problematic frame:
>> # J  java.util.Hashtable.get(Ljava/lang/Object;)Ljava/lang/Object;
>> #
>> # An error report file with more information is saved as hs_err_pid15481.log
>> #
>> # If you would like to submit a bug report, please visit:
>> #   http://java.sun.com/webapps/bugreport/crash.jsp
>> #
>>
>> and
>>
>> #
>> # An unexpected error has been detected by HotSpot Virtual Machine:
>> #
>> #  SIGBUS (0xa) at pc=0xfffffd7ff8328d00, pid=15220, tid=2242
>> #
>> # Java VM: Java HotSpot(TM) 64-Bit Server VM (1.5.0_20-b02 mixed mode)
>> # Problematic frame:
>> # J  sun.reflect.Reflection.getCallerClass(I)Ljava/lang/Class;
>> #
>> # An error report file with more information is saved as hs_err_pid15220.log
>> #
>> # If you would like to submit a bug report, please visit:
>> #   http://java.sun.com/webapps/bugreport/crash.jsp
>> #
>>
>> and
>>
>> #
>> # An unexpected error has been detected by HotSpot Virtual Machine:
>> #
>> #  SIGBUS (0xa) at pc=0xfffffd7ff800acca, pid=168, tid=69
>> #
>> # Java VM: Java HotSpot(TM) 64-Bit Server VM (1.5.0_20-b02 mixed mode)
>> # Problematic frame:
>> # j  java.lang.OutOfMemoryError.<init>(Ljava/lang/String;)V+0
>> #
>> # An error report file with more information is saved as hs_err_pid168.log
>> #
>> # If you would like to submit a bug report, please visit:
>> #   http://java.sun.com/webapps/bugreport/crash.jsp
>> #
>>
>>
>> Second box is running one Tomcat with:
>> -d64 -server -Xms2048m -Xmx2048m -XX:MaxNewSize=500m 
>> -XX:MaxPermSize=768m -XX:+UseParallelGC -Djava.awt.headless=true 
>> -Dhttp.agent=Sakai-News-Tool
>>
>> The second box did not crash and seemed to handle all the load.
>>
>> Can anyone shed some light on this?
>>
>> Thanks,
>>
>> -- 
>> Benito J. Gonzalez
>> Manager, Enterprise Web Application Development
>> Information Technology Department
>> University of California, Merced
>> Desk: 209.228.2974
>> Cell: 209.201.5052
>> Email: bgonzalez2 at ucmerced.edu
>>
>> _______________________________________________
>> production mailing list
>> production at collab.sakaiproject.org
>> http://collab.sakaiproject.org/mailman/listinfo/production
>>
>
>   

