[Deploying Sakai] [Building Sakai] ClientAbortException: java.net.SocketException: Broken pipe

Warwick Chapman warwickchapman at gmail.com
Sun Nov 3 01:23:35 PST 2013


Thank you all.  So far, so good today.  We're nearly halfway through the
tests and have 71 sessions at the moment, with the three appservers now at
a load average of 0.00-0.08 each (though they do occasionally jump up to
0.40 or so and then drop back down).

Memory utilisation is at 35% and 39% on the two appservers with 8.0G of
memory (Xmx4g), and at 56% on the 6.0G server (Xmx4g).

I am hopeful that it will remain solid for the rest of the day.

Regardless, thank you to all of you who helped me get this issue
resolved.  I appreciate it a great deal.

-- Warwick Bruce Chapman | +27 83 7797 094 | http://warwickchapman.com


On Fri, Nov 1, 2013 at 9:02 PM, Warwick Chapman <warwickchapman at gmail.com> wrote:

> Noah, thank you for this.  Much appreciated.  Let me try the 4G heap on
> the VMs with 8G of memory.  The third VM has 6GB and I cannot increase it
> much more than that.
>
> The proviso of 25% for the system would then also be met on that VM, with
> Xmx 4G (67%) and 2G (33%) left for the system.
>
> I'm going to give it a crack.  If it goes down, there'll be no heap
> analysis; I'll have to find somewhere to hide.
>
> Aspirant members of parliament are writing 'candidate tests' - the people
> take becoming an MP *very* seriously, and don't take kindly to the server
> falling over mid-test.  It happened on Saturday last week - during the
> 24-hour period the exam ran for, I lost application servers several times
> for 250 applicants.  At that stage I had an Xmx of 1024m, and 2 vCPUs (and
> then 4 during a running reconfigure) per appserver.
>
> Now I have an extra appserver, Xmx of 4G and 4 vCPUs per appserver.
>
> -- Warwick Bruce Chapman | +27 83 7797 094 | http://warwickchapman.com
>
>
> On Fri, Nov 1, 2013 at 6:34 PM, Noah Botimer <botimer at umich.edu> wrote:
>
>> It sounds like you were running at 1GB heap, which is quite likely too
>> low for an instance with lots of modules/webapps deployed. Increasing Xmx
>> should do the trick, but in case you still have problems...
>>
>> You can flip the switch to dump heap on crash (and once it falls over),
>> you could analyze the dump:
>>
>>
>> http://stackoverflow.com/questions/542979/using-heapdumponoutofmemoryerror-parameter-for-heap-dump-for-jboss
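>>
>> For example (just a sketch -- the dump path is a placeholder, point it at
>> wherever you have disk space):
>>
>>   export JAVA_OPTS="$JAVA_OPTS -XX:+HeapDumpOnOutOfMemoryError \
>>     -XX:HeapDumpPath=/var/tmp"
>>
>> The JVM will then write a .hprof file at the moment the OutOfMemoryError
>> is thrown, which you can open afterwards.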
>>
>> I've had good success with YourKit for examining heaps, but jhat and
>> VisualVM can do it too. You would be looking for a particular class or set
>> of classes that are disproportionately allocated (say 50% or more of
>> total), indicating a leak (often a reference from an item to a
>> container/observer that persists but no longer tracks the child).
>>
>> If you observe that a JVM (via jstat or other tools) has a growing heap
>> but is not yet crashing, you can dump the heap with jmap.
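>>
>> For instance (purely illustrative -- <pid> is whatever the Tomcat process
>> id happens to be on your box):
>>
>>   jstat -gcutil <pid> 5000    # heap/GC occupancy every 5 seconds
>>   jmap -dump:live,format=b,file=/var/tmp/sakai.hprof <pid>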
>>
>> I think it's pretty common these days to run with 4 and 6GB heaps on
>> 64-bit JVMs (with Aaron's proviso of ~25% spare for system).
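>>
>> As a rough worked check against that proviso (the ~0.5G of non-heap JVM
>> overhead is only an assumption for illustration):
>>
>>   8G box: 4g heap + 512m permgen + ~0.5G overhead ~= 5G used, ~3G (~37%) spare
>>   6G box: 3g heap + 512m permgen + ~0.5G overhead ~= 4G used, ~2G (~33%) spare
>>
>> Both sit above the ~25% figure, at least on paper.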
>>
>> Thanks,
>> -Noah
>>
>>
>> On Nov 1, 2013, at 11:51 AM, Aaron Zeckoski wrote:
>>
>> I would say 2g is adequate but I am adding the production list to this
>> to see if anyone else has any comment here. Generally a single
>> instance would be more than adequate for 60 users. We certainly are
>> running with many more users per instance in many cases without issue.
>>
>> -AZ
>>
>>
>> On Fri, Nov 1, 2013 at 11:36 AM, Warwick Chapman
>> <warwickchapman at gmail.com> wrote:
>>
>> Ok, please give me your best guess because I don't have any opportunity
>> left to tune.  On Sunday I've got to ensure this thing doesn't crash
>> because of java.lang.OutOfMemoryError: Java heap space as I've been
>> getting now ...
>>
>> I cannot believe I am working so hard to get Sakai stable for no more
>> than 60 concurrent users.
>>
>> I have 3 OpenVZ containers on 3 separate pieces of tin.
>>
>> Two have had their memory increased to 8192MB and the third to 6144MB.
>>
>> Their JAVA_OPTS are now, respectively:
>>
>> 1. export JAVA_OPTS='-server -d64 -Xmx4g -XX:MaxPermSize=512m -Djava.awt.headless=true -Dhttp.agent=Sakai'
>> 2. export JAVA_OPTS='-server -d64 -Xmx4g -XX:MaxPermSize=512m -Djava.awt.headless=true -Dhttp.agent=Sakai'
>> 3. export JAVA_OPTS='-server -d64 -Xmx3g -XX:MaxPermSize=512m -Djava.awt.headless=true -Dhttp.agent=Sakai'
>>
>> I am worried that keeping them at 4096m and doubling up from the original
>> 1024m will not be sufficient.  But I will be guided by you.
>>
>>
>>
>> -- Warwick Bruce Chapman | +27 83 7797 094 | http://warwickchapman.com
>>
>>
>>
>> On Fri, Nov 1, 2013 at 5:28 PM, Aaron Zeckoski <azeckoski at unicon.net>
>> wrote:
>>
>>
>> The "No" was for your "OK" question since you are missing MaxPermSize.
>>
>> You can set it to something higher but if it were me I would not do
>>
>> it. Larger memory space means it takes the JVM longer to do garbage
>>
>> collections. I would suggest you start with something smaller and tune
>>
>> it until things are working well for your users and load. Like I said,
>>
>> there is no single magic or "right" number. There are lots of wrong
>>
>> ones though.
>>
>>
>> -AZ
>>
>>
>>
>> On Fri, Nov 1, 2013 at 11:20 AM, Warwick Chapman
>> <warwickchapman at gmail.com> wrote:
>>
>> Even if I increase the VM memory and swap to 8G each?
>>
>>
>> -- Warwick Bruce Chapman | +27 83 7797 094 | http://warwickchapman.com
>>
>>
>>
>> On Fri, Nov 1, 2013 at 5:17 PM, Aaron Zeckoski <azeckoski at unicon.net>
>> wrote:
>>
>> No, you need MaxPermSize and you will definitely run into using swap
>> since your OS uses up memory as well. You better go with 2g and 512M
>> for Xmx and MaxPerm for now and you can tune from there.
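>>
>> In other words, something like this as a starting point (just a sketch --
>> the same opts you already have, with the heap capped at 2g and
>> MaxPermSize added):
>>
>> export JAVA_OPTS='-server -d64 -Xmx2g -XX:MaxPermSize=512m -Djava.awt.headless=true -Dhttp.agent=Sakai'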
>>
>> -AZ
>>
>>
>>
>> On Fri, Nov 1, 2013 at 11:12 AM, Warwick Chapman
>> <warwickchapman at gmail.com> wrote:
>>
>> Aaron, if I set the VMs to 8G MEM and SWAP, will the following be OK:
>>
>> export JAVA_OPTS='-server -d64 -Xmx4g -Djava.awt.headless=true -Dhttp.agent=Sakai'
>>
>>
>> -- Warwick Bruce Chapman | +27 83 7797 094 | http://warwickchapman.com
>>
>>
>>
>>
>