Re: Large variable problem on Linux

From: Dave Allured - NOAA Affiliate <dave.allured_at_nyahnyahspammersnyahnyah>
Date: Wed Oct 30 2013 - 19:41:24 MDT

Dave B,

> Could you perhaps be contending with some other memory-intensive process?

Well, that seems to have been the problem. Thank you. I think I was
working from a bad assumption about how virtual memory works.

Do you have any suggestions as to how to have NCL make better use of
virtual memory on Linux? It seems that if I try to allocate more than
the indicated free physical memory, I get some kind of memory fault
even though there is another 32 Gbytes shown as "free" swap memory.
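[Editor's note: on Linux this behavior is usually governed by the kernel's
overcommit policy rather than by NCL itself. A quick way to inspect the
relevant state, assuming a standard Linux /proc filesystem:]

```shell
# Show current memory headroom and commit accounting.
# (Paths assume a standard Linux /proc; values vary by system.)
grep -E 'MemFree|SwapFree|CommitLimit|Committed_AS' /proc/meminfo

# Kernel overcommit policy:
#   0 = heuristic overcommit (default), 1 = always allow, 2 = strict accounting
cat /proc/sys/vm/overcommit_memory
```

Under the default heuristic policy (0), an allocation well beyond free
physical memory may be refused up front or granted and then killed when
the pages are actually touched, which can surface as a segfault-like
failure; policy 1 would let the allocation proceed and page to swap.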

--Dave A.

On Wed, Oct 30, 2013 at 6:28 PM, David Brown <dbrown@ucar.edu> wrote:
> Hi Dave,
> I am not encountering a problem on caldera, part of the yellowstone system:
> $ cat allured.ncl
> x = new ((/ 5000, 1000, 1000 /), "float")
> printVarSummary(x)
> print (systemfunc ("date"))
>
> $ uname -a
> Linux caldera10 2.6.32-220.13.1.el6.x86_64 #1 SMP Thu Mar 29 11:46:40 EDT 2012 x86_64 x86_64 x86_64 GNU/Linux
>
> $ limit
> cputime unlimited
> filesize unlimited
> datasize unlimited
> stacksize unlimited
> coredumpsize 0 kbytes
> memoryuse unlimited
> vmemoryuse unlimited
> descriptors 4096
> memorylocked unlimited
> maxproc 2067554
>
> $ ncl allured.ncl
> Copyright (C) 1995-2013 - All Rights Reserved
> University Corporation for Atmospheric Research
> NCAR Command Language Version 6.1.2
> The use of this software is governed by a License Agreement.
> See http://www.ncl.ucar.edu/ for more details.
>
> Variable: x
> Type: float
> Total Size: 20000000000 bytes
> 5000000000 values
> Number of Dimensions: 3
> Dimensions and sizes: [5000] x [1000] x [1000]
> Coordinates:
> Number Of Attributes: 1
> _FillValue : 9.96921e+36
> (0) Wed Oct 30 18:18:23 MDT 2013
>
> The stacksize limit is different, but I wouldn't have thought that would make a difference, since this is memory allocated on the heap.
>
> I tried this interactively as well to see if that made any difference, but it did not.
> This machine does have 64 GB available, and there were no other user-level processes running at the time. Could you perhaps be contending with some other memory-intensive process?
> -dave
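
[Editor's note: Dave B's point that stacksize shouldn't matter can be
checked from the shell. Heap allocations are bounded by the address-space
limit (`vmemoryuse`, i.e. `ulimit -v`), not by `stacksize`. A rough
demonstration, assuming GNU dd on Linux:]

```shell
# Cap the address space at ~256 MB in a subshell, then ask dd to
# allocate a 600 MB I/O buffer on the heap. The allocation fails
# even though the stack limit is left untouched.
( ulimit -v $((256 * 1024))          # limit is in kilobytes
  dd if=/dev/zero of=/dev/null bs=600M count=1 )
echo "dd exit status: $?"            # nonzero: the malloc failed
```

The same subshell trick (`ulimit -v <kbytes>` before launching `ncl`)
can be used to make an oversized NCL allocation fail cleanly with an
error instead of faulting partway through.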
>
>
> On Oct 30, 2013, at 6:05 PM, Dave Allured - NOAA Affiliate <dave.allured@noaa.gov> wrote:
>
>> Additional info on "limits" status:
>>
>> batchb:~/stime 1138> limit
>> cputime unlimited
>> filesize unlimited
>> datasize unlimited
>> stacksize 10240 kbytes
>> coredumpsize 0 kbytes
>> memoryuse unlimited
>> vmemoryuse unlimited
>> descriptors 1024
>> memorylocked 32 kbytes
>> maxproc 257487
>>
>> --Dave
>>
>> On Wed, Oct 30, 2013 at 5:54 PM, Dave Allured - NOAA Affiliate
>> <dave.allured@noaa.gov> wrote:
>>> NCL team,
>>>
>>> This program allocates a variable of about 20 GB. It fails on
>>> line 2, on 64-bit Linux with 32 GB physical memory. But it runs
>>> correctly on 64-bit Mac OS with only 8 GB memory, albeit with much
>>> disk swapping.
>>>
>>> batchb:~/stime 1128> ncl
>>> NCAR Command Language Version 6.1.2
>>> ncl 0> x = new ((/ 5000, 1000, 1000 /), "float")
>>> ncl 1> print (systemfunc ("date"))
>>> Segmentation fault
>>>
>>> The NCL version is 6.1.2 on both machines. But on Mac, it is the
>>> special workaround version, ncl.xq.fix, for what it's worth.
>>>
>>> More Linux info:
>>> batchb:~/stime 1137> uname -a
>>> Linux batchb.psd.esrl.noaa.gov 2.6.18-371.el5 #1 SMP Thu Sep 5 21:21:44 EDT 2013 x86_64 GNU/Linux
>>>
>>> There is nothing about this in the Known Bugs pages or the FAQ. Is
>>> the above a legal program? Is this a bug in NCL? Worth fixing? Is
>>> there a workaround? Thanks for your consideration.
>>>
>>> --Dave
>
_______________________________________________
ncl-talk mailing list
List instructions, subscriber options, unsubscribe:
http://mailman.ucar.edu/mailman/listinfo/ncl-talk
Received on Wed Oct 30 19:41:34 2013

This archive was generated by hypermail 2.1.8 : Fri Nov 01 2013 - 08:58:14 MDT