[Beowulf] first cluster
hahn at mcmaster.ca
Mon Jul 19 09:47:53 EDT 2010
> It's a very neat idea, but it has the disadvantage - unless I'm
>misunderstanding - that if the job fails, and leaves droppings in, say, /tmp
>on the cluster node, the user can't log in to diagnose things or clean up
my organization has ~4k users (~300-500 active at any time), and does not
attempt to prevent user access to compute nodes. it just doesn't
seem like a real, worth-solving problem. heck, we have more trouble
with users running jobs on _login_ nodes, rather than compute nodes.
(many of our systems came with a pam-slurm module which did this;
we remove it.)
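(for reference, a minimal sketch of the kind of pam-slurm line we rip out;
the exact service file and path are assumptions and vary by distribution:)

```
# /etc/pam.d/sshd -- assumed location; distros differ
# pam_slurm denies ssh to a compute node unless the user has a job
# running there. commenting or deleting this line restores open access.
account    required    pam_slurm.so
```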
I don't think this is at all surprising. if a user groks clusters
at all, they'll know that cheating is not very effective (and not very
scalable) and stands a good chance of bringing trouble.
those who don't grok wind up running on the login nodes
(where we have fairly tight RLIMIT_AS and CPU...)
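(a sketch of what those login-node limits look like in practice, using the
shell's ulimit builtin in a subshell; the numbers are illustrative, not our
actual settings:)

```shell
# cap address space (RLIMIT_AS, ulimit -v, in KB) and cpu time
# (RLIMIT_CPU, ulimit -t, in seconds) for everything in the subshell.
(
  ulimit -v 4194304   # ~4 GB address space
  ulimit -t 1800      # 30 minutes of cpu
  ulimit -v           # prints 4194304
  ulimit -t           # prints 1800
)
```

(persistent limits would normally go in /etc/security/limits.conf via
pam_limits rather than a shell snippet, but the effect is the same.)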
regards, mark hahn.
Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf