Cuda support for Docking@home


Message boards : Wish list : Cuda support for Docking@home

OOP

Joined: Jun 19 09
Posts: 6
ID: 13687
Credit: 80,872
RAC: 0
Message 5092 - Posted 28 Jun 2009 16:00:55 UTC

Guys, we need CUDA support. SETI is very speedy with CUDA; Folding@home is 4,000 points behind SETI, and I started running them together. SETI@home runs anywhere from 2x to 10x faster than the CPU-only version.

WE NEED CUDA SUPPORT!

There's got to be someone out there on the internet who can make it happen.

- Thx

Travis

OOP

Joined: Jun 19 09
Posts: 6
ID: 13687
Credit: 80,872
RAC: 0
Message 5096 - Posted 29 Jun 2009 17:21:57 UTC - in response to Message ID 5092 .

RE: Docking@home CUDA Support
From: Brian Burke (BBurke@nvidia.com)
Sent: Mon 6/29/09 10:33 AM
To: 'OptimalOptimusPrimus r' (optimaloptimusprimus@hotmail.com)

Will do.

CUDA is a free download; all that needs to be done is to register to receive support, documentation, sample code, and tools that will help the developer along.

Any developer can grab it here:

http://www.nvidia.com/object/cuda_get.html

I will also pass this along to a group within NVIDIA that may be able to help.

From: OptimalOptimusPrimus r [mailto:optimaloptimusprimus@hotmail.com]
Sent: Sunday, June 28, 2009 11:46 AM
To: Brian Burke
Subject: Docking@home CUDA Support

I'm writing to ask NVIDIA to support Docking@home; it needs CUDA support. SETI and Folding@home run up to 10x faster on my GPU because they have CUDA support, and Docking@home does not. Please forward this to whoever you need to in order to make this happen. Thanks.

http://docking.cis.udel.edu/


- Travis

Profile Michela
Forum moderator
Project administrator
Project developer
Project tester
Project scientist
Avatar

Joined: Sep 13 06
Posts: 163
ID: 10
Credit: 97,083
RAC: 0
Message 5099 - Posted 30 Jun 2009 1:09:11 UTC - in response to Message ID 5096 .

I wish I had the resources for this.

We started writing the MD code for CUDA, and so far we can run simulations of solvents and membranes, but we are far from being able to integrate all the functionality needed for docking.

Michela
____________
If you are interested in working on Docking@Home in a great group at UDel, contact me at 'taufer at acm dot org'!

OOP

Joined: Jun 19 09
Posts: 6
ID: 13687
Credit: 80,872
RAC: 0
Message 5100 - Posted 30 Jun 2009 14:34:15 UTC - in response to Message ID 5099 .

You should really email him and ask for help.

- T

From: Brian Burke (BBurke@nvidia.com)
Sent: Mon 6/29/09 10:33 AM
To: 'OptimalOptimusPrimus r' (optimaloptimusprimus@hotmail.com)

CUDA is a free download; all that needs to be done is to register to receive support.

OOP

Joined: Jun 19 09
Posts: 6
ID: 13687
Credit: 80,872
RAC: 0
Message 5101 - Posted 1 Jul 2009 13:39:21 UTC - in response to Message ID 5100 .

http://viewmorepics.myspace.com/index.cfm?fuseaction=viewImage&friendID=166877672&albumID=1184535&imageID=30921683

My BOINC stats: Docking@home is falling way behind SETI.

- T

wojciech_czyz

Joined: Feb 22 10
Posts: 1
ID: 26132
Credit: 0
RAC: 0
Message 5726 - Posted 22 Feb 2010 13:20:59 UTC

There may be a new way to quickly tackle the problem: the OpenMM library.
https://simtk.org/home/openmm

It could possibly be integrated into this project to reuse its CUDA and OpenCL optimizations automatically. It is already used in GROMACS and is being adopted in the Folding@home GPU3 core.

Is it possible to use it in this project and/or CHARMM?
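
For reference, here is a minimal sketch of how OpenMM's platform abstraction works, using its C++ API. This is purely an illustration with an assumed toy two-particle system, not Docking@home or CHARMM code; the point is that only the platform name changes between back ends.

#include <OpenMM.h>
#include <vector>
#include <iostream>

int main() {
    // Load the plugin libraries that provide the CUDA/OpenCL platforms.
    OpenMM::Platform::loadPluginsFromDirectory(
        OpenMM::Platform::getDefaultPluginsDirectory());

    // Toy system: two argon-like particles with a Lennard-Jones interaction.
    OpenMM::System system;
    system.addParticle(39.95);            // mass in amu
    system.addParticle(39.95);
    OpenMM::NonbondedForce* nb = new OpenMM::NonbondedForce();
    nb->addParticle(0.0, 0.34, 0.996);    // charge, sigma (nm), epsilon (kJ/mol)
    nb->addParticle(0.0, 0.34, 0.996);
    system.addForce(nb);                  // the System takes ownership

    OpenMM::VerletIntegrator integrator(0.002);   // 2 fs step (ps units)

    // The only line that differs between back ends: "CUDA", "OpenCL" or "Reference".
    OpenMM::Platform& platform = OpenMM::Platform::getPlatformByName("CUDA");
    OpenMM::Context context(system, integrator, platform);

    std::vector<OpenMM::Vec3> positions(2);
    positions[0] = OpenMM::Vec3(0.0, 0.0, 0.0);
    positions[1] = OpenMM::Vec3(0.5, 0.0, 0.0);
    context.setPositions(positions);

    integrator.step(1000);                // 1000 MD steps on the selected device

    OpenMM::State state = context.getState(OpenMM::State::Energy);
    std::cout << "Potential energy: " << state.getPotentialEnergy() << " kJ/mol\n";
    return 0;
}

Swapping the string "CUDA" for "OpenCL" (or "Reference" as a CPU fallback) selects a different back end without touching the physics setup, which is why OpenMM is attractive for reusing existing GPU optimizations.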

Fred Verster
Avatar

Joined: May 8 09
Posts: 26
ID: 11034
Credit: 2,647,353
RAC: 0
Message 5727 - Posted 23 Feb 2010 23:06:57 UTC
Last modified: 23 Feb 2010 23:39:37 UTC

First of all, welcome to Number Crunching at Docking@Home, by the way.

Hi. IMHO it takes someone who fully understands the science app and has enough knowledge to 'see' whether some parts, or the whole app, can be ported to OpenCL.
If there are too many situations where one result has to wait for another, parallel processing can be hard or outright impossible, because that's where CUDA (or CAL) gets its speed gain.
Just my 2 (Euro)cents.
____________

Knight who says N! Ni Ni
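
To make the dependency point concrete, here is a toy CUDA sketch (purely illustrative, unrelated to the actual Docking@home application) contrasting work where every element is independent with a recurrence where each result has to wait for the previous one:

#include <cuda_runtime.h>

// Independent work: each element can be handled by its own thread,
// which is exactly the pattern GPUs accelerate well.
__global__ void scale(float* x, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] = a * x[i];          // no element depends on any other
}

// Dependent work: every step needs the previous result, so the loop
// cannot simply be split across thousands of threads.
void recurrence(float* x, float a, int n) {
    for (int i = 1; i < n; ++i)
        x[i] = a * x[i - 1] + x[i];      // x[i] waits for x[i-1]
}

int main() {
    const int n = 1 << 20;
    float* d_x;
    cudaMalloc(&d_x, n * sizeof(float));
    cudaMemset(d_x, 0, n * sizeof(float));

    // The independent version maps directly onto the GPU.
    scale<<<(n + 255) / 256, 256>>>(d_x, 2.0f, n);
    cudaDeviceSynchronize();

    // The dependent version has to stay serial (shown here on the host).
    float h_x[8] = {1, 1, 1, 1, 1, 1, 1, 1};
    recurrence(h_x, 2.0f, 8);

    cudaFree(d_x);
    return 0;
}

The first pattern spreads cleanly over thousands of GPU threads; the second does not, and that is exactly the kind of dependency that can make a port hard or impossible without restructuring the algorithm.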

Profile [VENETO] boboviz

Joined: Sep 25 08
Posts: 59
ID: 1577
Credit: 202,735
RAC: 0
Message 5923 - Posted 11 May 2010 10:35:25 UTC - in response to Message ID 5727 .


Hi. IMHO it takes someone who fully understands the science app and has enough knowledge to 'see' whether some parts, or the whole app, can be ported to OpenCL.


http://www.opencldev.com/
OpenCL is cross-platform...
Profile [VENETO] boboviz

Joined: Sep 25 08
Posts: 59
ID: 1577
Credit: 202,735
RAC: 0
Message 5953 - Posted 15 Jun 2010 16:03:27 UTC - in response to Message ID 5923 .

OpenCL is cross-platform...


A new version of OpenCL is out: OpenCL 1.1.
zpm

Joined: Mar 13 09
Posts: 13
ID: 8257
Credit: 474,576
RAC: 0
Message 5964 - Posted 19 Jul 2010 1:34:52 UTC - in response to Message ID 5953 .

It's not that easy to compile the program for CUDA, OpenCL, or anything else; we're having a lot of trouble with that over at DrugDiscovery@home and Hydrogen@home.
____________

I recommend Secunia PSI: http://secunia.com/vulnerability_scanning/personal/

Profile [VENETO] boboviz

Joined: Sep 25 08
Posts: 59
ID: 1577
Credit: 202,735
RAC: 0
Message 5966 - Posted 19 Jul 2010 20:45:15 UTC - in response to Message ID 5964 .

It's not that easy to compile the program for CUDA, OpenCL, or anything else,


We know it's not easy, but the boost in computational power from GPUs is enormous....
zombie67 [MM]
Volunteer tester
Avatar

Joined: Sep 18 06
Posts: 207
ID: 114
Credit: 2,817,648
RAC: 0
Message 6041 - Posted 7 Oct 2010 6:07:17 UTC - in response to Message ID 5966 .

It's not that easy to compile the program for CUDA, OpenCL, or anything else,


We know it's not easy, but the boost in computational power from GPUs is enormous....



Only *if* the app can benefit from parallel processing. Many can't.
____________
Dublin, CA
Team SETI.USA
Profile [VENETO] boboviz

Joined: Sep 25 08
Posts: 59
ID: 1577
Credit: 202,735
RAC: 0
Message 6522 - Posted 10 Dec 2011 9:55:46 UTC - in response to Message ID 6041 .

Only *if* the app can benefit from parallel processing. Many can't.


OK, but if I see a Michela Taufer session at the NVIDIA GPU Technology Conference, I think she is the Michela of Docking and that she is working on a GPU project....
Profile [VENETO] boboviz

Joined: Sep 25 08
Posts: 59
ID: 1577
Credit: 202,735
RAC: 0
Message 6990 - Posted 22 Nov 2012 17:08:02 UTC - in response to Message ID 6522 .

OK, but if I see a Michela Taufer session at the NVIDIA GPU Technology Conference, I think she is the Michela of Docking and that she is working on a GPU project....


There are 3 possibilities:
1) They have no GPU-skilled developer, they realized it is impossible to bring the code to the GPU, etc., and they have abandoned the "GPU idea"
2) They are working hard on the GPU code and will soon present a GPU app
3) They are working VERY slowly on the GPU code (the last GPU-related admin post is from 2009)

In the meantime, they could at least tell us something!!!
rubyroberts

Joined: Jan 22 13
Posts: 1
ID: 73192
Credit: 0
RAC: 0
Message 7021 - Posted 22 Jan 2013 16:10:55 UTC - in response to Message ID 5092 .

Guys, we need CUDA support. SETI is very speedy with CUDA; Folding@home is 4,000 points behind SETI, and I started running them together. SETI@home runs anywhere from 2x to 10x faster than the CPU-only version.

WE NEED CUDA SUPPORT!

There's got to be someone out there on the internet who can make it happen.

- Thx

Travis

DeAxes

Joined: Apr 24 12
Posts: 4
ID: 54858
Credit: 197,938
RAC: 0
Message 7031 - Posted 14 Feb 2013 10:59:35 UTC

I've been wondering why there isn't CUDA support. If they upgraded CHARMM from version 34a2 (a developmental release from 2007) to the latest version, they would not only gain better performance (I'm not sure about that point) but also the ability to use OpenMM. OpenMM is what is used in Folding@home to enable GPU acceleration. Is there any reason to stay on a developmental release from 2007?

Profile [VENETO] boboviz

Joined: Sep 25 08
Posts: 59
ID: 1577
Credit: 202,735
RAC: 0
Message 7039 - Posted 19 Feb 2013 15:23:40 UTC - in response to Message ID 7031 .

OpenMM is what is used in Folding@home to enable GPU acceleration. Is there any reason to stay on a developmental release from 2007?


OpenMM also supports OpenCL!
Simba123

Joined: Dec 7 11
Posts: 23
ID: 47237
Credit: 2,607,800
RAC: 0
Message 7042 - Posted 21 Feb 2013 14:18:36 UTC - in response to Message ID 7039 .

OpenMM is what is used in Folding@home to enable GPU acceleration. Is there any reason to stay on a developmental release from 2007?


OpenMM also supports OpenCL!



More than likely a lack of funds/people to do the upgrade.

It takes a fair bit of time, knowledge, and cash to upgrade servers: all things that, sadly, D@H seems to be lacking.
Profile [VENETO] boboviz

Joined: Sep 25 08
Posts: 59
ID: 1577
Credit: 202,735
RAC: 0
Message 7045 - Posted 25 Feb 2013 9:23:34 UTC - in response to Message ID 7042 .

More than likely a lack of funds/people to do the upgrade.

It takes a fair bit of time, knowledge, and cash to upgrade servers: all things that, sadly, D@H seems to be lacking.


I agree with you.
But I think that if a GPU client is a "lot of work", they could start with an upgrade of the CPU client and move on to the GPU afterwards....
Profile robertmiles

Joined: Apr 16 09
Posts: 96
ID: 9967
Credit: 1,290,747
RAC: 0
Message 7182 - Posted 10 Dec 2013 22:41:40 UTC
Last modified: 10 Dec 2013 22:44:11 UTC

I've taken an online course in CUDA, and am looking for an online course in OpenCL. I've found that the minimum compiler that can handle the C or C++ portion of CUDA workunits costs about $400, for the Windows version only. I'm not sure if there is a suitable C or C++ compiler for using CUDA under Linux, but if there is, it is likely to be free.

I'd be interested in looking at the source code of CHARMM to see if I might be able to convert it to a CUDA version.

Also, are there any license restrictions on distributing the source code, and does it use pthreads or any other threading library?

Does it have many sections where a list of things can be done in any order, or even all at once, because the things in the list do not write to the same variables as any of the other things in the list?

How much would you need to upgrade the servers to handle the additional volume of workunits a GPU version would allow?
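
As an illustration of the kind of "independent list" asked about above, here is a hypothetical CUDA sketch (not CHARMM code, and the atom structure and pair term are made up): each thread reads shared, read-only input and writes only to its own output slot, so the items can run in any order or all at once.

#include <cuda_runtime.h>

struct Atom { float x, y, z; };

// Hypothetical per-atom quantity: every thread reads all atoms (read-only)
// but writes only out[i], its own slot, so there are no write conflicts.
__global__ void perAtomEnergy(const Atom* atoms, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float e = 0.0f;
    for (int j = 0; j < n; ++j) {
        if (j == i) continue;
        float dx = atoms[i].x - atoms[j].x;
        float dy = atoms[i].y - atoms[j].y;
        float dz = atoms[i].z - atoms[j].z;
        e += 1.0f / (dx * dx + dy * dy + dz * dz + 1e-6f);  // toy pair term
    }
    out[i] = e;                           // the only write: this thread's slot
}

int main() {
    const int n = 1024;
    Atom* d_atoms;
    float* d_out;
    cudaMalloc(&d_atoms, n * sizeof(Atom));
    cudaMalloc(&d_out, n * sizeof(float));
    cudaMemset(d_atoms, 0, n * sizeof(Atom));   // dummy coordinates

    perAtomEnergy<<<(n + 127) / 128, 128>>>(d_atoms, d_out, n);
    cudaDeviceSynchronize();

    cudaFree(d_atoms);
    cudaFree(d_out);
    return 0;
}

By contrast, anything that accumulates into a single shared variable would need atomics or a separate reduction step, and anything where one item's result feeds the next cannot be split up this way at all.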

Profile robertmiles

Joined: Apr 16 09
Posts: 96
ID: 9967
Credit: 1,290,747
RAC: 0
Message 7184 - Posted 11 Dec 2013 0:36:12 UTC
Last modified: 11 Dec 2013 0:43:37 UTC

I looked up CHARMM on the web. It uses FORTRAN77, which I've used in the past, but I'm not familiar with the current generation of compilers. The licensing terms appear to make it available only to students and academic researchers, so does UDel offer any online courses that could make me qualify?

It appears to be mainly for Linux, but I could try compiling it under the Cygwin or MSYS Linux emulations for Windows.

By the way, if CHARMM has too few items that can be performed in parallel, the GPU version could run at as little as one fourth the speed of the CPU version on typical computers.
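
A rough way to quantify that caveat is an Amdahl-style estimate. The sketch below is only an illustration: p (the fraction of the run that parallelizes), N (the number of GPU threads effectively used) and k (how many times slower one GPU core is than one CPU core) are assumed quantities, not measurements of CHARMM.

% T_CPU is normalized to 1; the whole run is moved to the GPU.
\[
  T_{\mathrm{GPU}} = k\,(1 - p) + \frac{k\,p}{N},
  \qquad
  S = \frac{T_{\mathrm{CPU}}}{T_{\mathrm{GPU}}} = \frac{1}{k\,(1 - p) + \dfrac{k\,p}{N}}
\]

If p is close to 1 the GPU wins by a wide margin, but if p is close to 0 the speedup tends to 1/k, so with a GPU core assumed to be about 4 times slower than a CPU core the GPU build would indeed run at roughly one fourth of the CPU speed.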
