Ten Tusscher Propagation in a 3072 Element Biventricular Mesh

Description

Submit an EP Simulation to the ROCCE Cluster

Gather Files on ROCCE

[guest@login-0-0 ~]$ cd continuity/
[guest@login-0-0 continuity]$ cp /share/apps/si2012/cardiac/EP_TenTusscher_Panfilov_Epi_sympy_GPU.zip EP_TenTusscher_Panfilov_Epi_sympy_GPU.zip
[guest@login-0-0 continuity]$ cp /share/apps/si2012/cardiac/SubmitSerial.qsub SubmitSerial.qsub
[guest@login-0-0 continuity]$ cp /share/apps/si2012/cardiac/EP_BiV3072.py EP_BiV3072.py

[guest@login-0-0 continuity]$ unzip EP_TenTusscher_Panfilov_Epi_sympy_GPU.zip
[guest@login-0-0 continuity]$ mv EP_TenTusscher_Panfilov_Epi_sympy_GPU/ pcty/server/problem/Electrophysiology/

Inspect the Run and Submit Scripts

[guest@login-0-0 continuity]$ vi EP_BiV3072.py

#************************************************************
# NBCR Summer Institute
# Category: EP
# Date: 8/1/2012
# Description: Solve the monodomain equation on a BiV mesh with
#              extraordinary nodes and the TenTusscher ionic model
#
#************************************************************

import os
import sys
import numpy

#************** define parameters for the EP solve step **************

# Output file name
fName = 'EP_BiV3072'

# Use GPU acceleration for the ionic model
CUDA = 1

# Set simulation time and step size (ms)
tstart = 0.0
duration = 60
stepsize = 0.05

# Save voltage renderings at given intervals. Rendering is very
# memory intensive, so we render the voltage every 10 steps (0.5 ms).
renderfile = 1
rendercount = 10

#************** end define parameters for the EP solve step **************

# Load the model from the database, send it to the server, and calculate the mesh
self.Load_File({'model_id':'1173', 'username':'guest', 'password':'guest', 'version':'1'}, log=0)
self.Send(None, log=0)
self.CalcMesh([('Calculate', None), ('Do not Calculate', None), ('Do not Calculate', None), ('Angle change scale factors (for nodal derivs wrt angle change)', None)], log=0)

# Perform the simulation

self.SinitElectrophys(log=0)
self.Send(None, log=0)
self.SintElectrophys({'implicitType':'Implicit','parallelLinearSolver':0,'conductivityBasis':3, \
        'solutions':{'writeFile': renderfile, 'counter': rendercount, 'tableResult': 0, 'renderResult': 0},  \
        'stateVarInputSelections':[],'stateVarDoTable':0,'parallelODESolver':0,'tstart':tstart,'useCuda':CUDA,  \
        'stateVarOutputSelections':[],'serverKeyname':'electromech_exchange','stateVarList':'1','fileName':fName,  \
        'aps':{'writeFile': 1, 'counter': 1, 'tableResult': 0, 'node_list': 'all', 'renderResult': 0}, \
        'stateVarListType':'collocation points','useGalerkinAssembly':True,'stateVarFrequency':1,'stateVarSelections':[], \
        'dtout':stepsize,'tlen':duration,'reassemble_lhs':1,  \
        'ecgs':{'getHeartVector': False, 'writeFile': 0, 'counter': 1, 'tableResult': 0, 'renderResult': 0}}, log=0)
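The timing parameters above determine how many solver steps run and how much rendering output the job produces. A quick sanity check in plain Python (this snippet is not part of the tutorial script; it only mirrors the variable names defined above):

```python
# Timing parameters copied from EP_BiV3072.py (units: ms)
duration = 60      # total simulated time
stepsize = 0.05    # output step size
rendercount = 10   # save a voltage rendering every 10 steps

total_steps = round(duration / stepsize)        # 1200 solver output steps
render_interval = rendercount * stepsize        # 0.5 ms between renderings
rendered_frames = total_steps // rendercount    # 120 saved voltage renderings

print(total_steps, render_interval, rendered_frames)  # 1200 0.5 120
```

This confirms the comment in the script: rendering every 10 steps is one frame per 0.5 ms, or 120 frames over the 60 ms run.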

[guest@login-0-0 continuity]$ vi SubmitSerial.qsub

#!/bin/sh
#
# EXAMPLE OPEN MPI SCRIPT FOR SGE
# Modified by Basement Supercomputing 1/2/2006 DJE
# Modified by cmrg 19/June/2008 FVL

# Your job name
#$ -N EP_BiV_run1

# Export all environment variables to the job
#$ -V

# Use current working directory
#$ -cwd

# Join stdout and stderr
#$ -j y

# Use our GPU queue, which includes GPU and CPU nodes
# -q gpu@compute-1-5.local
#$ -q gpu

# To use CUDA nodes only
#$ -l cuda

# Set your number of slots here.
# Requests the orte parallel environment with one slot
#$ -pe orte 1

# Run job through bash shell
#$ -S /bin/bash

# Prepend the CUDA and Open MPI library paths
export LD_LIBRARY_PATH=/opt/cuda/lib64:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=/opt/openmpi-myrinet_mx/lib:$LD_LIBRARY_PATH
export MX_RCACHE=0

# Run Continuity in batch (non-interactive) mode on the solve script
./continuity --full --no-threads --batch /home/*yourUserName*/continuity/EP_BiV3072.py
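SGE reads its scheduler options from the `#$` comment lines at the top of the submit script; plain `#` lines (like the commented-out `-q gpu@compute-1-5.local`) are ignored. A minimal sketch in plain Python (not part of the tutorial) showing which directives a script like SubmitSerial.qsub actually activates:

```python
def sge_directives(script_text):
    """Collect the active '#$' directive lines from an SGE submit script."""
    directives = []
    for line in script_text.splitlines():
        line = line.strip()
        if line.startswith("#$"):
            directives.append(line[2:].strip())
    return directives

example = """#!/bin/sh
#$ -N EP_BiV_run1
#$ -V
#$ -cwd
# -q gpu@compute-1-5.local
#$ -q gpu
"""
print(sge_directives(example))  # ['-N EP_BiV_run1', '-V', '-cwd', '-q gpu']
```

Note that the commented-out node-specific queue line is skipped, so only the general `gpu` queue request reaches the scheduler.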

Submit EP Job to ROCCE

[guest@login-0-0 continuity]$ qsub SubmitSerial.qsub
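Because the script sets `-j y`, SGE writes the job's combined stdout/stderr to a file named after the `-N` directive, e.g. `EP_BiV_run1.o<jobid>`. A small hypothetical helper (plain Python; the filename pattern follows SGE's `<jobname>.o<jobid>` convention) for locating those output files after submission:

```python
import glob
import os

def job_output_files(job_name, directory="."):
    """Return SGE stdout files ('<job_name>.o<jobid>') for a job, newest first."""
    pattern = os.path.join(directory, job_name + ".o*")
    return sorted(glob.glob(pattern), key=os.path.getmtime, reverse=True)

# Usage on ROCCE once the job has started, e.g.:
#   job_output_files("EP_BiV_run1")
# returns [] until SGE has created the first output file.
```

You can also check the job's queue state with `qstat` on the login node.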

Inspect the Cont6 File

Load from Repository

Inspect Mesh

Inspect Electrophysiology

Render Voltage Solution

[guest@login-0-0 continuity]$ cd $HOME/.continuity/working

[guest@login-0-0 working]$ scp -r *yourUserName*@rocce.ucsd.edu:/home/*yourUserName*/.continuity/working/Vsoln_BiV_3072_50.pickle .
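Before rendering, the copied solution file can be opened in Python to check that the transfer succeeded. A minimal sketch (the internal layout of the real `Vsoln_*.pickle` file is not documented in this tutorial, so the inspection below is generic; the stand-in file and its keys are illustrative only):

```python
import pickle

def inspect_pickle(path):
    """Load a pickled solution file and report its top-level structure."""
    with open(path, "rb") as f:
        data = pickle.load(f)
    if isinstance(data, dict):
        return {key: type(value).__name__ for key, value in data.items()}
    return type(data).__name__

# Demonstrate on a stand-in file (the real Vsoln pickle layout may differ):
with open("demo_soln.pickle", "wb") as f:
    pickle.dump({"t": [0.0, 0.5], "Vm": [[-85.0], [-84.7]]}, f)
print(inspect_pickle("demo_soln.pickle"))  # {'t': 'list', 'Vm': 'list'}
```

The same call on the downloaded `Vsoln_BiV_3072_50.pickle` shows what the rendering step will consume.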

Render Activation Map

Continuity/Documentation/Tutorials/BiV3000 (last edited 2015-07-07 19:41:02 by JeffVanDorn)