# castep_tutorial

**Repository Path**: pscires/castep_tutorial

## Basic Information

- **Project Name**: castep_tutorial
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: MIT
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2021-06-21
- **Last Updated**: 2023-07-28

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# Follow the official CASTEP tutorial

## keywords: castep, geometry optimization, DFT

## words before the project

Before the tutorial, we need a Linux system (CentOS) with gcc-8 and gfortran-8. Other prerequisite software for CASTEP includes BLAS, LAPACK, OpenBLAS, cmake, and openmpi. I install them into my own defined directory, ~/softwares/, so that the executables live in ~/softwares/bin and the libraries in ~/softwares/lib. It is better not to install them into many separate directories like ~/softwares/OpenBLAS, ~/softwares/LAPACK and so on, which would make the .bash_profile messy. It is even better to define a few extra directories like ~/softwares/opt, in which we can manage CASTEP, Python, and other user-specified software.

In this project, I put some notes and simple results for other beginners like me who want to try something new. The Internet does change the world.

## working log of year 21

### 21Jun21

Installed the software over these three days and wrote this log.

### 21Jun22

1) Not entirely sure how I solved the "can't open the display" error, but it is solved.
2) Installed Jmol, see also: https://snapcraft.io/install/jmol/centos
3) Added the following lines to .bash_profile on the server:

```
export LANG="en_US.UTF-8"
export XAUTHORITY=$HOME/.Xauthority
```

### 21Jun23

Played with the look of the bash prompt
and added the following line to .bash_profile:

```
PS1='\e[1;34m\u:\w\n\e[0m\D{%m/%d}> '
```

see also: https://phoenixnap.com/kb/change-bash-prompt-linux

### 21Jul6

1) Have been working on building the compiling environment on the new xps8940 machine. CentOS7, Ubuntu Server, and Fedora Workstation were installed on it one by one, and I finally chose CentOS7. One pity is that CentOS7 will reach end of support in June 2024. Most annoying is that the installed CentOS7 came with devtoolset-8-gcc but failed at compiling CASTEP. Fortunately, devtoolset-7-gcc works, for both the serial and the MPI version.
2) Met the "can't open display" error several times. Not entirely sure about the solution, but the following steps should be helpful for macOS (client) and CentOS (server):
a) add to the client's ~/.ssh/config file:

```
ForwardX11 yes
```

b) change the server's /etc/ssh/sshd_config:

```
X11Forwarding yes
X11DisplayOffset 10
X11UseLocalhost yes
```

c) restart the server's ssh service (or simply reboot the server):

```
service sshd restart
```

or

```
/etc/init.d/sshd restart
```

or

```
sudo systemctl restart sshd
```

(see also: https://www.cyberciti.biz/faq/howto-restart-ssh/)
d) close all the terminals that connect to the server from the client, then ssh again; this works for me.

### 21Jul7

Working with "Tutorial 3: Convergence Testing":
1) Found that CASTEP chooses 3 cut-off energies to calculate the finite basis set correction.
2) While programming the high-throughput trials, learned again to use "p = subprocess.Popen(cmd, shell=True)" to launch a process and "p.communicate()" to wait for it to finish.
3) Letting the process run on one core should be faster than letting it wander across different cores. To do this we need:

```
taskset -c 13 castep.serial quartz
```

which makes CASTEP run on core 13 of the CPU. (-c can be replaced by --cpu-list.) CPU/processor affinity is the keyword to search for. Maybe we should disable the multicore feature.
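The subprocess launch and the taskset pinning above can be combined into one small helper for the high-throughput trials. A minimal sketch, assuming a Linux box where taskset (util-linux) is on the PATH; the helper names are mine, not from the tutorial:

```python
import shlex
import subprocess

def pinned_command(command, core):
    """Build the argv list that runs `command` pinned to one CPU core."""
    return ["taskset", "-c", str(core)] + shlex.split(command)

def launch_and_wait(command):
    """Launch a shell command and block until it finishes, as in the trials."""
    p = subprocess.Popen(command, shell=True)
    p.communicate()  # wait for the process to finish
    return p.returncode

# e.g. launch_and_wait(" ".join(pinned_command("castep.serial quartz", 13)))
print(pinned_command("castep.serial quartz", 13))
```

Building the argv list separately makes it easy to loop over cut-off energies and cores when scanning for convergence.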
### 21Jul8

Had a discussion with ZHANG Qi on WeChat about "multiprocessing.Pool()". He gave me some advice; I tried several tests but failed. This noon he sent me a message showing the correct way to communicate between processes, using "multiprocessing.Manager().Queue()". The communication may slow the calculation down a bit. My way is to divide the processes evenly among the processors, whereas "Pool()" sends a new process to a processor just after the previous one finishes; a new process then has some chance of landing on a processor that is already busy with other processes. From this point of view, my way may not be much faster than his. To avoid sending several processes to the same processor, it may be better to let the processes communicate before they are launched (which is suitable for large-scale parallel calculations).

### 21Jul9

1) Added an image to a markdown file. see also: https://marinegeo.github.io/2018-08-10-adding-images-markdown/
2) Finished tutorial-3; much of the info in the (.castep) file is still unknown to me. A simple case of geometry optimization for the quartz system is also completed; see the note for details. see also: http://www.tcm.phy.cam.ac.uk/castep/Geom_Opt/GEOM_OPT.html and the note in notes_t_silicon/
3) About the pseudopotential file, see also: http://www.castep.org/CASTEP/FAQPseudopotentials
To use the built-in library, just comment out the following lines:

```
#%BLOCK SPECIES_POT
# Si Si_00.usp
#%ENDBLOCK SPECIES_POT
```

4) About managing processes in mpirun, we need some tutorials. see also: https://docs.oracle.com/cd/E19356-01/820-3176-10/ExecutingPrograms.html#50413574_76503

### 21Jul11

1) To install "GNOME Desktop" on CentOS7:

```
sudo yum -y groups install "GNOME Desktop"
echo "exec gnome-session" > ~/.xinitrc
```

then type "startx" to launch a graphical desktop. (For now, only the screen connected to my computer can output a graphical desktop;
if connected remotely, an xauth error occurs and I haven't solved the problem.) I did not set this as my default login method, which could be done with the following command:

```
systemctl set-default graphical.target
```

2) Continue reproducing the results in "Geometry Optimisation".
3) "To specify a restart and not a fresh calculation", we need an existing (.check) file and the following line in the (.param) file:

```
continuation : default
```

### 21Jul12

1) Geometry optimization of the Si-001 plane with hydrogen passivation. Not successful for now.
2) mpirun to control which cores the processes run on:

```
mpirun -n 4 -display-devel-map --map-by socket --bind-to cpu-list:ordered --cpu-list "4,5,6,7" castep.mpi si
```

where the option "--map-by socket" may be useless. It seems MPI first maps the processes, then ranks them, and finally binds them to cores. see also: https://www.open-mpi.org/doc/v4.0/man1/mpirun.1.php for "Mapping, Ranking, and Binding: Oh My!" A rankfile could be useful for this purpose.

### 21Jul13

1) Continue GeomOpt for the Si-001 system.
2) Made some big mistakes on my xps8940 without realizing it; cannot connect to it or connect to others from it. So re-installed CentOS7 on it. So bad!

### 21Jul14

1) The OS tortures me a lot. Connections from other machines to xps8940 always fail, while the other direction seems to work. So I plan to move my main workspace to xps8940 and re-install some software on it.
1.1) "yum install code" may not work. "snap" may help:

```
sudo snap install --classic code
```

see also: https://code.visualstudio.com/docs/setup/linux
1.2) CentOS7 did not detect my second screen, but this was fixed after I installed the Nvidia driver for the xps8940 (GeForce RTX 3070). The docs say the default gcc for CentOS7 is 4.8.5 (see also: https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#runfile-nouveau).
To perform the runfile installation, we first need to disable the Nouveau drivers by creating a file at /usr/lib/modprobe.d/blacklist-nouveau.conf with the following content:

```
blacklist nouveau
options nouveau modeset=0
```

then regenerate the kernel initramfs:

```
sudo dracut --force
```

change to text mode and reboot:

```
systemctl set-default multi-user.target
init 3
reboot
```

and finally run the bash file downloaded from the NVIDIA website:

```
bash NVIDIA-*.run
```

After installation, reset the default login mode:

```
systemctl set-default graphical.target
```

or maybe just "init 5" to get back to graphical mode. see also: https://blog.csdn.net/tony_vip/article/details/104531383 ; https://blog.csdn.net/xueshengke/article/details/78134991
1.3) fftw, openmpi:

```
./configure --prefix=/home/cpan/softwares
```

"--enable-threads", "--enable-openmp", "--enable-mpi" may be useful for specific purposes when using the fftw lib.
1.4) castep-20.11
1.4.1) prerequisites:

```
yum group install "Development Tools"
yum install centos-release-scl
yum install devtoolset-7
yum install lapack lapack-devel blas blas-devel openssl openssl-devel fftw fftw-devel openblas openblas-devel
```

add "source scl_source enable devtoolset-7" to "~/.bash_profile", and comment it out after CASTEP is installed.
1.4.2) serial version:

```
make FFT=fftw3 MATHLIBS=openblas FFTLIBDIR=/usr/lib64 MATHLIBDIR=/usr/lib64
make install INSTALL_DIR=/dir/to/install
```

1.4.3) parallel version:

```
make COMMS_ARCH=mpi SUBARCH=mpi FFT=fftw3 MATHLIBS=openblas FFTLIBDIR=/usr/lib64 MATHLIBDIR=/usr/lib64
make install INSTALL_DIR=/dir/to/install
```

Better not to use the "-j" flag here, even though it would make both "make" and "make check" much faster.
1.5) xmgrace, gnuplot:

```
yum install xmgrace gnuplot
```

1.6) miniconda, openmm:

```
bash Miniconda3-py39_4.9.2-Linux-x86_64.sh
eval "$(/home/cpan/softwares/opt/miniconda3/bin/conda shell.bash hook)"
conda install -c conda-forge openmm cudatoolkit==11.2.2
python -m simtk.testInstallation
```

Cuda Toolkit-11.4 is installed on xps8940, and the related cudatoolkit-11.2.2 is installed by conda. The check/test finished successfully and I do not plan to re-install Cuda Toolkit-11.2.2 on xps8940.
2) It seems the reason "castep.serial" was not running on one core is that I installed it with "make -j". On xps8940,

```
mpirun -n 4 castep.mpi si
```

is the same as the previous complicated command on cls0:

```
mpirun -n 4 -display-devel-map --map-by socket --bind-to cpu-list:ordered --cpu-list "4,5,6,7" castep.mpi si
```

And on xps8940, "mpirun" can run a new job smartly by sending processes to the idle processors while old ones keep running undisturbed (actually, all calculations slow down). It should have 16 CPUs, and I submitted 4 "mpirun -n 4" jobs with --cpu-list set to "0,1", "2,3", "4,5", "6,7" respectively; however, the "Total time" values are:

```
Total time = 442.15 s
Total time = 416.46 s
Total time = 363.72 s
Total time = 417.15 s
```

whereas "castep.serial" costs only ~260 s. Need more tests.

### Jul15

1) With "xc_functional : PBE" and "geom_method : LBFGS" set in the (.param) file, the geometry optimization can be much more efficient!
2) Suddenly found that the GeomOpt for Si-001 in the documentation uses 9 layers but the provided input (.cell) file has only 7 layers. With the 7-layer input file, I cannot reproduce the results given by the documentation.
3) My xps8940 cannot give an efficient parallel-computation result. I'm wondering whether the hardware causes such problems.

### Jul16

1) Volunteered to help the students move their luggage.
2) Busy with the optimization; may have found some errors in the documentation.
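Looking back at the 21Jul8 discussion on distributing processes, a minimal sketch of the "multiprocessing.Manager().Queue()" pattern, on a toy workload (squaring numbers) rather than CASTEP runs; the function names are mine:

```python
from multiprocessing import Manager, Pool

def worker(args):
    """Do some work and report the result back through the shared queue."""
    x, queue = args
    queue.put((x, x * x))

def run_squares(n, processes=2):
    """Farm n tasks out to a Pool; results come back via a Manager queue."""
    with Manager() as manager:
        queue = manager.Queue()  # proxy queue: picklable, so Pool workers can use it
        with Pool(processes=processes) as pool:
            pool.map(worker, [(i, queue) for i in range(n)])
        results = {}
        while not queue.empty():
            x, sq = queue.get()
            results[x] = sq
        return results

if __name__ == "__main__":
    print(run_squares(8))
```

A plain multiprocessing.Queue cannot be passed through Pool.map, which is why the Manager proxy is needed here.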
### Jul17

It should be the high-energy initial state, rather than the thickness of the vacuum, that largely determines which local minimum is reached.

### Jul18

I tried geometry optimization for different initial configurations that deviate slightly from the crystal configuration (randomly perturbed by 1%, 5%, 0.1%). All of them finally produce the dimerization. Climbing hills in the energy landscape is also observed.

### Jul19

1) Learned about the "transition state search". see also: http://www.tcm.phy.cam.ac.uk/castep/documentation/WebHelp/content/modules/castep/dlgcasteptss.htm ; http://www.tcm.phy.cam.ac.uk/castep/documentation/WebHelp/content/modules/castep/tskcasteptss.htm ; https://www.tcm.phy.cam.ac.uk/castep/documentation/WebHelp/content/modules/castep/tskcastepsettss.htm
It is a functional module in CASTEP:

```
task : transitionstatesearch
```

### Jul20

1) Running geometry optimization of LiPS on surfaces (carbon sheet, carbon sheet doped by nitrogen); learned about the balance between "cut_off_energy" and "grid_scale".
1.1) I forgot to run "singlepoint" or "geometryoptimization" for the LiPS singlet, which may indicate how to choose the "cut_off_energy".
2) Try "basis_precision : fine".

### Jul22

1) I wonder whether the "NCP" "species_pot" written in the (.cell) file is unsuitable for the LiPS relaxation. The optimization with the default species_pot is now running. The problem I met is that the lithium atom moves away from the nitrogen atom during optimization, whereas they should approach each other due to electrostatic attraction.
2) After reading two papers, I find that "SEDC_SCHEME" is specifically defined in their work and I did not know this at all:

```
sedc_scheme : G06
```

refs:
a) Interface covalent bonding endowing high-sulfur-loading paper cathode with robustness for energy-dense, compact and foldable lithium-sulfur batteries, Hong Li et al., Chemical Engineering Journal 412 (2021) 128562
b) Cobalt in Nitrogen-Doped Graphene as Single-Atom Catalyst for High-Sulfur Content Lithium-Sulfur Batteries, Zhenzhen Du et al., Journal of the American Chemical Society 2019, 141, 3977--398
see also: documentation for "sedc_scheme", https://www.tcm.phy.cam.ac.uk/castep/documentation/WebHelp/content/modules/castep/keywords/k_sedc_scheme_castep.htm ; DFT-D3, https://www.chemie.uni-bonn.de/pctc/mulliken-center/software/dft-d3/ ; "CASTEP background theory", https://www.tcm.phy.cam.ac.uk/castep/documentation/WebHelp/content/modules/castep/thcastepbackground.htm
2.1) The Li2021 paper also suggests a "Hubbard U correction". see also: https://www.tcm.phy.cam.ac.uk/castep/documentation/WebHelp/content/modules/castep/keywords/k_hubbard_u_castep.htm
2.2) Actually, neither paper emphasizes the "norm conserving pseudo-potential".
2.2.1) They also prescribe "cut_off_energy" as "400 eV" and "kpoints_mp_grid" as "3x3x1" or "5x5x1".
2.2.2) Energy converges to "5e-6 eV/atom" or "1e-5 eV/atom"; force converges to "0.01 eV/A".

### Jul23

1) Worked at home today, waiting for the typhoon.
2) As for the novelty of the project on relaxation of LiPS on surfaces, I think there are two points we can try to investigate:
a) find the promising sub-equilibrium states of LiPS and of LiPS on surfaces;
b) apply an electric field to the system and see what happens.

### Jul24

1) Worked at home today; the typhoon approaches.
2) The adsorption energies between "S8" and the "carbon sheet" and between "S8" and the "N-doped carbon sheet" are very close to each other. I checked the Mulliken charges and found that the sulfur atoms stay neutral, and the closest distance between a sulfur atom and the nitrogen atom is about 0.38 nm. Should I pull them nearer?
3) The following three questions need to be answered:
a) is "metals_method : dm" relevant to our LiPS optimization on surfaces? comment it out or not?
b) lz = 20 or more?
(mine is 15)
c) do we need to optimize the surface first and then the whole system? (haven't done this yet)

### Jul29

1) Back to the office today; a nice day. Read several papers on Li2S by CUI Yi; his work is very enlightening. Why not try a single lithium or sulfur atom on the surfaces? Is there any reasonable strategy for the optimization?
2) Still haven't finished the geometry optimizations. Li2S6 + carbon gives a relatively large binding energy; no idea why.

### Jul30

1) Tried "task : molecular dynamics" for the Li2S relaxation on the carbon sheet. Previous optimization results give almost zero interaction between the Li2S molecule and the carbon sheet. Very weird.
2) By changing the nitrogen atom of the optimized Li2S@NC configuration to a carbon atom, I started a new optimization of Li2S@carbon. The result again gives an even larger binding energy than Li2S@NC's. I'm wondering whether I should do more optimizations for each pair of systems, changing the N atom of the substrate to C and vice versa.
3) It should be helpful to compute a series of single-point energies for a specific configuration of LiPS above the substrate at different distances and orientations.

### Aug2

1) Tried to calculate the single-point energy for the MnO2 substrate model. Met the "SCF not converging" problem again. The official CASTEP website provides some methods to check the convergence (see: www.castep.org/CASTEP/FAQSCF ). The very first solution should be to check your input configuration!!! (e.g., if you casually enlarge lz while keeping the positions_frac block unchanged.)
1.1) Slides of a lecture by Stewart Clark that I found online also point out:

```
Golden rule: Try to use the number of cores that gives you
the highest common factor with the number of k-points
Example of 5-point calculation
10 cores: 48 seconds
13 cores: 172 seconds
```

see: "Introduction to the CASTEP code", https://www.tcm.phy.cam.ac.uk/castep/oxford/castep.pdf
1.2) About the k-points, see also: http://www.tcm.phy.cam.ac.uk/castep/documentation/WebHelp/content/modules/castep/tskcastepseteleckpoints.htm ; https://www.tcm.phy.cam.ac.uk/castep/documentation/WebHelp/content/modules/castep/dlgcastepelecoptkpoints.htm

```
The quality of the k-point sampling is particularly important for metallic systems, where rapid changes in electronic structure may occur along the energy band that crosses the Fermi level. Insulators or semiconductors, even when they are treated using variable occupation numbers for electronic states, are less sensitive to the quality of k-point sampling. The default settings used by CASTEP are designed to give accurate sampling for metallic systems. This means that you can get good results for insulators and semiconductors with a slightly less fine k-point mesh than the default.
```

### Aug4

Have been working on the parameter settings for the MnO2 surface system these past several days. The "fix_occupancy : true" instructed in the tutorial could be very misleading for our systems. I'm considering commenting out "fix_occupancy : true" in the (.param) file and re-running the optimization of LiPS on the carbon or nitrogen-doped carbon sheet.

### Aug8

Got the single-point energy calculated for the MnO2 system, i.e., the calculation converges within 100 SCF cycles:

```
mixing_scheme : pulay
mix_charge_amp : 0.2
mix_spin_amp : 0.2
```

with "cut_off_energy : 500" and "kpoints_mp_grid 3 3 1" written in the (.cell) file.
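The "golden rule" quoted from Stewart Clark's slides (Aug2, item 1.1) can be turned into a tiny helper. A sketch (the function name is mine): among the available core counts, pick the one with the highest common factor with the number of k-points, breaking ties in favour of more cores:

```python
from math import gcd

def best_core_count(n_kpoints, core_options):
    """Return the core count sharing the largest common factor with the
    k-point count; ties go to the larger core count."""
    return max(core_options, key=lambda c: (gcd(n_kpoints, c), c))

# The 5 k-point example from the slides: 10 cores beat 13.
print(best_core_count(5, [10, 13]))
```

This only encodes the rule of thumb; the actual timings of course still depend on the machine and the system.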
### Aug21

I'm wondering whether "BFGS" is more accurate than "LBFGS" when running geometry optimization.

### Aug22

Is it OK to optimize first with "LBFGS" and "geom_force_tol : 0.05", and later with "BFGS" and "geom_force_tol : 0.01"?

### Aug23

"BFGS" + "geom_force_tol : 0.01" succeeded for the C_Li2S6 system. (no_fix_occupancy5) Other optimizations were cancelled.

### Sept13

1) Learned some Jmol commands to write labels on the canvas and tune the font size:

```
select @xx
label "xx"
color label green
font label 35
set labeloffset 20 30
set monitor on/off
```

see also: https://cbm.msoe.edu/includes/modules/jmolTutorial/jmolLabel.html#anchor2 ; http://www.chm.bris.ac.uk/jmol/jmol-7/doc/JmolUserGuide/index.html
2) Re-started the MnO2 substrate case. Trying to optimize the geometry with z-direction constraints on every atom and the simulation box allowed to change.

## working log of year 22

### 22Mar18

(1) The MnO2 project was aborted after last summer; I could not relax the LiPS on the MnO2 surface.
(2) The MoS2 project was set up this month in collaboration with ZHANG Yu. We first relaxed the MoS2 surface and then put Li2S on it. Things went a little quicker because of the experience gained last summer.
*** One problem I met is the error "norm_sq is negative". I tried a "%block species_pot" in the .cell file marked "NCP", which indeed keeps the error from recurring. While checking the files (.cell, .param, and .geom), I found that the unit in .geom is bohr and in .cell is angstrom. My carelessness was forgetting to convert bohr to angstrom when building the surface. After fixing the unit-conversion problem, the error disappeared.

### 22Mar25

1) The LiPS have been relaxed in the unit box (@xps8940), because I found the size of the box in the z-direction may affect the vdW correction of the energy for the MoS2 substrate. Q: Should I re-relax the MoS2 substrate in the large box?
2) Graphene is being relaxed in the large unit box (@xps8940), with no atom constrained for now.
2.1) LiPS relaxations on graphene have begun.
3) The relaxation of Li2S4 on MoS2 is hard to complete. I'm wondering whether the initial configuration is bad and needs to be rotated so that both Li atoms are close to the surface.
3.1) Optimization done; still only one Li atom pointing down.

### 22Mar27

1) Relaxation of Li2S6 on MoS2 completed.
2) Relaxation of LiPS on graphene does not converge within 100 cycles.

### 22Apr4

Relaxation of LiPS on MoS2 completed! The next step should be the search for transition states!

## working log of year 23

### 23Jan30

Found something interesting on the website: http://www.castep.org/CASTEP/OtherCodes
"ase" is a Python package that incorporates a lot of calculators (like castep, vasp, gpaw, etc.).

### 23Feb2

Successfully installed gpaw (parallel version). "pip install gpaw" is easy but the resulting gpaw only runs its tests on a single core. The following way has been tested:
"yum install libxc-devel openblas-devel fftw-devel blacs-openmpi-devel scalapack-openmpi-devel" installs the MPI versions of these packages;
then "python -m pip install -v gpaw --user".
The links below are helpful: https://wiki.fysik.dtu.dk/gpaw/install.html#siteconfig ; https://wiki.fysik.dtu.dk/gpaw/platforms/Linux/centos.html

### 23Feb4

Python 3.8 and 3.6 are both verified to work for installing gpaw on xps8940 (users: guest, cp). Summary:
1) install miniconda first under the directory "~/.local/opt"
2) install ase:

```
pip install ase
```

3) edit the ~/.bash_gpaw file:

```
conda activate py38
export PATH=/usr/lib64/openmpi/bin:$PATH
export LD_LIBRARY_PATH=/usr/lib64/openmpi/lib:$LD_LIBRARY_PATH
export GPAW_SETUP_PATH=/home/cp/.local/share/gpaw_datasets/gpaw-setups-0.9.20000
```

The last line indicates where the preset dataset is located; better to download it from https://wiki.fysik.dtu.dk/gpaw/setups/setups.html#installation-of-paw-datasets
Then "source ~/.bash_gpaw" every time before using gpaw.
4) go into the source package of gpaw and edit ./siteconfig.py:

```
fftw = True
scalapack = True
libraries = ['xc', 'fftw3', 'scalapack', 'mpiblacs']
library_dirs = ['/usr/lib64/openmpi/lib/']
```

then "pip install gpaw" and test with "gpaw info", "gpaw test", and "gpaw -P 4 test".
Reminder: to install the parallel version of the "gpaw" package, we need to tell the OS where to find the parallel versions of the other packages (libxc, fftw, blacs, scalapack, etc.). (See also step 3 above.)

### 23Feb6

1) Played with Windows 10 OpenSSH for a couple of days. The official installation tutorial may be helpful: https://learn.microsoft.com/zh-cn/windows-server/administration/openssh/openssh_install_firstuse
2) Also learned to use PowerShell. see also: https://blog.csdn.net/weixin_41010198/article/details/117513931
3) Tested Win10 firewall settings, but did not succeed with a specific local port for remote login.

### 23Feb7

1) Installed miniconda and ase in offline mode; a very trying experience. After DingTalking with Li Kai, got the information about using vnc to connect to the internet on the hpc (interesting, needs more effort).
see also, [python installation in offline mode](https://blog.csdn.net/Kfdhfljl/article/details/105345893);
[mirrors](https://blog.csdn.net/adreammaker/article/details/123396951); [conda environment installation in offline mode](https://www.codeleading.com/article/58815751701/)
2) Finally installed gpaw (MPI version) on the hpc; Intel tools (MKL etc.) are utilized. The following links are helpful: [Zhihu: gpaw installation](https://zhuanlan.zhihu.com/p/370192750); [csdn: for loop, c99 mode](https://blog.csdn.net/imyang2007/article/details/8296331); [csdn: cannot load libmkl_avx2.so](https://blog.csdn.net/charie411/article/details/111048936)

### Feb9, Thursday, rainy, work at home

Played with the ase tutorial and the gpaw tutorial.
[ase tutorial](https://wiki.fysik.dtu.dk/ase/tutorials/tutorials.html#basic-property-calculations)
[gpaw tutorial](https://wiki.fysik.dtu.dk/gpaw/tutorialsexercises/tutorialsexercises.html)
"LCAO" mode is suitable for calculating geometries with a small basis set, and "PW" (plane-wave) mode is suitable for small unit cells.

### Feb10

1) Failed to reproduce the lattice-constant search with gpaw; the lattice constant does not reach a plateau as the cutoff increases.
see also, [finding lattice constants](https://wiki.fysik.dtu.dk/gpaw/tutorialsexercises/structureoptimization/lattice_constants/lattice_constants.html)
Problem fixed: an error caused by a typo in the original tutorial file. The following two lines make the calculation absurd:

```
cell0 = al.cell
...
al.cell = (1+eps)*cell0
```

The first should be rewritten with np.copy() to avoid unintentionally modifying the cell parameter:

```
cell0 = np.copy(al.cell)
```

Then the energies converge as the cutoff increases. see also, cp@xps8940:~/workspace/y23/02/gpaw_tutorial/0basics/Al/lattice_constant/0ecut.py
2) The optimal lattice constant calculated by gpaw on xps8940 is around 3.9, whereas the experimental result is 5.4. see also, [structure optimization](https://wiki.fysik.dtu.dk/gpaw/tutorialsexercises/structureoptimization/stress/stress.html)

### Feb11

1) The "dftd3" correction for vdW calculations can be found at the following links: [theory](https://www.vasp.at/wiki/index.php/DFT-D3) ; [simple DFT-D3 install](https://dftd3.readthedocs.io/en/latest/installation.html) ; [dftd3 pip install](https://pypi.org/project/dftd3/) ; [ase example](https://wiki.fysik.dtu.dk/ase/ase/calculators/dftd3.html#examples)
Usage in ase (after "pip install dftd3"):

```
xc = 'PBE'
calcname = 'graphite-{}'.format(xc)
dft = GPAW(mode=PW(500), kpts=(10, 10, 6), xc=xc, txt=calcname + '_DFTD3.log')
calc = DFTD3(method='pbe', damping='d3bj').add_calculator(dft)
```

2) The calculation of graphite is finished. (ecut=500, kcut=8 is enough; see also, localhost:~/Tutorial_of_ase_and_gpaw/batteries/part1/.) Convergence tests on lithium recommend ecut=500 and kcut=16. The next step is to optimize the structure of Li metal and get the energy with different methods (LDA, PBE, PBE+DFT-D3).

### Feb13

1) Intercalation energy: exp. (-0.124), LDA (-0.359), PBE (-0.058), PBE+DFT-D3 (0.0005), BEEF-vdW (-0.0695)
2) I don't fully understand the Li/FePO4 part; the explanations on the webpage are not complete.
2.1) No idea about the volume change during charge/discharge.
2.2) No idea about the gravimetric or volumetric energy density of a FePO4/C battery.

### Feb14

1) The NEB (Nudged Elastic Band) method calculates the energy barrier from one point to another equivalent point. The initial and final states are therefore the same up to a constant translation, as in the case of one lithium atom diffusing on a graphene surface, where the carbon atoms are fixed and the final position of the lithium atom is obtained by a translation that reproduces the initial state. Some intermediate images are then created by interpolation. The rest of the work can be done with:

```
from ase.io import read
from ase.neb import NEB
from ase.optimize import BFGS
from ase.constraints import FixAtoms
from gpaw import GPAW, PW

initial = read('NEB_init.traj')
final = initial.copy()
cell = initial.get_cell()
# "view(initial)" to see the symmetry
final[-1].x -= (cell[0, 0] + cell[1, 0]) / 3.
final[-1].y -= (cell[0, 1] + cell[1, 1]) / 3.
# build 5 intermediate images (this list was omitted in my original note)
images = [initial] + [initial.copy() for _ in range(5)] + [final]
neb = NEB(images)
neb.interpolate()
for image in images[0:7]:
    calc = GPAW(mode=PW(500), kpts=(5, 5, 6), xc='LDA', txt=None,
                symmetry={'point_group': False})
    image.set_calculator(calc)
    image.set_constraint(FixAtoms(mask=[atom.symbol == 'C' for atom in image]))
# To better illustrate how the NEB method works, the symmetry is broken using the rattle function.
images[3].rattle(stdev=0.05, seed=42)
images[0].get_potential_energy()
images[0].get_forces()
images[6].get_potential_energy()
images[6].get_forces()
optimizer = BFGS(neb, trajectory='neb.traj', logfile='neb.log')
optimizer.run(fmax=0.10)
```

An easy way to get the final position of a specific atom is to add a vector to its position. Such a vector is just the difference between two identical sites on the surface, no matter whether these two are the nearest identical sites or not.
2) Open database for DFT calculations: https://oqmd.org/
3) Played with N2 on the Ru (111) surface.
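The "add a lattice vector to the position" trick used for the NEB endpoints above can be written out explicitly. A minimal pure-Python sketch (no ase needed; the function name is mine), where the rows of `cell` are the lattice vectors and `frac` gives the fractional shift along each:

```python
def translate_position(pos, cell, frac):
    """Shift the Cartesian position `pos` by frac[j] of each lattice vector
    cell[j]; this is how the final NEB image is built from the initial one."""
    return [pos[i] + sum(frac[j] * cell[j][i] for j in range(3))
            for i in range(3)]

# A cubic cell: moving by half of the first lattice vector.
cell = [[2.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 2.0]]
print(translate_position([0.0, 0.0, 0.0], cell, [0.5, 0.0, 0.0]))
```

With frac = (-1/3, -1/3, 0) this reproduces, for the x and y components, the `-= (cell[0,0]+cell[1,0])/3.` lines in the snippet above.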
### Feb15

1) N2 on Ru: N2 bond length (before relaxation: 1.153, after: 1.171), N2 center z coordinate (before: 9.697, after: 9.758), N z coordinates (before: 9.121, 10.273; after: 9.173, 10.344). The N-N bond is slightly elongated due to the interaction with the surface.

2) in ase, "Atom" is different from "Atoms". use "dir(Atom)" and "dir(Atoms)" to check!
```
slab = read('Ru.traj')  # Atoms('Ru8', ...)
slab.get_positions()    # output all the positions of the atoms
Ru4 = slab[4]           # Atom('Ru', [x, y, z], ...)
Ru4.position            # output the position of this single atom
```
3) adsorption energies: N2 standing on Ru (-0.403 eV), N2 lying on Ru (0.203 eV), N + N on Ru (0.089 eV). The N2 molecule seems rather stable when standing on top of one Ru atom. Note: E_ads = E_N2@slab - E_N2 - E_slab

3.1) for N2 lying on Ru, the bond length becomes 1.284 after relaxation, elongated even more.

3.2) starting from the lying state, the dissociated state with the two N atoms sitting at two neighboring hollow sites seems more favorable: dE = E_NN - E_lying = -0.114 eV

### Feb16, try to find the energy barrier from N2 to NN lying on Ru

1) in parallel mode for a NEB calculation, "view(images)" cannot be used to open a window showing the images. Such a parallel calculation optimizes each image simultaneously on different cores, which should be more efficient than running the images one after another on the same number of cores. (not verified yet) see also, https://wiki.fysik.dtu.dk/gpaw/tutorialsexercises/moleculardynamics/neb/neb.html

### Mar16, ethernet connected for Dell-T7920

1) called for help and sent an application for an IP. "nameserver" is important for a successful "ping www.baidu.com".
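An aside on the Feb15 adsorption energies above: the bookkeeping E_ads = E_N2@slab - E_N2 - E_slab can be wrapped in a small helper. This is a sketch only; the three total energies below are made-up placeholders, not the actual GPAW results.

```
def adsorption_energy(e_adsorbate_slab, e_adsorbate, e_slab):
    """E_ads = E_N2@slab - E_N2 - E_slab (all in eV).
    A negative value means adsorption is energetically favorable."""
    return e_adsorbate_slab - e_adsorbate - e_slab

# Hypothetical total energies (eV), chosen for illustration only.
e_combined = -250.403   # N2 adsorbed on the Ru slab
e_n2 = -16.0            # isolated N2 molecule
e_slab = -234.0         # clean Ru slab
e_ads = adsorption_energy(e_combined, e_n2, e_slab)
print(round(e_ads, 3))  # -0.403
```

In practice the three inputs would each come from a converged get_potential_energy() call on the same level of theory.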
Ubuntu's official website also provides some simple examples: [ubuntu official examples](https://ubuntu.com/server/docs/network-configuration)

see also: [failure in name resolution](https://blog.51cto.com/u_3826358/3832703)

### Mar23, ethernet re-configuration for Dell-T7920

1) "netplan" should be configured via "/etc/netplan/00-installer-config.yaml":
```
# This is the network config written by 'subiquity'
network:
  version: 2
  renderer: NetworkManager
  ethernets:
    enp0s31f6:
      dhcp4: no
      addresses: [xx.xx.xx.xx/24]
      nameservers:
        addresses: [10.1.1.9, 8.8.8.8, 8.8.4.4, 114.114.114.114]
    enp2s0:
      dhcp4: true
```
For now this setup works well. "NetworkManager" works better than "networkd" on my server. see also: [to start or stop "NetworkManager" and "networkd"](https://www.cnblogs.com/nuoforever/p/14176630.html)

2) pressing the "Tab" key doesn't trigger programmable completion when logged in to Dell7920. You may want to check which shell is in use first. see also: [programmable completion](https://blog.csdn.net/Frederick_Bala/article/details/107410923)

3) install "ase" and "gpaw" on Dell7920-ubuntu-server. To install the mpi version of "gpaw", one needs to download the source file ([link for gpaw-22.8.0.tar.gz](https://pypi.org/packages/source/g/gpaw/gpaw-22.8.0.tar.gz), [webpage to get source code](https://wiki.fysik.dtu.dk/gpaw/install.html#siteconfig)) and install it by
```
python setup.py build
python setup.py install
```
the "siteconfig.py" file should be edited before installation.
see also, [instructions for ubuntu](https://wiki.fysik.dtu.dk/gpaw/platforms/Linux/ubuntu.html)

for other tips see also: [link for gpaw-setups-0.9.20000](https://wiki.fysik.dtu.dk/gpaw-files/gpaw-setups-0.9.20000.tar.gz), [webpage to get gpaw datasets](https://wiki.fysik.dtu.dk/gpaw/setups/setups.html#installation-of-paw-datasets)

### Jul7, cp2k on cls0

1) ssh without typing a password: copy the "id_rsa.pub" file to the server (cls0), then rename it to "authorized_keys", and "chmod a-rw authorized_keys && chmod u+rw authorized_keys".

2) trying to install cp2k on cls0 and xps8940; not successful for now. the following command is used to build the dependencies:
```
./install_cp2k_toolchain.sh --install-all
```
2.1) then start from the installation of the dependencies for gcc (note the duplicated "--with-gmp" flags in my original notes were typos; mpc needs both gmp and mpfr):
```
# gmp
./configure --prefix=/home/cp/Downloads/cp2k-2023.1/tools/toolchain/build
make -j8
make install
# mpfr
./configure --with-gmp=/home/cp/Downloads/cp2k-2023.1/tools/toolchain/build --prefix=/home/cp/Downloads/cp2k-2023.1/tools/toolchain/build
make -j8
make install
# mpc
./configure --with-gmp=/home/cp/Downloads/cp2k-2023.1/tools/toolchain/build --with-mpfr=/home/cp/Downloads/cp2k-2023.1/tools/toolchain/build --prefix=/home/cp/Downloads/cp2k-2023.1/tools/toolchain/build
make -j8
make install
# isl
./configure --with-gmp-prefix=/home/cp/Downloads/cp2k-2023.1/tools/toolchain/build --prefix=/home/cp/Downloads/cp2k-2023.1/tools/toolchain/build
make -j8
make install
```
2.2) vim "~/.bash_cp2k":
```
build=/home/cp/Downloads/cp2k-2023.1/tools/toolchain/build
PATH=$build/bin:$PATH
LD_LIBRARY_PATH=$build/lib:$LD_LIBRARY_PATH
LD_LIBRARY_PATH=$build/lib64:$LD_LIBRARY_PATH
export PATH
export LD_LIBRARY_PATH
```
"source ~/.bash_cp2k" to avoid the missing libisl.so.23 error while compiling gcc.
2.3) install gcc:
```
./configure --with-gmp=/home/cp/Downloads/cp2k-2023.1/tools/toolchain/build --with-mpfr=/home/cp/Downloads/cp2k-2023.1/tools/toolchain/build --with-mpc=/home/cp/Downloads/cp2k-2023.1/tools/toolchain/build --with-isl=/home/cp/Downloads/cp2k-2023.1/tools/toolchain/build --prefix=/home/cp/Downloads/cp2k-2023.1/tools/toolchain/build --disable-multilib --enable-languages=c,c++
```
2.3.1) "--enable-languages=c,c++" could be very important. (for now on xps8940, installing gcc-10.3 with such an "enable" line gets the c and c++ compilers built, while "--enable-languages=c,c++,gfortran" failed. note that gcc's configure names the language "fortran", not "gfortran", which may explain the failure.)

2.3.2) suddenly found one serious problem about the compatibility between the compiler and cp2k. see also [cp2k-official-report-compiler-support](https://www.cp2k.org/dev:compiler_support). gcc-12.2 may cause elpa installation failure.

2.3.3) see also: [official installation configuration](https://gcc.gnu.org/install/configure.html)

### Jul10, platformIO on vs code

1) arduino programming with platformIO on vs code. should be a simpler way to manage the code blocks.

### Jul26

1) gcc-10.3 installed/compiled successfully on xps8940. to solve the "where has float128 gone?" error, see also [bugs when installing in Linux](https://blog.csdn.net/qq_32115939/article/details/103786253). It's a good idea to compile gcc without "C_INCLUDE_PATH" statements in the bash configuration, i.e. source a new, empty bash file that sets the gcc path only.

2) CentOS comes with gcc-4.8 preinstalled but without g++; Ubuntu-server has both preinstalled. to solve "C++ compiler missing or inoperational", type the following command on CentOS:
```
yum install gcc-c++
```

### Jul27

1) cp2k-cpu seems well installed, but cp2k-gpu does not. maybe the "--gpu-ver" should be adjusted.

1.1) the "toolchain" downloads the necessary packages of the specific versions for the installation. that is to say, just follow it rather than downloading and installing the packages by hand.
```
./install_cp2k_toolchain.sh --with-gcc=install --with-openmpi=enable --gpu-ver=A100 --enable-cuda --no-check-certificate
```
after installing these prerequisites, copy the arch files to the arch folder under the root folder of cp2k, then source and make.

1.1.1) it seems "A100" is close to the "RTX 3070" card in graphics acceleration; their cuda archs are "sm_80" and "sm_86" respectively. see also [matching cuda arch and cuda gencode](https://arnon.dk/matching-sm-architectures-arch-and-gencode-for-various-nvidia-cards/)

### Jul28

1) cp2k-gpu tested on xps8940: the parallel version ("psmp") does not work well, while the "ssmp" version does better. maybe there are too many communications between CPUs and/or between CPUs and GPUs.

1.1) the latest cuda-12 was installed before compiling "elpa"; the old cuda-11.4 does not work (the "cudaErrorUnsupportedPtxVersion" error). see also: [commonly seen cuda errors](https://blog.csdn.net/Bit_Coders/article/details/113181262)

2) gcc-12 is installed on cls0. "CPATH" needs to be specified before running "./install_cp2k_toolchain.sh", which uses "CFLAGS=-I/usr/include" by default.
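The card/arch correspondence in 1.1.1 can be kept in a tiny lookup so that the right "--gpu-ver" or nvcc arch flag is picked consistently. This is just a sketch recording the two cards mentioned in this log; any other card would need its own entry.

```
# Map GPU card names to their cuda arch strings, as noted in the Jul27 entry.
CUDA_ARCH = {
    "A100": "sm_80",
    "RTX 3070": "sm_86",
}

def cuda_arch(card):
    """Return the compute-capability string for a known card,
    e.g. to pass to nvcc via -arch=<value>."""
    return CUDA_ARCH[card]

print(cuda_arch("A100"))      # sm_80
print(cuda_arch("RTX 3070"))  # sm_86
```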