
TATA INSTITUTE OF FUNDAMENTAL RESEARCH



Autonomous Institution of the Department of Atomic Energy, Government of India
TIFR Centre for Interdisciplinary Sciences (Transit Campus)
Plot No. 21, Brundavan Colony, Gandipet Road, CBIT Post Office, Narsingi, Hyderabad 500 075
Phone: +91 (0)40 2419 5029 | Email: purchase@tifrh.res.in

PUBLIC TENDER NOTICE
TENDER REFERENCE NO. TFR/PD/CA15-274/150156

Corrigendum for Supply, Installation, Configuration, Testing & Demonstration for Satisfactory Performance of a High Performance Computing (HPC) Cluster at TIFR TCIS, Hyderabad

Date: 12/10/2015

To: Vendors/Bidders

Sub: Corrigendum for Supply, Installation, Configuration, Testing & Demonstration for Satisfactory Performance of a High Performance Computing (HPC) Cluster at TIFR TCIS, Hyderabad

Dear Bidders/Vendors,

Please refer to the subject tender published in the Times of India (all editions) and Hindustan Times (all editions) on 13/09/2015. The following amendments to the subject tender are being issued:

1. Queries and Clarifications/Amendments shall be read as per the attached Annexure I.
2. The due date is amended to 03/11/2015, up to 13:00 hrs, instead of 27/10/2015.

All other terms and conditions of the subject tender remain unchanged. This Corrigendum No. 01 is an integral part of the subject tender, and a copy of the same must be submitted along with the offer, duly signed and stamped.

Administrative Officer
QUERIES AND CLARIFICATIONS / AMENDMENTS

- Query (page no. 14, point j): About the local presence of the OEM's support centre.
  Clarification/Amendment: This point is now amended; please see Annexure I, page no. 6.

- Query (page 13 and page no. 34): Clarification on point (a) of page no. 13, which conflicts with the 5th point of the Terms and Conditions on page no. 34.
  Clarification/Amendment: No change in point (a) on page no. 13. The 5th point of the Terms and Conditions is now removed.

- Query (page no. 14, point i): All hardware components offered in the Bill of Material should be covered under an OEM support enabling programme, so as to obtain back-end support benefits from the principal OEM in terms of free software update support, maintenance releases (if any) for the particular software version, access to 24x7x365 online support from the OEM's Technical Assistance Center for resolution of problems with the help of online tools and a technical database, advance defective-part replacement during the warranty period within two working days, and OEM login access.
  Clarification/Amendment: This point is now amended; please see page no. 6.

- Query (page no. 14, points g and h): "One OEM can authorize one partner only and one bidder can represent only one OEM." Is the same clause applicable to the storage solution also?
  Clarification/Amendment: No. It is applicable only to the HPC cluster solution.

- Query (page no. 15): (1) Is the benchmark required to be carried out on the 5 TF and 10 TF configurations and on the fully set-up 64-node cluster? (2) Can the number of benchmarks in the list be reduced? (3) Can the benchmark use the E5-2630 v3 only? (4) What is the allowable deviation for the extrapolated output? (5) Request to mention the URLs from which the other listed benchmarks can be downloaded.
  Clarification/Amendment: The technical qualifying criteria are now amended; please see page no. 7.

- Query (page no. 28): (1) Request to mention the desired RAID level in the head node. (2) Request to remove DVD drives from ITEM 2 Compute Nodes. (3) Request to remove "All compute nodes should be GPU ready". (4) Request to allow the redundant power supply to be at the compute-node level, enclosure level or rack level. (5) Can vendors accommodate more than two GPUs in a single node? Is there a minimum or maximum number of GPUs that can be populated across a node?
  Clarification/Amendment: ITEM 1 to ITEM 10 of the Scope of Work in Annexure A are now amended; please see the amended Annexure A on page no. 8.

- Query (page no. 28): Request to allow SATA hard disks in ITEM 2 Compute Nodes.
  Clarification/Amendment: ITEM 2 Compute Nodes is now amended; please see page no. 8.

- Query (page no. 29): (1) Request to allow SATA disks in ITEM 3 NAS Storage. (2) Request to reduce the write throughput from 5 GB/s to 2 GB/s. (3) Request to mention the required throughput and specifications for the Parallel File System.
  Clarification/Amendment: ITEM 3 NAS Storage is now amended; please see page no. 8.

- Query (page no. 30): (1) Request to mention "minimum FDR" in ITEM 5 Network Interconnect. (2) Request to remove the 10 Gbps NIC connectivity for the head/master node mentioned in ITEM 5.
  Clarification/Amendment: ITEM 5 Network Interconnect is now amended; please see page no. 10.

- Query (page no. 30): Request to change the industry-standard/OEM rack mentioned in ITEM 6 Racks to a server-OEM rack.
  Clarification/Amendment: ITEM 6 Rack is now amended; please see page no. 10.

- Query (page no. 30): Will academic versions of the Intel and PGI OpenACC compilers be provided by TIFR TCIS for installation? If so, TIFR TCIS is requested to arrange the requisite licenses from the respective ISVs.
  Clarification/Amendment: ITEM 7 Software is now amended; please see page no. 10.

- Query (page no. 31): Request to remove the failover capability mentioned for the cluster management tool, as there is only one head node.
  Clarification/Amendment: ITEM 7 Software (Cluster Management tool) is now amended; please see page no. 10.

- Query (page no. 31): Some features mentioned in ITEM 7 (cluster management tools and job/workload management) are not available in open-source management tools. TIFR is requested either to remove the features that are not available in open-source solutions, or to allow vendors to quote a commercial Linux scheduler and management tools.
  Clarification/Amendment: Please see the amendment for clarifications on page no. 11.

- Query (page no. 32): TIFR is requested to make the AMC for the 6th and 7th year optional, citing the difficulty of pricing due to price escalation during the 6th and 7th year, which is a long period beyond the 5-year warranty and support in ITEM 8 Warranty and Support.
  Clarification/Amendment: ITEM 8 Warranty and Support is now amended; please see page no. 11.

- Query (page no. 33): Request to reduce the HPL performance benchmark values to be demonstrated, as mentioned in ITEM 10 Scope of Work with Deliverables to be part of the implementation.
  Clarification/Amendment: ITEM 10 Scope of Work with Deliverables to be part of the implementation is now amended; please see page no. 12.

- Query (page no. 34): Request to remove the 5th point in the Terms and Conditions, as it conflicts with the pre-qualification criteria.
  Clarification/Amendment: The Terms and Conditions are now amended; please see page no. 13.

- Query (page no. 35): Request to change the 2nd point on page no. 35 to allow a remote connection at the time of installation.
  Clarification/Amendment: No change in the 2nd point on page no. 35.

- Query (page no. 35): Clarification needed on the file system mentioned in the 6th point.
  Clarification/Amendment: The Terms and Conditions are now amended; please see page no. 13.

- Query: (i) Page 20, point 20, Price: "Techno-commercially qualified lowest (L1) bid will be considered inclusive of the total amount of AMC charges for the 6th and 7th year." (ii) Page 44, Note: "Techno-commercially qualified lowest (L1) bid will be considered inclusive of the total amount of AMC charges for the 6th and 7th year."
  Clarification/Amendment (applies to both): The techno-commercially qualified lowest (L1) bid will be considered exclusive of the total amount of AMC charges for the 6th and 7th year. The bidder should compulsorily quote the total amount of AMC charges for the 6th and 7th year.

All other terms and conditions remain the same.
ANNEXURE I - AMENDMENTS

Pre-qualification for bidding (mandatory requirements for a bidder to qualify as a participant in this tender):

a) The bidder should have executed at least one project using architecture and technologies similar to those proposed in its quotation against this tender. In addition, at least one of the following conditions should be satisfied: (1) at least one order of 80% of the tender value, or (2) at least two orders of 60% of the tender value, or (3) at least three orders of 40% of the tender value. Purchase order copies of the same must be submitted with the technical bid. The OEM or partner should have successfully executed projects at premier Indian Defence organizations or premier Indian academic and research institutions such as IISc, TIFR, IISER, IIT, or institutions of equivalent stature. Bidders should submit satisfactory performance letters/certificates from the clients where they have installed HPC systems of similar configuration.

b) All warranty and support must be serviced directly by the OEM. TCIS requires a Single Point of Contact (SPOC) from the OEM who is responsible for all issues between TCIS and the OEM.

c) The bidder should have an average annual sales turnover of Rs. 9 crores or more during the last three financial years ending 31st March 2015. Attach the firm's last three years' profit and loss statements and balance sheets, duly audited by a chartered accountant.

d) A 5-year warranty is required for all equipment proposed in the quotation. Mention the warranty for the 6th and 7th years separately (optional, but quote compulsorily).

e) A solvency certificate obtained/issued after 01/04/2015 by a nationalized bank, for a value of Rs. 200 lakhs, is to be submitted along with the technical bid, failing which the tender will be rejected.

f) All quotations submitted must mention all the details in the Scope of Work, ITEM 1 to ITEM 10, enclosed in Annexure A. Failure to do so will result in the quotation being summarily rejected.

g) The bidder should be either an Original Equipment Manufacturer (OEM) or a single authorized system integrator/partner having a direct purchase and support agreement with the OEM. One OEM can authorize one partner only, and one bidder can represent only one OEM.

h) In case the bidder is a system integration partner of the principal manufacturer, a certificate from the principal manufacturer clearly stating the relationship and level of partnership with the partner, and authorizing the partner to quote for this specific tender enquiry, is to be furnished. One OEM can authorize one partner only, and one bidder can represent only one OEM.

i) Hardware and software warranty is to be covered directly by the OEM/vendor. The vendor is required to stock a set of spares of all critical components related to the HPC on site at the TIFR TCIS Hyderabad premises.

j) All warranty and support must be serviced directly by the OEM/vendor, who should have a registered local office in Hyderabad. An acceptance letter is to be enclosed.

k) The principal firm must have local logistics support, maintaining a local spares depot in the country of deployment of the equipment. This is to ensure immediate delivery of spare parts from the principal vendor of the equipment to its channel partner/system integrator.
TECHNICAL QUALIFYING CRITERIA

a) High Performance Computing Cluster: The bidder should run the benchmark programs listed below on 5 TF, 10 TF and 20 TF peak-performance configurations of the offered solution, and also produce extrapolated outputs for the fully offered solution in its peak-performance configuration. The benchmark codes can be run on the Haswell architecture and the extrapolated results submitted; however, the produced results must match the results of the offered solution as part of the acceptance test. The maximum allowable deviation of the acceptance-test results from the extrapolated results is 2%. The results, with TFLOP count where applicable, should be presented in an output file and included in the technical bid.

- Demonstration of High Performance Linpack (HPL) benchmark performance of minimum 75%.
- Other applications for benchmarking: LAMMPS, P3DFFT, GROMACS (http://www.tifrh.res.in/tcis/downloads/benchmarking.zip).
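As a rough aid to interpreting the 75% HPL requirement and the 2% deviation limit above, the sketch below computes the theoretical peak of the quoted CPU configuration and the corresponding minimum Rmax. The per-cycle FLOP count, base clock, K40 peak figure, and the question of whether GPU peak is counted toward the 75% target are assumptions for illustration, not statements from the tender.

```python
# Back-of-the-envelope check of the HPL requirement (illustrative only).
# Assumptions: efficiency is measured against the CPU base-clock peak;
# Haswell sustains 16 DP FLOPs/cycle/core with AVX2+FMA; K40 DP peak
# is taken as ~1.43 TFLOPS.

SOCKETS_PER_NODE = 2
CORES_PER_SOCKET = 8            # E5-2630 v3
BASE_CLOCK_GHZ = 2.4
DP_FLOPS_PER_CYCLE = 16         # AVX2 + 2x FMA on Haswell
NODES = 64

cpu_peak_tflops = (NODES * SOCKETS_PER_NODE * CORES_PER_SOCKET
                   * BASE_CLOCK_GHZ * DP_FLOPS_PER_CYCLE) / 1000.0

K40_PEAK_TFLOPS = 1.43          # approximate double-precision peak
GPUS = 16
gpu_peak_tflops = GPUS * K40_PEAK_TFLOPS

def min_hpl_rmax(peak_tflops, efficiency=0.75):
    """Minimum HPL Rmax needed to meet the 75% requirement."""
    return efficiency * peak_tflops

def within_deviation(extrapolated, measured, tolerance=0.02):
    """Acceptance-test check: measured result within 2% of the extrapolation."""
    return abs(measured - extrapolated) / extrapolated < tolerance

if __name__ == "__main__":
    print(f"CPU peak: {cpu_peak_tflops:.1f} TF, GPU peak: {gpu_peak_tflops:.1f} TF")
    print(f"Min HPL Rmax (CPU only): {min_hpl_rmax(cpu_peak_tflops):.1f} TF")
    print(f"Within 2% of extrapolation? {within_deviation(30.0, 29.5)}")
```

Under these assumptions the 64-node CPU partition peaks at about 39.3 TF, so a CPU-only HPL run would need roughly 29.5 TF to clear the 75% bar; the bidder should confirm with TCIS how GPU peak is to be treated.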
b) 100 TB Storage Solution: All storage controllers/nodes must support a Linux-based operating system and must support the NFS (version 3 and above) and CIFS protocols. Open-source IOzone or IOR must be used to demonstrate the aggregate performance of the storage system. The benchmarks must be run with a many-to-one distribution of large sequential reads and writes at a 1 MB I/O block size, with a data size twice that of the node memory, in the following modes:

- all controllers and disk LUNs working;
- at least one RAID 6 LUN in rebuilding mode.

Performance in the two scenarios should not differ by more than 25%. Provide the output of the above benchmarks on DVD/CD along with the bid; do not provide printouts of the outputs. Proposals of vendors who do not fulfil the above criteria, or who fail to submit documentary proof, will be rejected.
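The following sketch shows one way to size such an IOR run (1 MB transfers, aggregate data at least twice the memory that could cache the I/O). The node count, memory size, rank count and the exact IOR flags are illustrative assumptions and should be checked against the offered configuration and the IOR version actually used.

```python
# Rough sizing helper for the IOR run described above (illustrative only;
# all counts and sizes below are placeholders, not values from the tender).

client_nodes = 16        # assumed number of client compute nodes
node_mem_gib = 64        # assumed memory per client node (GiB)
procs_per_node = 16      # assumed MPI ranks per client node

total_mem_gib = client_nodes * node_mem_gib
target_data_gib = 2 * total_mem_gib              # "twice the node memory" rule
ntasks = client_nodes * procs_per_node
block_per_task_gib = target_data_gib // ntasks   # IOR -b value per task

# Example invocation with stock IOR flags (-t transfer size, -b block size
# per task, -F file-per-process, -w/-r write then read); verify the flags
# against the IOR version shipped with the offered solution.
ior_cmd = (f"mpirun -np {ntasks} ior -a POSIX -w -r -F "
           f"-t 1m -b {block_per_task_gib}g -o /storage_mount/ior_test")

print(f"Aggregate data: {target_data_gib} GiB across {ntasks} tasks")
print(ior_cmd)
```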
AMENDMENTS - ANNEXURE A: SCOPE OF WORK

ITEM 1: Master Node (1 No.)
- 2 x Intel Haswell 8-core E5-2630 v3 (2.4 GHz, 20 MB cache, 8 GT/s); two sockets per node.
- Memory of 4 GB/core DDR4 RAM.
- 4 x minimum 500 GB enterprise hard disks, SAS, 10,000 rpm or better, in a RAID 10 configuration.
- 1 management port.
- Redundant power supply.
- 1 x DVD writer.
- Quote separately for 8 GB/core DDR4 RAM as well (optional, but quote compulsorily).
- Connectivity as per the requirement in ITEM 5.

ITEM 2: Compute Nodes (64 Nos.)
- 2 x Intel Haswell 8-core E5-2630 v3 (2.4 GHz, 20 MB cache, 8 GT/s); two sockets per node.
- Memory of 4 GB/core DDR4 RAM.
- 1 x 500 GB SATA hard disk, 7,200 rpm or better.
- 1 management port.
- Redundant power supply; the redundant power supply can be at the compute-node level, enclosure level or rack level.
- Quote separately for 8 GB/core DDR4 RAM as well (optional, but quote compulsorily).
- Connectivity as per the requirement in ITEM 5.
- 16 Nos. of NVIDIA Tesla K40 GPUs should be populated across the nodes, with a minimum of two GPUs per node. OEMs/bidders should contact NVIDIA for academic pricing on the GPUs.

ITEM 3: NAS Storage
- 100 TB usable capacity on NL-SAS or better, with inbuilt controller and hardware RAID 6 (8+2) storage array.
- Disks of at most 2 TB capacity, 7.2K rpm, NL-SAS or better, to be used.
- Global hot spare disks: disks amounting to 5% of the total capacity to be provided as global hot spares, i.e. one global hot spare for every 2 LUNs in RAID 6.
- Storage throughput: minimum 2 GB/s write speed from the compute nodes.
- Storage nodes and management nodes should be connected to a KVM switch and display.
- Open-source IOR/IOzone benchmarks to be run on compute nodes with 1 MB block size and a file size double the total of the storage cache and I/O node memory.
- Benchmarks should be submitted with the technical bid, with I/O measured from the client/compute nodes using the IOR benchmark at 2 GB/s write throughput.
- High availability should be automated; failover and MMP (Multiple Mount Protection) should be configured.
- Mounting and unmounting of the file system should happen without error.
- User quota and group quota should be configurable.
- The storage system should be scalable up to 200 TB in a single filesystem by the addition of hard disks only, without additional controllers being required.

OPTIONAL: Parallel File System
Quote separately for a 200 TB open-source parallel-filesystem-based storage solution with 5 GB/s throughput for the cluster solution (optional, but quote compulsorily).
- Hardware requirement: 200 TB usable capacity on NL-SAS or SAS with a hardware RAID 6 (8+2) storage array.
- The storage is to be split into two silos in a 25:75 ratio, wherein the 25% portion is required to deliver a minimum 5 GB/s write throughput and the remaining 75% a minimum 3 GB/s write throughput. Read performance should not be less than write performance.
- Disks of minimum 2 TB capacity, 7.2K rpm, enterprise NL-SAS, to be used.
- The parallel file system (PFS) should be Intel-sourced and OEM-supported Lustre, or an equivalent or better open-source PFS. The solution should be highly available, with no single point of failure, including the I/O servers, metadata servers, storage array, HBA cards and power supplies.
- Metadata Targets (MDT) in RAID 10 and Object Storage Targets (OST) in hardware RAID 6 (8+2) configuration.
- Global hot spare disks: disks amounting to 5% of the total capacity to be provided as global hot spares, i.e. one global hot spare for every 2 LUNs in RAID 6.
- Storage throughput: minimum 5 GB/s write speed from the compute nodes.
- Metadata should be stored in a separate storage enclosure connected to the MDS server; OSTs should be in separate storage enclosure(s) connected to the OSS servers. MDT hard disks should be SAS, 10,000 rpm or higher.
- Storage nodes and management nodes should be connected to a KVM switch and display.
- Open-source IOR/IOzone benchmarks to be run on compute nodes with 1 MB block size and a file size double the total of the storage cache and I/O node memory.
- Benchmarks should be submitted with the technical bid, with I/O measured from the client/compute nodes using the IOR benchmark at 5 GB/s write throughput.
- Wire-speed Infiniband connectivity between the storage servers and the storage enclosures, with redundant connections and links.
- The MDT should be mounted only on the MDS server; OSTs should be mounted only on the OSS servers.
- For MDT failover, the MDS nodes should be configured as an active-passive pair; for OST failover, the OSS nodes should be configured as an active-active pair.
- High availability should be automated; failover and MMP (Multiple Mount Protection) should be configured. The file system should not go down even if one of the MDS or OSS nodes fails. Mounting and unmounting of the file system should happen without error.
- The file server should be scalable up to 600 TB in a single filesystem by the addition of hard disks only, without additional controllers being required.
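A rough capacity sketch for the optional parallel file system follows. The group layout, hot-spare count and silo sizes are illustrative arithmetic based on the 8+2 RAID 6, 5% hot-spare and 25:75 silo requirements above; they are not a prescribed disk layout, and the 5%-of-capacity and one-spare-per-two-LUNs readings may differ slightly in practice.

```python
import math

# Illustrative capacity arithmetic for the optional 200 TB parallel file
# system (RAID 6, 8+2 groups of 2 TB NL-SAS disks). Disk counts are a
# sketch only; real layouts depend on the enclosure geometry.

USABLE_TB_TARGET = 200
DISK_TB = 2
DATA_DISKS, PARITY_DISKS = 8, 2                 # RAID 6 "8+2"

usable_per_group = DATA_DISKS * DISK_TB          # 16 TB usable per RAID group
groups = math.ceil(USABLE_TB_TARGET / usable_per_group)
disks = groups * (DATA_DISKS + PARITY_DISKS)
spares = math.ceil(groups / 2)                   # one hot spare per 2 LUNs

# 25:75 silo split and its write-throughput floors from the tender
fast_silo_tb, slow_silo_tb = 0.25 * USABLE_TB_TARGET, 0.75 * USABLE_TB_TARGET
fast_gbps, slow_gbps = 5, 3                      # GB/s write minimums

print(f"{groups} RAID-6 groups, {disks} data/parity disks, {spares} global hot spares")
print(f"Fast silo: {fast_silo_tb:.0f} TB @ >= {fast_gbps} GB/s write")
print(f"Slow silo: {slow_silo_tb:.0f} TB @ >= {slow_gbps} GB/s write")
```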
ITEM 4: System Software
- The job scheduler can be open source. The resource manager can be open source, with integrated workload managers.
- The operating system can be open-source Linux, but not Windows.
- Modules support for maintaining multiple versions of software: all software should be configured with the module environment (http://modules.sourceforge.net).
- All specified solutions and required software products must be clearly listed, with the mode of licensing used and the number of licenses required, including the period of validity and any maintenance or upgrades applicable.
- Restrictions on software usage, if any, should also be indicated.
- A PERPETUAL LICENSE is required for all this software.

ITEM 5: Network Interconnect
- The network should be a fully non-blocking interconnect fabric with QDR/FDR/Omni-Path, equivalent or higher in terms of bandwidth: chassis switch with redundant power supply and redundant fans, HBA cards, cables, etc.
- Management switch: Gigabit LAN switch.
- Management modules in the IB switch should be redundant.
- The master/head/login node must have at least 2 x 1 Gbps NICs for LAN connectivity.
- All nodes are to be connected by a Gigabit network for administrative work.
- All network interconnect cabling must be structured and adhere to the ANSI/TIA-568 standard.

ITEM 6: Rack
- Server OEM rack, 42U, suitable for the quoted servers and storage.

ITEM 7: Software

Applications that will be run:
- FFTW, P3DFFT, GERRIS, LAMMPS, QUANTUM ESPRESSO, NAMD, GROMACS, etc.
- Scientific programs: Python, NumPy, SciPy, Setuptools, IPython, python-dev, python-numpy, python-matplotlib, python-tk, python-lxml, PyReadline, MDAnalysis, MATLAB.
- Vim, Gvim, gnuplot, NTPD, GRACE, bc, VMD, Perl, etc.

Compilers and libraries:
- OpenMP, MPI, C, C++ and FORTRAN compilers should be installed.
- 4 Nos. of the PGI OpenACC compiler (academic version), including FORTRAN, should be installed. Vendors should contact NVIDIA for the academic version of the OpenACC compiler and its license.

Cluster management tool:
- Web-based application for supercomputer access; platform-independent interface (Windows/Unix/Mac); the GUI monitor should support all browsers.
- User account management from the master node.
- A monitoring tool like Nagios, or better, integrated with environment values, load/resource status, system environment status, hard disk usage status and hardware configuration.
- PDSH, generation of host keys for users, and PDCP should be configured.
- Users' home directories should be on a file system with quotas enabled.
- Reimaging of new nodes/CPUs from the master node.
- Integration of all software components so as to make the complete HPC cluster system fully functional and usable, e.g. integration of the scheduler with MPI, any license managers, etc.
- Detailed reports about cluster usage statistics: reports for every user and job, including monthly usage, node usage and percentage utilization; history tracking of the number of completed, failed, queued and running jobs, estimated delay and average job duration.
- Automated health checks of nodes, and urgent alert messages about critical errors via SMS/e-mail.

Job/workload management (note: jobs can be submitted from the master node only):
1. CPU-enabled scheduling with checkpoint and restart.
2. Job scheduler configuration, including creating, deleting and disabling queues.
3. Dynamic resource management and resource balancing.
4. Policy-based resource allocation.
5. Configuration of the job scheduler for script-less, short command-line submission of jobs on the master node.
6. Job status monitoring.
7. Job history.
8. Usage accounting and reporting with total utilization of resources.
9. Checking for runaway processes before admitting new jobs.

The proposed job/workload management tool should be open source and should meet at least 7 of the 9 criteria mentioned above. Quote separately for a job/workload management tool that fulfils all of the above requirements; it can be commercial (optional, but quote compulsorily).

Firmware:
- All hardware should be installed with a recent stable version of firmware.
ITEM 8: Warranty and Support (NOT OPTIONAL)
- Cluster management and support for 5 years. Training for general system administration, with documentation, including tasks such as user and node management, installation, upgrades, queuing system management and file system management. One L2/L3-level trained person should be available to help at any time, either remotely or in person. Technical support for administration and maintenance of the HPC at both the software and hardware levels. The vendor will be responsible for protecting data during any upgrades of firmware or the OS. A regularly monitored helpdesk email account should be available to the users. An escalation matrix, with expected timelines, for issues not resolved by the support personnel should be clearly stated. The support person should have enough experience in cluster hardware and software troubleshooting to resolve the problems faced by the users, including fine-tuning of the scheduler's various capabilities, and should be able to produce required status reports of the cluster, when asked, using the software installed on the cluster to manage it.
- Faulty parts should be replaced within 48 hours of a call being logged.
- Hardware warranty for 5 years.
- 5 years of on-site comprehensive warranty. Mention the Annual Maintenance Contract (AMC) charges for the 6th and 7th years separately (optional, but quote compulsorily).

ITEM 9: Documentation (NOT OPTIONAL)
- User creation, deletion and modification.
- Bringing up and shutting down the cluster.
- Disk status monitoring of the master and I/O nodes and the storage enclosure.
- Basic troubleshooting for the storage and the job scheduler.
- Step-by-step installation guide for node configuration from scratch.
- When handing over the cluster, the vendor should provide the full design of the cluster installation, including the electrical connections and network connections, and a user manual clearly explaining how to use the cluster.

ITEM 10: Scope of Work, with Deliverables to be part of the implementation

BOM verification and hardware installation:
1. Physical verification of the hardware items in the Bill of Materials.
2. Rack-mounting all the hardware and connecting power cables and Infiniband and Ethernet cables to all the nodes.
3. Hardware installation of the master and compute nodes, switches and storage.

Implementation and configuration:
4. Installation and configuration of the cluster OS and cluster toolkit on the master node and compute nodes.
5. Installation and configuration of Infiniband drivers.
6. Installation and configuration of the scheduler, queues, users and policies (policies to be discussed with TIFR before the installation process and then implemented).

Installation of compilers and libraries:
7. Installation and integration of Intel compilers.
8. Installation and integration of open-source compilers and libraries.
9. Integration of Infiniband, MPI and the schedulers with the compilers.

Storage:
10. Storage manager installation and configuration.
11. Storage configuration: RAID, creating LUNs.
12. Mapping LUNs to the cluster.
13. Configuration of LUNs on the cluster according to requirements.
14. Testing and verification of the complete setup functionality.

Applications:
15. Installation and configuration of applications.
16. Applications for benchmarking in the acceptance test: LINPACK, LAMMPS, FFTW, P3DFFT, GROMACS, GERRIS, HOOMD, SIMPSON.

Benchmarking:
17. Demonstration of High Performance Linpack (HPL) benchmark performance of minimum 75%.

Training and documentation:
18. Cluster usage training and scheduler training for end users; a well-written user manual on how to use the cluster (documents should be readable by a new user without much help); demonstration of the details of the scheduler and other necessary cluster-related software packages; remote or in-person help for new users to get used to the cluster, and immediate troubleshooting of problems faced by different users.
19. Documentation: a detailed document about the cluster, including hardware and software details.
20. Project final sign-off.
TERMS AND CONDITIONS (MANDATORY, NOT OPTIONAL)
- Any item not specifically mentioned in the specification but, in the opinion of the vendor, required for successful implementation of the HPC solution must be brought to our notice and quoted accordingly.
- If, at the time of installation, it is found that additional hardware or software items are required to meet the operational requirements of the configuration but were not included in the vendor's original list of deliverables, the vendor shall supply such items to ensure the completeness of the configuration at no extra cost.
- The delivery period will be 8 weeks from the date of the purchase order. Once the equipment is delivered on site, the installation, commissioning and acceptance testing period will be within 4 weeks from the date of delivery.
- Immediately after the award of the work, the vendor shall prepare a detailed plan of installation, to be followed by placement of the equipment, etc.
- All vendors participating in this tender must visit the TCIS site for a complete site survey and also meet the TCIS IT team in the pre-bid conference for detailed discussions and clarifications, if any.
- The installation should be done by engineers certified and trained for the HPCC stack (e.g. parallel file system, Infiniband, etc.), followed by comprehensive user training.
- Installation and integration of all supplied hardware and software shall be done by the vendor. The vendor shall install and configure all required hardware and software suites, including but not limited to racking and stacking, cluster networking, configuring all nodes, execution and submission of jobs, installation of compilers (with optimization flags) and applications, configuration of environment variables, and license utility configuration.
- The entire installation should be done at the proposed site only. Remote control of the network will not be given during installation.
- Give all model numbers of the master nodes, compute nodes, hybrid nodes, storage nodes and Infiniband chassis switch, accelerator card details, the maximum number of ports in the IB switch and how many ports are populated.
- Provide the case-logging procedure for both hardware and software failures.
- The OEM is responsible for all performance benchmarks, and the quote should contain an undertaking from the OEM certifying the same.
- The TCIS team will check all the software in ITEM 10, S. No. 16, for at least 3 days. The TCIS team will cross-check the benchmarking and all other tests, based on our input files, on the fully offered solution.
- All LAN cabling should be done on site, to the lengths required, using CAT6. Use factory-crimped CAT6 cables.
- All cabling should be done so as to provide efficient air circulation and should not block any air circulation behind the servers.
- Specify the heat dissipation in BTU and the maximum power consumption of each component when configured as above (an illustrative conversion is sketched after this list).
- All required CAT6 patch cables should be branded (ISO/IEC 11801) and moulded, and should withstand the heat produced at the back of the servers.
- The supplier should have direct system integration (SI) status with the OEM whose product it is quoting. The bidder should have a back-to-back agreement with the OEM to supply and support the OEM's product and solution in India.
- An itemized price list of each hardware item, software bundle, service and warranty is to be given separately and clearly.
- TCIS requires a Single Point of Contact (SPOC) directly from the OEM who is responsible for all issues between TCIS and the OEM/partner executing this project.
- All quotations submitted must follow the prescribed format for technical compliance in the attached document (Annexure B-3). Failure to do so will result in the quotation being summarily rejected.
- The product offered should not reach end of life for sales for at least three years from the date of installation.
- SLA of 98% uptime, with on-site response within 24 hours of reporting, failing which a penalty will be applicable based on the deviation.
- The bidder has to ensure that the proposed solution delivers an uptime of 98% for the entire system on a yearly basis and a minimum of 92% on a monthly basis. Every percentage of uptime below 98% on a yearly basis will incur a penalty of 0.1% of the total cost of this tender. In the event of failure of any of the subsystems or components of the proposed solution, the bidder has to ensure that the defects are rectified within two full working days. All these conditions need to be satisfied. Any delay in servicing node(s) beyond 3 days will incur a penalty of 0.2% of the total cost of this tender per day of delay. Any delay beyond 24 hours in restoring the storage or any of its subsystems will incur a penalty of 0.2% of the total cost of this tender for every completed 24 hours. (An illustrative penalty calculation is sketched after this list.)
- The bidder should provide complete documentation of the rack layout and the power, cooling and electrical infrastructure required at TCIS along with the bid.
- The entire solution is to be implemented within a 12-week timeline. Delay in delivery will attract a penalty, and TCIS reserves the right to cancel the order if the solution is not deployed even after that.
- Delays attributable to TCIS will not be considered in computing the time.
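The sketch below is an illustrative reading of the uptime-penalty clauses and the watts-to-BTU conversion referenced in the list above. The tender cost, uptime figures and wattage are placeholders, and the assumption that the penalty accrues per whole percentage point of shortfall is an interpretation, not a statement from the tender.

```python
# Illustrative reading of the SLA penalty and heat-dissipation clauses.

def uptime_penalty_pct(yearly_uptime_pct, target=98.0, rate_per_point=0.1):
    """Penalty (% of total tender cost) for each percentage point of
    yearly uptime below the 98% target (interpretation: pro-rata)."""
    shortfall = max(0.0, target - yearly_uptime_pct)
    return rate_per_point * shortfall

def node_delay_penalty_pct(days_to_service, grace_days=3, rate_per_day=0.2):
    """Penalty for node-servicing delays beyond the 3-day grace period."""
    return rate_per_day * max(0, days_to_service - grace_days)

def watts_to_btu_per_hr(watts):
    """Heat-dissipation conversion for the per-component BTU figures."""
    return watts * 3.412

if __name__ == "__main__":
    print(uptime_penalty_pct(96.5))     # 0.15 -> 0.15% of tender cost
    print(node_delay_penalty_pct(5))    # 0.4  -> 0.4% of tender cost
    print(watts_to_btu_per_hr(750))     # ~2559 BTU/hr for a 750 W component
```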
Administrative Officer
