The source code to OpenNeuro is available on GitHub and can be installed on your own server. Be aware, though, that it is a narrow, domain-specific repository closely tied to BIDS: every dataset that is uploaded and hosted has to be BIDS compliant. Depending on the data you want to store, you may also want to consider a more generic repository system such as Dataverse, which can likewise be downloaded from GitHub. Besides differing in domain and in the restrictions they place on data organization, the two systems also differ in their models for "roles": Dataverse has more elaborate roles for stakeholders with different responsibilities (e.g. for reviewing and granting access), whereas OpenNeuro only distinguishes uploaders and downloaders. Which one works best depends on the needs of the researchers who will deposit the data.
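To give an idea of what "BIDS compliant" means in practice: the full specification check is done by the official bids-validator tool, but as a rough sketch, a pre-flight check of the most basic layout requirements could look like the following. This is an illustrative, hypothetical helper (the function name and the choice of checks are my own, not part of any BIDS tooling), covering only the required dataset_description.json fields and the presence of subject directories.

```python
import json
from pathlib import Path


def looks_like_bids(root):
    """Rough pre-flight check of a dataset directory before upload.

    Hypothetical sketch: only verifies two of the most basic BIDS
    requirements; the official bids-validator does the real check.
    Returns (ok, message).
    """
    root = Path(root)

    # A BIDS dataset must carry a dataset_description.json at the top level
    desc = root / "dataset_description.json"
    if not desc.is_file():
        return False, "missing dataset_description.json"
    try:
        meta = json.loads(desc.read_text())
    except json.JSONDecodeError:
        return False, "dataset_description.json is not valid JSON"

    # "Name" and "BIDSVersion" are required fields in that file
    for field in ("Name", "BIDSVersion"):
        if field not in meta:
            return False, f"dataset_description.json lacks required field {field!r}"

    # Subject data lives in sub-<label> directories
    if not any(p.is_dir() and p.name.startswith("sub-") for p in root.iterdir()):
        return False, "no sub-<label> subject directories found"

    return True, "basic layout checks passed"
```

A repository operator could run a check like this server-side at upload time, which is essentially what OpenNeuro does (with the real validator) to enforce compliance.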
Regarding hardware, I don’t know exactly what is required, but I can imagine a single server or possibly a few (e.g. one for the web interface, one for management, and one for file storage). Requirements for redundancy/high availability and performance would make the setup more complex. Nowadays I would say that a virtualized setup in the cloud makes the most sense to start with, since it avoids a large up-front hardware investment. But if you already have some computers available, those should work fine for setting up a (test) environment.