There is no way: if the NameNode dies and no backup of its metadata exists, the data cannot be recovered, because the NameNode holds the only mapping from files to their blocks.
To open a file, a client contacts the Name Node and retrieves a list of locations for the blocks that comprise the file. These locations identify the Data Nodes which hold each block. Clients then read file data directly from the Data Node servers, possibly in parallel. The Name Node is not directly involved in this bulk data transfer, keeping its overhead to a minimum.
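The division of labor described above can be illustrated with a toy Java sketch (this is not the real HDFS API; the file name, block IDs, and DataNode addresses are all made up): the client asks the "NameNode" only for block locations, then reads the block bytes directly from the "DataNodes".

```java
import java.util.List;
import java.util.Map;

/** Toy model of the HDFS read path: metadata from the NameNode, data from DataNodes. */
public class HdfsReadSketch {
    // NameNode metadata: file name -> ordered list of block IDs,
    // and block ID -> DataNodes holding a replica (addresses are placeholders).
    static Map<String, List<String>> fileToBlocks = Map.of(
            "/logs/app.log", List.of("blk_1", "blk_2"));
    static Map<String, List<String>> blockToDataNodes = Map.of(
            "blk_1", List.of("dn1:50010", "dn2:50010"),
            "blk_2", List.of("dn2:50010", "dn3:50010"));
    // DataNode storage: block ID -> block contents.
    static Map<String, String> dataNodeStorage = Map.of(
            "blk_1", "first half,", "blk_2", " second half");

    // Step 1: metadata-only call to the NameNode.
    static List<String> getBlockLocations(String file) {
        return fileToBlocks.get(file);
    }

    // Step 2: bulk data read straight from a DataNode replica;
    // in real HDFS this is a socket read, and the NameNode is not involved.
    static String readBlock(String blockId) {
        String replica = blockToDataNodes.get(blockId).get(0); // pick the first replica
        return dataNodeStorage.get(blockId);
    }

    public static void main(String[] args) {
        StringBuilder contents = new StringBuilder();
        for (String block : getBlockLocations("/logs/app.log")) {
            contents.append(readBlock(block));
        }
        System.out.println(contents); // prints "first half, second half"
    }
}
```

Because each block read names a specific DataNode, a client holding the location list can fetch different blocks from different DataNodes in parallel.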
As of the 0.20 release, Hadoop supports the following read-only default configuration files: core-default.xml, hdfs-default.xml and mapred-default.xml.
Hadoop does not recommend changing these default configuration files; instead, all site-specific changes should be made in the following files: core-site.xml, hdfs-site.xml and mapred-site.xml.
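For example, a site-specific override could look like the following core-site.xml fragment (the host name `namenode.example.com` is a placeholder; `fs.default.name` is the 0.20-era property naming the default file system):

```xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://namenode.example.com:9000</value>
  </property>
</configuration>
```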
Unless explicitly turned off, Hadoop by default specifies two resources, loaded in order from the classpath: core-default.xml (the read-only defaults) and core-site.xml (the site-specific configuration).
Hence, if the same property is defined in both core-default.xml and core-site.xml, the value in core-site.xml is used, because it is loaded later (the same holds for the other two file pairs).
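The in-order loading semantics can be sketched in plain Java (a toy model, not Hadoop's actual Configuration class): resources are applied in sequence, so a property set by a later resource overwrites the earlier value.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/** Toy model of Hadoop's in-order configuration resource loading. */
public class ConfigLoadOrder {
    // Apply resources in order; later values overwrite earlier ones.
    static Map<String, String> load(List<Map<String, String>> resources) {
        Map<String, String> conf = new LinkedHashMap<>();
        for (Map<String, String> resource : resources) {
            conf.putAll(resource);
        }
        return conf;
    }

    public static void main(String[] args) {
        // io.file.buffer.size is a real Hadoop property; 4096 is its shipped default.
        Map<String, String> coreDefault = Map.of("io.file.buffer.size", "4096");
        Map<String, String> coreSite = Map.of("io.file.buffer.size", "65536");
        // core-default.xml is loaded first, core-site.xml second, so the site value wins.
        Map<String, String> conf = load(List.of(coreDefault, coreSite));
        System.out.println(conf.get("io.file.buffer.size")); // prints "65536"
    }
}
```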