
Hadoop Interview Questions


There is no way. If the NameNode dies and there is no backup, then there is no way to recover the data.

To open a file, a client contacts the NameNode and retrieves a list of locations for the blocks that comprise the file. These locations identify the DataNodes that hold each block. Clients then read file data directly from the DataNode servers, possibly in parallel. The NameNode is not directly involved in this bulk data transfer, keeping its overhead to a minimum.
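This block-to-DataNode mapping can be inspected with the fsck tool. The HDFS path below is hypothetical, and a running cluster with a configured client is assumed:

```shell
# -files/-blocks/-locations print, for each file, its blocks and the
# DataNodes that store each block replica.
hadoop fsck /user/alice/input/data.txt -files -blocks -locations
```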

  • hadoop fs -ls — list the contents of an HDFS directory
  • hadoop fs -mkdir — create a directory in HDFS
  • hadoop fs -put localfile hdfsfile — copy a local file into HDFS
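A minimal session combining these commands might look like the following; the directory and file names are hypothetical, and a running HDFS cluster is assumed:

```shell
hadoop fs -mkdir /user/alice/input            # create a directory in HDFS
hadoop fs -put data.txt /user/alice/input/    # upload a local file into it
hadoop fs -ls /user/alice/input               # verify the upload
```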

As of the 0.20 release, Hadoop supports the following read-only default configurations:

  • src/core/core-default.xml
  • src/hdfs/hdfs-default.xml
  • src/mapred/mapred-default.xml

Hadoop does not recommend changing the default configuration files; instead, it recommends making all site-specific changes in the following files:

  • conf/core-site.xml
  • conf/hdfs-site.xml
  • conf/mapred-site.xml

Unless explicitly turned off, Hadoop by default specifies two resources, loaded in order from the classpath:

  • core-default.xml: Read-only defaults for Hadoop.
  • core-site.xml: Site-specific configuration for a given Hadoop installation.

Hence, if the same property is defined in both core-default.xml and core-site.xml, the value in core-site.xml takes effect because it is loaded later (the same is true for the other two file pairs), unless the property is marked final in an earlier resource.
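As an illustration of this override behaviour, a site file might override the NameNode address shipped in the defaults. The property name is the one used in 0.20-era configurations; the host and port below are hypothetical:

```xml
<!-- conf/core-site.xml: this value overrides the one in core-default.xml -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://namenode.example.com:9000</value>
  </property>
</configuration>
```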
