Since dfs.block.size is an HDFS setting, changing it at job submission time shouldn't make any difference, right? For example, if the files read by a job were written with a 128 MB block size and I run
hadoop jar /path/to/.jar xxx -D dfs.block.size=256
would that have any effect, or would I need to change the block size before writing the files to HDFS in the first place? Also, are dfs.block.size and the input split size of map tasks directly related? If I'm right that they are not, is there a way to specify the split size explicitly?
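To make the last part of my question concrete: I've come across the properties mapred.min.split.size and mapred.max.split.size mentioned elsewhere, so something like the sketch below is what I imagine, though I'm not sure these are the right knobs (the jar name, class name, paths, and values here are just placeholders):

```sh
# Hypothetical invocation: pass a min/max split size (in bytes, 268435456 = 256 MB)
# as generic options instead of changing the HDFS block size of the input files.
hadoop jar myjob.jar MyMainClass \
  -D mapred.min.split.size=268435456 \
  -D mapred.max.split.size=268435456 \
  /input /output
```

Is that the intended way to control split size, or does the block size still win?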
Thanks!