I have a text file that lists a large number of properties files — roughly 1000 lines, and each properties file holds about 5000 key-value pairs. For example, a sample file (abc.txt) from which I load each properties file and insert its entries into a LinkedHashMap:
abc1.properties
abc2.properties
abc3.properties
abc4.properties
abc5.properties
So I open that file and, as I read each line, I load the corresponding properties file in the loadProperties method and store its key-value pairs in the LinkedHashMap.
import java.io.BufferedReader;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;
import java.io.InputStream;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Properties;

public class Project {

    // Declared as Map so the LinkedHashMap's insertion order is preserved.
    public static Map<String, String> hashMap;

    public static void main(String[] args) {
        BufferedReader br = null;
        hashMap = new LinkedHashMap<String, String>();
        try {
            br = new BufferedReader(new FileReader(
                    "C:\\apps\\apache\\tomcat7\\webapps\\examples\\WEB-INF\\classes\\abc.txt"));
            String line = null;
            while ((line = br.readLine()) != null) {
                loadProperties(line); // loads abc1.properties the first time
            }
        } catch (FileNotFoundException e1) {
            e1.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            if (br != null) { // guard against NPE if the file could not be opened
                try {
                    br.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }
    // Each properties file is loaded in this method. If a key already exists
    // in the hashMap, the new value is concatenated to the previous value;
    // this repeats every time the key is seen again.
    private static void loadProperties(String line) {
        Properties prop = new Properties();
        InputStream in = Project.class.getResourceAsStream(line);
        if (in == null) { // getResourceAsStream returns null if the file is not on the classpath
            System.err.println("Resource not found: " + line);
            return;
        }
        String value = null;
        try {
            prop.load(in);
            for (Object str : prop.keySet()) {
                if (hashMap.containsKey(str.toString())) {
                    StringBuilder sb = new StringBuilder()
                            .append(hashMap.get(str.toString()))
                            .append("-")
                            .append(prop.getProperty((String) str));
                    hashMap.put(str.toString(), sb.toString());
                } else {
                    value = prop.getProperty((String) str);
                    hashMap.put(str.toString(), value);
                    System.out.println(str + " - " + value);
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            try {
                in.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}
So my question is: since I have more than 1000 properties files and each has over 5000 key-value pairs, and most of the files share the same keys but with different values (so whenever a key repeats, its value must be concatenated to the previous one), is there any limit on the size of a LinkedHashMap as the number of files and key-value pairs keeps growing? And is this code optimized enough to handle this kind of problem?
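As a side note on the "insert, or concatenate if the key exists" step described above: on Java 8+ this pattern can be expressed more concisely with `Map.merge`, which avoids the separate `containsKey`/`get`/`put` calls. A minimal, self-contained sketch (the two in-memory `Properties` objects stand in for two of the properties files and are assumptions for illustration):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Properties;

public class MergeDemo {
    public static void main(String[] args) {
        Map<String, String> map = new LinkedHashMap<>();

        // Simulate two properties files that share the key "host".
        Properties p1 = new Properties();
        p1.setProperty("host", "alpha");
        Properties p2 = new Properties();
        p2.setProperty("host", "beta");

        for (Properties prop : new Properties[] { p1, p2 }) {
            for (String key : prop.stringPropertyNames()) {
                // Insert the value, or concatenate it to the existing one with "-".
                map.merge(key, prop.getProperty(key), (a, b) -> a + "-" + b);
            }
        }

        System.out.println(map.get("host")); // prints "alpha-beta"
    }
}
```

`stringPropertyNames()` also sidesteps the `Object`-to-`String` casts that `prop.keySet()` requires.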