The following walks through four ways to de-duplicate data in Java with HashSet (and its order-preserving cousin LinkedHashSet), as shown below:
De-duplicate while keeping the original order (only one copy of each repeated value is kept)
String[] arr = new String[] {"java", "265", "com", "very", "good", "web"};
// LinkedHashSet drops duplicates while keeping insertion (encounter) order
Collection<String> noDups = new LinkedHashSet<>(Arrays.asList(arr));
System.out.println("(LinkedHashSet) distinct words: " + noDups);
De-duplicate without preserving order (only one copy of each repeated value is kept)
String[] arr = new String[] {"java", "265", "com", "very", "good", "web"};
// HashSet drops duplicates but makes no guarantee about iteration order
Collection<String> noDups = new HashSet<>(Arrays.asList(arr));
System.out.println("(HashSet) distinct words: " + noDups);
De-duplicate without preserving order (only one copy of each repeated value is kept, and duplicates are reported as they are found)
String[] arr = new String[] {"java", "265", "com", "very", "good", "web"};
Set<String> s = new HashSet<>();
for (String a : arr)
{
    // add() returns false if the element was already in the set
    if (!s.add(a))
    {
        System.out.println("Duplicate detected: " + a);
    }
}
System.out.println(s.size() + " distinct words: " + s);
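Note that the sample array above contains no repeated strings, so the "Duplicate detected" branch never actually runs. A small variation with one entry repeated (the arrWithDup and s2 names are introduced here only for illustration) shows it firing:
// same loop, but with "java" repeated so the duplicate branch is exercised
String[] arrWithDup = new String[] {"java", "265", "com", "java", "good", "web"};
Set<String> s2 = new HashSet<>();
for (String a : arrWithDup)
{
    if (!s2.add(a))
    {
        System.out.println("Duplicate detected: " + a); // prints: Duplicate detected: java
    }
}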
De-duplicate without preserving order (values that occur more than once are dropped entirely; only the truly unique ones are kept)
String[] arr = new String[] {"java", "265", "com", "very", "good", "web"};
Set<String> uniques = new HashSet<>();
Set<String> dups = new HashSet<>();
for (String a : arr)
{
    // anything that fails to add to uniques has been seen before
    if (!uniques.add(a))
    {
        dups.add(a);
    }
}
// remove every value that occurred more than once, leaving only the truly unique ones
uniques.removeAll(dups);
System.out.println("Unique words: " + uniques);
System.out.println("Duplicate words: " + dups);