Is there a way to speed up file reading and parsing in R?

I have a directory full of files, each containing JSON-formatted entries, one per line. The files range in size from 5 KB to 200 MB. I have this code that loops over each file, parses the JSON for the data I'm looking for, and finally builds a data frame. The script takes a very long time to finish; in fact, it has never finished.

Is there any way to speed this up so the files can be read faster?

Code:

library(jsonlite)
library(data.table)
setwd("C:/Files/")
#data <- lapply(readLines("test.txt"), fromJSON)
df <- data.frame(Timestamp = factor(), Source = factor(), Host = factor(), Status = factor())
filenames <- list.files("Json_files", pattern = "*.txt", full.names = TRUE)
for (i in filenames) {
  print(i)
  # parse each line of the file as JSON
  data <- lapply(readLines(i), fromJSON)
  # extract the fields of interest from payloadData and bind them row-wise
  myDf <- do.call("rbind", lapply(data, function(d) {
    data.frame(TimeStamp = d$payloadData$timestamp,
               Source = d$payloadData$source,
               Host = d$payloadData$host,
               Status = d$payloadData$status)}))
  # append this file's rows to the running data frame
  df <- rbind(df, myDf)
}

Here is a sample entry; each file contains thousands of entries like this:

{"senderDateTimeStamp":"2016/04/08 10:53:18","senderHost":null,"senderAppcode":"app","senderUsecase":"appinternalstats_prod","destinationTopic":"app_appinternalstats_realtimedata_topic","correlatedRecord":false,"needCorrelationCacheCleanup":false,"needCorrelation":false,"correlationAttributes":null,"correlationRecordCount":0,"correlateTimeWindowInMills":0,"lastCorrelationRecord":false,"realtimeESStorage":true,"receiverDateTimeStamp":1460127623591,"payloadData":{"timestamp":"2016-04-08T10:53:18.169","status":"get","source":"STREAM","fund":"JVV","client":"","region":"","evetid":"","osareqid":"","basis":"","pricingdate":"","content":"","msgname":"","recipient":"","objid":"","idlreqno":"","host":"WEB01","servermember":"test"},"payloadDataText":"","key":"app:appinternalstats_prod","destinationTopicName":"app_appinternalstats_realtimedata_topic","hdfsPath":"app/appinternalstats_prod","esindex":"app","estype":"appinternalstats_prod","useCase":"appinternalstats_prod","appCode":"app"}

{"senderDateTimeStamp":"2016/04/08 10:54:18","senderHost":null,"senderAppcode":"app","senderUsecase":"appinternalstats_prod","destinationTopic":"app_appinternalstats_realtimedata_topic","correlatedRecord":false,"needCorrelationCacheCleanup":false,"needCorrelation":false,"correlationAttributes":null,"correlationRecordCount":0,"correlateTimeWindowInMills":0,"lastCorrelationRecord":false,"realtimeESStorage":true,"receiverDateTimeStamp":1460127623591,"payloadData":{"timestamp":"2016-04-08T10:53:18.169","status":"get","source":"STREAM","fund":"JVV","client":"","region":"","evetid":"","osareqid":"","basis":"","pricingdate":"","content":"","msgname":"","recipient":"","objid":"","idlreqno":"","host":"WEB02","servermember":""},"payloadDataText":"","key":"app:appinternalstats_prod","destinationTopicName":"app_appinternalstats_realtimedata_topic","hdfsPath":"app/appinternalstats_prod","esindex":"app","estype":"appinternalstats_prod","useCase":"appinternalstats_prod","appCode":"app"}

{"senderDateTimeStamp":"2016/04/08 10:55:18","senderHost":null,"senderAppcode":"app","senderUsecase":"appinternalstats_prod","destinationTopic":"app_appinternalstats_realtimedata_topic","correlatedRecord":false,"needCorrelationCacheCleanup":false,"needCorrelation":false,"correlationAttributes":null,"correlationRecordCount":0,"correlateTimeWindowInMills":0,"lastCorrelationRecord":false,"realtimeESStorage":true,"receiverDateTimeStamp":1460127623591,"payloadData":{"timestamp":"2016-04-08T10:53:18.169","status":"get","source":"STREAM","fund":"JVV","client":"","region":"","evetid":"","osareqid":"","basis":"","pricingdate":"","content":"","msgname":"","recipient":"","objid":"","idlreqno":"","host":"WEB02","servermember":""},"payloadDataText":"","key":"app:appinternalstats_prod","destinationTopicName":"app_appinternalstats_realtimedata_topic","hdfsPath":"app/appinternalstats_prod","esindex":"app","estype":"appinternalstats_prod","useCase":"appinternalstats_prod","appCode":"app"}


The slowdown comes from the loop that reads and parses the files one by one while appending to a data frame. This approach scales poorly with many files, because every call to rbind copies an ever-growing data frame, so the total processing time keeps increasing.

The fix is to replace the for loop with lapply and drop the rbind call inside the loop. With lapply the data frame is never copied repeatedly, and the per-file results end up in a convenient list that can be used directly or converted into a single data frame in one step.

Here is a code example of this approach:

# Helper: read one file and return a data frame with the fields of interest
getData <- function(i) {
  print(i)
  data <- lapply(readLines(i), fromJSON)
  do.call("rbind", lapply(data, function(d) {
    data.frame(TimeStamp = d$payloadData$timestamp,
               Source = d$payloadData$source,
               Host = d$payloadData$host,
               Status = d$payloadData$status)}))
}
# Apply the helper to every file; the per-file data frames are collected in a list
myDataList <- lapply(filenames, getData)

By using lapply and avoiding the repeated copying of the data frame, reading and parsing the files becomes much faster. The per-file data is stored in a list, which is convenient for further processing or for converting into a single data frame.
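For example, once the list exists it can be collapsed into a single data frame in one step. A minimal sketch, assuming the getData helper and myDataList from the code above (finalDf and finalDt are just illustrative names):

# Combine all per-file data frames into one data frame in a single step
finalDf <- do.call("rbind", myDataList)
# Alternatively, data.table's rbindlist is usually faster when there are many pieces
library(data.table)
finalDt <- rbindlist(myDataList)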


The slow part of reading and parsing these files in R is turning the text into a data frame. A faster approach is to wrap the lines of a file in "[" and "]" so they form a single JSON array, parse that array with the fromJSON function from the jsonlite package, and take the columns of interest from payloadData. To combine several files into one data frame, pass the list of file names to lapply, apply fromJSON to each file in the same way, and finally bind the resulting list of data frames into one, for example with do.call("rbind", ...) or with rbindlist from the data.table package.

Here is a code example of this approach:

# Read one file's lines, wrap them into a JSON array, and keep only the fields of interest
df <- jsonlite::fromJSON(paste0("[", paste0(readLines("c:/tmp.txt"), collapse = ","), "]"))$payloadData[c("timestamp", "source", "host", "status")]
df
# Do the same for every file and combine the results into one data frame
dflist <- lapply(filenames, function(i) {
  jsonlite::fromJSON(
    paste0("[",
           paste0(readLines(i), collapse = ","),
           "]")
  )$payloadData[c("timestamp", "source", "host", "status")]
})
largedf <- do.call("rbind", dflist)
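As an alternative to do.call("rbind", dflist), the rbindlist function from data.table mentioned above can do the final bind. A minimal sketch, reusing the dflist from the code above:

library(data.table)
# rbindlist binds all data frames in the list in one pass and returns a data.table
largedf <- rbindlist(dflist)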

If you need to handle the nested objects as well, post a new question and reference this one in it to get help with that.
