I am having difficulty figuring out the logic and flow of a problem. I have a dataframe with start times and end times of certain events over a period of many minutes (the real dataframe is huge). I want to find out how many of those events (i.e. offsets) occurred in each minute.
import pandas as pd

onset = pd.DataFrame({'onset': [32.1, 45.3, 78.3, 121.1, 150.3, 190.1]})
offset = pd.DataFrame({'offset': [30.1, 41.3, 71.3, 119.1, 148.3, 185.1]})
timestamps = pd.concat([onset, offset], axis=1)

n = 1
seconds = 60
offset_df = []
for offset in timestamps['offset']:
    if offset < seconds:
        offset_df.append({'clip_offset': offset, 'seconds': seconds})
    elif seconds <= offset <= seconds + 60:
        seconds = seconds + 60
        offset_df.append({'clip_offset': offset, 'seconds': seconds})
    elif offset > seconds + 60:
        new_n = offset / seconds
        n += int(new_n)
        seconds = 60 * n
        offset_df.append({'clip_offset': offset, 'seconds': seconds})
I know my logic for updating 'seconds' is incorrect, and I also know I need a contingency for the case where there are no offset events in a given 60-second chunk.
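For reference, the per-minute counting can be done without a running 'seconds' counter at all: integer-dividing each offset by 60 gives its minute bucket directly, and reindexing fills in the minutes that have no events. A minimal sketch (assuming the offsets live in a flat Series; adapt to your real column):

```python
import pandas as pd

offsets = pd.Series([30.1, 41.3, 71.3, 119.1, 148.3, 185.1])

# Integer-divide by 60: offset 71.3 falls in minute bucket 1 (60-120 s), etc.
minute = (offsets // 60).astype(int)

# Count events per minute bucket.
counts = minute.value_counts().sort_index()

# Reindex so minutes with zero events still appear, with a count of 0.
counts = counts.reindex(range(int(offsets.max() // 60) + 1), fill_value=0)
```

This handles the empty-chunk contingency via `fill_value=0`, and it stays vectorized, which matters for a huge dataframe.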