Maximilian Reimer / expred / Issues / #4

Closed
Created Apr 08, 2021 by Maximilian Reimer (@mreimer), Maintainer

Refactor loading of data to use a data loader

I would suggest wrapping the data loading in a PyTorch DataLoader and using a sampler for shuffling and batching.

One needs to:

  1. Implement a dataset that takes care of loading and tokenization and returns instances. I think a map-style dataset that returns a dict/object (like SentenceEvidence) would work.
  2. Implement a collate_fn that takes a list of these objects and returns a batch.
  3. Then one can just use:

def collate_and_pad_batch(instances):
    ...

dataset = EraserDataset(..., split='train')

loader = DataLoader(dataset, batch_size=1, shuffle=True, num_workers=0,
                    collate_fn=collate_and_pad_batch)

for batch in loader:
    # do training
    ...
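A minimal sketch of steps 1 and 2; note that EraserDataset, SentenceEvidence-style dicts, and the field names `tokens`/`label` are assumptions here, not the project's actual API:

```python
# Hypothetical sketch: the class/field names are placeholders, not expred's real API.
import torch
from torch.utils.data import Dataset, DataLoader
from torch.nn.utils.rnn import pad_sequence

class EraserDataset(Dataset):
    """Map-style dataset: loads/tokenizes up front, returns one instance per index."""
    def __init__(self, examples, split='train'):
        # `examples` stands in for the tokenized instances loaded for `split`
        self.examples = examples

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, idx):
        # one dict per instance, e.g. {'tokens': [...], 'label': ...}
        return self.examples[idx]

def collate_and_pad_batch(instances):
    """Pad variable-length token sequences and stack labels into a batch."""
    tokens = [torch.tensor(inst['tokens']) for inst in instances]
    labels = torch.tensor([inst['label'] for inst in instances])
    return {
        'tokens': pad_sequence(tokens, batch_first=True, padding_value=0),
        'labels': labels,
    }
```

With this, the DataLoader handles shuffling, batching, and (optionally) multi-process loading for free, and the padding logic lives in one place.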

For more information see

Edited Apr 08, 2021 by Maximilian Reimer