Conference paper Open Access

Unsupervised Video Summarization via Attention-Driven Adversarial Learning

Apostolidis, Evlampios; Adamantidou, Eleni; Metsai, Alexandros; Mezaris, Vasileios; Patras, Ioannis


DataCite XML Export

<?xml version='1.0' encoding='utf-8'?>
<resource xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://datacite.org/schema/kernel-4" xsi:schemaLocation="http://datacite.org/schema/kernel-4 http://schema.datacite.org/meta/kernel-4.1/metadata.xsd">
  <identifier identifierType="URL">https://zenodo.org/record/3605501</identifier>
  <creators>
    <creator>
      <creatorName>Apostolidis, Evlampios</creatorName>
      <givenName>Evlampios</givenName>
      <familyName>Apostolidis</familyName>
      <affiliation>CERTH &amp; QMUL</affiliation>
    </creator>
    <creator>
      <creatorName>Adamantidou, Eleni</creatorName>
      <givenName>Eleni</givenName>
      <familyName>Adamantidou</familyName>
      <affiliation>CERTH</affiliation>
    </creator>
    <creator>
      <creatorName>Metsai, Alexandros</creatorName>
      <givenName>Alexandros</givenName>
      <familyName>Metsai</familyName>
      <affiliation>CERTH</affiliation>
    </creator>
    <creator>
      <creatorName>Mezaris, Vasileios</creatorName>
      <givenName>Vasileios</givenName>
      <familyName>Mezaris</familyName>
      <affiliation>CERTH</affiliation>
    </creator>
    <creator>
      <creatorName>Patras, Ioannis</creatorName>
      <givenName>Ioannis</givenName>
      <familyName>Patras</familyName>
      <affiliation>QMUL</affiliation>
    </creator>
  </creators>
  <titles>
    <title>Unsupervised Video Summarization via Attention-Driven Adversarial Learning</title>
  </titles>
  <publisher>Zenodo</publisher>
  <publicationYear>2020</publicationYear>
  <subjects>
    <subject>Video summarization</subject>
    <subject>Unsupervised learning</subject>
    <subject>Attention mechanism</subject>
    <subject>Adversarial learning</subject>
  </subjects>
  <dates>
    <date dateType="Issued">2020-01-06</date>
  </dates>
  <resourceType resourceTypeGeneral="Text">Conference paper</resourceType>
  <alternateIdentifiers>
    <alternateIdentifier alternateIdentifierType="url">https://zenodo.org/record/3605501</alternateIdentifier>
  </alternateIdentifiers>
  <relatedIdentifiers>
    <relatedIdentifier relatedIdentifierType="DOI" relationType="IsIdenticalTo">10.1007/978-3-030-37731-1_40</relatedIdentifier>
    <relatedIdentifier relatedIdentifierType="URL" relationType="IsPartOf">https://zenodo.org/communities/retv-h2020</relatedIdentifier>
  </relatedIdentifiers>
  <rightsList>
    <rights rightsURI="https://creativecommons.org/licenses/by/4.0/legalcode">Creative Commons Attribution 4.0 International</rights>
    <rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
  </rightsList>
  <descriptions>
    <description descriptionType="Abstract">&lt;p&gt;This paper presents a new video summarization approach that integrates an attention mechanism to identify the significant parts of the video, and is trained in an unsupervised manner via generative adversarial learning. Starting from the SUM-GAN model, we first develop an improved version of it (called SUM-GAN-sl) that has a significantly reduced number of learned parameters, performs incremental training of the model&amp;#39;s components, and applies a stepwise label-based strategy for updating the adversarial part. Subsequently, we introduce an attention mechanism to SUM-GAN-sl in two ways: i) by integrating an attention layer within the variational auto-encoder (VAE) of the architecture (SUM-GAN-VAAE), and ii) by replacing the VAE with a deterministic attention auto-encoder (SUM-GAN-AAE). Experimental evaluation on two datasets (SumMe and TVSum) documents the contribution of the attention auto-encoder to faster and more stable training of the model, resulting in a significant performance improvement with respect to the original model and demonstrating the competitiveness of the proposed SUM-GAN-AAE against the state of the art. Software is publicly available at: https://github.com/e-apostolidis/SUM-GAN-AAE&lt;/p&gt;</description>
  </descriptions>
  <fundingReferences>
    <fundingReference>
      <funderName>European Commission</funderName>
      <funderIdentifier funderIdentifierType="Crossref Funder ID">10.13039/501100000780</funderIdentifier>
      <awardNumber awardURI="info:eu-repo/grantAgreement/EC/H2020/780656/">780656</awardNumber>
      <awardTitle>Enhancing and Re-Purposing TV Content for Trans-Vector Engagement</awardTitle>
    </fundingReference>
  </fundingReferences>
</resource>