FFLA (Fundación Futuro Latinoamericano) is an NGO dedicated to promoting dialogue among the different actors responsible for major environmental decisions. They approached us with the need to build a platform that would connect decision makers, communicators, promoters, and researchers with all the relevant information required for different topics.
The portal seeks to facilitate access to technical and scientific information, news, databases, and relevant tools for all actors involved in coastal and marine conservation, whether they are researchers or decision makers.
The development of conservapacifico.net responds to needs identified in the Plan for the Conservation and Use of the PACIFIC Platform and in the Research and Knowledge Management Strategy for the conservation and sustainable use of marine-coastal resources in the Tropical Eastern Pacific (EIGC-PET).
End users can customize the portal's information according to their own themes and areas of interest, receive relevant information and news, build a community of practice, and interact with contacts through thematic forums. They can also contribute by recommending content and proposing new inputs.
Architecture and Technological Challenges
Even though Drupal 8 had already been released by the time we started the project, a hard delivery date led us to develop the site in Drupal 7, which was far more stable and feature-complete at the time. Nevertheless, most of the site's business logic was built on core functionality and popular contributed modules, with the aim that it could be migrated to Drupal 8 at some point.
One of the challenges we faced was that the platform needed to pull information from hundreds of different sources, including scientific magazines, digital newspapers, and research publications, among others. Most of these sources did not have a standard RSS feed or a REST API that would allow us to parse the information with existing tools, so we had to handle them in source-specific ways. In most cases we needed to parse the HTML output directly: we wrapped the output in a SimpleXML object and queried it with XPath.
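The pattern can be sketched as follows. The project itself used PHP's SimpleXML, but the same idea in Python looks like this; the HTML fragment, element names, and class attributes are invented for illustration, and (like SimpleXML) this approach needs well-formed markup:

```python
# Sketch of scraping a source that has no RSS feed or REST API:
# treat the well-formed HTML fragment as XML and pull out items with
# XPath-style queries. Python's ElementTree supports only a limited
# XPath subset, which is enough for this kind of extraction.
import xml.etree.ElementTree as ET

# Hypothetical fragment as it might come back from a source's listing page.
html = """
<div id="news">
  <div class="article">
    <h2>Coral monitoring update</h2>
    <a href="/articles/coral-monitoring">Read more</a>
  </div>
  <div class="article">
    <h2>New marine protected area</h2>
    <a href="/articles/new-mpa">Read more</a>
  </div>
</div>
"""

def extract_articles(markup: str) -> list[dict]:
    """Return a title/url dict for each article block in the markup."""
    root = ET.fromstring(markup)
    items = []
    for node in root.findall('.//div[@class="article"]'):
        items.append({
            "title": node.findtext("h2"),
            "url": node.find("a").get("href"),
        })
    return items

articles = extract_articles(html)
print(articles[0]["title"])  # prints "Coral monitoring update"
```

In practice each source needed its own query expressions, since every site structures its listings differently.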
Many databases and libraries have been linked as resources through the platform. We also needed to pull information from some of these databases in a structured way so it could be referenced from articles. Articles and publications were also run through the Open Calais service to extract relevant keywords, which were then linked to existing taxonomy terms created on the platform.
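The last step, linking extracted keywords back to the portal's taxonomy, amounts to a normalized lookup. A rough Python sketch of that matching; the term names and IDs are made up, and the real implementation used Drupal's taxonomy API:

```python
# Sketch of mapping keywords returned by a tagging service (such as
# Open Calais) onto pre-existing taxonomy term IDs. Matching is
# case-insensitive; unmatched keywords are collected separately so
# editors can review them as candidate new terms.

# Hypothetical slice of the portal's taxonomy: term name -> term id.
taxonomy = {
    "marine conservation": 12,
    "fisheries": 27,
    "coral reefs": 31,
}

def link_keywords(keywords, terms):
    """Split keywords into (matched term ids, unmatched keywords)."""
    index = {name.lower(): tid for name, tid in terms.items()}
    matched, unmatched = [], []
    for kw in keywords:
        tid = index.get(kw.strip().lower())
        if tid is not None:
            matched.append(tid)
        else:
            unmatched.append(kw)
    return matched, unmatched

matched, new_terms = link_keywords(
    ["Coral Reefs", "Fisheries", "Ocean Acidification"], taxonomy)
print(matched)    # prints [31, 27]
print(new_terms)  # prints ['Ocean Acidification']
```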
Although feature complete, the platform still faces a few challenges that are important for promoting the tools it provides but that were somewhat out of our scope. We hope the platform will continue to grow by adding more sources, improving the relevance linking of its information, and incorporating an ontology structure for the information, in contrast to the tree-like taxonomy vocabularies it contains now. As web development continues to standardize, we expect more sites to adopt schemas that organize and give structure to scientific data, making it easier to pull structured information from the different sources and to surface the most relevant content. For this to happen, the source providers will need to actively collaborate by updating their websites and platforms to provide the needed metadata.
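To illustrate what that standardization buys: a source page annotated with schema.org JSON-LD metadata can be read directly instead of being scraped. A minimal Python sketch, where the embedded article snippet is invented for the example and the regex-based extraction is deliberately naive:

```python
# Sketch of reading schema.org JSON-LD metadata embedded in a page --
# the kind of structured markup we hope more scientific sources adopt.
import json
import re

# Hypothetical page carrying a schema.org ScholarlyArticle annotation.
page = """
<html><head>
<script type="application/ld+json">
{"@context": "https://schema.org",
 "@type": "ScholarlyArticle",
 "headline": "Seasonal dynamics of coastal upwelling",
 "keywords": "oceanography, upwelling"}
</script>
</head><body>...</body></html>
"""

def extract_jsonld(html: str) -> list[dict]:
    """Return every JSON-LD object embedded in the page."""
    pattern = r'<script type="application/ld\+json">(.*?)</script>'
    return [json.loads(m) for m in re.findall(pattern, html, re.DOTALL)]

meta = extract_jsonld(page)[0]
print(meta["headline"])  # prints "Seasonal dynamics of coastal upwelling"
```

With metadata like this, no per-source parsing code is needed at all; the same extractor works for every compliant site.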