Drupal

Drupal for Government: Governmenty forms even more governmenty with FillPDF and Rules

Drupal Planet - 5 hours 4 min ago

When working with government agencies, the sacred form may raise its fugly formatted head now and again. Despite appeals to logic ("Wouldn't an XLS spreadsheet be easier for everyone?"), it sometimes comes down to what's simpler - gettin' er done vs. doin' it right... and if no one really cares about doin' it right, gettin' er done becomes the (sloppy) way, (half)truth, and (dim) light...

So yeah - I had a form that needed to be pixel perfect so that a state-wide agency could print the forms up and store them in a manila folder... I started working with Views PDF. It did generate PDFs... and along with Mime Mail and Rules we were sending PDFs out... but they just weren't looking the way folks wanted... FillPDF - thank you.

To use FillPDF we started by installing pdftk (apt-get install pdftk on Ubuntu) and then installing the module as per usual... here's the rest, step by step.

Categories: Drupal

Drupal Association News: Clever Ways to Raise Funds for D8 Accelerate

Drupal Planet - 7 hours 19 min ago

What’s the most clever way to raise funds for Drupal? Ask Ralf Hendel of Comm Press in Hamburg, Germany.

The Drupal Association is working with the Drupal 8 branch maintainers to provide $250,000 in Drupal 8 Acceleration Grants that will be awarded to individuals and groups, helping them get Drupal 8 from beta to release.

Now the Association needs to raise the funds to support the grants, and we are working with the Association Board to kick off a D8 Accelerate fundraiser. We are asking community members to help out and donate here. We all want to get D8 released so we can enjoy all the launch parties!

The good news is that we only need to raise $125,000 as a community because all donations will be matched! The Association contributed $62,500 and the Association Board raised another $62,500 from Anchor Donors: Acquia, Appnovation, Drupalize.me by Lullabot, Palantir.net, Phase2, PreviousNext, and Wunderkraut.

Having Anchor Partners means…
Every dollar you donate is matched, doubling your impact.

Ralf Hendel, CEO of Comm Press, heard the call and took action in the most creative way. Over the last few years, Ralf learned how to play the piano (very well I might add) and he recently held a benefit recital where he played Bach, Schubert, and Skrjabin. Those attending were asked to donate to D8 Accelerate and together they raised €345. Of course, with the matching funds from our Anchor Partners, that contribution is €690.

At The Drupal Association, we are always amazed at the many talents our community has and we are especially thankful to Ralf for sharing his passion for music and Drupal with others and raising these funds.

There are so many clever ways to raise funds to help D8 get across the finish line. What ideas do you have? Or if you feel like going the traditional route, you can donate here.

Thanks for considering this opportunity to get Drupal 8 released sooner. If you would like to learn more about how D8 Accelerate grants are being given out, please read Angie Byron’s blog post.

Categories: Drupal

Drupalpress, Drupal in the Health Sciences Library at UVA: equipment booking system — managing reservation conflicts with field validation and EntityFieldQuery()

Drupal Planet - 9 hours 46 min ago

The first requirement of a registration system is to have something to reserve.

The second requirement of a registration system is to manage conflicting reservations.

Setting up validation of submitted reservations against the existing reservation nodes was probably the most complex part of this project. A booking module like MERCI has this functionality baked in — but again, that was too heavy for us, so we had to do it on our own. We started off on a fairly thankless path of Views/Rules integration. Basically, we were building a view of existing reservations, contextually filtering that view by content id (the item that someone was trying to reserve), and then setting a rule that would delete the content and redirect the user to an “oops, that’s not available” page. We ran into issues building the view contextually with Rules (for some reason the rule wouldn’t pass the nid …) and even if we had gotten that wired up, it would have been clunky.

Scrap that.

On to Field Validation.

The Field Validation module offers server-side form validation (not to be confused with the Clientside Validation or Webform Validation modules) based on any number of conditions at the field level. We were trying to validate the submitted reservation on length (no longer than 5 days) and availability (no reservations of the same item during any of the days requested).

The length turned out to be pretty straightforward — we set the “Date range2” validator on the reservation date field. The validator lets you choose the “global” date format, which means you can input logic like “+ X days” so long as it can be converted by the strtotime() function.
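
As a rough illustration of what that “global” format allows (a minimal sketch, not the module's internals; the dates are placeholders):

<?php
// Anything strtotime() understands can act as the limit, e.g. "+5 days".
$start = strtotime('2015-03-30');
$requested_end = strtotime('2015-04-06');
$max_end = strtotime('+5 days', $start);

if ($requested_end > $max_end) {
  // The requested range is longer than 5 days, so validation fails.
}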


Field Validation also gives you configurations to bypass the validation criteria by role — this was helpful in our case given that there are special circumstances when “approved” reservations can be made for longer than 5 days. And if something doesn’t validate, you can plug in a custom error message in the validator configuration.


With the condition set for the length of the reservation, we could tackle the real beast. Determining reservation conflicts required us to use the “powerfull [sic] but dangerous” PHP validator from Field Validation. Squirting custom code into our Drupal instance is something we try to avoid as much as possible — it’s difficult to maintain … and as you’ll see below it can be difficult to understand. To be honest, a big part of the impetus for writing this series of blog posts was to document the 60+ lines of code that we strung together to get our booking system to recognize conflicts.

The script starts by identifying information about the item that the patron is trying to reserve (item = $arg1, checkout date = $arg2, return date = $arg3) and then builds an array of dates from the start to the finish of the requested reservation. Then we use EntityFieldQuery() to find all of the reservations that have dates less than or equal to the requested end date. That’s where we use fieldCondition() with <= against the $arg3_for_fc value. What that gives us is all of the reservations on that item that could possibly conflict. Then we sort descending and trim the top value out of the list to get the nearest reservation to the date requested. With that record in hand, we can build another array of start and end dates and use array_intersect() to see if there is any overlap.

I bet that was fun to read.

I’ll leave you with the code and comments:

<?php
// Find arguments from the nid and dates for the requested reservation.
$arg1 = $this->entity->field_equipmentt_item['und'][0]['target_id'];
$arg2 = $this->entity->field_reservation_date['und'][0]['value'];
$arg2 = new DateTime($arg2);
$arg3 = $this->entity->field_reservation_date['und'][0]['value2'];
$arg3 = new DateTime($arg3);
$arg3_for_fc = $arg3->format("Ymd");

// Build out an array of the requested dates for comparison with existing reservations.
$beginning = $arg2;
$ending = $arg3;
$ending = $ending->modify('+1 day');
$argumentinterval = new DateInterval('P1D');
$argumentdaterange = new DatePeriod($beginning, $argumentinterval, $ending);
$arraydates = array();
foreach ($argumentdaterange as $argumentdates) {
  $arraydates[] = $argumentdates->format("Ymd");
}

// Execute an EntityFieldQuery to find the most recent reservation that could conflict.
$query = new EntityFieldQuery();
$fullquery = $query->entityCondition('entity_type', 'node')
  ->entityCondition('bundle', 'reservation')
  ->propertyCondition('status', NODE_PUBLISHED)
  ->fieldCondition('field_equipmentt_item', 'target_id', $arg1, '=')
  ->fieldCondition('field_reservation_date', 'value', $arg3_for_fc, '<=')
  ->fieldOrderBy('field_reservation_date', 'value', 'desc')
  ->range(0, 1);
$fetchrecords = $fullquery->execute();

if (isset($fetchrecords['node'])) {
  $reservation_nids = array_keys($fetchrecords['node']);
  $reservations = entity_load('node', $reservation_nids);

  // Find the object for the nearest reservation from the top.
  $reservations_test = array_slice($reservations, 0, 1);

  // Parse and record values for its start and end dates.
  $startdate = $reservations_test[0]->field_reservation_date['und'][0]['value'];
  $enddate = $reservations_test[0]->field_reservation_date['und'][0]['value2'];

  // Iterate through to create the interval date array.
  $begin = new DateTime($startdate);
  $end = new DateTime($enddate);
  $end = $end->modify('+1 day');
  $interval = new DateInterval('P1D');
  $daterange = new DatePeriod($begin, $interval, $end);
  $arraydates2 = array();
  foreach ($daterange as $date) {
    $arraydates2[] = $date->format("Ymd");
  }

  // Any overlap between the requested dates and the nearest reservation is a conflict.
  $conflicts = array_intersect($arraydates, $arraydates2);
  if ($conflicts != NULL) {
    $this->set_error();
  }
}
?>
Categories: Drupal

Yuriy Babenko: Adding custom contexts to UDFs in Acquia Lift

Drupal Planet - Sun, 2015-03-29 21:55

Within the Lift ecosystem, "contexts" can be thought of as pre-defined functionality that makes data available to the personalization tools, when that data exists in the current state (of the site/user/environment/whatever else).

Use cases

The simplest use of contexts is in mapping their data to UDF fields on /admin/config/content/personalize/acquia_lift_profiles. When the context is available, its data is assigned to a UDF field and included with Lift requests. For example, the Personalize URL Context module (part of the Personalize suite) does exactly this with query string contexts.

First steps

The first thing to do is to implement hook_ctools_plugin_api() and hook_personalize_visitor_context(). These will make the Personalize module aware of your code, and will allow it to load your context declaration class.

Our module is called yuba_lift:

/**
 * Implements hook_ctools_plugin_api().
 */
function yuba_lift_ctools_plugin_api($owner, $api) {
  if ($owner == 'personalize' && $api == 'personalize') {
    return array('version' => 1);
  }
}

/**
 * Implements hook_personalize_visitor_context().
 */
function yuba_lift_personalize_visitor_context() {
  $info = array();
  $path = drupal_get_path('module', 'yuba_lift') . '/plugins';

  $info['yuba_lift'] = array(
    'path' => $path . '/visitor_context',
    'handler' => array(
      'file' => 'YubaLift.inc',
      'class' => 'YubaLift',
    ),
  );

  return $info;
}

The latter hook tells Personalize that we have a class called YubaLift located at /plugins/visitor_context/YubaLift.inc (relative to our module's folder).

The context class

Our context class must extend the abstract PersonalizeContextBase and implement a couple of required methods:

<?php
/**
 * @file
 * Provides a visitor context plugin for Custom Yuba data.
 */

class YubaLift extends PersonalizeContextBase {
  /**
   * Implements PersonalizeContextInterface::create().
   */
  public static function create(PersonalizeAgentInterface $agent = NULL, $selected_context = array()) {
    return new self($agent, $selected_context);
  }

  /**
   * Implements PersonalizeContextInterface::getOptions().
   */
  public static function getOptions() {
    $options = array();

    $options['car_color']   = array('name' => t('Car color'),);
    $options['destination'] = array('name' => t('Destination'),);

    foreach ($options as &$option) {
      $option['group'] = t('Yuba');
    }

    return $options;
  }
}

The getOptions method is what we're interested in; it returns an array of context options (individual items that can be assigned to UDF fields, among other uses). The options are grouped into a 'Yuba' group, which will be visible in the UDF selects.

With this code in place (and cache cleared - for the hooks above), the 'Yuba' group and its context options become available for mapping to UDFs.

Values for options

The context options now need actual values. This is achieved by providing those values to an appropriate JavaScript object. We'll do this in hook_page_build().

/**
 * Implements hook_page_build().
 */
function yuba_lift_page_build(&$page) {
  // build values corresponding to our context options
  $values = array(
    'car_color' => t('Red'),
    'destination' => t('Beach'),
  );

  // add the options' values to JS data, and load separate JS file
  $page['page_top']['yuba_lift'] = array(
    '#attached' => array(
      'js' => array(
        drupal_get_path('module', 'yuba_lift') . '/js/yuba_lift.js' => array(),
        array(
          'data' => array(
            'yuba_lift' => array(
              'contexts' => $values,
            ),
          ),
          'type' => 'setting'
        ),
      ),
    )
  );
}

In the example above we hardcoded our values. In real use cases, the context options' values would vary from page to page, or be omitted entirely (when they're not appropriate) - this will, of course, be specific to your individual application.

With the values in place, we add them to a JS setting (Drupal.settings.yuba_lift.contexts), and also load a JS file. You could store the values in any arbitrary JS variable, but it will need to be accessible from the JS file we're about to create.

The JavaScript

The last piece of the puzzle is creating a new object within Drupal.personalize.visitor_context that will implement the getContext method. This method will look at the enabled contexts (provided via a parameter), and map them to the appropriate values (which were passed via hook_page_build() above):

(function ($) {
  /**
   * Visitor Context object.
   * Code is mostly pulled together from Personalize modules.
   */
  Drupal.personalize = Drupal.personalize || {};
  Drupal.personalize.visitor_context = Drupal.personalize.visitor_context || {};
  Drupal.personalize.visitor_context.yuba_lift = {
    'getContext': function(enabled) {
      if (!Drupal.settings.hasOwnProperty('yuba_lift')) {
        return [];
      }

      var i = 0;
      var context_values = {};

      for (i in enabled) {
        if (enabled.hasOwnProperty(i) && Drupal.settings.yuba_lift.contexts.hasOwnProperty(i)) {
          context_values[i] = Drupal.settings.yuba_lift.contexts[i];
        }
      }
      
      return context_values;
    }
  };

})(jQuery);

That's it! You'll now see your UDF values showing up in Lift requests. You may also want to create new column(s) for the custom UDF mappings in your Lift admin interface.

You can grab the completed module from my GitHub.

Tags: drupal planet, drupal, drupal 7, lift, personalization
Categories: Drupal

Chen Hui Jing: The one without sleep

Drupal Planet - Sun, 2015-03-29 20:00

So I recently participated in my first-ever hackathon over the weekend of March 28. Battlehack Singapore, to be exact (oddly, there was another hackathon taking place at the same time). A UX designer friend of mine had told me about the event and asked if I wanted to join her as a team.
Me: Is there gonna be food at this thing?
Her: Erm…yes.
Me: Sold!
Joking aside, I’d never done a hackathon before and thought it’d be fun to try. We managed to recruit another friend and went as a team of three.

The idea

The theme of the hackathon was to solve a local...

Categories: Drupal

Paul Rowell: My journey in Drupal, 4 years on

Drupal Planet - Sun, 2015-03-29 16:38

I’m approaching the 4 year mark at my agency and along with it my 4 year mark working with Drupal. It’s been an interesting journey so far and I’ve learned a fair bit, but, as with anything in technology, there’s still a great deal left to discover. This is my journey so far and a few key points I learned along the way.

Categories: Drupal

Victor Kane: Getting Started with a Real World Application on platform.sh

Drupal Planet - Sun, 2015-03-29 14:20

This is the second in a series of articles involving the writing and launching of my DurableDrupal Lean ebook series website on platform.sh. Since it's a real world application, this article is for real world website and web application developers. If you are starting from scratch but enthusiastic and willing to learn, that means you too. I'm fortunate enough to have their sponsorship and full technical support, so everything in the article has been tested out on the platform. A link will be edited in here as soon as it goes live.

Diving in

Diving right in, I set up a Trello Kanban Board for Project Inception as follows:

Both Vision (Process, Product) and Candidate Architecture (Process, Product) jobs have been completed, and have been moved to the MVP 1 column. We know what we want to do, and we're doing it with Drupal 7, based on some initial configuration as a starting point (expressed both as an install profile and a drush configuration script). At this point there are three jobs in the To Do column, constituting the remaining preparation for the Team Product Kickoff. And two of them (setup for continuous integration and continuous delivery) are about to be made much easier by virtue of using platform.sh, not only as a home for the production instance, but as a central point of organization for the entire development and deployment process.

Beginning Continuous Integration Workflow

What we'll be doing in this article:

read more

Categories: Drupal

Drupal @ Penn State: Improve User Experience by Ajaxing Your Drupal Forms

Drupal Planet - Sun, 2015-03-29 07:02

Drupal's Form API has everything that we love about the Drupal framework. It's powerful, flexible, and easily extendable with our custom modules and themes. But let's face it; it's boooorrrrriinnnnggg. Users these days are used to their browsers doing the heavy lifting. Page reloads are becoming fewer and fewer, especially when we are expecting our users to take action on our websites. If we are asking our visitors to take time out of their day to fill out a form on our website, that form should be intuitive, easy to use, and not distracting.
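
As a taste of how little it takes (a minimal Drupal 7 sketch, not from the original post; the module name, form, and callback are hypothetical):

function mymodule_example_form($form, &$form_state) {
  $form['color'] = array(
    '#type' => 'select',
    '#title' => t('Favorite color'),
    '#options' => array('red' => t('Red'), 'blue' => t('Blue')),
    // Rebuild just the message wrapper via AJAX instead of a full page reload.
    '#ajax' => array(
      'callback' => 'mymodule_example_form_callback',
      'wrapper' => 'color-message-wrapper',
    ),
  );
  $form['message'] = array(
    '#prefix' => '<div id="color-message-wrapper">',
    '#suffix' => '</div>',
    '#markup' => isset($form_state['values']['color'])
      ? t('You picked @color.', array('@color' => $form_state['values']['color']))
      : '',
  );
  return $form;
}

// AJAX callback: return only the piece of the form to re-render.
function mymodule_example_form_callback($form, $form_state) {
  return $form['message'];
}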

Categories: Drupal

Code Karate: Module Investigator: Fixing an issue in the Drupal Subuser module

Drupal Planet - Sat, 2015-03-28 22:09
Episode Number: 199

In the last episode, we learned about the Drupal Subuser module. In this episode, we continue where we left off and take a look under the hood at the Subuser module's code.

By following along with this episode you will learn some things such as:

Tags: Drupal, Drupal 7, Module Development, Module Investigator, Drupal Planet
Categories: Drupal

Blue Drop Shop: Camp Record Beta Test Three: MidCamp 2015

Drupal Planet - Sat, 2015-03-28 16:08

After my #epicfail that was BADCamp, to say that I was entering MidCamp with trepidation would be the understatement of the year. Two full days of sessions and a 1-and-1 track record were weighing heavily upon my soul. Add to the mix that I was coming directly off of a 5-day con my company runs, and was responsible for MidCamp venue and catering logistics. Oh right, and I ran out of time to make instructions and train anyone else on setup, which only added to my on-site burden.

Testing is good.

After BADCamp, I added a powered 4-port USB hub to the kits, as well as an accessory pack for the H2N voice recorder, mainly for the powered A/C adapter and remote. All told, these two items bring the current cost of the kit to about $425.

In addition, at one of our venue walk-throughs, I was able to actually test the kits with the projectors UIC would be using. The units in two of the rooms had an unexplainable random few-second blackout of the screens, but the records were good and the rest of the rooms checked out.

Success.

After the mad scramble of setting up three breakout rooms and the main stage leading up to the opening keynote, I can't begin to describe the feeling in the pit of my stomach as I pulled the USB stick after stopping the keynote recording, nor the elation I felt after seeing a full record, complete with audio.

We hit a few snags with presenters not starting their records (fixable), older PCs not connecting (possibly fixable), and a couple of sessions that didn’t have audio (hello, redundancy from the voice recorder). Aside from that, froboy and I were able to trim and upload all the successful records during the Sunday sprint.

A huge shout out also goes to jason.bell for helping me on-site with setups and capture. He helped me during Fox Valley’s camp, so I deputized him as soon as I saw him Friday morning.

Learnings.

With the addition of the powered USB hub, we no longer need to steal any ports from the presenter laptop. For all of the first day, we were unnecessarily hooking up the hub’s USB cable to the presenter laptop. Doing this caused a restart of the record kit. We did lose a session to a presenter laptop going to sleep, and I have to wonder whether we would have still captured it if the hub hadn’t been attached.

The VGA-to-HDMI dongle is too unreliable to be part of the kit. When used, either there was no connection, or it would cycle between on and off. Most, if not all, machines that didn’t have Mini DisplayPort or direct HDMI out had full DisplayPort. I will be testing a DisplayPort-to-HDMI dongle as a more reliable option.

Redundant audio is essential. The default record format for the voice recorders is a WAV file. These are best quality, but enormous, which is why I failed at capturing most of BADCamp’s audio (RTFM, right?). By changing the settings to 192kbps MP3, two days of session audio barely made a dent in the 2GB cards that are included with the recorders. Thankfully, this saved three session records: two with no audio at all (still a mystery) and one with blown-out audio.

Trimming and combining in YouTube is a thing. Kudos again to froboy for pointing me to YouTube’s editing capabilities. A couple of sessions had split records (also a mystery), which we then stitched together after upload, and several sessions needed some pre- or post-record trimming. This can all be done in YouTube instead of using a video editor and re-encoding. Granted, YouTube takes what seems like forever to process, but it works, and once you do the editing, you can forget about it.

There is a known issue with Mini DisplayPort to HDMI where a green tint is added to the output. Setting the external PVR to 720p generally fixed this. There were a couple of times when it didn’t, but switching between direct HDMI and Mini DisplayPort to HDMI seemed to resolve most of the issues. Sorry to the few presenters who opted for funky colors before we learned this during the camp. The recording is always fine, but the on-site experience is borked.

Finally, we need to tell presenters to adjust their energy-saver settings. I take this for granted, because the con my company runs is for marketing people who present frequently, and this is basically just assumed to be set correctly. We are a more casual bunch and don’t fret when the laptop sleeps or the screen saver comes up during a presentation. Just move the cursor and roll with it. But that can kill a record... even with the Drupal Association kits. I do plan to test this, now that I’ve learned we don’t need any power at all from the presenter laptop, but it’s still an easy fix with documentation.

Next steps.

Documentation. I need to make simple instruction sheets to include with the kits. Overall, they are really easy to use and connect, but it’s completely unfamiliar territory. With foolproof instructions, presenters can be at ease and room monitors can be tasked with assisting without fear.

Packaging. With the mad dash to set these up, combined with hourly hookups, the kits were a hot mess on the podium. I’ll be working to tighten them up so they look less intimidating and take up less space. No idea what this entails yet, so I’ll gladly accept ideas.

Testing. As mentioned, I will test regular display port to HDMI, as well as various sleep states while recording.

Shipping. Because these kits are so lightweight, part of the plan is to be able to share them with regional camps. There was a lot of interest in these kits from other organizers during the camp. Someone from Twin Cities even offered to purchase a kit to add to the mix, as long as they could borrow the others. A Pelican case with adjustable inserts would be just the ticket.

Sponsors. If you are willing to help finance this project, please contact me at kthull@bluedropshop.com. While Fox Valley Camp owns three kits and MidCamp owns one, wouldn’t it be great to have your branding on these as they make their way around the camp circuit? The equipment costs have (mostly) been reimbursed, but I’ve devoted a lot of time to testing and documenting the process, and will be spending more time with the next steps listed above.

Categories: Drupal

Amazee Labs: Drupal Camp Johannesburg 2015

Drupal Planet - Sat, 2015-03-28 15:52
Drupal Camp Johannesburg 2015

Today I had the pleasure to attend Johannesburg's Drupal Camp 2015.

The event was organized by DASA, which is doing a stunning job of gathering and energizing South Africa's Drupal community. From community subjects to Drupal 8, we got to see a lekker variety of talks, including those by Michael and me on "Drupal 8" and "How to run a successful Drupal Shop".

Special thanks to the organizers Riaan, Renate, Adam, Greg and Robin. Up next will be Drupal Camp Cape Town in September 2015.

Categories: Drupal

DrupalOnWindows: Adding native JSON storage support in Drupal 7 or how to mix RDBM with NoSQL

Drupal Planet - Sat, 2015-03-28 13:45
Language English

I recently read a very interesting article by a long-time .NET developer comparing the MEAN stack (MongoDB-Express-Angular-Node) with traditional .NET application design. (I can't post the link because I'm unable to find the article!)

Among other things, he compared the tedious process of adding a new field in the RDBMS model (modifying the database, then the data layer, then the views and controllers) with the MEAN stack, where it was as simple as adding two lines of code to the UI.

More articles...
Categories: Drupal

Lullabot: Drupalize.Me 2015 Spring Update

Drupal Planet - Fri, 2015-03-27 14:32

The Drupalize.Me team typically gets together each quarter to go over how we did with our goals and to plan out what we want to accomplish and prioritize in the upcoming quarter. These goals range from site upgrades to our next content sprints. A few weeks ago we all flew into Atlanta and did just that. We feel it is important to communicate to our members, and the Drupal community at large, what we've been doing in the world of Drupal training and what our plans are for the near future. What better way to do this than our own podcast?

Categories: Drupal

LevelTen Interactive: Part 2: More Videos That Will Help You Get Started with Drupal

Drupal Planet - Fri, 2015-03-27 11:41

If you are new to Drupal, take a look at our previous blog post, New To Drupal? These Videos Will Help You Get Started. If you just got started in Drupal, how about these short but thorough tutorial videos on Working with Content... Read more

Categories: Drupal

Annertech: COPE - Create Once, Publish Everywhere

Drupal Planet - Fri, 2015-03-27 11:41
COPE - Create Once, Publish Everywhere

When developing websites, we always aim to take a “COPE” approach to web content management. COPE – Create Once, Publish Everywhere – was popularised by NPR (National Public Radio) in the United States. It is a content management philosophy that seeks to allow content creators to add content in one place and then use it in various forms in other places.

Categories: Drupal

Chromatic: Integrating Recurly and Drupal

Drupal Planet - Fri, 2015-03-27 11:21

If you’re working on a site that needs subscriptions, take a look at Recurly. Recurly’s biggest strength is its simple handling of subscriptions, billing, invoices, and all that goes along with it. But how do you get that integrated into your Drupal site? Let’s walk through it.

There are a handful of pieces that work to connect your Recurly account and your Drupal site.
  1. The Recurly PHP library.
  2. The recurly.js library (optional, but recommended).
  3. The Recurly module for Drupal.

The first thing you need to do is bookmark the Recurly API documentation.
Note: The Drupal Recurly module is still using v2 of the API. A re-write of the module to support v3 is in the works, but we have few active maintainers right now (few meaning one, and you’re looking at her). If you find this module of use or potential interest, pop into the issue queue and lend a hand writing or reviewing patches!

Okay, now that I’ve gotten that pitch out of the way, let’s get started.

I’ll be using a new Recurly account and a fresh install of Drupal 7.35 on a local MAMP environment. I’ll also be using drush as I go along (Not using drush?! Stop reading this and get it set up, then come back. Your life will be easier and you’ll thank us.)

  1. The first step is to sign up at https://recurly.com/ and get your account set up with your subscription plan(s). Your account will start out in a sandbox mode, and once you have everything set up with Recurly (it’s a paid service), you can switch to production mode. For our production site, we have a separate account that’s entirely in sandbox mode just for dev and QA, which is nice for testing, knowing we can’t break anything.
  2. Recurly is dependent on the Libraries module, so make sure you’ve got that installed (7.x-2.x version). drush dl libraries && drush en libraries
  3. You’ll need the Recurly Client PHP library, which goes into sites/all/libraries/recurly. This is also an open-source, community-supported library, using v2 of the Recurly API. If you’re using Composer, you can set it as a dependency. You will probably have to create the libraries directory; from the root of your installation, run mkdir sites/all/libraries.
  4. You need the Recurly module, which comes with two sub-modules: Recurly Hosted Pages and Recurly.js. drush dl recurly && drush en recurly
  5. If you are using Recurly.js, you will need that library, v2 of which can be found here. This will need to be placed into sites/all/libraries/recurly-js.
    Your /libraries/ directory should now contain both the recurly and recurly-js folders.
Which integration option is best for my site? There are three different ways to use Recurly with Drupal.

You can just use the library and the module, which include some built-in pages and basic functionality. If you need a great deal of customization and your own functionality, this might be the option for you.

Recurly offers hosted pages, for which there is also a Drupal sub-module. This is the least amount of integration with Drupal; your site won’t be handling any of the account management. If you are low on dev hours or availability, this may be a good option.

Thirdly, and this is the option we are using for one of our clients and demonstrating in this tutorial, you can use the recurly.js library (there is a sub-module to integrate this). Recurly.js is a client-side credit-card authorization service which keeps credit card data from ever touching your server. Users can then make payments directly from your site, but with much less responsibility on your end. You can still do a great deal of customization around the forms – this is what we do, as well as customized versions of the built-in pages.
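
Whichever option you choose, the Drupal module drives the v2 PHP client underneath. If you go the fully custom route (the first option above), direct calls against the library look roughly like this (a minimal sketch; the API key and account code are placeholders):

<?php
// Minimal sketch using the v2 recurly-client-php library described above,
// loaded via the Libraries module. Values here are placeholders.
Recurly_Client::$apiKey = 'your-private-api-key';

try {
  // Look up an account and list its subscriptions.
  $account = Recurly_Account::get('account-123');
  $subscriptions = Recurly_SubscriptionList::getForAccount('account-123');
  foreach ($subscriptions as $subscription) {
    print $subscription->plan->name . ': ' . $subscription->state . "\n";
  }
}
catch (Recurly_NotFoundError $e) {
  print 'Account not found.';
}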

Please note: Whichever of these options you choose, your site will still need a level of PCI-DSS Compliance (Payment Card Industry Data Security Standard). You can read more about PCI Compliance here. This is not prohibitively complex or difficult, and just requires a self-assessment questionnaire.

Settings

You should now have everything in the right place. Let’s get set up.

  1. Go to yoursite.dev/admin/config (just click Configuration at the top) and you’ll see Recurly under Web Services.
  2. You’ll now see a form with a handful of settings. Here’s where to find the values in your Recurly account. Once you set up a subscription plan in Recurly, you’ll find yourself on this page. On the right hand side, go to API Credentials. You may have to scroll down or collapse some menus in order to see it.
  3. Your Private API Key is the first key found on this page (I’ve blocked mine out):
  4. Next, you’ll need to go to Manage Transparent Post Keys on the right. You will not need the public key, as it’s not used in Recurly.js v2.
  5. Click to Enable Transparent Post and Recurly.js v2 API.
  6. Now you’ll see your key. This is the value you’ll enter into the Transparent Post Private Key field.
  7. The last basic setup step is to enter your subdomain. The help text for this field is currently incorrect as of 3/26/2015 and will be corrected in the next release. It is correct in the README file, and on the project page. There is no longer a -test suffix for sandbox mode. Copy your subdomain either from the address bar or from the Site Settings. You don’t need the entire url, so in my case, the subdomain is alanna-demo.
  8. With these settings, you can accept the rest of the default values and be ready to go. The rest of the configuration is specific to how you’d like to set up your account, how your subscription is configured, what fields you want to record in Recurly, how much custom configuration you want to do, and what functionality you need. The next step, if you are using Recurly’s built-in pages, is to enable your subscription plans. In Drupal, head over to the Subscription Plans tab and enable the plans you want to use on your site. Here I’ve just created one test plan in Recurly. Check the boxes next to the plan(s) you want enabled, and click Update Plans.
Getting Ready for Customers

So you have Recurly integrated, but how are people going to use it on your Drupal site? Good question. For this tutorial, we’ll use Recurly.js. Make sure you enable the submodule if you haven’t already: drush en recurlyjs. Now you’ll see some new options on the Recurly admin setting page.

I’m going to keep the defaults for this example. Now when you go to a user account page, you’ll see a Subscription tab with the option to sign up for a plan.

Clicking Sign up will bring you to the signup page provided by Recurly.js.

After filling out the fields and clicking Purchase, you’ll see a handful of brand new tabs. I set this subscription plan to have a trial period, which is reflected here.

Keep in mind, this is the default Drupal theme with no styling applied at all. If you head over to your Recurly account, you’ll see this new subscription.

There are a lot of configuration options, but your site is now integrated with Recurly. You can sign up, change, view, and cancel accounts. If you choose to use coupons, you can do that as well, and we’ve done all of this without any custom code.

If you have any questions, please read the documentation, or head over to the Recurly project page on Drupal.org and see if it’s answered in the issue queue. If not, make sure to submit your issue so that we can address it!

Categories: Drupal

Deeson: Drush Registry Rebuild

Drupal Planet - Fri, 2015-03-27 10:30

Keep Calm and Clear Cache!

This is an often-used phrase in Drupal land. Clearing the cache fixes many issues that can occur in Drupal, usually when a change has been made but isn't being reflected on the site.

But sometimes, clearing cache isn't enough and a registry rebuild is in order.

The Drupal 7 registry contains an inventory of all classes and interfaces for all enabled modules and Drupal's core files. The registry stores the path to the file that a given class or interface is defined in, and loads the file when necessary. On occasion a class may be moved or renamed, and then Drupal doesn't know where to find it, and what appear to be unrecoverable problems occur.

One such example might be if you move the location of a module. This can happen if you have taken over a site where all the contrib and custom modules are stored in the sites/all/modules folder and you want to separate that out into sites/all/modules/contrib and sites/all/modules/custom. After moving the modules into your neat subfolders, things stop working and clearing caches doesn't seem to help.

Enter registry rebuild. This isn't a module, it's a drush command. After downloading it from drupal.org, place the registry_rebuild folder in the directory sites/all/drush.

You should then clear the drush cache so drush knows about the new command

drush cc drush

Then you are ready to rebuild the registry

drush rr

Registry rebuild is a standard tool we use on all projects now and forms part of our deployment scripts when new code is deployed to an environment.

So the next time you feel yourself about to tear your hair out and you've run clear cache ten times, keep calm and give registry rebuild a try.

Categories: Drupal

Joachim's blog: A script for making patches

Drupal Planet - Fri, 2015-03-27 09:46

I have a standard format for patch names: 1234-99.project.brief-description.patch, where 1234 is the issue number and 99 is the (expected) comment number. However, it involves two copy-pastes: one for the issue number, taken from my browser, and one for the project name, taken from my command-line prompt.

Some automation of this is clearly possible, especially as I usually name my git branches 1234-brief-description. More automation is less typing, and so in true XKCD condiment-passing style, I've now written that script, which you can find on github as dorgpatch. (The hardest part was thinking of a good name, and as you can see, in the end I gave up.)

Out of the components of the patch name, the issue number and description can be deduced from the current git branch, and the project from the current folder. For the comment number, a bit more work is needed: drupal.org now has a public API, so a simple REST request gives us data about the issue node, including the comment count.
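
A rough sketch of that lookup (assuming drupal.org's api-d7 endpoint and its comment_count property; the issue number and project name are placeholders):

<?php
// Fetch the issue node from drupal.org's REST API and read the comment count.
$issue = 1234;
$node = json_decode(file_get_contents("https://www.drupal.org/api-d7/node/$issue.json"));

// The expected comment number for a new patch is one past the current count.
$expected = (int) $node->comment_count + 1;
printf("%d-%d.project.brief-description.patch\n", $issue, $expected);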

So far, so good: we can generate the filename for a new patch. But really, the script should take care of doing the diff too. That's actually the trickiest part: figuring out which branch to diff against. It requires a bit of git branch wizardry to look at the branches that the current branch forks off from, and some regular expression matching to find one that looks like a Drupal development branch (i.e., 8.x-4.x, or 8.0.x). It's probably not perfect; I don't know if I accounted for a possibility such as 8.x-4.x branching off a 7.x-3.x which then has no further commits and so is also reachable from the feature branch.

The other thing this script can do is create a tests-only patch. These are useful, and generally advisable on drupal.org issues, to demonstrate that the test not only checks for the correct behaviour, but also fails for the problem that's being fixed. The script assumes that you have two branches: the one you're on, 1234-brief-description, and also one called 1234-tests, which contains only commits that change tests.

The git workflow to get to that point would be:

  1. Create the branch 1234-brief-description
  2. Make commits to fix the bug
  3. Create a branch 1234-tests
  4. Make commits to tests (I assume most people are like me, and write the tests after the fix)
  5. Move the string of commits that are only tests so they fork off at the same point as the feature branch: git rebase --onto 8.x-4.x 1234-brief-description 1234-tests
  6. Go back to 1234-brief-description and do: git merge 1234-tests, so the feature branch includes the tests.
  7. If you need to do further work on the tests, you can repeat with a temporary branch that you rebase onto the tip of 1234-tests. (Or you can cherry-pick the commits. Or do cherry-pick with git rev-list, which is a trick I discovered today.)

Next step will be having the script make an interdiff file, which is a task I find particularly fiddly.

Tags: git, patching, drupal.org, workflow
Categories: Drupal

agoradesign: Introducing the Outdated Browser module

Drupal Planet - Fri, 2015-03-27 09:25
We proudly present our first official drupal.org project, which we released about two months ago: the Outdated Browser module. It detects outdated browsers and advises users to upgrade to a new version - in a very pretty looking way.
Categories: Drupal

Drupal Watchdog: Entity Storage, the Drupal 8 Way

Drupal Planet - Fri, 2015-03-27 08:18

In Drupal 7 the Field API introduced the concept of swappable field storage. This means that field data can live in any kind of storage, for instance a NoSQL database like MongoDB, provided that a corresponding backend is enabled in the system. This feature allows support for some nice use cases, like remotely-stored entity data, or exploiting storage backends that perform better in specific scenarios. However, it also introduces some problems with entity querying, because a query involving conditions on two fields might end up needing to query two different storage backends, which may become impractical or simply unfeasible.

That's the main reason why in Drupal 8 we switched from field-based storage to entity-based storage, which means that all fields attached to an entity type share the same storage backend. This nicely resolves the querying issue without imposing any practical limitation, because to obtain a truly working system you were basically forced to configure all fields attached to the same entity type to share the same storage engine anyway. The main feature that was dropped in the process was the ability to share a field between different entity types, which was another design choice that introduced quite a few troubles on its own and had no compelling reason to exist.

With this change, each entity type has a dedicated storage handler, which for fieldable entity types is responsible for loading, storing, and deleting field data. The storage handler is defined in the handlers section of the entity type definition, through the storage key (surprise!), and can be swapped by modules implementing hook_entity_type_alter().
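
For instance, swapping the storage handler for nodes might look like this (a minimal sketch; the module and storage class are hypothetical):

/**
 * Implements hook_entity_type_alter().
 */
function mymodule_entity_type_alter(array &$entity_types) {
  // Replace the node storage handler with a hypothetical custom class.
  $entity_types['node']->setHandlerClass('storage', 'Drupal\mymodule\MyCustomNodeStorage');
}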

Querying Entity Data

Since we now support pluggable storage backends, we need to write storage-agnostic contrib code. This means we cannot assume entities of any type will be stored in a SQL database, hence we need to rely more than ever on the Entity Query API, which is the successor of the Entity Field Query system available in Drupal 7. This API allows you to write complex queries involving relationships between entity types (implemented via entity reference fields) and aggregation, without making any assumption about the underlying storage. Each storage backend requires a corresponding entity query backend that translates the generic query into a storage-specific one. For instance, the default SQL query backend translates entity relationships to JOINs between entity data tables.

Entity identifiers can be obtained via an entity query or any other viable means, but existing entity (field) data should always be obtained from the storage handler via a load operation. Contrib module authors should be aware that retrieving partial entity data via direct DB queries is a deprecated approach and is strongly discouraged. In fact, by doing this you are completely bypassing many layers of the Entity API, including the entity cache system, which is likely to make your code less performant than the recommended approach. Aside from that, your code will break as soon as the storage backend is changed, and it may not work as intended with modules correctly exploiting the API. The only legal usage of backend-specific queries is when they cannot be expressed through the Entity Query API. Even then, only entity identifiers should be retrieved and used to perform a regular (multiple) load operation.
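
The legal pattern looks roughly like this (a sketch; the denormalized table and condition are hypothetical):

<?php
// A backend-specific query that the Entity Query API cannot express:
// fetch only entity identifiers from a hypothetical denormalized table.
$ids = \Drupal::database()
  ->query('SELECT nid FROM {my_index} WHERE score > :score', array(':score' => 100))
  ->fetchCol();

// Then load full entities through the storage handler, so entity caching
// and alternative storage backends keep working.
$nodes = \Drupal::entityManager()->getStorage('node')->loadMultiple($ids);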

Storage Schema

Probably one of the biggest changes introduced with the Entity Storage API is that the storage backend is now responsible for managing its own schema, if it uses any. Entity type and field definitions are used to derive the information required to generate the storage schema. For instance, the core SQL storage creates (and deletes) all the tables required to store data for the entity types it manages. An entity type can define a storage schema handler via the aptly-named storage_schema key in the handlers section of the entity type definition. However, it does not need to define one if it has no use for it.

Updates are also supported, and they are managed via the regular DB updates UI, which means that the schema will be adapted when entity type and field definitions change, or when they are added or removed. The definition update manager also triggers some events for entity type and field definitions, which can be useful to react to the related changes. It is important to note that not all kinds of changes are allowed: if a change implies a data migration, Drupal will refuse to apply it, and a migration (or a manual adjustment) will be required to proceed.
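
From code, the same updates can be applied through the definition update manager (a minimal sketch):

<?php
// Apply any pending entity type and field definition updates,
// the programmatic equivalent of running the DB updates UI.
$update_manager = \Drupal::entityDefinitionUpdateManager();
if ($update_manager->needsUpdates()) {
  $update_manager->applyUpdates();
}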

This means that if a module requires an additional field on a particular entity type to implement its business logic, it just needs to provide a field definition and apply the changes (via the update manager API shown above, or the DB updates UI), and the system will do the rest. The schema will be created, if needed, and field data will be natively loaded and stored. This is definitely a good reason to define every piece of data attached to an entity type as a field. However, if for any reason the system-provided storage were not a good fit, a field definition can specify that it has custom storage, which means the field provider will handle storage on its own. A typical example is computed fields, which may need no storage at all.
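
Declaring such a field might look like this (a minimal sketch; the field itself is hypothetical):

<?php
// A computed field: custom storage tells the SQL backend not to create
// any columns for it, and the field provider computes the value itself.
$fields['full_name'] = BaseFieldDefinition::create('string')
  ->setLabel(t('Full name'))
  ->setComputed(TRUE)
  ->setCustomStorage(TRUE);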

Core SQL Storage

The default storage backend provided by core is obviously SQL-based. It distinguishes between shared field tables and dedicated field tables: the former are used to store data for all the single-value base fields, that is fields attached to every bundle like the node title, while the latter are used to store data for multiple-value base fields and bundle fields, which are attached only to certain bundles. As the name suggests, dedicated tables store data for just one field.

The default storage supports four different shared table layouts depending on whether the entity type is translatable and/or revisionable:

  • Simple entity types use only a single table, the base table, to store all base field data:
    base: | entity_id | uuid | bundle_name | label | … |
  • Translatable entity types use two shared tables: the base table stores entity keys and metadata only, while the data table stores base field data per language:
    base: | entity_id | uuid | bundle_name | langcode |
    data: | entity_id | bundle_name | langcode | default_langcode | label | … |
  • Revisionable entity types also use two shared tables: the base table stores all base field data, while the revision table stores revision data for revisionable base fields and revision metadata:
    base: | entity_id | revision_id | uuid | bundle_name | label | … |
    revision: | entity_id | revision_id | label | revision_timestamp | revision_uid | revision_log | … |
  • Translatable and revisionable entity types use four shared tables, combining the types described above: the base table stores entity keys and metadata only, the data table stores base field data per language, the revision table stores basic entity key revisions and revision metadata, and finally the revision data table stores base field revision data per language for revisionable fields:
    base: | entity_id | revision_id | uuid | bundle_name | langcode |
    data: | entity_id | revision_id | bundle_name | langcode | default_langcode | label | … |
    revision: | entity_id | revision_id | langcode | revision_timestamp | revision_uid | revision_log |
    revision data: | entity_id | revision_id | langcode | default_langcode | label | … |

The SQL storage schema handler supports switching between these different table layouts, if the entity type definition changes and no data is stored yet.

Core SQL storage aims to support any table layout, hence modules explicitly targeting a SQL storage backend, like for instance Views, should rely on the Table Mapping API to build their queries. This API allows retrieval of information about where field data is stored, and is thus helpful for building queries without hard-coding assumptions about a particular table layout. At least this is the theory; however, core currently does not fully support this use case, as some required changes have not been implemented yet (more on this below). Core SQL implementations currently rely on the specialized DefaultTableMapping class, which assumes one of the four table layouts described above.
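
Asking the storage handler where field data lives might look like this (a sketch; getFieldTableName() is provided by the default table mapping implementation):

<?php
// Ask the storage handler for its table mapping instead of hard-coding
// table names such as "node_field_data".
$storage = \Drupal::entityManager()->getStorage('node');
if ($storage instanceof \Drupal\Core\Entity\Sql\SqlEntityStorageInterface) {
  $table_mapping = $storage->getTableMapping();
  $table = $table_mapping->getFieldTableName('title');
}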

A Real Life Example

We will now have a look at a simple module exemplifying a typical use case: we want to display a list of active users who have created at least one published node, along with the total number of nodes created by each user and the title of the most recent one. Basically, a simple tracker.

Displaying such data with a single query can be complex and will usually lead to very poor performance, unless the number of users on the site is quite small. A typical solution in these cases is to rely on denormalized data that is calculated and stored in a way that makes it easy to query efficiently. In our case we will add two fields to the User entity type to track the last node and the total number of nodes created by each user:

function active_users_entity_base_field_info(EntityTypeInterface $entity_type) {
  $fields = [];
  if ($entity_type->id() == 'user') {
    $fields['last_created_node'] = BaseFieldDefinition::create('entity_reference')
      ->setLabel('Last created node')
      ->setRevisionable(TRUE)
      ->setSetting('target_type', 'node')
      ->setSetting('handler', 'default');
    $fields['node_count'] = BaseFieldDefinition::create('integer')
      ->setLabel('Number of created nodes')
      ->setRevisionable(TRUE)
      ->setDefaultValue(0);
  }
  return $fields;
}

Note that the fields above are marked as revisionable, so that if the User entity type itself is marked as revisionable, our fields will also be revisioned. The revisionable flag is ignored on non-revisionable entity types.

After enabling the module, the status report will warn us that there are DB updates to be applied. Once complete, we will have two new columns in our user_field_data table, ready to store our data. We will now create a new ActiveUsersManager service responsible for encapsulating all our business logic. Let's add an ActiveUsersManager::onNodeCreated() method that will be called from a hook_node_insert() implementation:

public function onNodeCreated(NodeInterface $node) {
  $user = $node->getOwner();
  $user->last_created_node = $node;
  $user->node_count = $this->getNodeCount($user);
  $user->save();
}

protected function getNodeCount(UserInterface $user) {
  $result = $this->nodeStorage->getAggregateQuery()
    ->aggregate('nid', 'COUNT')
    ->condition('uid', $user->id())
    ->execute();
  return $result[0]['nid_count'];
}

As you can see this will track exactly the data we need, using an aggregated entity query to compute the number of created nodes.

Since we also need to act on node deletion (hook_node_delete()), we need to add a few more methods:

public function onNodeDeleted(NodeInterface $node) {
  $user = $node->getOwner();
  if ($user->last_created_node->target_id == $node->id()) {
    $user->last_created_node = $this->getLastCreatedNode($user);
  }
  $user->node_count = $this->getNodeCount($user);
  $user->save();
}

protected function getLastCreatedNode(UserInterface $user) {
  $result = $this->nodeStorage->getQuery()
    ->condition('uid', $user->id())
    ->sort('created', 'DESC')
    ->range(0, 1)
    ->execute();
  return reset($result);
}

In the case where the user's last created node is the one being deleted, we use a regular entity query to retrieve an updated identifier for the user's last created node.
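
For completeness, the hook implementations that delegate to the service might look like this (a sketch; the original article does not show this glue, and the service id is an assumption):

/**
 * Implements hook_node_insert().
 */
function active_users_node_insert(\Drupal\node\NodeInterface $node) {
  \Drupal::service('active_users.manager')->onNodeCreated($node);
}

/**
 * Implements hook_node_delete().
 */
function active_users_node_delete(\Drupal\node\NodeInterface $node) {
  \Drupal::service('active_users.manager')->onNodeDeleted($node);
}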

Nice, but we still need to display our list. To accomplish this we add one last method to our manager service to retrieve the list of active users:

public function getActiveUsers() {
  $ids = $this->userStorage->getQuery()
    ->condition('status', 1)
    ->condition('node_count', 0, '>')
    ->condition('last_created_node.entity.status', 1)
    ->sort('login', 'DESC')
    ->execute();
  return User::loadMultiple($ids);
}

As you can see, in the entity query above we effectively expressed a relationship between the User entity and the Node entity, imposing a condition using the entity syntax (last_created_node.entity.status), which is implemented through a JOIN by the SQL entity query backend.

Finally we can invoke this method in a separate controller class responsible for building the list markup:

public function view() {
  $rows = [];
  foreach ($this->manager->getActiveUsers() as $user) {
    $rows[]['data'] = [
      String::checkPlain($user->label()),
      intval($user->node_count->value),
      String::checkPlain($user->last_created_node->entity->label()),
    ];
  }
  return [
    '#theme' => 'table',
    '#header' => [$this->t('User'), $this->t('Node count'), $this->t('Last created node')],
    '#rows' => $rows,
  ];
}

This approach is way more performant when the numbers get big, as we are running a very fast query involving only a single JOIN on indexed columns. We could even skip that by adding more denormalized fields to our User entity, but I wanted to outline the power of the entity syntax. A possible further optimization would be collecting all the identifiers of the nodes whose titles are going to be displayed and preloading them in a single multiple-load operation preceding the loop.

Aside from the performance considerations, you should note that this code is fully portable: as long as the alternative backend complies with the Entity Storage and Query APIs, the result you will get will be the same. Pretty neat, huh?

What's Left?

What I have shown above is working code; you can use it right now in Drupal 8. However, there are still quite a few open issues to resolve before we can consider the Entity Storage API polished enough:

  • Switching between table layouts is supported by the API, but storage handlers for core entity types still assume the default table layouts, so they need to be adapted to rely on table mappings before we can actually change translatability or revisionability for their entity types. See https://www.drupal.org/node/2274017 and follow-ups.
  • In the example above we might have needed to add indexes to make our query more performant, for example, if we wanted to sort on the total number of nodes created. This is not supported yet, but of course «there's an issue for that!» See https://www.drupal.org/node/2258347.
  • There are cases when you need to provide an initial value for new fields when entity data already exists. Think, for instance, of the File entity module, which needs to add a bundle column to the core File entity. Work is also in progress on this: https://www.drupal.org/node/2346019.
  • Last but not least, most of the time we don't want our users to go and run updates after enabling a module, that's bad UX! Instead a friendlier approach would be automatically applying updates under the hood. Guess what? You can join us at https://www.drupal.org/node/2346013.

Your help is welcome :)

So What?

We have seen the recommended ways to store and retrieve entity field data in Drupal 8, along with (just a few of) the advantages of relying on field definitions to write simple, powerful and portable code. Now, Drupal people, go and have fun!

Categories: Drupal
