curl -sS https://getcomposer.org/installer | php
/usr/bin/php composer.phar --version
sudo mv composer.phar /usr/local/bin/composer
composer --version
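With Composer on the PATH, a quick usage sketch (the monolog/monolog package below is only an illustrative example, not part of the original notes):
composer --version                  # confirm the global install is picked up
composer require monolog/monolog    # example: add a package to the current project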
The first line goes to the great MOOC学院.
The content of this article comes mainly from bestcollegereviews.org, plus a few additions of my own. If you know of other sites that are not on this list, please reply!
New Knowledge
Arts & Music
Math, Data Science & Engineering
Design, Web Design & Development
General
University Courses
IT & Software Development
Languages
—
For a collection of MOOC platforms, see @watterfall's post.
The language-learning websites are drawn mainly from @玛雅蓝's post "6 Websites, 12 Apps: Every Minor Language You Want to Learn Is Here!"
How do you check whether an English-language journal is indexed in ISI's SCI? What is its impact factor? How do you look up its subject categories and rankings? Below is a step-by-step, illustrated walkthrough. Note that impact factors are not really comparable across subject categories, but within the same category the impact factor can be used to compare journals.
Step 1: Go to the Web of Knowledge home page: http://isiknowledge.com or http://webofknowledge.com/
Step 2: Select "Journal Citation Reports" under "Additional Resources".
Step 3: Select "Search for a specific journal" and click "SUBMIT".
Step 4: Enter the journal name you want to look up, e.g. "landscape ecology", and click "SEARCH".
Step 5: You get the journal's basic information for that year, such as its ISSN and Impact Factor.
Step 6: Click the journal's hyperlink for more detail, including its "Subject Categories"; click "Journal Rank in Categories" to see its rankings.
Step 7: You get the journal's rank within each subject category.
For example, in 2010 Landscape Ecology ranked 35th out of 130 journals in "Ecology", 6th out of 42 journals in "Geography, Physical", and 17th out of 167 journals in the multidisciplinary geosciences category.
Originally published at: http://user.qzone.qq.com/1019216662/blog/1331701931
Source: http://blog.sciencenet.cn/blog-54418-549417.html (from Wu Zhifeng's blog on ScienceNet; please credit the source when reposting)
ln -s /opt/lampp/bin/mysql /usr/bin
To make the change permanent for all users on the system, edit /etc/profile and append the following two lines at the end of the file:
PATH=$PATH:/opt/lampp/bin
export PATH
Finally, run source /etc/profile (or the dot command: . /etc/profile) to apply the change, then run echo $PATH to confirm the directory was added.
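Putting the steps above together, a minimal sequence (run as root; the /opt/lampp path is the one used in this post):
echo 'PATH=$PATH:/opt/lampp/bin' >> /etc/profile   # append to the system-wide profile
echo 'export PATH' >> /etc/profile
. /etc/profile                                     # re-read the profile in the current shell
echo $PATH                                         # verify that /opt/lampp/bin is now listed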
L2TP: https://teddysun.com/448.html
https://github.com/hwdsl2/setup-ipsec-vpn/blob/master/README-zh.md#%E5%BF%AB%E9%80%9F%E5%BC%80%E5%A7%8B
1. Download and install
apt-get install pptpd
2. Configure /etc/pptpd.conf
vim /etc/pptpd.conf
Add the following two lines (they are commented out at the end of the config file; just uncomment them and adjust the IPs):
localip 172.16.17.63 # the IP address of this host
remoteip 10.0.0.2-100 # the pool of IP addresses handed out to clients
3. Add DNS servers
cd /etc/ppp
vim options
ms-dns 172.16.10.5
ms-dns 8.8.8.8
4. Set the server name
vim pptpd-options
name 172.16.17.63
5. Configure usernames and passwords on the server
vim chap-secrets
"tao" 172.16.17.63 "tao" *
The fields are: username, server name (can be set to *), password, and the client IP addresses allowed to connect (* for any).
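For illustration, a sketch of /etc/ppp/chap-secrets with a second, hypothetical account added (the "alice" entry and its password are made up for the example; * means any server name / any client IP):
# client        server          secret        IP addresses
"tao"           172.16.17.63    "tao"         *
"alice"         *               "changeme"    *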
6. Check the listening port
netstat -tnlpu |grep pptpd
You should see that port 1723 is open.
echo 1 > /proc/sys/net/ipv4/ip_forward # kernel setting: enable IP forwarding
To make this permanent:
vim /etc/sysctl.conf
net.ipv4.ip_forward = 1 # set the value to 1, then save the file
sysctl -p # apply immediately
7. Without the following rule clients can only reach internal resources; with it they can also reach the internet:
iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADE
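Note that this iptables rule does not survive a reboot. One common way to make it persistent (an assumption on my part, not part of the original steps) is to save the current rules to a file and restore them at boot:
iptables-save > /etc/iptables.rules   # dump the current rules to a file
# then restore them at boot time, e.g. by adding this line to /etc/rc.local:
# iptables-restore < /etc/iptables.rules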
8. Restart the pptpd service, otherwise the client IP range will not take effect:
/etc/init.d/pptpd restart
System
# uname -a               # kernel / OS / CPU information
# head -n 1 /etc/issue   # OS release
# cat /proc/cpuinfo      # CPU details
# hostname               # host name
# lspci -tv              # list all PCI devices
# lsusb -tv              # list all USB devices
# lsmod                  # list loaded kernel modules
# env                    # show environment variables
Resources
# free -m                # memory and swap usage
# df -h                  # usage of each mounted filesystem
# du -sh <directory>     # size of the given directory
# grep MemTotal /proc/meminfo   # total memory
# grep MemFree /proc/meminfo    # free memory
# uptime                 # uptime, logged-in users and load average
# cat /proc/loadavg      # system load
Disks and partitions
# mount | column -t      # mounted filesystems
# fdisk -l               # all partitions
# swapon -s              # all swap areas
# hdparm -i /dev/hda     # disk parameters (IDE devices only)
# dmesg | grep IDE       # IDE device detection at boot time
Network
# ifconfig               # properties of all network interfaces
# iptables -L            # firewall rules
# route -n               # routing table
# netstat -lntp          # all listening ports
# netstat -antp          # all established connections
# netstat -s             # network statistics
Processes
# ps -ef                 # all processes
# top                    # real-time process status
Users
# w                      # active users
# id <username>          # information about the given user
# last                   # login history
# cut -d: -f1 /etc/passwd   # all users on the system
# cut -d: -f1 /etc/group    # all groups on the system
# crontab -l             # the current user's cron jobs
Services
# chkconfig --list       # list all system services
# chkconfig --list | grep on   # list all enabled services
Programs
# rpm -qa                # all installed packages
Using find to locate recently modified files
Reposted from 网络大本营: http://www.xrss.cn/Dev/LINUX/200751213231.Html
The Linux terminal has no graphical search tool as friendly as the one in Windows, but the find command is very powerful.
For example, to search for a file by name you can use find / -name targetfilename. If you only know the name and not the location, this is a brute-force but effective approach.
Searching by time is also supported: -atime (access time), -ctime (status change time) and -mtime (modification time). Note that these are counted in units of 24 hours, and the man page entry for "-mtime n" ("File ...") may leave you confused:
find ./ -mtime 0 returns files modified within the last 24 hours.
find ./ -mtime 1 returns files modified between 48 and 24 hours ago, not files modified within the last 48 hours.
How do you get files modified within the last 10 days? find supports combining expressions, so you can chain the individual days together:
find ./ -mtime 0 -o -mtime 1 -o -mtime 2 ... Crude, but it works.
Is there a better way? I'd like to know too...
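For what it is worth, GNU find also accepts signed numeric arguments: -mtime -n matches files modified less than n*24 hours ago, so the ten-day query collapses into a single test:
find ./ -mtime -10   # files modified within the last 10 days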
The -mmin, -cmin and -amin options work the same way, but in minutes.
Source: http://wx.h5.vc/post/translations/2015-12-14
A lot changed in the software development world in 2015. Several popular new languages were released, and many important frameworks and tools shipped new versions. Below is a short list of what we consider the most important developments, along with suggestions for new things we think are worth your time and effort to learn in 2016.
The big trends
An increasingly clear trend over the past few years is that the business logic of web applications is moving from the backend to the frontend, leaving the backend to serve a simple data API. This makes the choice of frontend framework all the more important.
Another important change in 2015 was the release of the Edge browser. It is the replacement for IE, with a brand-new interface and better performance. Unlike IE, it follows the same rapid release strategy as Firefox and Chrome, which means the JavaScript developer community gets the latest JavaScript and web-standard features within weeks instead of waiting years as in the past.
Languages and platforms
Python 3.5 was released this year with many new features, such as asyncio, which gives you node.js-style event handling, and type hints. Now that Python 3 has finally taken off, we strongly recommend moving away from Python 2. Almost every library now supports Python 3, so this is a good time to upgrade legacy code.
PHP 7 is a major new release that fixes a number of problems and brings new features and performance improvements (see the overview). PHP 7 is roughly twice as fast as PHP 5.6, which matters a great deal for large projects and for CMSs such as WordPress and Drupal. We strongly recommend PHP The Right Way, which has been updated for PHP 7. If you need even more speed and don't mind switching interpreters, try HHVM, which Facebook uses.
JavaScript was also updated, in the form of the ES2015 standard (usually called ES6), which brings exciting new features. Thanks to the rapid release cycles of most browsers, ES2015 support is already very good, and tools like Babel.js let your new code run in older browsers.
Node.js went through a lot this year: the developer community split into Node.js and io.js and then merged back together. The upshot is a project with many active contributors and two versions of Node: a stable LTS (long-term support) release that emphasizes stability and suits long-term projects and large companies, and a non-LTS release that gets new features fastest.
Swift 2 was released earlier this year. It is Apple's modern programming language, designed to simplify iOS and OS X development. A few weeks ago Swift was open-sourced and now runs on Linux, which means you can use it to write server-side applications.
Go 1.5 was released a few months ago and brought major architectural changes. Go grew steadily more popular in 2015 and has been adopted by early-stage startups and open-source projects. The language is very simple, so you can learn it in a weekend.
TypeScript is a statically typed language that compiles to JavaScript. It is developed by Microsoft, so it integrates perfectly with Visual Studio and the open-source Visual Studio Code editors. It is about to get very popular, because the upcoming Angular 2 is written in it. Static typing is especially useful for large teams on large projects, so if that describes you, or you are simply curious, you should give TypeScript a try.
Just for fun, you could also try a functional language such as Haskell or Clojure, or an interesting high-performance language such as Rust or Elixir. If you are looking for a job as a developer, career languages like Java (version 8 has some nice new features) and C# (now cross-platform for development and deployment thanks to Visual Studio Code and .NET Core) are worth investing time in for 2016.
Pick one or more to learn: Python 3, Go, PHP 7, ES2015, Node.js, Swift, TypeScript
JavaScript frameworks
JavaScript frameworks are a very important part of the web development stack, so they get a section of their own. Two new standards arrived this year, Service Workers and Web Assembly, which fundamentally change how modern web apps are built. Several frameworks also shipped new versions that we think you should keep an eye on in 2016.
Angular.js has become the JavaScript framework of choice for large enterprises. Word of the next major version has been around for a while, and a developer preview of Angular 2 was released earlier this year. It is a complete rewrite compared with Angular 1, and in our view a huge improvement. Once the final release ships it is likely to become the default choice for enterprise application development, so Angular 2 experience will be a nice addition to your resume. We suggest waiting a few months until the final version is out before using it in production, but you can read their quick start guide right now.
React kept gaining momentum and kept improving throughout 2015, and more and more new projects are built with it. A few months ago the team released new developer tools. Facebook also released React Native, a framework for building native Android and iOS apps that pairs a native UI with a React-based JavaScript thread running in the background. Take a look at the React tutorial for beginners that we published this year.
Polymer 1.0 was released in May, the first stable, production-ready version. Polymer is built on the Web Components standard, a specification for packaging HTML, JS and CSS into isolated components that are easy to reuse. Only Chrome and Opera support Web Components at the moment, but Polymer takes care of the browser compatibility issues.
Ember.js also shipped a new version. Ember 2 brings modularity, drops a number of deprecated features and improves performance. Ember follows semantic versioning, and the team does its best to make upgrades painless. If you need a framework that is stable and easy to keep up to date, Ember is a good choice.
Pick one or more to learn: Angular 2, React, Ember.js, Polymer, Web Components, Service Workers
Frontend
Bootstrap became even more popular over the past year and is turning into a de facto standard for web development. Version 4, which uses SASS and supports flexbox, will be released in a few months, and the developers promise a smooth upgrade from v3 (unlike the v2 to v3 migration two years ago), so rest assured that what you learn about version 3 will carry over to version 4.
Foundation is another frontend framework and an alternative to Bootstrap. Version 6 was released early in the year with a focus on modularity, so you can include only the pieces you need and keep load times down.
MDL is Google's official framework for building material design web apps. It was released earlier this year and has goals similar to Polymer's, but it is much easier to get started with. We wrote a nice summary of the differences between MDL and Bootstrap.
CSS preprocessors also keep improving. LESS and SASS are the two most popular, with largely comparable feature sets. However, the upcoming Bootstrap 4 has switched to SASS, which gives SASS a slight edge as the preprocessor to learn in 2016. The newer PostCSS tool is also worth watching, but we strongly recommend mastering a preprocessor first.
Pick one or more to learn: Bootstrap, MDL, Foundation, SASS, LESS, PostCSS
Backend
There is a very clear trend in web development these days: more and more application logic is moving to the frontend, and the backend is reduced to an API. Even so, classic backend-rendered applications still have their place, so we think learning a classic full-stack framework remains important.
Which one depends on the language you prefer, and there are plenty of choices. For PHP there are Symfony, Zend, Laravel (and Lumen, a new framework focused on API development), Slim and others. For Python there are Django and Flask; for Ruby, Rails and Sinatra; for Java, Play and Spark; for Node.js, Express, Hapi and Sails.js; and for Go, Revel.
AWS Lambda was released last year, but only now has the concept matured enough for production use. It is an infinitely scalable cloud service that can replace a traditional backend server entirely. You define handlers that run in response to specific conditions or routes when your API is called, which means you never have to manage a server at all.
Another trend is static site generators such as Jekyll and Octopress (there is a full list of them here). Their main job is to render a pile of text and image files into a complete static website. Developers who used to set up a WordPress blog now often prefer to generate the site ahead of time and upload it as static files. This is more secure (there is no backend server or database) and performs very well. Combined with a CDN such as MaxCDN or CloudFlare, users are served from a nearby node, which noticeably cuts latency.
Pick one to learn: a classic full-stack backend framework, AWS Lambda, a static site generator
Content management systems (CMS)
We will cover the two most popular CMSs. Both are written in PHP, both are easy to deploy and get started with, and both get a noticeable speed boost from the PHP 7 release.
In recent years WordPress has grown into far more than a blogging engine. It is a mature CMS/framework that, with plugins, can power any kind of website. High-quality WordPress themes are a huge market, and many freelancers make a living from WordPress development. With projects such as WP-API you can even turn WordPress into a set of REST APIs.
Drupal 8 was released this year. It is a rewrite focused on modern development best practices, built on Symfony 2 components, the Composer package manager and the Twig templating engine. A huge number of sites run on Drupal, and it remains a very good choice for content-heavy portal sites.
Databases
This year the web development community cooled somewhat on NoSQL databases and returned to relational databases such as Postgres and MySQL. The notable exceptions are RethinkDB and Redis, which are both thriving; we strongly recommend trying both in 2016.
MySQL is the most popular open-source database and is supported by most hosting providers. As of version 5.7, MySQL also offers JSON columns for storing non-relational data. If you are just getting into backend development, you will probably be connecting to a database already installed on your server, most likely an older version, so you may not be able to try the JSON column type. MySQL ships with popular bundles such as XAMPP and MAMP, so it is easy to get started with.
Pick one to learn: Redis, RethinkDB, MySQL/MariaDB, PostgreSQL
Mobile apps
Mobile platforms keep improving, and smartphone hardware is now on par with low-end laptops. That is good news for hybrid mobile frameworks: apps built with web technologies can now deliver a smoother, more native-feeling experience.
We published a good overview of hybrid mobile app frameworks that you might find interesting. The popular Ionic framework and Meteor both reached 1.0 recently, and both are well suited to mobile app development. Facebook's open-source React Native runs React components in a background JavaScript thread and updates the native UI, letting you share almost all of your code between iOS and Android apps.
Pick one to learn: Ionic, React Native, Meteor
Editors and developer tools
Atom reached 1.0 this year. It is a free, powerful code editor built with web technologies. It is backed by a large developer community (translator's note: GitHub) that provides many extension packages. It offers good autocompletion and integrates refactoring and linting tools. On top of that it has plenty of attractive themes to choose from, and you can write your own in CoffeeScript and CSS. Facebook has done exactly that and released an editor called Nuclide.
Visual Studio Code, released by Microsoft earlier this year, was a pleasant surprise. It is a lightweight IDE that supports many languages and runs on Windows, Linux and OS X. It provides powerful IntelliSense code analysis and integrated debugging for ASP.NET and Node.js.
NPM, the Node.js package manager, has taken off spectacularly and has become the standard package manager for frontend and Node developers. It is the easiest way to manage your project's JavaScript dependencies, and it is easy to pick up.
These days it is worth using Git even for solo projects. Its distributed model lets you turn any folder into a version-controlled repository, which you can then publish to Bitbucket or GitHub and sync to other machines. If you have not used Git yet, we strongly suggest adding it to your list of things to learn in 2016.
Pick one to learn: Atom, Visual Studio Code, NPM, Git
The Internet of Things
The Raspberry Pi Foundation delivered its Christmas present early: the Raspberry Pi Zero, a capable computer that sells for just 5 dollars. It runs Linux, so you can turn it into a server, a home automation device or a smart mirror, or embed it in another appliance to build the internet-connected coffee machine of your dreams. 2016 is the year to get yourself a Raspberry Pi.
Make 2016 a great year!
2015 was a great year, and 2016 looks to be even more interesting. So what do you plan to learn in 2016?
Translated from: http://tutorialzine.com/2015/12/the-languages-and-frameworks-you-should-learn-in-2016/
Measuring the User Experience on a Large Scale:
User-Centered Metrics for Web Applications
Kerry Rodden, Hilary Hutchinson, and Xin Fu
Google
1600 Amphitheatre Parkway, Mountain View, CA 94043, USA
{krodden, hhutchinson, xfu}@google.com
ABSTRACT
More and more products and services are being deployed
on the web, and this presents new challenges and
opportunities for measurement of user experience on a large
scale. There is a strong need for user-centered metrics for
web applications, which can be used to measure progress
towards key goals, and drive product decisions. In this
note, we describe the HEART framework for user-centered
metrics, as well as a process for mapping product goals to
metrics. We include practical examples of how HEART
metrics have helped product teams make decisions that are
both data-driven and user-centered. The framework and
process have generalized to enough of our company’s own
products that we are confident that teams in other
organizations will be able to reuse or adapt them. We also
hope to encourage more research into metrics based on
large-scale behavioral data.
Author Keywords
Metrics, web analytics, web applications, log analysis.
ACM Classification Keywords
H.5.2 [Information interfaces and presentation]: User
Interfaces—benchmarking, evaluation/methodology.
General Terms
Experimentation, Human Factors, Measurement.
INTRODUCTION
Advances in web technology have enabled more
applications and services to become web-based and
increasingly interactive. It is now possible for users to do a
wide range of common tasks “in the cloud”, including those
that were previously restricted to native client applications
(e.g. word processing, editing photos). For user experience
professionals, one of the key implications of this shift is the
ability to use web server log data to track product usage on
a large scale. With additional instrumentation, it is also
possible to run controlled experiments (A/B tests) that
compare interface alternatives. But on what criteria should
they be compared, from a user-centered perspective? How
should we scale up the familiar metrics of user experience,
and what new opportunities exist?
In the CHI community, there is already an established
practice of measuring attitudinal data (such as satisfaction)
on both a small scale (in the lab) and a large scale (via
surveys). However, in terms of behavioral data, the
established measurements are mostly small-scale, and
gathered with stopwatches and checklists as part of lab
experiments, e.g. effectiveness (task completion rate, error
rate) and efficiency (time-on-task) [13].
A key missing piece in CHI research is user experience
metrics based on large-scale behavioral data. The web
analytics community has been working to shift the focus
from simple page hit counts to key performance indicators.
However, the typical motivations in that community are
still largely business-centered rather than user-centered.
Web analytics packages provide off-the-shelf metrics
solutions that may be too generic to address user experience
questions, or too specific to the e-commerce context to be
useful for the wide range of applications and interactions
that are possible on the web.
We have created a framework and process for defining
large-scale user-centered metrics, both attitudinal and
behavioral. We generalized these from our experiences of
working at a large company whose products cover a wide
range of categories (both consumer-oriented and business-oriented),
are almost all web-based, and have millions of
users each. We have found that the framework and process
have been applicable to, and useful for, enough of our
company’s own products that we are confident that teams in
other organizations will be able to reuse or adapt them
successfully. We also hope to encourage more research into
metrics based on large-scale behavioral data, in particular.
RELATED WORK
Many tools have become available in recent years to help
with the tracking and analysis of metrics for web sites and
applications. Commercial and freely available analytics
packages [5,11] provide off the shelf solutions. Custom
analysis of large-scale log data is made easier via modern
distributed systems [4,8] and specialized programming
languages [e.g. 12]. Web usage mining techniques can be
used to segment visitors to a site according to their behavior
[3]. Multiple vendors support rapid deployment and
analysis of user surveys, and some also provide software for
large-scale remote usability or benchmarking tests [e.g. 14].
A large body of work exists on the proper design and
analysis of controlled A/B tests [e.g. 10] where two similar
populations of users are given different user interfaces, and
their responses can be rigorously measured and compared.
Despite this progress, it can still be challenging to use these
tools effectively. Standard web analytics metrics may be
too generic to apply to a particular product goal or research
question. The sheer amount of data available can be
overwhelming, and it is necessary to scope out exactly what
to look for, and what actions will be taken as a result.
Several experts suggest a best practice of focusing on a
small number of key business or user goals, and using
metrics to help track progress towards them [2, 9, 10]. We
share this philosophy, but have found that this is often
easier said than done. Product teams have not always
agreed on or clearly articulated their goals, which makes
defining related metrics difficult.
It is clear that metrics should not stand alone. They should
be triangulated with findings from other sources, such as
usability studies and field studies [6,9], which leads to
better decision-making [15]. Also, they are primarily useful
for evaluation of launched products, and are not a substitute
for early or formative user research. We sought to create a
framework that would combine large-scale attitudinal and
behavioral data, and complement, not replace, existing user
experience research methods in use at our company.
PULSE METRICS
The most commonly used large-scale metrics are focused
on business or technical aspects of a product, and they (or
similar variations) are widely used by many organizations
to track overall product health. We call these PULSE
metrics: Page views, Uptime, Latency, Seven-day active
users (i.e. the number of unique users who used the product
at least once in the last week), and Earnings.
These metrics are all extremely important, and are related to
user experience – for example, a product that has a lot of
outages (low uptime) or is very slow (high latency) is
unlikely to attract users. An e-commerce site whose
purchasing flow has too many steps is likely to earn less
money. A product with an excellent user experience is
more likely to see increases in page views and unique users.
However, these are all either very low-level or indirect
metrics of user experience, making them problematic when
used to evaluate the impact of user interface changes. They
may also have ambiguous interpretation – for example, a
rise in page views for a particular feature may occur
because the feature is genuinely popular, or because a
confusing interface leads users to get lost in it, clicking
around to figure out how to escape. A change that brings in
more revenue in the short term may result in a poorer user
experience that drives away users in the longer term.
A count of unique users over a given time period, such as
seven-day active users, is commonly used as a metric of
user experience. It measures the overall volume of the user
base, but gives no insight into the users’ level of
commitment to the product, such as how frequently each of
them visited during the seven days. It also does not
differentiate between new users and returning users. In a
worst-case retention scenario of 100% turnover in the user
base from week to week, the count of seven-day active
users could still increase, in theory.
HEART METRICS
Based on the shortcomings we saw in PULSE, both for
measuring user experience quality, and providing
actionable data, we created a complementary metrics
framework, HEART: Happiness, Engagement, Adoption,
Retention, and Task success. These are categories, from
which teams can then define the specific metrics that they
will use to track progress towards goals. The Happiness and
Task Success categories are generalized from existing user
experience metrics: Happiness incorporates satisfaction,
and Task Success incorporates both effectiveness and
efficiency. Engagement, Adoption, and Retention are new
categories, made possible by large-scale behavioral data.
The framework originated from our experiences of working
with teams to create and track user-centered metrics for
their products. We started to see patterns in the types of
metrics we were using or suggesting, and realized that
generalizing these into a framework would make the
principles more memorable, and usable by other teams.
It is not always appropriate to employ metrics from every
category, but referring to the framework helps to make an
explicit decision about including or excluding a particular
category. For example, Engagement may not be meaningful
in an enterprise context, if users are expected to use the
product as part of their work. In this case a team may
choose to focus more on Happiness or Task Success. But it
may still be meaningful to consider Engagement at a feature
level, rather than the overall product level.
Happiness
We use the term “Happiness” to describe metrics that are
attitudinal in nature. These relate to subjective aspects of
user experience, like satisfaction, visual appeal, likelihood
to recommend, and perceived ease of use. With a general,
well-designed survey, it is possible to track the same
metrics over time to see progress as changes are made.
For example, our site has a personalized homepage,
iGoogle. The team tracks a number of metrics via a weekly
in-product survey, to understand the impact of changes and
new features. After launching a major redesign, they saw
an initial decline in their user satisfaction metric (measured
on a 7-point bipolar scale). However, this metric recovered
over time, indicating that change aversion was probably the
cause, and that once users got used to the new design, they
liked it. With this information, the team was able to make a
more confident decision to keep the new design.
Engagement
Engagement is the user’s level of involvement with a
product; in the metrics context, the term is normally used to
refer to behavioral proxies such as the frequency, intensity,
or depth of interaction over some time period. Examples
might include the number of visits per user per week, or the
number of photos uploaded per user per day. It is generally
more useful to report Engagement metrics as an average per
user, rather than as a total count – because an increase in
the total could be a result of more users, not more usage.
For example, the Gmail team wanted to understand more
about the level of engagement of their users than was
possible with the PULSE metric of seven-day active users
(which simply counts how many users visited the product at
least once within the last week). With the reasoning that
engaged users should check their email account regularly,
as part of their daily routine, our chosen metric was the
percentage of active users who visited the product on five
or more days during the last week. We also found that this
was strongly predictive of longer-term retention, and
therefore could be used as a bellwether for that metric.
Adoption and Retention
Adoption and Retention metrics can be used to provide
stronger insight into counts of the number of unique users
in a given time period (e.g. seven-day active users),
addressing the problem of distinguishing new users from
existing users. Adoption metrics track how many new users
start using a product during a given time period (for
example, the number of accounts created in the last seven
days), and Retention metrics track how many of the users
from a given time period are still present in some later time
period (for example, the percentage of seven-day active
users in a given week who are still seven-day active three
months later). What counts as “using” a product can vary
depending on its nature and goals. In some cases just
visiting its site might count. In others, you might want to
count a visitor as having adopted a product only if they
have successfully completed a key task, like creating an
account. Like Engagement, Retention can be measured over
different time periods – for some products you might want
to look at week-to-week Retention, while for others
monthly or 90-day might be more appropriate. Adoption
and Retention tend to be especially useful for new products
and features, or those undergoing redesigns; for more
established products they tend to stabilize over time, except
for seasonal changes or external events.
For example, during the stock market meltdown in
September 2008, Google Finance had a surge in both page
views and seven-day active users. However, these metrics
did not indicate whether the surge was driven by new users
interested in the crisis, or existing users panic-checking
their investments. Without knowing who was making more
visits, it was difficult to know if or how to change the site.
We looked at Adoption and Retention metrics to separate
these user types, and examine the rate at which new users
were choosing to continue using the site. The team was
able to use this information to better understand the
opportunities presented by event-driven traffic spikes.
Task Success
Finally, the “Task Success” category encompasses several
traditional behavioral metrics of user experience, such as
efficiency (e.g. time to complete a task), effectiveness (e.g.
percent of tasks completed), and error rate. One way to
measure these on a large scale is via a remote usability or
benchmarking study, where users can be assigned specific
tasks. With web server log file data, it can be difficult to
know which task the user was trying to accomplish,
depending on the nature of the site. If an optimal path exists
for a particular task (e.g. a multi-step sign-up process) it is
possible to measure how closely users follow it [7].
For example, Google Maps used to have two different types
of search boxes – a dual box for local search, where users
could enter the “what” and “where” aspects separately (e.g.
[pizza][nyc]) and a single search box that handled all kinds
of searches (including local searches such as [pizza nyc], or
[nyc] followed by [pizza]). The team believed that the
single-box approach was simplest and most efficient, so, in
an A/B test, they tried a version that offered only the single
box. They compared error rates in the two versions, finding
that users in the single-box condition were able to
successfully adapt their search strategies. This assured the
team that they could remove the dual box for all users.
GOALS – SIGNALS – METRICS
No matter how user-centered a metric is, it is unlikely to be
useful in practice unless it explicitly relates to a goal, and
can be used to track progress towards that goal. We
developed a simple process that steps teams through
articulating the goals of a product or feature, then
identifying signals that indicate success, and finally
building specific metrics to track on a dashboard.
Goals
The first step is identifying the goals of the product or
feature, especially in terms of user experience. What tasks
do users need to accomplish? What is the redesign trying to
achieve? Use the HEART framework to prompt articulation
of goals (e.g. is it more important to attract new users, or to
encourage existing users to become more engaged?). Some
tips that we have found helpful:
• Different team members may disagree about what the
project goals are. This process provides a great
opportunity to collect all the different ideas and work
towards consensus (and buy-in for the chosen metrics).
• Goals for the success of a particular project or feature
may be different from those for the product as a whole.
• Do not get too distracted at this stage by worrying
about whether or how it will be possible to find
relevant signals or metrics.
Signals
Next, think about how success or failure in the goals might
manifest itself in user behavior or attitudes. What actions
would indicate the goal had been met? What feelings or
perceptions would correlate with success or failure? At this
stage you should consider what your data sources for these
signals will be, e.g. for logs-based behavioral signals, are
the relevant actions currently being logged, or could they
be? How will you gather attitudinal signals – could you
deploy a survey on a regular basis? Logs and surveys are
the two signal sources we have used most often, but there
are other possibilities (e.g. using a panel of judges to
provide ratings). Some tips that we have found helpful:
• Choose signals that are sensitive and specific to the
goal – they should move only when the user experience
is better or worse, not for other, unrelated reasons.
• Sometimes failure is easier to identify than success (e.g.
abandonment of a task, “undo” events [1], frustration).
Metrics
Finally, think about how these signals can be translated into
specific metrics, suitable for tracking over time on a
dashboard. Some tips that we have found helpful:
• Raw counts will go up as your user base grows, and
need to be normalized; ratios, percentages, or averages
per user are often more useful.
• There are many challenges in ensuring accuracy of
metrics based on web logs, such as filtering out traffic
from automated sources (e.g. crawlers, spammers), and
ensuring that all of the important user actions are being
logged (which may not happen by default, especially in
the case of AJAX or Flash-based applications).
• If it is important to be able to compare your project or
product to others, you may need to track additional
metrics from the standard set used by those products.
CONCLUSIONS
We have spent several years working on the problem of
developing large-scale user-centered product metrics. This
has led to our development of the HEART framework and
the Goals-Signals-Metrics process, which we have applied
to more than 20 different products and projects from a wide
variety of areas within Google. We have described several
examples in this note of how the resulting metrics have
helped product teams make decisions that are both data-driven
and user-centered. We have also found that the
framework and process are extremely helpful for focusing
discussions with teams. They have generalized to enough of
our company’s own products that we are confident that
teams in other organizations will be able to reuse or adapt
them successfully. We have fine-tuned both the framework
and process over more than a year of use, but the core of
each has remained stable, and the framework’s categories
are comprehensive enough to fit new metrics ideas into.
Because large-scale behavioral metrics are relatively new,
we hope to see more CHI research on this topic – for
example, to establish which metrics in each category give
the most accurate reflection of user experience quality.
ACKNOWLEDGMENTS
Thanks to Aaron Sedley, Geoff Davis, and Melanie Kellar
for contributing to HEART, and Patrick Larvie for support.
REFERENCES
1. Akers, D. et al. (2009). Undo and Erase Events as
Indicators of Usability Problems. Proc of CHI 2009,
ACM Press, pp. 659-668.
2. Burby, J. & Atchison, S. (2007). Actionable Web
Analytics. Indianapolis: Wiley Publishing, Inc.
3. Chi, E. et al. (2002). LumberJack: Intelligent Discovery
and Analysis of Web User Traffic Composition. Proc of
WebKDD 2002, ACM Press, pp. 1-15.
4. Dean, J. & Ghemawat, S. (2008). MapReduce:
Simplified Data Processing on Large Clusters.
Communications of the ACM, 51 (1), pp. 107-113.
5. Google Analytics: http://www.google.com/analytics
6. Grimes, C. et al. (2007). Query Logs Alone are not
Enough. Proc of WWW 07 Workshop on Query Log
Analysis: http://querylogs2007.webir.org
7. Gwizdka, J. & Spence, I. (2007). Implicit Measures of
Lostness and Success in Web Navigation. Interacting
with Computers 19(3), pp. 357-369.
8. Hadoop: http://hadoop.apache.org/core
9. Kaushik, A. (2007). Web Analytics: An Hour a Day.
Indianapolis: Wiley Publishing, Inc.
10. Kohavi, R. et al. (2007). Practical Guide to Controlled
Experiments on the Web. Proc of KDD 07, ACM Press,
pp. 959-967.
11. Omniture: http://www.omniture.com
12. Pike, R. et al. (2005). Interpreting the Data: Parallel
Analysis with Sawzall. Scientific Programming (13), pp.
277-298.
13. Tullis, T. & Albert, W. (2008). Measuring the User
Experience. Burlington: Morgan Kaufmann.
14. UserZoom: http://www.userzoom.com
15. Weischedel, B. & Huizingh, E. (2006). Website
Optimization with Web Metrics: A Case Study. Proc of
ICEC 06, ACM Press, pp. 463-470.