Graduation Thesis Foreign Literature Translation (Chinese-English), Computer Science and Technology: Preprocessing and Mining Web Log Data for Web Personalization

Taizhou Institute of Science and Technology, Nanjing University of Science and Technology
Graduation Project (Thesis) Foreign Literature Translation

Department: Computer Science and Technology
Major: Computer Science and Technology
Name:
Student ID:
Source of the foreign text: Dipartimento di Informatica, Università di Pisa
Attachments: 1. Translation of the foreign text; 2. The original text.
Advisor's comments:
Signature:        Date:
Note: please bind this cover page together with the attachments.

Attachment 1: Translation of the foreign text

Preprocessing and Mining Web Log Data for Web Personalization

Abstract: We describe the web usage mining activities of an on-going project, called ClickWorld, that aims at extracting models of the navigational behaviour of a web site's users. The models are inferred from the access logs of a web server by means of data and web mining techniques.

The extracted knowledge is deployed to the purpose of offering a personalized and proactive view of the web services to users. We first describe the preprocessing steps on access logs necessary to clean, select and prepare data for knowledge extraction. Then we show two sets of experiments: the first one tries to predict the sex of a user based on the visited web pages, and the second one tries to predict whether a user might be interested in visiting a section of the site.

Keywords: knowledge discovery, web mining, classification.

1 Introduction

Web mining is the use of data mining techniques to automatically discover and extract information from web documents and services. A common taxonomy of web mining defines three main research lines: content mining, structure mining and usage mining. The distinction between these categories is not a clear cut, and very often approaches combine techniques from different categories. Content mining covers data mining techniques to extract models from web object contents, including plain text, semi-structured documents (e.g., HTML or XML), structured documents (digital libraries), dynamic documents and multimedia documents. The extracted models are used to classify web objects, to extract keywords for use in information retrieval, and to infer the structure of semi-structured or unstructured objects. Structure mining aims at finding the underlying topology of the interconnections between web objects. The models built can be used to categorize and rank web sites, and to find out similarities between them. Usage mining is the application of data mining techniques to discover usage patterns from web data. Data is usually collected from users' interaction with the web, e.g. web/proxy server logs, user queries and registration data. Usage mining tools discover and predict user behavior, in order to help the designer improve the web site, to attract visitors, or to give regular users a personalized and adaptive service.

In this paper, we describe the web usage mining activities of an on-going project, called ClickWorld, that aims at extracting models of the behavior of users for the purpose of web site personalization. We have collected and preprocessed access logs from a medium-large national web portal, vivacity.it, over a period of five months. The portal includes a national area (www.vivacity.it) with news, forums, jokes, etc., and more than 30 local areas (e.g., www.roma.vivacity.it) with city-specific information, such as local news, restaurant addresses, theatre programming, bus timetables, etc.

The preprocessing steps include data selection, cleaning and transformation, and the identification of users and user sessions. The result of preprocessing is a data mart of web accesses and registration information. Starting from the preprocessed data, web mining aims at pattern discovery by adapting methods from statistics, data mining, machine learning and pattern recognition. Among the basic data mining techniques we mention association rules, which discover groups of objects that are frequently requested together by users; clustering, which groups users with similar browsing patterns, or groups objects with similar content or access patterns; classification, where a profile is built for the users belonging to a given class or category; and sequential patterns, namely sequences of requests which are common to many users.

In the ClickWorld project, several of the above methods are currently being used to extract useful information for proactive personalization of web sites. In this paper, we describe two sets of classification experiments. The first one aims at extracting a classification model able to discriminate the sex of a user based on the set of web pages visited. The second experiment aims at extracting a classification model able to discriminate those users that visit pages regarding, e.g., sport or finance from those that typically do not.

2 Preprocessing for Web Personalization

We have developed a data mart of web logs specifically to support web personalization analysis. The data mart is populated starting from a web log data warehouse or, more simply, from raw web/proxy server log files. In this section, we describe a number of preprocessing and coding steps performed for data selection, comprehension, cleaning and transformation. While some of them are general data preparation steps for web usage mining, it is worth noting that in many of them a form of domain knowledge must necessarily be included in order to clean, correct and complete the input data according to the web personalization requirements.

2.1 User Registration Data

In addition to web access logs, the input we consider includes personal data on a subset of users, namely those who are registered to the vivacity.it website (note: registration is not mandatory). For a registered user, the system records the following information: sex, city, province, civil status and date of birth. This information is provided by the user in a web form at registration time and, as one could expect, the quality of the data is up to the user's fairness. As preprocessing steps, improbable data are detected and removed, such as birth dates in the future or in the remote past. Also, some additional input fields were not imported into the data mart, since almost all their values were left as the default choice of the web form. In other words, these fields were considered not useful for discriminating user choices and preferences.

In order to avoid users having to type their login and password at each visit, the vivacity.it web site adopts cookies. If a cookie is provided by the user's browser, then authentication is not required. Otherwise, after authentication, a new cookie is sent to the user's browser. With this mechanism, a user can be tracked as long as she does not delete the cookies on her system. In addition, if the user is registered, the association between login and cookie is available in the input data, and the user can then be tracked even after she deletes the cookies. This mechanism also makes it possible to detect non-human users, such as system diagnosis and monitoring programs. By checking the number of cookies assigned to each user, we discovered that the user login test009 was assigned more than 24,000 distinct cookies. This is possible only if the user is some program that automatically deletes assigned cookies, e.g. a system diagnosis program.

2.2 Web URLs

On the one side, there are a number of normalizations that must be performed on URLs in order to remove irrelevant syntactic differences (for instance, the host can appear in IP format or in host-name format, with both denoting the same host, such as kdd.di.unipi.it). On the other side, there are some web server programs that adopt non-standard formats for passing parameters. The vivacity.it web server program is one of them. For instance, in the following URL: http://roma.vivacity.it/speciali/editcolonnaspeciale/1,3478,|dx,00.html the file name 1,3478,|dx,00 contains a code for the local web site (1 stands for roma.vivacity.it), a web page id (3478) and its specific parameters (dx). The form above has been designed for efficient machine processing. As an example, the web page id is a key for a database table where the page template is found, while the parameters allow for retrieving the web page content in some other table. Unfortunately, this is a nightmare when mining clickstreams of URLs. Syntactic features of URLs are of little help: we need some semantic information, or ontology, assigned to URLs. At best, we can expect that an application-level log is available, i.e. a log of accesses to semantically relevant objects. An example of an application-level log is one recording that the user entered the site from the home page, then visited a sport page with news on a soccer team, and so on. This would require a system module monitoring user steps at a semantic level of granularity. In the ClickWorld project such a module is called ClickObserve. Unfortunately, however, the module is a deliverable of the project, and it was not available for collecting data at the beginning of the project. Therefore, we decided to extract both syntactic and semantic information from URLs through a semi-automatic approach. The approach consists in reverse-engineering the URLs, starting from the web site designer's description of the meaning of each URL path, web page id and web page parameters. Using a Perl script, starting from the designer's description, we extracted the following information from the original URLs:
- the local web server, i.e. vivacity.it or roma.vivacity.it, etc., which provides some spatial information about the user's interests;
- a first-level classification of the URL, with 24 categories, among which: home, news, finance
, photos, jokes, shopping, forum, bars;
- a second-level classification of the URL, depending on the first-level one; e.g., a URL classified as shopping can be further classified as book shopping or PC shopping;
- a third-level classification of the URL, depending on the second-level one; e.g., a URL classified as book shopping can be further classified as programming-book shopping or narrative-book shopping;
- parameter information, further detailing the three-level classification; e.g., a URL classified as programming-book shopping may have the ISBN code of the book as a parameter;
- the depth of the classification, i.e. 1 if the URL has only a first-level classification, 2 if it has both first- and second-level classifications, and so on.

Of course, the adopted approach is mainly a heuristic one, whose quality degrades as the classification level grows. Moreover, the classification does not exploit any content-based information: e.g., a sport news item with code 12345 is simply classified as news, i.e. its first level is news, with no reference to the news content.

Attachment 2: The original text

Preprocessing and Mining Web Log Data for Web Personalization

M. Baglioni¹, U. Ferrara², A. Romei¹, S. Ruggieri¹, and F. Turini¹

¹ Dipartimento di Informatica, Università di Pisa, Via F. Buonarroti 2, 56125 Pisa, Italy
{baglioni,romei,ruggieri,turini}@di.unipi.it
² KSolutions S.p.A., Via Lenin 132/26, 56017 S. Martino Ulmiano (PI), Italy
ferrara@ksolutions.it

Abstract. We describe the web usage mining activities of an on-going project, called ClickWorld, that aims at extracting models of the navigational behaviour of a web site users. The models are inferred from the access logs of a web server by means of data and web mining techniques. The extracted knowledge is deployed to the purpose of offering a personalized and proactive view of the web services to users. We first describe the preprocessing steps on access logs necessary to clean, select and prepare data for knowledge extraction. Then we show two sets of experiments: the first one tries to predict the sex of a user based on the visited web pages, and the second one tries to predict whether a user might be interested in visiting a section of the site.

Keywords: knowledge discovery, web mining, classification.

1 Introduction

According
to [10], web mining is the use of data mining techniques to automatically discover and extract information from web documents and services. A common taxonomy of web mining defines three main research lines: content mining, structure mining and usage mining. The distinction between those categories is not a clear cut, and very often approaches use combinations of techniques from different categories.

Content mining covers data mining techniques to extract models from web object contents, including plain text, semi-structured documents (e.g., HTML or XML), structured documents (digital libraries), dynamic documents, multimedia documents. The extracted models are used to classify web objects, to extract keywords for use in information retrieval, to infer structure of semi-structured or unstructured objects.

Structure mining aims at finding the underlying topology of the interconnections between web objects. The model built can be used to categorize and to rank web sites, and also to find out similarity between them.

Usage mining is the application of data mining techniques to discover usage patterns from web data. Data is usually collected from users' interaction with the web, e.g. web/proxy server logs, user queries, registration data. Usage mining tools [3,4,9,15] discover and predict user behavior, in order to help the designer to improve the web site, to attract visitors, or to give regular users a personalized and adaptive service. In this paper, we describe the web
usage mining activities of an on-going project, called ClickWorld, that aims at extracting models of the navigational behavior of users for the purpose of web site personalization [6]. We have collected and preprocessed access logs from a medium-large national web portal, vivacity.it, over a period of five months. The portal includes a national area (www.vivacity.it) with news, forums, jokes, etc., and more than 30 local areas (e.g., www.roma.vivacity.it) with city-specific information, such as local news, restaurant addresses, theatre programming, bus timetables, etc.

The preprocessing steps include data selection, cleaning and transformation and the identification of users and of user sessions [2]. The result of preprocessing is a data mart of web accesses and registration information. Starting from preprocessed data, web mining aims at pattern discovery by adapting methods from statistics, data mining, machine learning and pattern recognition. Among the basic data mining techniques [7], we mention association rules, discovering groups of objects that are frequently requested together by users; clustering, grouping users with similar browsing patterns, or grouping objects with similar content or access patterns; classification, where a profile is built for users belonging to a given class or category; and sequential patterns, namely sequences of requests which are common for many users.

In the ClickWorld project, several of the mentioned methods are currently being used to extract useful information for proactive personalization of web sites. In this paper, we describe two sets of classification experiments. The first one aims at extracting a classification model able to discriminate the sex of a user based on the set of web pages visited. The second experiment aims at extracting a classification model able to discriminate those users that visit pages regarding, e.g., sport or finance from those that typically do not.
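As a rough illustrative sketch of the first experiment (this is not the paper's actual method; the section names, the tiny training set and the nearest-centroid classifier are all invented for this example), predicting a user's sex from the set of visited sections can be set up over binary visit features:

```python
from collections import defaultdict

# Hypothetical training data: (set of visited sections, sex label).
train = [
    ({"news", "finance", "sport"}, "M"),
    ({"sport", "forum"}, "M"),
    ({"news", "shopping", "photos"}, "F"),
    ({"shopping", "jokes"}, "F"),
]

# Fixed vocabulary of site sections: one binary feature per section.
sections = sorted({s for visited, _ in train for s in visited})

def centroids(train, sections):
    """Per-class mean of the binary visit vectors."""
    sums = defaultdict(lambda: [0.0] * len(sections))
    counts = defaultdict(int)
    for visited, label in train:
        counts[label] += 1
        for i, s in enumerate(sections):
            sums[label][i] += 1.0 if s in visited else 0.0
    return {lab: [v / counts[lab] for v in vec] for lab, vec in sums.items()}

def predict(visited, cents, sections):
    """Assign the class whose centroid is closest in squared distance."""
    x = [1.0 if s in visited else 0.0 for s in sections]
    return min(cents, key=lambda lab: sum((a - b) ** 2 for a, b in zip(x, cents[lab])))

cents = centroids(train, sections)
```

A real experiment would of course use many users and a proper learning algorithm; the point is only the encoding of page visits as binary features.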
2 Preprocessing for Web Personalization

We have developed a data mart of web logs specifically to support web personalization analysis. The data mart is populated starting from a web log data warehouse (such as those described in [8,16]) or, more simply, from raw web/proxy server log files. In this section, we describe a number of preprocessing and coding steps performed for data selection, comprehension, cleaning and transformation. While some of them are general data preparation steps for web usage mining [2,16], it is worth noting that in many of them a form of domain knowledge must necessarily be included in order to clean, correct and complete the input data according to the web personalization requirements.

2.1 User Registration Data

In addition to web access logs, our given input includes personal data on a subset of users, namely those who are registered to the vivacity.it website (registration is not mandatory). For a registered user, the system records the following information: sex, city, province, civil status, birth date. This information is provided by the user in a web form at the time of registration and, as one could expect, the quality of the data is up to the user's fairness. As preprocessing steps, improbable data are detected and removed, such as birth dates in the future or in the remote past.
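The check on improbable data can be sketched as a simple range filter; the cut-off year, field names and sample records below are assumptions made for illustration, not the project's actual code:

```python
from datetime import date

MIN_BIRTH = date(1890, 1, 1)  # assumed lower bound for a plausible birth date

def plausible(born, today):
    """Keep a birth date only if it is neither in the future nor in the remote past."""
    return MIN_BIRTH <= born <= today

records = [
    {"login": "anna", "born": date(1975, 3, 2)},
    {"login": "err1", "born": date(2099, 1, 1)},  # in the future: removed
    {"login": "err2", "born": date(1800, 5, 5)},  # remote past: removed
]
clean = [r for r in records if plausible(r["born"], date(2003, 1, 1))]
```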
Also, some additional input fields were not imported in the data mart since almost all values were left as the default choice in the web form. In other words, the fields were considered not to be useful in discriminating user choices and preferences.

In order to avoid users having to type their login and password at each visit, the vivacity.it web site adopts cookies. If a cookie is provided by the user's browser, then authentication is not required. Otherwise, after authentication, a new cookie is sent to the user's browser. With this mechanism, it is possible to track any user as long as she does not delete the cookies on her system. In addition, if the user is registered, the association login-cookie is available in the input data, and it is then possible to track the user also after she deletes the cookies. This mechanism allows for detecting non-human users, such as system diagnosis and monitoring programs. By checking the number of cookies assigned to each user, we discovered that the user login test009 was assigned more than 24.000 distinct cookies. This is possible only if the user is some program that automatically deletes assigned cookies, e.g. a system diagnosis program.
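The non-human-user check just described amounts to counting distinct cookies per login. A minimal sketch (the sample records and the tiny threshold are invented; the paper reports that login test009 exceeded 24.000 distinct cookies):

```python
from collections import defaultdict

# Pairs (login, cookie id) as they would come from the access log.
log = [
    ("test009", "ck-001"), ("test009", "ck-002"), ("test009", "ck-003"),
    ("anna", "ck-100"), ("anna", "ck-100"),
]

cookies_per_login = defaultdict(set)
for login, cookie in log:
    cookies_per_login[login].add(cookie)

THRESHOLD = 2  # unrealistically small, just for the example
suspects = sorted(l for l, cks in cookies_per_login.items() if len(cks) > THRESHOLD)
```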
2.2 Web URLs

Resources in the World Wide Web are uniformly identified by means of URLs (Uniform Resource Locators). The syntax of an HTTP URL is: http://host.domain:port/abs_path?query, where host.domain:port is the name of the server site. The TCP/IP port is optional (the default port is 80); abs_path is the absolute path of the requested resource in the server filesystem. We further consider abs_path of the form path/filename.extension, i.e. consisting of the filesystem path, filename and file extension. query is an optional collection of parameters, to be passed as an input to a resource that is actually an executable program, e.g. a CGI script.
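The decomposition just defined (host, port, abs_path, filename, extension) can be performed with the standard library; the example URL follows the vivacity.it pattern quoted later in the text:

```python
from urllib.parse import urlsplit
import posixpath

url = "http://roma.vivacity.it:80/speciali/editcolonnaspeciale/1,3478,|dx,00.html"
parts = urlsplit(url)
host = parts.hostname          # host.domain part
port = parts.port or 80        # default HTTP port when omitted
path, filename = posixpath.split(parts.path)   # filesystem path vs filename
name, extension = posixpath.splitext(filename)  # filename vs file extension
```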
On the one side, there are a number of normalizations that must be performed on URLs, in order to remove irrelevant syntactic differences (e.g., the host can be in IP format or host format, with both denoting the same host, such as kdd.di.unipi.it). On the other side, there are some web server programs that adopt non-standard formats for passing parameters. The vivacity.it web server program is one of them. For instance, in the following URL: http://roma.vivacity.it/speciali/editcolonnaspeciale/1,3478,|dx,00.html the file name 1,3478,|dx,00 contains a code for the local web site (1 stands for roma.vivacity.it), a web page id (3478) and its specific parameters (dx). The form above has been designed for efficient machine processing. For instance, the web page id is a key for a database table where the page template is found, while the parameters allow for retrieving the web page content in some other table. Unfortunately, this is a nightmare when mining clickstreams of URLs. Syntactic features of URLs are of little help: we need some semantic information, or ontology [5,13], assigned to URLs. At best, we can expect that an application-level log is available, i.e. a log of accesses to semantically relevant objects. An example of application-level log is one recording that the user entered the site from the home page, then visited a sport page with news on a soccer team, and so on. This would require a system module monitoring user steps at a semantic level of granularity. In the ClickWorld project
such a module is called ClickObserve. Unfortunately, however, the module is a deliverable of the project, and it was not available for collecting data at the beginning of the project. Therefore, we decided to extract both syntactic and semantic information from URLs via a semi-automatic approach. The adopted approach consists in reverse-engineering URLs, starting from the web site designer's description of the meaning of each URL path, web page id and web page parameters. Using a Perl script, starting from the designer's description, we extracted from the original URLs the local web server, a first-, second- and third-level classification of the URL, parameter information, and the depth of the classification.

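The reverse-engineering step can be sketched as pattern matching on the non-standard file names. The regular expression and the site-code table below are assumptions modelled on the single example filename given above (1,3478,|dx,00), not the project's actual Perl script:

```python
import re

SITE_CODES = {"1": "roma.vivacity.it"}  # hypothetical code-to-site mapping

# site code, page id, optional "|"-prefixed parameters, trailing numeric field
FILENAME = re.compile(r"^(?P<site>\d+),(?P<page_id>\d+),\|?(?P<params>[^,]*),\d+$")

def parse_filename(name):
    """Extract local site, page id and parameters from a vivacity-style filename."""
    m = FILENAME.match(name)
    if m is None:
        return None
    return {
        "local_site": SITE_CODES.get(m.group("site"), m.group("site")),
        "page_id": m.group("page_id"),
        "params": m.group("params"),
    }
```

On the example from the text, `parse_filename("1,3478,|dx,00")` yields the local site roma.vivacity.it, page id 3478 and parameters dx; names that do not fit the pattern are rejected.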