Data mining is what the technology industry is moving into. Companies hold billions of data points and are looking for a way to convert them into revenue. Data mining covers many techniques for collecting data and turning it into something from which everyone can benefit. This chapter introduces the report, what is presented here and what results were derived.

A Network Intrusion Detection System (NIDS) is an intrusion detection system that tries to catch malicious traffic such as denial-of-service attacks, port scans or even break-ins into computers by monitoring network traffic. A NIDS inspects all incoming packets and tries to classify them based on rules or signatures. For example, there may be users who always log into their systems during the day; if some of them suddenly access their systems late at night, this is considered suspicious and needs to be checked. That is the job of a NIDS. Another common example is the port scan: if there is a large number of FTP connection requests to various ports, it can reasonably be assumed that someone is trying to port-scan the network. A NIDS is also used to detect incoming shellcode.
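
Snort, which this report covers later, expresses such signatures as rules. Listing 1 is a minimal sketch of what one rule might look like; it is not taken from the report, the message text and SID are illustrative, and $HOME_NET is assumed to be defined in the Snort configuration.

Listing 1. A minimal Snort-style signature rule (illustrative)

alert tcp any any -> $HOME_NET 21 (msg:"Inbound FTP connection attempt"; flags:S; sid:1000001; rev:1;)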

A NIDS not only inspects incoming traffic but can also monitor outgoing traffic, yielding valuable intrusion information. Some attacks on a network may involve hacking into it from an inside computer, and would therefore never be flagged as a network attack. To prevent such attacks, the NIDS needs to monitor traffic originating inside the network as well, not just incoming traffic.

Mostly, network intrusion detection systems are used together with other systems. They can be used in conjunction with firewalls, spam filters, anti-virus software and so on. They can update the IP blacklists of some firewalls. They can also log information into a database of the user's choice, and can raise alerts at the time of intrusion detection via email, audible alarms and so on. All of this can be programmed by the network administrator.

This report mainly covers the topic of data mining: what data mining is, how to do it, how to use it and other such aspects. It also focuses on Network Intrusion Detection Systems and Snort: how to install it, how to configure Snort to get it running, and how to get Snort's output into the Kiwi Syslog Server.

2. Data mining with WEKA, Part 1: Introduction and Regression

2.1 Introduction

First of all, we should understand the concept of data mining in layman's terms. Anyone today may wonder what companies like Google and Yahoo! are doing with the billions and billions of data points they have gathered about their users, and what their plans are for all this data. It is no surprise to learn that Wal-Mart is one of the most advanced companies applying data-mining concepts to get significant results for its business. Today virtually every company uses data mining to advance its business, and those that do not will soon find themselves at a great disadvantage.

So, how can one use the power of data mining to enhance a business?

This chapter will answer the initial doubts or questions you might have about data mining. It gives a significant amount of exposure to the Waikato Environment for Knowledge Analysis (WEKA), which is free and open source software. This software can be used to mine data and turn what you know as numbers into information that increases your revenue. One might think that expert systems are required to do data mining, but that is not so. After this chapter you will see for yourself that you can do a pretty good job of data mining.

This chapter discusses the first and easiest data-mining technique: regression. It turns data into a numerical prediction for future data. It is so simple that you may already have encountered it in the prevalent spreadsheet software on the market, although WEKA can do much more complex calculations for you. In future chapters, other methods such as clustering, nearest neighbour and classification trees will be touched upon. (If these terms frighten you, don't worry; all will be discussed as we progress.)

2.2 What is data mining?

Now let's shift our focus to the core concepts of data mining. What is data mining? It is the conversion of large chunks of data into meaningful patterns. Data mining is of two types: directed and undirected. In directed data mining we try to predict a particular data point; in our example, the price of a house we want to sell, given the prices of other houses in the nearby area. In undirected data mining we try to create groups of data, or find patterns within it. Examples include mining census data or country populations, trying to uncover trends in lifestyle, food habits and so on.

Data mining as we know it started in the mid-1990s. This was simply because computing power increased and the cost of computation and storage fell to a point where companies no longer had to hire outside powerhouses: they could buy the equipment easily and do data mining in house.

The term data mining refers to a wide range of data, techniques and procedures for analyzing data and turning it into something useful, so this report will only touch the surface of data-mining techniques. Experts in the field spend 20-30 years doing the same, and they might give you the impression that this is something only large companies can afford.

This report hopes to throw light on many of the misconceptions about data mining, and I will try to make things as clear as possible. It is not as simple as using a formula in a spreadsheet, but it is not so hard that you cannot manage it yourself. This brings me to the 90/10 model. You can achieve the 80/20 level easily, but to push yourself to the 90/10 level you have to get into the depths of the subject, and bridging the distance between the two can take around 20 years. So, unless you have decided to take this up as a career, "good enough" is what you need. It will also be better than what you are using right now, so there is no harm in the good-enough approach!

The end result of data mining is to develop a model: a model that can improve and suggest new ways to interpret and understand your existing data and the data you have yet to come across. Because there are many ways to go about data mining, you first have to choose which model you will use for your data, i.e. which model will fulfill your data-mining needs best. This comes with guidance, experience and practice. After reading this report you should be able to look at your data and say "Yes, this is the right model for my data set," and you will be able to make "good enough" models out of your data.

2.3 WEKA

Data mining isn't the preserve of big companies and expensive software systems. In fact, there is open source, freely available software that does the same things as that expensive software: WEKA. WEKA was developed by the University of Waikato (New Zealand) and was first used in its modern form in 1997. It is released under the GNU General Public License (GPL). The software is written in Java and contains a graphical user interface for interacting with data files, producing tables, curves or graphs as visual output. It also has an API, so it can easily be embedded into other applications, such as automated server-side data-mining tasks.
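
To give an idea of what that embedding looks like, Listing 2 is a minimal sketch (not part of the original report) that loads an ARFF file and builds a linear regression model through the WEKA Java API. The file name house.arff is an assumption, and the WEKA jar must be on the classpath.

Listing 2. Building a regression model through the WEKA API (sketch)

import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
import weka.classifiers.functions.LinearRegression;

public class HousePriceModel {
    public static void main(String[] args) throws Exception {
        // Load the data set from an ARFF file (assumed to be named house.arff)
        Instances data = DataSource.read("house.arff");
        // The last attribute (sellingPrice) is the dependent variable
        data.setClassIndex(data.numAttributes() - 1);

        // Build the regression model on the full training set
        LinearRegression model = new LinearRegression();
        model.buildClassifier(data);

        // Print the learned formula, just as the Explorer output window does
        System.out.println(model);
    }
}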

Now please go ahead and install WEKA on your system. It is Java based, so if you do not have a JRE installed on your machine, please download the WEKA version that bundles the JRE as well. When you start the setup, you will see the window shown below.

Figure 1. WEKA startup screen

When WEKA is started, the graphical user interface window pops up and offers you four ways to work with WEKA and your data set. For the examples discussed in this chapter, choose only the Explorer option. It will be more than sufficient for what is discussed here.

Figure 2. WEKA Explorer

Now that you have learned how to install and start WEKA, we will begin our first data-mining model: regression.

2.4 Regression

Regression is probably the least powerful technique for mining your data, but it is the easiest one to use; no wonder those two things go hand in hand. It can be viewed as a simple model with one input variable and one output variable, called a scatter diagram in Microsoft Excel or an X-Y diagram in OpenOffice.org. It can be made more complicated by introducing a number of input variables. Regression models all fit roughly the same general pattern: there are a number of independent variables, and from them the model produces a dependent output variable. The model is then used to predict the result given the values of all the independent variables.

The regression model is new to no one. Everyone has probably seen or even used such a model before, and may even have built one in their head. We will discuss the example of house prices in similar locations. The price of the house is the dependent variable, which depends on a number of independent variables. These independent variables include the size of the lot, the square footage of the house, whether the bathrooms are upgraded, whether the kitchen has granite, and so on. If you have ever bought or sold a house, it is likely you created such a model in your head before doing so: you compared your house to other houses in the same neighborhood and the prices they sold for. You build a model from the houses already sold, then put the parameters of your own house into the same model and get a likely price.

Let us now create a real model based on some assumed data. Let us assume the data below is the real data for my neighborhood, and I am trying to find the price for my house. This output could also be used for a property tax assessment.

The regression model barely scratches the surface of data mining; this can be good or bad news for you depending on your usage and point of view. There are whole college semesters devoted to it, and they might also teach you things you don't care to know. But the scratches we are making here are good enough for the WEKA usage in this chapter. If you have a continued interest in WEKA, data mining or statistical models, search for terms like "normal distribution", "R-squared" and "p-values" in your favorite search engine.

2.5 Constructing the data set for WEKA

Loading data into WEKA requires putting the data into a format WEKA can understand. This preferred format is ARFF (Attribute-Relation File Format), in which the user specifies the type of the data along with the data itself. In this file one has to give the names of the columns used and the type of data each column will contain: an integer, a float, a date or a string. In the case of regression these types are limited to numeric or date values. After this, the actual data is supplied in the file as rows in a comma-delimited format. The ARFF file we will be discussing is shown below. Notice that the data set does not include the house whose price we want to know: since this is the training input, we do not enter the row for the house whose selling price is unknown.

Table 2. WEKA file format (ARFF)
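
The table itself is not included in this copy of the report; the sketch below shows what such a file looks like. The attribute names match those referred to later in this chapter, while the data rows are illustrative values only.

@RELATION house

@ATTRIBUTE houseSize NUMERIC
@ATTRIBUTE lotSize NUMERIC
@ATTRIBUTE bedrooms NUMERIC
@ATTRIBUTE granite NUMERIC
@ATTRIBUTE bathroom NUMERIC
@ATTRIBUTE sellingPrice NUMERIC

@DATA
3529,9191,6,0,0,205000
4032,10150,5,0,1,197900
2397,14156,4,1,0,189900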

2.6 Loading the data into WEKA

After we have created the data file, we have to create the model we will use; in this case it is the regression model. Start WEKA and choose the "Explorer" interface. You will be taken to the Explorer screen, on the Preprocess tab. Click on Open file and choose the ARFF file you created earlier. After selecting it you should see something similar to what is shown below.

Figure 3. WEKA with house data loaded

In the Explorer view, WEKA lets the user review the data being worked on. The left section of this window shows all the columns (attributes) present in our data and the number of rows supplied. When you select a column, the right section shows information about the data in that column. For example, click on the houseSize column in the left part of the window; the right part will now show additional statistics about the sizes of the houses. The maximum value, 4,032 square feet, is shown along with the minimum and mean values. The standard deviation of 655 square feet is also calculated and shown. (If you are wondering what standard deviation is doing here, it is a statistical measure of how spread out the values are.) There is also a visual tool available to examine the data you have entered: click the Visualize All button. (Because of the small size of our data set, the visualization is not as powerful as it would be on large data sets with hundreds of thousands of rows.)

Enough looking at the data; now let's move on to creating the model and getting a price for my house!

2.7 Creating the regression model with WEKA

Click on the Classify tab to start creating the model. First select the model you want to build, so that WEKA knows what type of model it has to work with and how to create it:

Expand the functions section after clicking the Choose button.

Select the LinearRegression leaf.

WEKA now knows that we are building a regression model. You can clearly see that there are a number of other options as well... lots of other models! That should tell you we really are only touching the surface. Note that there is also a SimpleLinearRegression choice in the same section. Please make sure you do not choose that leaf, because that model looks at only one variable, and our data set has six. When you have done all this, you get a screen like the one shown in Figure 4.

Figure 4. Linear regression model in WEKA

Is all this possible in a spreadsheet?

The answer is both no and yes: the short answer is no and the long answer is yes!

Most of the popular spreadsheet software currently on the market cannot easily do what we just did. However, if you are not doing data mining on multiple variables and are concerned with only one, i.e. simple linear regression, it can be done. Don't get carried away at this point, though: a spreadsheet can do regression with multiple variables, but it is far more difficult and confusing, and certainly not as easy as doing it with WEKA.

At this point our desired model has been chosen. Now we have to tell WEKA where the data for building this model is. It might seem obvious that we should use the ARFF file we already provided, but there are actually several options, most of them more advanced than what we will be using. The three other options are:

Supplied test set: here we can supply a different set of data to build the model with.

Cross-validation: this option lets WEKA build a model from subsets of the supplied data and then average them out to create a final model.

Percentage split: WEKA takes a percentile subset of the supplied data and builds a final model from it.

These three choices are useful with other data models, which we will see in future chapters. With the regression model we can simply choose Use training set. This tells WEKA that we want to use the data we supplied in the ARFF file to build our model.
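
Although we stick to the training set here, the same evaluation options are available through the API. Listing 3 is a sketch (not part of the original report) of how ten-fold cross-validation of the same linear regression model could be run programmatically; the file name house.arff is again an assumption.

Listing 3. Ten-fold cross-validation through the WEKA API (sketch)

import java.util.Random;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
import weka.classifiers.Evaluation;
import weka.classifiers.functions.LinearRegression;

public class CrossValidateHouse {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("house.arff");
        data.setClassIndex(data.numAttributes() - 1);

        LinearRegression model = new LinearRegression();
        // Ten folds: build on nine folds, evaluate on the tenth, repeat ten times
        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(model, data, 10, new Random(1));
        System.out.println(eval.toSummaryString());
    }
}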

The last step in creating our model is to choose the dependent variable, i.e. the column whose value we want to predict. We know this should be the selling price, but we have to tell WEKA too. There is a combo box right below the test options that lets us choose the dependent variable for our model. The column sellingPrice should be selected by default; if it is not, please select it.

After all this, click Start. The figure below shows what the output should look like.

Figure 5. House price regression model in WEKA

2.8 Interpreting the regression model

WEKA does not mess around: it shows the regression model right in the output.

This is shown clearly in Table 3.

Table 4 displays the result, which is the selling price of my house, after putting the values of the independent variables for my house into the model.

However, if we look back at the introduction to data mining, we see that data mining is not about producing a single number as output; it is about identifying patterns in the data and rules that can be formulated from it. It is not used to get one figure, but to develop a model that helps predict patterns, detect other parameters and reach firm conclusions. So let us interpret the results shown in the output window beyond just the selling price of my house, starting with the formula used to obtain that price.

Granite does not matter

WEKA uses only the columns that statistically contribute to the accuracy of the model it creates; columns that reduce that accuracy are discarded. This regression model tells us that whether granite is present or not does not contribute to the selling price of the house.

Bathrooms do matter

For this column we used a simple value of zero or one. Now we use the coefficient from the regression formula to see how the value of an upgraded bathroom affects the value of the house. The model tells us that it adds $42,292 to the house value.

The bigger the house, the lower the value?

Our model tells us that a house with a larger area will have a lower selling price. This is clearly visible from the negative sign before the houseSize variable: the formula says that $26 is subtracted from the house value for each additional square foot of area. But that makes no sense at all. So what is the right interpretation? The size of the house is not the only independent variable the house value depends on; it is correlated with the number of bedrooms, because a bigger house generally has more bedrooms. This clearly indicates that our model is not perfect. But that is something that can be fixed: in the Preprocess tab we can easily remove any column from the data set that we do not want contributing to our model.
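
Removing a column can also be done programmatically with WEKA's Remove filter. Listing 4 is a sketch, not part of the original report; it drops the houseSize attribute before the model is rebuilt, and the attribute index "1" assumes houseSize is the first column, as in the ARFF sketch above.

Listing 4. Dropping a column with the Remove filter (sketch)

import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
import weka.filters.Filter;
import weka.filters.unsupervised.attribute.Remove;

public class DropHouseSize {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("house.arff");

        Remove remove = new Remove();
        remove.setAttributeIndices("1");   // 1-based index of the column to drop (houseSize)
        remove.setInputFormat(data);
        Instances reduced = Filter.useFilter(data, remove);

        System.out.println(reduced.toSummaryString());
    }
}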

Now let us consider another example from the official WEKA web site. It is more complex than our example with a small number of houses. This example tries to predict the miles per gallon of a car from various other parameters. These parameters range from the displacement of the engine to the horsepower it produces, how many cylinders the engine has, how much the car weighs, its acceleration, the model and make of the car, its production year, country of origin and so on. Not only that: to complicate the model, there are about four hundred rows in the data set. In theory this all looks complex, but WEKA has no problem handling such data.

To produce the model for this data set you follow the same steps as shown above for the house example, so I will not repeat them and will directly discuss the output you get after creating the model.

The output table is shown below.

When you run the model for this example you will notice that WEKA does not take even a second to compute it, so computationally it is not a problem for WEKA to create a powerful and useful regression model from a huge amount of data. Also, this model might look far more complex to you than the house example, but it is not. Let us see how to interpret it. Take the first line of the model, -2.2744 * cylinders=6,3,5,4: it means that if the vehicle has three, four, five or six cylinders you put a 1 in this column, otherwise for any other value it is a zero; so a four-cylinder car contributes -2.2744 to the prediction from this term, while an eight-cylinder car contributes nothing. This is made clearer by an example: take row number 10 of the data set and put the numbers from that row into the model. You will see that the output from the regression model nearly matches the output given to us in the data set!

Table 6. Example MPG data

You can try the same thing with any other row of the data set. So what does this mean? It means that our model is performing well: it predicts roughly 14.2 miles per gallon when the actual value is 15. We can be assured we will get an approximately correct value for data whose dependent output variable is not known.

3. Data mining with WEKA, Part 2: Classification and clustering

3.1 Introduction

In the previous chapter, the concept of data mining was introduced. I also made you familiar with the WEKA software, which is open source and free to use; it helps you mine your own data without the help of an outsider. I also discussed the first and probably the easiest data-mining model: regression. It allows you to predict a numerical value based on the values of the independent variables. It is the least powerful data-mining algorithm, but it is a good example of how raw data can be converted into useful information for future use.

In this chapter we will discuss two additional data-mining algorithms that are a bit more complex than the method discussed in the previous chapter. That follows directly from the fact that they are more powerful and help you interpret your data in different ways. As I have said earlier, the key to using the power of data mining is knowing which model to apply to the data you have; if the wrong model is used, the result will be nothing more than garbage! We all see on various sites that the customers who bought or viewed this item also bought or viewed these other items; there is no numerical value associated with that kind of data. So now let's start digging into the other models you can use for your data.

In this chapter I have also included parts about the nearest-neighbour method, but we will not go into detail about it. I have included it to complete the comparisons I want to highlight.

3.2 Classification vs. clustering vs. nearest neighbour

Before going into the intricacies of any model and actually running them in WEKA, I think we should try to understand what each model strives to accomplish: what type of data and what goals each model addresses. Let us also keep our first model, regression, in mind, so you can relate the new models to the one we already know. We will use a practical, real-world example to show how the models differ from one another and how each can be of use. All of my examples will involve a local BMW dealership that wants to increase its sales. The store has data about everyone who has bought a BMW, looked at one, or walked through the BMW showroom. Using data mining on the available data, the dealership wants to increase its business.

3.2.1 Regression

The question here is: how much should the dealer charge for a new BMW M6? The regression model we have already studied can easily answer this question and give a numerical output based on the formula derived in the model. It will use data on past sales of the M6 to determine how much the dealer charged for previous cars and what features were available on them. The model then asks the dealer to enter the details of the new car he is willing to sell and gives him a selling price.

For example: Selling price = $25,000 + ($2,900 × liters in the engine) + ($9,000 if it is a sedan) + ($11,000 if it is a convertible) + ($100 × the length of the car in inches) + ($22,000 if the car is a convertible)

3.2.2 Classification

But now the question of interest is: "What are the chances that any given person X will buy the new BMW M6?" Such questions can be answered by creating a classification tree, which tells us the likelihood of a person buying a BMW M6. There can be various nodes on this classification tree, such as the age of the person in question, his annual income, his gender, what cars he currently owns, his number of children, whether he owns a home or rents, and so on. These attributes can be used in a classification to estimate the likelihood of him buying the new car.

3.2.3 Clustering

The question here is: which age groups like to own a BMW M6? Again, data mining can be applied to get the answer. We already have data on past customers and their ages. From this data we can infer, using our model, whether any particular age group has a higher probability of buying a new BMW M6 and whether they are likely to order, say, a blue one; likewise it can be determined which colors tend to be ordered by people in other age groups. In short, when the data is mined it will cluster by age group and car color, helping you easily determine the patterns between them.

3.2.4 Nearest neighbour

The question here is: when people purchase a new BMW M6, what other features or options do they like to buy with it? Data mining can be applied here to learn the trends in other purchases, which might include matching hand luggage or a matching-colored wrist watch. Using this information the car dealer can create special promotional bundles of the items people tend to buy together, which will help the dealership increase its sales; the dealer can also offer discounts on these "other" items.

3.3 Classification

Classification is a data-mining algorithm that creates a step-by-step guide used to determine the output of the model. It is also known as decision trees or classification trees. The created tree is made of nodes, each representing a decision point: a decision has to be made at a node before moving further. This continues until a leaf node is reached, which is a terminal node of the tree and has no children. It might sound confusing, but it is actually simple and straightforward, as shown in the table below.

Now let us see what this example actually tells us. At the root node, i.e. the first node, there is a question asking whether you will read this section, and the tree branches according to your answer. If you chose yes, you are next asked whether you will understand it; if you answered no, you reach the leaf node that says you will not learn it. The main advantage of the classification tree is that you do not need a lot of information about the data to build a tree structure that is usually accurate and informative.
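
The table referred to above is not included in this copy of the report; a sketch of the tree it describes would look roughly like this:

Will you read this section?
  No  -> [Will not learn it]
  Yes -> Will you understand it?
           No  -> [Will not learn it]
           Yes -> [Will learn it]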

The regression model and classification share the concept of a "training set" used to produce the model: a data set with known output values is taken and the model is built from it. Using the model, the expected output is obtained for input values whose output we don't know. This is all similar to what we saw with the regression model. However, this model needs an extra step to make it more useful and precise. It is recommended that you put about 60 to 80 percent of the data rows into the training set, which is then used to build the model; the remaining rows are used as the test set, which we immediately use to check the accuracy of the model we have just created.

Now, you might be wondering why we are doing this extra step. It is done to overcome the problem of overfitting: if we build a model on the whole data set, we get a model that is exactly right for that data, but only for that data. We will be using the model to make predictions in the future too, and we want it to work well on future data. To overcome overfitting and to make sure the performance of our model is not restricted to the training data, we divide the data set into two parts. We will see this in practice further on.
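
As a sketch of how such a split could be done through the WEKA API (not part of the original report), Listing 5 randomizes the data, trains on roughly two thirds of the rows and evaluates on the rest; the file name and the 66 percent figure are illustrative assumptions.

Listing 5. Training/test split and evaluation (sketch)

import java.util.Random;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;

public class TrainTestSplit {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("bmw-training.arff");   // illustrative file name
        data.setClassIndex(data.numAttributes() - 1);
        data.randomize(new Random(1));

        int trainSize = (int) Math.round(data.numInstances() * 0.66);
        int testSize = data.numInstances() - trainSize;
        Instances train = new Instances(data, 0, trainSize);
        Instances test = new Instances(data, trainSize, testSize);

        J48 tree = new J48();                 // WEKA's C4.5-style classification tree
        tree.buildClassifier(train);

        Evaluation eval = new Evaluation(train);
        eval.evaluateModel(tree, test);       // accuracy on rows the tree has never seen
        System.out.println(eval.toSummaryString());
    }
}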

We also have to discuss one more major concept of classification trees: pruning. Pruning, as the name suggests, is the process of removing branches of the classification tree. Why would we want to remove some of the branches? Again, the reason is overfitting. Trees become complex when the data has many rows and columns; in theory the number of leaves in a tree can grow toward the product of the number of rows and columns in our data. But such a tree is of no use to us, since it fits the present data perfectly rather than being useful for future predictions. So we want to strike a balance: a tree with the fewest nodes, making it the simplest tree, is preferred, but we have to carefully manage the trade-off between simplicity and accuracy. We will see this further on.
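
In WEKA the J48 classification tree prunes by default, and how aggressively it prunes can be tuned. Listing 6 is a sketch (not part of the original report) of the two options usually adjusted; the values 0.25 and 2 are simply J48's defaults.

Listing 6. Tuning J48 pruning (sketch)

import weka.classifiers.trees.J48;

public class PrunedTree {
    public static void main(String[] args) throws Exception {
        J48 tree = new J48();
        tree.setUnpruned(false);          // keep pruning switched on
        tree.setConfidenceFactor(0.25f);  // smaller values prune more aggressively
        tree.setMinNumObj(2);             // minimum number of instances per leaf
        // tree.buildClassifier(trainingData) would then be called as in the previous listing
    }
}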

Before we start using WEKA for this model, there is one last thing I want to put before you: the concepts of false positives and false negatives. A false positive is a data point where our model predicted a positive value but the actual value is negative. Similarly, a false negative is a data point where our model predicted a negative value but the actual value is positive.

These errors show that our model is incorrectly classifying some of the data presented to it. The designer of the model has to decide what percentage of errors is acceptable, because errors are always going to be there. The acceptable percentage depends on what the model is used for. If the model is going to be used for monitoring heart rates in a hospital, obviously the percentage of errors has to be very low. On the other hand, if you are creating the model just to learn about data mining (as you are doing now), the acceptable percentage of errors can be relatively high. The designer also needs to decide what ratio of false negatives to false positives can be accepted. Consider an email system: a real email marked as spam (a false positive) can be extremely harmful compared to a false negative, i.e. a spam message arriving in your inbox. In such a case a ratio of 1000:1 may be acceptable, again depending on the requirements.
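
These four outcomes are conventionally summarized in a confusion matrix, which WEKA prints with every classification run; a generic sketch looks like this:

                 Predicted: yes    Predicted: no
Actual: yes      true positive     false negative
Actual: no       false positive    true negative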

We have looked enough at the background and the technical details of classification trees; now let's jump to a real-world problem, using a real-world data set, and put all of this into WEKA.
